ZHAO Yali, WANG Yanbing, WANG Xinyu, TIAN Xiuxiu, LI Xiaojuan, YU Jie. Temporal and Spatial Analysis of Land Subsidence in Beijing Plain Based on TPCA[J]. Geomatics and Information Science of Wuhan University, 2022, 47(9): 1498-1506. doi: 10.13203/j.whugis20200721
# Temporal and Spatial Analysis of Land Subsidence in Beijing Plain Based on TPCA
##### doi: 10.13203/j.whugis20200721
Funds:
The Beijing Natural Science Foundation (8202009)
the Open Project Program of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, China (01117220010020)
More Information
• Author Bio:
ZHAO Yali, master's degree, specializes in geographic information science and InSAR-based land subsidence monitoring and analysis. E-mail: zhaoyali@cnu.edu.cn
• Corresponding author: WANG Yanbing, PhD, associate professor. E-mail: wyb@cnu.edu.cn
• Received Date: 2021-03-04
• Publish Date: 2022-09-05
• Abstract: Objectives Most characteristics of land subsidence are analyzed separately from a temporal or a spatial perspective, so the hidden information and possible regularities in the data cannot be discovered jointly. Temporal principal component analysis (TPCA) can be used to extract the temporal and spatial characteristics of spatiotemporal data in the geosciences, and land subsidence in the Beijing plain has typical temporal and spatial characteristics. Methods (1) The permanent scatterer interferometric synthetic aperture radar (PS-InSAR) technique provides a convenient way to measure land subsidence with sub-centimeter precision. 51 Envisat ASAR scenes acquired over the Beijing plain from 2003 to 2010 were used to produce 50 interferograms and obtain time-series deformation with a nonlinear model. (2) Based on the land subsidence of about 100 000 permanent scatterer (PS) points over 51 epochs, we construct the original data matrix X (100 000 × 51), calculate its correlation coefficient matrix, and use the TPCA method to analyze the temporal and spatial evolution characteristics of land subsidence in the Beijing plain. Results (1) The first principal component obtained by TPCA represents the long-term development trend of the spatial distribution of land subsidence. (2) The area where the second principal component is positive is spatially correlated with the area where the compressible layer thickness exceeds 130 m. (3) The PS points with negative first principal component scores and positive second principal component scores are distributed in the severe subsidence area (above 30 mm/a). Within the severe subsidence area there is a clear north-south division in land subsidence and its seasonal variation: in the northern subsidence area the subsidence in spring and summer is larger than in autumn and winter, whereas the southern subsidence area shows the opposite variation. Conclusions In general, TPCA can be used to study the temporal and spatial variation of land subsidence for urban safety monitoring; it can identify the main spatial characteristics and the law of spatiotemporal evolution. Since TPCA is a linear combination that projects onto the directions of largest variance, the resulting variables are merely uncorrelated, not independent of each other. Principal component analysis (PCA) uses only the second-order statistics of the original data and ignores its higher-order statistics. Therefore, it is necessary to optimize by rotating the principal components to find more physical meaning in them.
## References
[1] Yang Yan, Jia Sanman, Wang Haigang. The Status and Development of Land Subsidence in Beijing Plain[J]. Shanghai Geology, 2010, 31(4): 23-28
[2] Liu Kaisi. Evolution Characteristics and Risk Assessment of Land Subsidence in the Area along Beijing Subway M1/M6[D]. Beijing: Capital Normal University, 2018
[3] Duan Guangyao, Liu Huanhuan, Gong Huili, et al. Evolution Characteristics of Uneven Land Subsidence Along Beijing-Tianjin Inter-City Railway[J]. Geomatics and Information Science of Wuhan University, 2017, 42(12): 1847-1853
[4] Luo Sanming, Du Kaifu, Wan Wenni, et al. Ground Subsidence Rate Inversion of Large Temporal and Spatial Scales Based on Extended PSInSAR Method[J]. Geomatics and Information Science of Wuhan University, 2014, 39(9): 1128-1134
[5] Zhu Bangyan, Yao Fengyu, Sun Jingwen, et al. Attribution Analysis on Land Subsidence Feature in Hexi Area of Nanjing by InSAR and Geological Data[J]. Geomatics and Information Science of Wuhan University, 2020, 45(3): 442-450
[6] Guo L, Gong H L, Zhu F, et al. Analysis of the Spatiotemporal Variation in Land Subsidence on the Beijing Plain, China[J]. Remote Sensing, 2019, 11(10): 1170-1189
[7] Zhou C D, Lan H X, Gong H L, et al. Reduced Rate of Land Subsidence Since 2016 in Beijing, China: Evidence from Tomo-PSInSAR Using RadarSAT-2 and Sentinel-1 Datasets[J]. International Journal of Remote Sensing, 2020, 41(4): 1259-1285
[8] Zuo J J, Gong H L, Chen B B, et al. Time-Series Evolution Patterns of Land Subsidence in the Eastern Beijing Plain, China[J]. Remote Sensing, 2019, 11(5): 539
[9] Richman M B. Rotation of Principal Components[J]. Journal of Climatology, 1986, 6(3): 293-335
[10] Lin Y N N, Kositsky A P, Avouac J P. PCAIM Joint Inversion of InSAR and Ground-Based Geodetic Time Series: Application to Monitoring Magmatic Inflation Beneath the Long Valley Caldera[J]. Geophysical Research Letters, 2010, 37(23): 23301-23305
[11] Ji K H, Herring T A. Transient Signal Detection Using GPS Measurements: Transient Inflation at Akutan Volcano, Alaska, During Early 2008[J]. Geophysical Research Letters, 2011, 38(6): 6307-6312
[12] Zhang J P, Zhu T, Zhang Q H, et al. The Impact of Circulation Patterns on Regional Transport Pathways and Air Quality over Beijing and Its Surroundings[J]. Atmospheric Chemistry and Physics, 2012, 12(11): 5031-5053
[13] Zhu Biao, Wang Zhenhui, Li Chunhua, et al. Analysis of Climate Spatial-Temporal Character of Thunderstorm over Jiangsu Province[J]. Scientia Meteorologica Sinica, 2009, 29(6): 849-852
[14] Neeti N, Eastman J R. Novel Approaches in Extended Principal Component Analysis to Compare Spatio-Temporal Patterns Among Multiple Image Time Series[J]. Remote Sensing of Environment, 2014, 148: 84-96
[15] Rudolph M L, Shirzaei M, Manga M, et al. Evolution and Future of the Lusi Mud Eruption Inferred from Ground Deformation[J]. Geophysical Research Letters, 2013, 40(6): 1089-1092
[16] Lipovsky B. Physical and Statistical Models in Deformation Geodesy[D]. Riverside, USA: University of California, Riverside, 2011
[17] Chaussard E, Bürgmann R, Shirzaei M, et al. Predictability of Hydraulic Head Changes and Characterization of Aquifer-System and Fault Properties from InSAR-Derived Ground Deformation[J]. Journal of Geophysical Research: Solid Earth, 2014, 119(8): 6572-6590
[18] Wu Yumiao. Investigation on Tunnel Deformation Monitoring Methods Based on the EOF and Neural Network[D]. Chengdu: Southwest Jiaotong University, 2014
[19] Zou Zhengbo, Li Hui, Wu Yunlong, et al. Spatial and Temporal Characteristics of Long-Term Satellite Gravity Change in the Epicenter of Mw 9.0 Japan Earthquake and Its Surrounding Regions[J]. Acta Seismologica Sinica, 2016, 38(3): 417-428. https://www.cnki.com.cn/Article/CJFDTOTAL-DZXB201603009.htm
[20] Jiang L, Bai L, Zhao Y, et al. Combining InSAR and Hydraulic Head Measurements to Estimate Aquifer Parameters and Storage Variations of Confined Aquifer System in Cangzhou, North China Plain[J]. Water Resources Research, 2018, 54(10): 8234-8252
Affiliations: 1. Schools of Resources, Environment and Tourism, Capital Normal University, Beijing 100048, China; 2. Key Laboratory of 3D Information Acquisition and Application, Ministry of Education, Capital Normal University, Beijing 100048, China; 3. Beijing Laboratory of Water Resources Security, Capital Normal University, Beijing 100048, China; 4. Beijing Institute of Surveying and Mapping, Beijing 100038, China
The earliest land subsidence in the Beijing area appeared in the 1930s, in the Xidan-Dongdan district. In recent years, the demands of Beijing's rapid development have led to long-term over-extraction of groundwater, causing the magnitude and extent of land subsidence to expand year by year. From 2003 to 2010 the maximum annual subsidence rate reached 110 mm/a, the maximum cumulative subsidence reached 723 mm, and the area with an average annual subsidence rate of 30 mm/a or more covered 480 000 000 m² (480 km²). Six major subsidence zones have formed in the Beijing plain: Sujiatuo in Haidian, Shahe-Baxianzhuang in Changping, Shunyi, Laiguangying in Chaoyang, Balizhuang in the eastern suburbs, and Yufa in Daxing [1]. Because land subsidence seriously threatens urban safety, it is necessary to analyze the temporal and spatial evolution characteristics of land subsidence in this area and to predict its evolution trend.
Research methods concerning the temporal and spatial evolution of land subsidence fall into two categories: time-series analysis and spatial analysis. Analysis of temporal evolution characteristics usually uses time-series plots of typical points, observing the variation of land subsidence over time in the raw data, or analyzes temporal characteristics by summing the annual subsidence of a region. Liu Kaisi [2] used a time-series permutation entropy method to analyze the evolution of land subsidence along Beijing subway lines 1 and 6. Duan Guangyao et al. [3] used the Mann-Kendall test to analyze spatiotemporal changes in the Beijing plain and study the mechanism behind abrupt changes over the years. Analysis of spatial evolution characteristics generally uses profile analysis and gradient analysis to examine spatially uneven land subsidence [4-6]. Zhou et al. [7] used equal-sector spatial analysis to explore the expansion trend of land subsidence in the Beijing plain from 2012 to 2018, finding that subsidence changed from expanding eastward to expanding both eastward and northward. Zuo et al. [8] applied the standard deviational ellipse method to detect the movement of land-subsidence funnels in Beijing and reveal uneven subsidence. The above methods study subsidence evolution from either a temporal or a spatial perspective alone; time and space are treated separately, so hidden information and possible regularities in the data cannot be discovered from a joint spatiotemporal perspective. This paper adopts principal component analysis (PCA), a high-dimensional data analysis method, to study the temporal and spatial evolution of land subsidence, making full use of the long time series and wide coverage of the subsidence information obtained by interferometric synthetic aperture radar (InSAR).
PCA is commonly used for dimensionality reduction; here it is applied to mine the principal temporal and spatial features of land subsidence through dimensionality reduction. Applied in the geosciences, PCA can effectively extract the time series and spatial distribution of a signal from spatiotemporal data. Principal component analysis has six modes, of which the T mode (temporal mode) [9] has been applied to GPS station data, electromagnetic distance measurements, and tide gauge data to separate transient deformation events [10-11]. In meteorology, it has been used to interpret the circulation patterns obtained [12] and to analyze the regularities of thunderstorm days [13].
T-mode time-series analysis can identify similar spatial patterns among multiple time series [14]. Rudolph et al. [15] used temporal principal component analysis (TPCA) to extract the dominant temporal behavior patterns from InSAR time-series data; Lipovsky [16] applied TPCA to extract seasonal signals from long deformation time series; Chaussard et al. [17] performed TPCA on small-area, small-magnitude InSAR monitoring results and found that the first component is the main subsidence trend, the second principal component characterizes seasonal deformation and agrees well with the spatial extent of the confined aquifer, and the spatial features of the third principal component are related to the location of fault zones; Wu Yumiao [18] used empirical orthogonal functions, a method similar to PCA, to obtain the spatiotemporal deformation characteristics of a tunnel in two directions; Zou Zhengbo et al. [19] identified the Japan earthquake from gravity field data and studied the spatiotemporal variation of the gravity field from 2002 to 2015; Jiang et al. [20], in a quantitative study of aquifer parameters and groundwater storage changes of the confined aquifer system in central Cangzhou, used multichannel singular spectrum analysis to separate seasonal signals from ground deformation and groundwater data, estimated the elastic skeletal storage coefficient, and reconstructed the total, recoverable, and irreversible groundwater storage.
In summary, TPCA can be applied in the geosciences to extract time-series and spatial-distribution features from spatiotemporal data without prior knowledge. This paper uses the TPCA method to analyze land subsidence in the Beijing plain from 2003 to 2010, quantitatively extracting spatiotemporal features and interpreting them.
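As a rough illustration (a minimal NumPy sketch written for this page, not the authors' code), the T-mode PCA workflow described above can be expressed as: standardize each acquisition epoch over the roughly 100 000 PS points, form the 51 × 51 correlation coefficient matrix between epochs, eigendecompose it, and project the standardized data onto the leading eigenvectors to obtain the per-point component scores that are mapped spatially. The file name and component count below are illustrative assumptions.

```python
import numpy as np

def tpca(X, n_components=3):
    """T-mode PCA on a matrix X whose rows are PS points and columns are epochs.

    Returns temporal loadings (epoch patterns), per-point scores (spatial maps),
    and the fraction of variance explained by each retained component.
    """
    # Standardize each epoch (column) so the cross-product becomes a correlation matrix
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    R = (Z.T @ Z) / (Z.shape[0] - 1)          # epoch-by-epoch correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]          # sort components by explained variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :n_components]       # temporal patterns (e.g. trend, seasonal)
    scores = Z @ loadings                      # one score per PS point and component
    explained = eigvals[:n_components] / eigvals.sum()
    return loadings, scores, explained

# Hypothetical usage with the PS deformation time series (points x 51 epochs):
# X = np.load("ps_deformation_2003_2010.npy")
# loadings, scores, explained = tpca(X, n_components=3)
```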
|
|
# Wrong numbering of lines in lstlistings when using escaped commands
I'm using the listings package to format some Java code in my document.
I want it to number lines (1 number every 5 lines), and I need to highlight some keywords inside the code with the following command:
\newcommand{\ca}[1]{\color{red}{#1}}
(escaped with "" inside the lstlisting environment).
However, if I use this command on any line, the next line will be numbered, regardless of its number.
Here is a simplified example:
\documentclass{article}
\usepackage{listings}
\usepackage{xcolor}
\lstset{
language=Java,
numbers=left,
stepnumber=5,
numberfirstline=true,
numbersep=5pt,
escapechar=\
}
\newcommand{\ca}[1]{\color{red}{#1}}
\begin{document}
\begin{lstlisting}
public class Color {
private int R, G, B;
public \ca{final static} Color red = \ca{new} Color(255, 0, 0);
public \ca{final static} Color magenta = \ca{new} Color(255, 0, 255);
public \ca{final static} Color lightgray = \ca{new} Color(192, 192, 192);
// ...
}
\end{lstlisting}
\end{document}
(in my real document it's much more complex: Beamer, TikZ, etc.)
Here is the result I get: Lines 4 and 5 should not be numbered, and if I just remove the "" they are not numbered.
Any idea how to fix this? Did I do something wrong, or is it a bug in listings?
-
Found a workaround: disable "numberfirstline". Not perfect (I'd like the first line to be numbered...), but better than nothing. Leaving the question open in case anyone has a real fix :) – Schnouki Nov 4 '11 at 11:04
Try this:
\documentclass{article}
\usepackage{listings}
\usepackage{xcolor}
\lstset{
language=Java,
numbers=left,
stepnumber=5,
numberfirstline=true,
numbersep=5pt,
escapechar=\
}
\newcommand{\ca}[1]{\color{red}{#1}%
}
\begin{document}
\begin{lstlisting}
public class Color {
private int R, G, B;
public \ca{final static} Color red = \ca{new} Color(255, 0, 0);
public \ca{final static} Color magenta = \ca{new} Color(255, 0, 255);
public \ca{final static} Color lightgray = \ca{new} Color(192, 192, 192);
// ...
}
\end{lstlisting}
\end{document}
-
Awesome, that's exactly what I needed. Thanks! – Schnouki Nov 4 '11 at 12:35
I suggest a different method that avoids escaping to TeX code and is adaptable to different situations without marking up the original code.
\documentclass{article}
\usepackage{listings}
\usepackage{xcolor}
\lstset{
language=Java,
numbers=left,
stepnumber=5,
numberfirstline=true,
numbersep=5pt,
}
\begin{document}
\begin{lstlisting}[emph={[2]final,static,new},emphstyle={[2]\color{red}}]
public class Color {
private int R, G, B;
public final static Color red = new Color(255, 0, 0);
public final static Color magenta = new Color(255, 0, 255);
public final static Color lightgray = new Color(192, 192, 192);
// ...
}
\end{lstlisting}
\end{document}
-
Yes, obviously. But in other parts of the document I want to highlight different things, not necessarily keywords (sometimes an int value, sometimes Javadoc), so using emph would be quite troublesome. Thanks for the suggestion though. – Schnouki Nov 4 '11 at 13:27
|
|
Install package without updating dependent package
@23c4e923
Last seen 5 months ago
Japan
Hi,
I'm trying to install AnnotationForge, and the installer says that the package "Matrix" is old.
> BiocManager::install("AnnotationForge")
'getOption("repos")' replaces Bioconductor standard repositories, see '?repositories' for details
replacement repositories:
CRAN: https://cran.rstudio.com/
Bioconductor version 3.14 (BiocManager 1.30.16), R 4.1.2 (2021-11-01)
Old packages: 'Matrix'
Update all/some/none? [a/s/n]:
Matrix 1.4.0 was released on 2021/12/8, but in my environment its compilation is failing.
I'm trying to fix that, but in the meantime, can I install Bioconductor packages without updating a dependent package (in this case, Matrix)?
Regards.
Mike Smith ★ 5.5k
@mike-smith
Last seen 16 hours ago
EMBL Heidelberg / de.NBI
The simplest choice is to just select "n" at this point and it will keep the version of Matrix you already have, and AnnotationForge will probably still work fine. Since the new version of Matrix is so recent, it's very unlikely AnnotationForge will require that absolute latest version. In fact, it may be that AnnotationForge doesn't even need Matrix. The default behaviour is to check all installed packages to see if they are up-to-date, regardless of whether they are needed for this specific installation.
However if it gets annoying to keep saying "none" at this point, BiocManager::install() has the argument update for choosing whether other packages should be checked for updates. The manual page entry for this argument says "When FALSE, BiocManager::install() does not attempt to update old packages..." so you can, for example, do BiocManager::install("AnnotationForge", update = FALSE)
I selected "n", then I got the following output.
package(s) not installed when version(s) same as current; use force = TRUE to re-install: 'AnnotationForge'
So I ran BiocManager::install() with the "force" option,
BiocManager::install("AnnotationForge", force = TRUE)
Then it installed successfully and appears to be working fine!
Thank you.
|
|
QUESTION
# 12 workers can complete a piece of work in 10 days. If the number of workers is reduced to $\dfrac{1}{3}$rd of the original number, then how many more days would be required to complete the same work?
A. 3
B. 5
C. 15
D. 20
Hint: If the number of workers is reduced to $\dfrac{1}{3}$rd, then the number of days required to complete the same work will become 3 times the original [since Work Done = Workforce $\times$ Time].
It is given that 12 workers can complete the work in 10 days, and that the number of workers is reduced to $\dfrac{1}{3}$rd.
Now, $\dfrac{1}{3}$rd of original no. of workers $= \dfrac{1}{3} \times 12 = 4$.
Therefore, 4 workers can complete the same work in $\dfrac{12 \times 10}{4} = \dfrac{120}{4} = 30$ days (because the total amount of work remains the same).
Hence, the number of extra days required by the smaller workforce is $30 - 10 = 20$, so the answer is D.
|
|
## Strategy pattern.... am I on the right track?
### #1Alpheus GDNet+
Posted 10 September 2013 - 06:18 PM
So based on the code in this thread, I've implemented a very simple Strategy pattern. Well I think I did. Basically, is it good, bad, completely off the mark?
Strategy code
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
namespace DesignPatterns
{
class Strategy
{
}
public interface IAdd
{
int moveXUnits(int XUnits);
}
public abstract class Movement : IAdd
{
private int startUnit;
private int stepUnits;
private int totalSteps;
public Movement()
{
startUnit = 0;
stepUnits = 1;
totalSteps = 0;
}
public Movement(int initializeUnit, int initializeStep)
{
startUnit = initializeUnit;
stepUnits = initializeStep;
totalSteps = startUnit;
}
public int moveXUnits(int XUnits)
{
stepUnits = XUnits;
totalSteps += stepUnits;
return totalSteps; // return the running total so the declared int return type is satisfied
}
private int autoMoveXUnits()
{
totalSteps += stepUnits;
return totalSteps;
}
public virtual void Step(int XUnits)
{
moveXUnits(XUnits);
}
public virtual void Step()
{
autoMoveXUnits();
}
public virtual void displaySteps()
{
Console.WriteLine("I have moved " + totalSteps.ToString() + " units.");
}
}
public class Walk : Movement
{
public Walk() : base(1,2){}
}
public class Skip : Movement
{
public Skip() : base(2,3){}
}
public class Run : Movement
{
public Run() : base(3,5){}
}
public class Character
{
private Movement movememtAction;
public Character()
{
movememtAction = new Walk();
}
public Character(Movement initMovement)
{
movememtAction = initMovement;
}
public void setMovement(Movement movementToPerform)
{
movememtAction = movementToPerform;
}
public void Step()
{
movememtAction.Step();
}
public void Step(int newSteps)
{
movememtAction.Step(newSteps);
}
public void displaySteps()
{
movememtAction.displaySteps();
}
}
}
Main.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace DesignPatterns
{
class Program
{
static void Main(string[] args)
{
Character Mario = new Character(new Run());
Character Link = new Character(new Walk());
Character Zelda = new Character(new Skip());
Mario.Step();
Mario.displaySteps();
}
}
}
Edited by Alpha_ProgDes, 10 September 2013 - 06:19 PM.
External Articulation of Concepts Materializes Innate Knowledge of One's Craft and Science
Super Mario Bros clone tutorial written in XNA 4.0 [MonoGame, ANX, and MonoXNA] by Scott Haley
If you have found any of the posts helpful, please show your appreciation by clicking the up arrow on those posts
### #2Maple Wan Members
Posted 10 September 2013 - 07:35 PM
Where is your override keyword? To get polymorphism, you have to meet 3 conditions:
1. Inheritance (an interface or a base class).
2. Overriding (use abstract/override or virtual/override, or implement the interface method).
3. Multiple base-class references that point to derived-class objects.
You should use the override keyword in your derived class to replace the behavior of the base class. Otherwise you should use the new keyword, which hides the base-class method instead of overriding it.
### #3AllEightUp Moderators
Posted 10 September 2013 - 09:08 PM
Beyond the code in general, I suspect there is a disconnect in the intention of the strategy pattern here as I mentioned in our chat. What are you changing based on the "strategy" here? The only things being changed are two member values, that is simple C++ initialization pattern and not something you use the "strategy" pattern for.
While there is nothing wrong with your implementation after the fixes, there is still a fundamental issue that this is not the intention of the strategy pattern as described by the GOF design patterns book. Given this code, you have implemented a single strategy: "GroundMovement", the changes in values don't change the strategy and are simply initialization values. To make this an actual strategy, you might add "AirMovement" which calculates gravity effect on the movement algorithm, so the actual code for "moveXUnits" changes between the strategies. The intention of the strategies is to insert new code to change the fundamental behavior of certain common calls, not to simply change some values here and there because that is covered in the non-book design pattern of simple C++.
Hopefully this is helpful. I'm not trying to be an ass, but using patterns means using them for the correct reasons, without that proper application you are writing overly complicated code for no good reason but calling it a pattern not intended for this problem. This leads to confusion and poor code in the long run I'm sorry to say..
Edited by AllEightUp, 10 September 2013 - 09:15 PM.
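To make that distinction concrete, here is a small hypothetical sketch (in Python rather than the thread's C#, with invented names such as GroundMovement, AirMovement, friction, and gravity) in which the strategies genuinely differ in behavior rather than only in initial values:

```python
from abc import ABC, abstractmethod

class MovementStrategy(ABC):
    """Common interface; each strategy owns a genuinely different move computation."""
    @abstractmethod
    def move(self, distance: float, dt: float) -> float:
        """Return the displacement actually achieved for a requested distance."""

class GroundMovement(MovementStrategy):
    def __init__(self, friction: float = 0.1):
        self.friction = friction
    def move(self, distance, dt):
        # Ground units lose a fraction of the requested distance to friction.
        return distance * (1.0 - self.friction)

class AirMovement(MovementStrategy):
    def __init__(self, gravity: float = 9.8):
        self.gravity = gravity
    def move(self, distance, dt):
        # Air units drop under gravity while covering the requested distance.
        drop = 0.5 * self.gravity * dt * dt
        return distance - drop

class Character:
    def __init__(self, movement: MovementStrategy):
        self.movement = movement
        self.position = 0.0
    def step(self, distance: float, dt: float = 1.0) -> None:
        # The calling code never changes; only the injected strategy does.
        self.position += self.movement.move(distance, dt)

mario = Character(AirMovement())
link = Character(GroundMovement())
mario.step(10.0)
link.step(10.0)
print(mario.position, link.position)  # different displacements from the same call
```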
### #4LorenzoGatti Members
Posted 11 September 2013 - 02:38 AM
IAdd is very oddly named, both because "add" is a verb and because moving units has nothing to do with adding.
Omae Wa Mou Shindeiru
### #5LorenzoGatti Members
Posted 11 September 2013 - 02:43 AM
Regarding the strategy, movement is clearly far more complex than counting steps. You should think of extracting interchangeable strategies only after you have multiple kinds of units and multiple kinds of movement; as AllEightUp explains, what you have now is only a draft of a couple of simple unit stats.
Omae Wa Mou Shindeiru
### #6Cosmic314 Members
Posted 11 September 2013 - 07:42 AM
The high level view of the strategy pattern is that the interface to a behavior never changes but the behavior itself changes.
A simple example might be a convex hull strategy pattern. There are probably 10 different ways to solve the convex hull problem. Some perform better than others depending on the type of data that's fed to them. If you need to compute convex hulls frequently and you monitor the input data patterns you are receiving, you might decide that a different convex hull algorithm is more appropriate. Rather than placing a series of 'if-thens' around each convex hull algorithm, you can simply have your interface point to the best algorithm.
I realize the convex hull solution is a fairly simple example. You don't really need much more than an 'execute' method, although if you could customize sub-steps that have a common interface it would start to get some benefit.
Let's take your example. Maybe a certain class of creatures implement methods to listen(), assess_opponent(Creature &), look(), attack(Creature &), flee(), etc. You could have a generalized opponent interaction function that does something like:
if( listen() == OPPONENT_DETECTED )
{
Creature& opponent = get_detected_opponent();
bool we_do_attack = assess_opponent(opponent);
if( we_do_attack )
{
attack(opponent);
}
else
{
flee();
}
}
You could have a strategy pattern that provides interfaces to those methods with a possible default behavior. You can then provide a way to change the underlying strategies at run time (this is a crucial difference from the template pattern which chooses behavior at compile time). For example, maybe your creature has super ears that can echo locate creatures, but somewhere in the course of the game it sustains an injury that renders it stone deaf. Rather than provide a series of 'if-thens' in the above code, you could change your strategy that implements a 'deaf' version of listen().
Essentially what this buys you is the ability to have code that has the same overall logic but lets you vary behavior dramatically. It saves you from having to change the above code constantly despite changes in underlying behavior, which is really one of the major benefits of design patterns.
Edited by Cosmic314, 11 September 2013 - 07:51 AM.
### #7Norman Barrows Members
Posted 11 September 2013 - 01:57 PM
didn't look at the code too closely, and from the comments it wasn't really necessary, but....
in plain english, in the strategy pattern, you call a function, and pass it a flag somehow (explicitly or implicitly - such as "this"), and it in turn calls some sub-function based on that flag. an example might be some move method that used object type to in turn call a move_ground_target or move_flying_target method.
the idea is that you can use one api call move(this) to call different move() methods based on "this", to implement different "flight models".
its a common way to generalize and reduce the number of API calls when designing an API.
for example, in my game library, i have drawing down to one call: Zdraw(drawinfo) that works for mesh&texture, static model, animated model, 2d billboard, or 3d billboard ( hey! i should add sprites! <g> ). all it does is a switch on drawinfo.type and calls the appropriate sub-function based on type. its a perfect example of the strategy or policy pattern. drawinfo.type is the strategy or policy to be used when drawing.
Edited by Norman Barrows, 11 September 2013 - 02:00 PM.
Norm Barrows
Rockland Software Productions
"Building PC games since 1989"
rocklandsoftware.net
PLAY CAVEMAN NOW!
http://rocklandsoftware.net/beta.php
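A compact, hypothetical sketch (in Python, not the poster's library) of the dispatch-on-type idea described in the post above: one public draw call that looks up a per-type handler, the moral equivalent of the switch on drawinfo.type. The names zdraw, drawinfo, and the draw types are illustrative only.

```python
def draw_mesh(info):
    print(f"drawing mesh {info['name']} with texture {info['texture']}")

def draw_billboard_2d(info):
    print(f"drawing 2D billboard {info['name']}")

# One public entry point; the per-type "strategy" is looked up from the type field.
DRAW_HANDLERS = {
    "mesh": draw_mesh,
    "billboard2d": draw_billboard_2d,
}

def zdraw(drawinfo):
    # Equivalent to a switch on drawinfo.type: pick the handler and delegate.
    DRAW_HANDLERS[drawinfo["type"]](drawinfo)

zdraw({"type": "mesh", "name": "rock", "texture": "stone.png"})
zdraw({"type": "billboard2d", "name": "tree_sprite"})
```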
### #8Servant of the Lord Members
Posted 11 September 2013 - 02:05 PM
Is the allocator parameter of C++ standard library containers an example of the strategy pattern? It customizes the logic based off of the allocator passed in, without changing the functionality of the container itself.
It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames -
### #9Cosmic314 Members
Posted 11 September 2013 - 02:34 PM
Is the allocator parameter of C++ standard library containers an example of the strategy pattern? It customizes the logic based off of the allocator passed in, without changing the functionality of the container itself.
Yeah, I think that's about right.
In my listen() example, my creature class might implement it as a giant series of if/then/else statements every single time I call listen, even though the state of the creature rarely changes. Rather than have a giant sequence of if/then/elses I could instead pay that cost one time. For example when the creature goes deaf I can change the listen() strategy to the deaf strategy, which still happens to use the same interface. Now I have tidier code. If I call listen() a zillion times, I save a bundle of branch instructions on that if/then/else tree. Pardon my poor syntax below, but it's enough to get the idea across (I hope!)
void creature_class::listen()
{
this->InterfaceListen_strategy->execute();
// place this creature's specific code afterwards, if any, that is invariant
}
void creature_class::set_listen_strategy( <variables> )
{
if( DEAF ) this->IntefaceListenStrategy = DeafListenConcrete;
if( BUFFED ) this->IntefaceListenStrategy = BuffedListenConcrete;
// etc.
}
Edited by Cosmic314, 11 September 2013 - 02:48 PM.
### #10LorenzoGatti Members
Posted 12 September 2013 - 02:13 AM
void creature_class::listen()
{
this->InterfaceListen_strategy->execute();
// place this creature's specific code afterwards, if any, that is invariant
}
void creature_class::set_listen_strategy( <variables> )
{
if( DEAF ) this->IntefaceListenStrategy = DeafListenConcrete;
if( BUFFED ) this->IntefaceListenStrategy = BuffedListenConcrete;
// etc.
}
Are you sure you want a public set_listen_strategy()? The listening strategy appears to be a very clear cut example of the most private kind of internal state (of class creature_class):
- Nothing "outside" the creature can reasonably have any say in how it listens to noise.
- Changing the strategy at the wrong time might cause errors (e.g. hearing too much or too little), so it should be encapsulated.
- It's a convenience to organize the listening-related code, and the exact same behaviour could be achieved in completely different ways; there's no reason for outside code to depend on such volatile details.
- The events that can cause a change of InterfaceListen_strategy might instead cause a flag to be set, or something else, or no change; the public interface of the creature should match these events (e.g. making the creature hear a deafening noise, already covered by listen() ) rather than exposing a specific way to implement their consequences and constraining creature_class to use it forever because improper dependencies have been established.
Omae Wa Mou Shindeiru
### #11Cosmic314 Members
Posted 12 September 2013 - 09:52 AM
void creature_class::listen()
{
this->InterfaceListen_strategy->execute();
// place this creature's specific code afterwards, if any, that is invariant
}
void creature_class::set_listen_strategy( <variables> )
{
if( DEAF ) this->IntefaceListenStrategy = DeafListenConcrete;
if( BUFFED ) this->IntefaceListenStrategy = BuffedListenConcrete;
// etc.
}
Are you sure you want a public set_listen_strategy()? The listening strategy appears to be a very clear cut example of the most private kind of internal state (of class creature_class):
- Nothing "outside" the creature can reasonably have any say in how it listens to noise.
- Changing the strategy at the wrong time might cause errors (e.g. hearing too much or too little), so it should be encapsulated.
- It's a convenience to organize the listening-related code, and the exact same behaviour could be achieved in completely different ways; there's no reason for outside code to depend on such volatile details.
- The events that can cause a change of InterfaceListen_strategy might instead cause a flag to be set, or something else, or no change; the public interface of the creature should match these events (e.g. making the creature hear a deafening noise, already covered by listen() ) rather than exposing a specific way to implement their consequences and constraining creature_class to use it forever because improper dependencies have been established.
I understand and can appreciate your points. The wisdom of when to use a pattern is something that's acquired through experience. It's hard to make a simple and complete example for something that requires a certain degree of complexity to explain. In responding to the OP, I can only assume that the choice has been made as to warrant the use of a pattern. Saying that, I'll certainly adopt code if it offers material benefit.
Let me respond to your points the best as I understand them:
In my defense, I never provided a full declaration of creature_class::set_listen_strategy. But your point is taken. Reference to setting the strategy pattern should be invisible to things outside of the class that uses it. If I understand your meaning, we should instead supply something like creature_class::is_now_deaf() and then have that function handle whatever state lies beneath whether it be setting a strategy pattern or something else. That makes sense.
I'm not sure what you mean by: "Nothing "outside" the creature can reasonably have any say in how it listens to noise."
When I first read the point I interpreted it as being in direct conflict with your advice to make set_listen_strategy private. But I will assume, that's not what you meant. If you are referring to sending data to the creature class about the external world interacting with it, we could fix it by either providing a parameter that all listen calls use or we can set the appropriate state information through a different method / interface in the creature_class (or its hierarchy) and refer to it indirectly and under the covers.
If I change the strategy at the wrong time it will cause issues. Sure. I agree. But it's also true that if I change that state of any other thing before it is ready to be consumed I would also get unexpected behavior. While this is of concern, I don't see how it is specifically relevant to the strategy pattern. It seems to be a more general issue that you are describing.
You will get no dispute from me about the 'convenience' factor. There are probably an infinite numbers of ways we could organize the code. But that's the point of patterns: flexibility, maintainability, and convenience. Because the OP is referring specifically to the Strategy pattern I can only assume the source of this strategy comes from a well known source. A well known source happens to be the classic GoF book. In it, they specifically give this as a reason to employ the pattern. Under their 'applicability' section these are the reasons they give:
Use the strategy pattern when
-- many related classes differ only in their behavior. Strategies provide a way to configure a class with one of many behaviors.
-- you need different variants of an algorithm. For example, you might define algorithms reflecting different space/time trade-offs. Strategies can be used when these variants are implemented as a class hierarchy of algorithms
-- an algorithm uses data that clients shouldn't know about. Use the Strategy pattern to avoid exposing complex, algorithm-specific data structures.
-- a class defines many behaviors, and these appear as multiple conditional statements in its operations. Instead of many conditionals, move related conditional branches into their own Strategy class.
Moderators: This is a quote from the GoF book. I cite fair use but I don't know if this exceeds the bounds you're willing to tolerate. I will comply with requests to remove if I'm breaking a rule.
Back to the discussion. For the creature class it is this last point for which I've decided that the Strategy pattern happens to be useful. Later in their description they give an example of a 'switch' ladder and how the Strategy pattern makes the code easier to understand.
I'll add a final thought. Maybe I have it all wrong and you do offer valid points. This creature example is somewhat contrived. I certainly won't be juryrigging Strategy patterns at each and every spot that might avoid 'if-then-else' or 'switch' ladders. After all, there is a cost in factoring code to use any pattern. If we know that creature_class is something that is very well defined and will not change then it's probably not worth any extra management apparatus, It will only get in the way.
I know my response probably appears very defensive. Please take it only as an attempt at giving an earnest, respectful reply.
Edited by Cosmic314, 12 September 2013 - 09:57 AM.
### #12Alpheus GDNet+
Posted 12 September 2013 - 01:11 PM
I understand and can appreciate your points. The wisdom of when to use a pattern is something that's acquired through experience. It's hard to make a simple and complete example for something that requires a certain degree of complexity to explain. In responding to the OP, I can only assume that the choice has been made as to warrant the use of a pattern. Saying that, I'll certainly adopt code if it offers material benefit.
Actually, the example is arbitrary. I'm not trying to force fit the Strategy pattern in my code (existing or new). I'm actually just wanting to understand the Strategy pattern and see if my example is correct, in the right area, close but no cigar, or just wrong. Of course, I'm looking for feedback or sample code on how to make it better.
External Articulation of Concepts Materializes Innate Knowledge of One's Craft and Science
Super Mario Bros clone tutorial written in XNA 4.0 [MonoGame, ANX, and MonoXNA] by Scott Haley
If you have found any of the posts helpful, please show your appreciation by clicking the up arrow on those posts
### #13Washu Senior Moderators
Posted 12 September 2013 - 01:56 PM
Actually, the example is arbitrary. I'm not trying to force fit the Strategy pattern in my code (existing or new). I'm actually just wanting to understand the Strategy pattern and see if my example is correct, in the right area, close but no cigar, or just wrong. Of course, I'm looking for feedback or sample code on how to make it better.
I would say your example is far from the point.
With strategy the underlying behavior of each object implementing the strategy is different. With yours the underlying behaviors are all the same (hence why you can default construct them).
An OK example is the one used by Wikipedia, which implements a strategy for processing various types of mathematical commands like Add, Subtract, and Multiply. I suggest reading it over.
I would also strongly suggest reading the c2 wiki on strategy and state.
In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.
ScapeCode - Blog | SlimDX
### #14Nypyren Members
Posted 12 September 2013 - 06:50 PM
To me, Strategy is just something you get when you implement two or more algorithms that produce the same effect that can be used interchangeably, where all differences are encapsulated. If the results are different, it's not a Strategy. Interchangeable movement methods like walking vs. flying do totally different things (the flier can FLY), which violates the definition in my mind. That's polymorphism. That's not Strategy. Boyer-Moore vs. every-character string searching is a Strategy.
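As a tiny illustration of that last point (Python, written for this note and not by any poster): two interchangeable substring-search strategies that produce identical results behind one call site, so swapping them changes performance but never behavior.

```python
def naive_search(text: str, pattern: str) -> int:
    """Check every alignment; O(n*m)."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i
    return -1

def builtin_search(text: str, pattern: str) -> int:
    """Delegate to str.find, which CPython implements with a fast two-way algorithm."""
    return text.find(pattern)

def find_all(text, pattern, strategy=naive_search):
    """The call site depends only on the strategy's contract, not its implementation."""
    hits, start = [], 0
    while (i := strategy(text[start:], pattern)) != -1:
        hits.append(start + i)
        start += i + 1
    return hits

text = "the quick brown fox jumps over the lazy dog"
assert find_all(text, "the", naive_search) == find_all(text, "the", builtin_search)
```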
|
|
# How to Prove that a sequence $\{f_n\}$ of functions converges uniformly?
If $f\colon (0,+\infty)\to\mathbb R$ is not identically $0$ and $$\lim _{x \to +\infty} f(x) = 0,$$ then does the sequence of functions $\{f_n\}$ defined by $$f_n(x) = f(nx)$$ converge uniformly to the zero function?
-
On which set do you ask uniform convergence? If it's $(0,+\infty)$ it's not true, taking $f(x)=1/x$. – Davide Giraudo Apr 1 '12 at 13:14
Yes, on $(0,+\infty)$, but can you explain to me why it's not true? – عبدالرازق حاج يحيى Apr 1 '12 at 13:17
It satisfies the first condition, but $\sup_{x\in (0,+\infty)}\left|\frac 1{nx}-0\right|$ is not finite. – Davide Giraudo Apr 1 '12 at 13:20
Ok, thanks a lot – عبدالرازق حاج يحيى Apr 1 '12 at 13:21
In fact, we never have the uniform convergence on $(0,+\infty)$, since if $f(x_0)\neq 0$ then $\sup_{x>0}|f(nx)|\geq \left|f\left(n\frac{x_0}n\right)\right|=|f(x_0)|>0$.
-
|
|
PARTONS/NumA++ Numerical Analysis C++ routines
NumA::FunctionType1D Class Reference (abstract)
Class for defining one-dimensional functions that can be used as arguments in virtual methods. More...
Inheritance diagram for NumA::FunctionType1D:
## Public Member Functions
FunctionType1D ()
Empty constructor. More...
virtual ~FunctionType1D ()
Default destructor. More...
virtual double operator() (double x, std::vector< double > &parameters)=0
Operator allowing the functor to be used as a function. More...
virtual double operator() (double x)=0
Operator allowing the functor to be used as a function. More...
## Detailed Description
Class for defining one-dimensional functions that can be used as arguments in virtual methods.
This is needed to circumvent the issue that templates and virtual methods are incompatible.
Usage:
&MyClass::myFunction);
where myFunction is some function defined in some class MyClass with the following signature:
double MyClass::myFunction(double variable, std::vector<double>& parameters);
The pointer this needs to be replaced by a pointer to the object instantiating MyClass, if the first line is not written inside MyClass.
Now, you can pass myFunctor as argument to any method (e.g. integration routines), even virtual ones:
virtual SomeReturnType someVirtualMethod(..., NumA::FunctionType1D* functor, ...);
and use the functor as a function (with the operator()) inside this method:
functor(variable, parameters); // parameters is optional.
For multi-dimensional functions, see FunctionTypeMD.
## ◆ FunctionType1D()
NumA::FunctionType1D::FunctionType1D ( )
inline
Empty constructor.
## ◆ ~FunctionType1D()
virtual NumA::FunctionType1D::~FunctionType1D ( )
inlinevirtual
Default destructor.
## ◆ operator()() [1/2]
virtual double NumA::FunctionType1D::operator() ( double x, std::vector< double > & parameters )
pure virtual
Operator allowing the functor to be used as a function.
Parameters
x Argument of the function represented by the functor. parameters Parameters that can be passed to the function.
Returns
double
Implemented in NumA::Functor1D< PointerToObj, PointerToFunc >.
## ◆ operator()() [2/2]
virtual double NumA::FunctionType1D::operator() ( double x )
pure virtual
Operator allowing the functor to be used as a function.
Parameters
x Argument of the function represented by the functor.
Returns
double
Implemented in NumA::Functor1D< PointerToObj, PointerToFunc >.
The documentation for this class was generated from the following file:
|
|
# Math Help - Trigonometric substitution?
1. ## Trigonometric substitution?
Use trigonometric substitution to evaluate $\int \frac{1}{\sqrt{1+x^2}}dx$
2. Originally Posted by FLTR
Use trigonometric substitution to evaluate $\int \frac{1}{\sqrt{1+x^2}}dx$
Let $x = tan \theta$. Then $dx = sec^2 \theta \, d \theta$
So
$\int \frac{1}{\sqrt{1+x^2}}dx = \int \frac{sec^2 \theta}{\sqrt{1+tan^2 \theta}}d \theta$
= $\int \frac{sec^2 \theta}{\sqrt{sec^2 \theta}}d \theta$
= $\int \frac{sec^2 \theta}{sec \theta}d \theta$
= $\int sec \theta \, d \theta$
= $- ln \left | tan \theta - sec \theta \right |$
= $- ln \left | x - sec(atn(x)) \right |$
I leave the rest of the simplification to you.
-Dan
3. First topsquark, I would like to mention that a composition of a trigonometric function and an inverse trigonometric function is some algebraic function. Furthermore, this simplification is the inverse hyperbolic sine function.
4. Originally Posted by ThePerfectHacker
First topsquark, I would like to mention that a composition of a trigonometric function and an inverse trigonometric function is some algebraic function. Furthermore, this simplification is the inverse hyperbolic sine function.
I was aware of the first statement anyway.
However, I need to apologize: I am apparently off by at least one sign in my answer. Possibly because I set $\sqrt{sec^2 \theta} = sec \theta$ instead of the absolute value of such. (My usual forgetful mistake!)
-Dan
5. Originally Posted by topsquark
I was aware of the first statement anyway.
However, I need to apologize: I am apparently off by at least one sign in my answer. Possibly because I set $\sqrt{sec^2 \theta} = sec \theta$ instead of the absolute value of such. (My usual forgetful mistake!)
-Dan
Nope! My solution was correct after all, just not in "standard form." So the answer is
$\int \frac{dx}{\sqrt{1+x^2}} = -ln \left | x - \sqrt{x^2+1} \right | = ln \left | x + \sqrt{x^2+1} \right |$
-Dan
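For completeness (this step is not spelled out in the thread), the two logarithmic forms agree because their arguments are reciprocals: $\left| x - \sqrt{x^2+1} \right| = \sqrt{x^2+1} - x = \dfrac{1}{x + \sqrt{x^2+1}}$, hence $-\ln\left| x - \sqrt{x^2+1} \right| = \ln\left( x + \sqrt{x^2+1} \right)$, which is exactly the inverse hyperbolic sine $\operatorname{arsinh} x$ mentioned earlier.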
6. Originally Posted by topsquark
Nope! My solution was correct after all, just not in "standard form."
Your solution was not correct mathematically anyway. The manipulation of the differentials is strictly prohibited. If you do it my way, by defining a composite function times its derivative, then you can use the substitution rule, but that is slightly more difficult to get used to.
7. Originally Posted by ThePerfectHacker
Your solution was not correct mathematically anyway. The manipulation of the differentials is strictly prohibited. If you do it my way, by defining a composite function times its derivative, then you can use the substitution rule, but that is slightly more difficult to get used to.
This is a fairly standard integral. What is incorrect about making the substitution $x=\sec(u)$?
8. Originally Posted by ThePerfectHacker
Your solution was not correct mathematically anyway. The manipulation of the differentials is strictly prohibited. If you do it my way, by defining a composite function times its derivative, then you can use the substitution rule, but that is slightly more difficult to get used to.
If you're complaining (again) about the $\frac{dx}{d \theta} = sec^2 \theta$ to $dx = sec^2 \theta d \theta$ transformation, no it is not strictly legitimate, but it has been proven to give correct results for any problem I've ever heard of. And I own 4 Introductory Calculus books that span some 20 years and all of them do integration substitutions using this notation.
I see no reason not to follow their example.
-Dan
9. Originally Posted by Jameson
This is a fairly standard integral. What is incorrect about making the substitution $u=\sec(x)$?
First you mean,
$\sec u=x$
Note when you differentiate you get the differentials,
$\tan^2 u du=dx$
But can you explain what those differentials mean!
They are not numbers!
Can you understand why it works? I cannot!
Thus, I use the strict form of the rule,
If $f(x)$ has an antiderivative on some open interval and $g(x)$ is differentiable on this interval then,
$\int (f\circ g)g' dx=F\circ g+C$
Where $F$ is any function which satisfies the differential equation $F'=f$ throughout the interval.
If you can formally prove to me how these differentials work then I will use them, but I never understood them and am willing to bet most people do not understand them either (if any at all).
Another thing which makes me really angry is the symbol $dx$ after the integral; I myself would rather drop it. In fact, my formal differential equations professor does not write it. And my 12th Grade teacher also had a strong dislike for it; he used the composite function method (chain rule). Again, I myself dislike it greatly. I am sure if you open any analysis book they do not use this method, rather the correct one.
I believe these symbols are historical. They came from the time of Leibniz and remained until today. But the problem with Calculus (or the "Method of Fluxions") in the 17th century was that many mathematicians disfavored it and opposed it (for example, Michel Rolle). They claimed it used "unsound reasoning", which in fact is true. Derivations using infinitesimal arguments looked appealing, but they were not based on rigorous mathematical derivations and thus were rejected by mathematicians (though accepted by physicists). Not until Cauchy and Weierstrass began to build a foundation did mathematicians approve of Calculus.
I understand, topsquark, that this can produce the correct result, but in strange cases this will probably not work. It is important for a mathematician to understand how something works rather than use it and merely know that it works. This is not alchemy.
I believe thee need a mathematician's apology.
10. Originally Posted by ThePerfectHacker
First you mean,
$\sec u=x$
Note when you differentiate you get the differentials,
$\tan^2 u du=dx$
But can you explain what those differentials mean!
They are not numbers!
Can you understand why it works? I cannot!
Thus, I use the strict form of the rule,
If $f(x)$ has an antiderivative on some open interval and $g(x)$ is differentiable on this interval then,
$\int (f\circ g)g' dx=F\circ g+C$
Where $F$ is any function which satisfies the differential equation $F'=f$ throughout the interval.
If you can formally prove to me how these differentials work then I will use them, but I never understood them and am willing to bet most people do not understand them either (if any at all).
Another thing which makes me really angry is the symbol $dx$ after the integral; I myself would rather drop it. In fact, my formal differential equations professor does not write it. And my 12th Grade teacher also had a strong dislike for it; he used the composite function method (chain rule). Again, I myself dislike it greatly. I am sure if you open any analysis book they do not use this method, rather the correct one.
I believe these symbols are historical. They came from the time of Leibniz and remained until today. But the problem with Calculus (or the "Method of Fluxions") in the 17th century was that many mathematicians disfavored it and opposed it (for example, Michel Rolle). They claimed it used "unsound reasoning", which in fact is true. Derivations using infinitesimal arguments looked appealing, but they were not based on rigorous mathematical derivations and thus were rejected by mathematicians (though accepted by physicists). Not until Cauchy and Weierstrass began to build a foundation did mathematicians approve of Calculus.
I understand, topsquark, that this can produce the correct result, but in strange cases this will probably not work. It is important for a mathematician to understand how something works rather than use it and merely know that it works. This is not alchemy.
I believe thee need a mathematician's apology.
(Shrugs) Suit yourself, I'm not about to get into an argument over it. I've never taken a full-fledged Analysis class (where you get into the picky details of why Calculus works) though I would like to some day. And whereas I know you are correct about the treatment of differential elements, this is the way that (as far as I know) practically all Introductory level Calculus classes approach the topic of substitution of variables in integrals. I am willing to learn new (to me) and more precise techniques, but since the "language" of most of the students on the forum is likely to be the same as mine in this matter I will likely continue to speak using it.
Just giving you apologies in advance for my "informal" methods!
-Dan
|
|
# Can you teleport 0 squares in a move action?
There are many properties that trigger after teleportation in 4e. Is it possible to activate a teleport power and choose the distance teleported to be 0 squares? (effectively teleporting in place?)
-
I feel like invoking the sack of rats clause might be appropriate here. – wax eagle Jun 21 '12 at 19:26
Could you please explain the reason you'd want to do this? To break a grab? – F. Randall Farmer Jun 21 '12 at 19:37
@F.RandallFarmer best guess: he's trying to use something the triggers off a teleport, but wants to remain in current position. – wax eagle Jun 21 '12 at 19:58
Why would you want to do this? – DForck42 Jun 22 '12 at 15:26
There are a number of things that trigger on teleport, but I may not want to give up my position. As a pixie, I'd quite like to be able to teleport "within the same square" – Brian Ballsun-Stanton Jun 22 '12 at 16:36
No
Though it is not explicitly called out, there is a preponderance of wording that suggests you must choose a destination space other than the current one.
Teleportation[ddi]
... A teleportation power transports creatures or objects instantaneously from one location to another.
From one location to another. Strike 1 - Seems to exclude the same location - but depends on grammatical set theory. :-)
... Instantaneous: Teleportation takes no time. The target disappears and immediately appears in the destination that the teleporting creature chooses. The movement is unhindered by intervening creatures, objects, or terrain.
Blocking isn't a problem.
... Destination Space: The destination of the teleportation must be an unoccupied space that is at least the same size as the target.
Strike 2 - The destination space is occupied - by you.
... Immobilized or Restrained: Being immobilized or restrained doesn’t prevent a target from teleporting. If a target teleports away from a physical restraint, a monster’s grasp, or some other immobilizing effect that is located in a specific space, the target is no longer immobilized or restrained. Otherwise, the target teleports but is still immobilized or restrained when it reaches the destination space.
Assuming Teleport 0 is being used to escape restraint, this is Strike 3: You aren't teleporting away from the restraint, so you are still restrained.
Despite all that, I imagine many DMs would house-rule otherwise, and I might be one of them.
-
You might simply teleport one or two squares up. – Mala Jun 21 '12 at 20:26
Or get those boots (Planestrider Boots?) that allow you to split a teleport into 2 teleports whose summed distance is <= the distance of the original teleport. Teleport a couple squares over and then back. – Oblivious Sage Jun 21 '12 at 20:35
@mala That's the ticket to break a grab, but you end up making a check to see if you fall prone. – F. Randall Farmer Jun 21 '12 at 22:09
Actually, here's a thought. As a pixie he's a small (or smaller) creature right? Since there is enough space for several of them in square, could he teleport somewhere else in his own square? That might qualify as a teleport 0. – IgneusJotunn Jun 24 '12 at 5:32
@IgneusJotunn Actually, a pixie could teleport 1 up, and fall 1 without any risk whatsoever, since it has a higher fly speed than that. – F. Randall Farmer Jun 24 '12 at 7:21
|
|
Vavilov-Cherenkov and Synchrotron Radiation: foundations and applications by G. N. Afanasiev
Publisher: Kluwer Academic Publishers in Boston
Written in English
## Subjects:
• Synchrotron radiation -- Scientific applications
• ## Edition Notes
Includes index.
Statement: by G.N. Afanasiev.
Series: Fundamental theories of physics; v. 142
LC Classifications: QC490.4 .A34 2004
Pagination: p. cm.
Open Library: OL3306744M
ISBN 10: 140202410X
LC Control Number: 2004051578
Sergei Vavilov was born in Moscow. His father, a prosperous textile merchant, gave a good education to his two sons and hoped they would inherit and continue his business. However, Sergei and his elder brother Nikolai both decided to become scientists. Nikolai chose biology, while Sergei entered the Department of Physics [ ].
We systematically investigated the plasmon-polariton oscillations generated by a fast radiating charge (Cherenkov radiation) in a three-dimensional (3D) strongly disordered nanostructure. We studied the dynamic properties of an optical field in a random composition of empty single-wall nanotubes by using a 3D numerical finite-difference time Author: Gennadiy Burlak, Cecilia Cuevas-Arteaga, Gustavo Medina-Ángel, Erika Martínez-Sánchez, Yessica Y. Ca.
Vavilov-Cherenkov and Synchrotron Radiation by G.N. Afanasiev (häftad). The importance of the Vavilov-Cherenkov radiation stems from the property that a charge moving uniformly in a medium emits quanta at the angle uniquely related to its energy.
for Vavilov-Cherenkov radiation in homogeneous medium. RTR consists of overlapping radiated harmonics, each of which has its threshold in the energy of the radiating particle and depends on the parameters of the medium. The theory of RTR was published in my article, presented for publication by L. Landau to Dokladi Acad. Nauk in [38] and published in Nuclear.
This book is intended neither as a manual for electrodynamics nor as a monograph. The fundamental effects – Vavilov–Cherenkov radiation, transition radiation in a system with inhomogeneous parameters, the character of the synchrotron and undulator radiation emission, used in FELs – had already been investigated.
KEYWORDS: Point spread functions, Diagnostics, Optical testing, Temporal resolution, Streak cameras, Picosecond phenomena, Synchrotron radiation. In this paper, we report on the development and manufacture of a dual-slit electron-optical dissector based on the PIF streak image tube with a picosecond time resolution operating.
Vavilov–Cherenkov and Synchrotron Radiation: Foundations and Applications. G. N. Afanasiev. Fundamental Theories of Physics. Kluwer Academic.
## Recent
Vavilov-Cherenkov and Synchrotron Radiation: Foundations and Applications (Fundamental Theories of Physics) - Kindle edition by Afanasiev, G.N. Download it once and read it on your Kindle device, PC, phones or tablets. Use features like bookmarks, note taking and highlighting while reading Vavilov-Cherenkov and Synchrotron Radiation: Foundations and Applications (Fundamental Theories of Physics). Manufacturer: Springer.
Here the readers could find a discussion of all basic problems of the Vavilov-Cherenkov effect and of synchrotron radiation. This book may be useful for advanced graduate students and for professional scientists, both experimentalists and theoreticians." (Oleg A. Sinkevich, Zentralblatt MATH, Vol. (18), ) Format: Hardcover.
Vavilov-Cherenkov and Synchrotron Radiation: Foundations and Applications. Authors (view affiliations): G. Afanasiev. Front Matter. Pages The Tamm Problem in the Vavilov-Cherenkov Radiation Theory.
Pages Non-Uniform Charge Motion in a Dispersion-Free Medium. Pages Cherenkov Radiation in a. Get this from a library. Vavilov-Cherenkov and synchrotron radiation: foundations and applications. [G N Afanasiev] -- The importance of the Vavilov-Cherenkov radiation stems from the property that a charge moving uniformly in a medium emits.
quanta at the angle uniquely related to its energy. This has numerous. From the reviews: "The book having nine chapters reviews fundamental physical and mathematical problems of the Vavilov-Cherenkov effect of media.
Here the readers could find a discussion of all basic problems of the Vavilov-Cherenkov effect and of synchrotron radiation. Get this from a library. Vavilov-Cherenkov and synchrotron radiation: foundations and applications. [G N Afanasiev] -- The theory of the Vavilov-Cherenkov radiation observed by Cherenkov in was created by Tamm, Frank and Ginsburg who associated the observed blue light with the uniform charge motion of a charge.
The mechanism of Vavilov-Cherenkov radiation Article (PDF Available) in Physics of Particles and Nuclei 41(3) May with 1, Reads How we measure 'reads'.
The main goal of this book is to present new developments in the theory of the Vavilov-Cherenkov effect for the 15 years following the appearance of Frank's monograph (Frank I.M., Vavilov-Cherenkov Radiation). () Questions Concerning Observation of the Vavilov-Cherenkov Radiation.
In: Vavilov-Cherenkov and Synchrotron Radiation. Fundamental Theories of Physics (An International Book Series on The Fundamental Theories of Physics: Their Clarification, Development and Application), vol 2 Vavilov-Cherenkov radiation vs Cherenkov radiation and the recognition of Cherenkov within the USSR In his Nobel prize address, Tamm [6] points out that "in the USSR the name 'Vavilov-Cherenkov radiation' is used instead of just 'Cherenkov radiation' in order to emphasise the role of S.
Vavilov in the discovery".Cited by: 5. Description: The theory of the Vavilov-Cherenkov radiation observed by Cherenkov in was created by Tamm, Frank and Ginsburg who associated the observed blue light with the uniform charge motion of a charge at a velocity greater than the velocity of light in the medium.
On the other hand, Vavilov, Cherenkov's teacher, attributed the. Jelly's book Cherenkov Radiation and its Applications published in contains a short theoretical review of the Vavilov-Cherenkov ra- ation and a rather extensive description of experimental technique.
Ten years later, the two-volume Zrelov monograph Vavilov-Cherenkov Rad- tionandItsApplicationinHigh-EnergyPhysics : G N Afanasiev. [Fundamental Theories of Physics] G.N. Afanasiev - Vavilov-Cherenkov and Synchrotron Radiation- Foundations and Applications ( Springer).pdf.
Using a new [9, 10] quantum theory of Vavilov–Cherenkov radiation (VCR) based on Abraham’s theory, we show that a threshold VCR effect can be excited by the relic photon gas when relativistic charged cosmic-ray particles with γ ≥ γ_th ≈ ×10^10 (where γ^–2 = 1 – v^2/c^2, v is the particle speed, and c is the speed of light in a vacuum) pass.
This book demonstrates the applications of synchrotron radiation in certain aspects of cell microbiology, specifically non-destructive elemental analyses, chemical-state analyses and imaging (distribution) of the elements within a cell.
It is shown that the Vavilov-Cherenkov radiation and the transition radiation cannot be emitted at frequencies higher than the inner shell frequency of the.
The Vavilov-Cherenkov radiation in a transparent medium is, of course, separated from the total losses, as it can go far from the source (charge, etc.) trajectory, It is of interest to consider how one might eliminate practically all of the losses other than Cited by: 6.
Vavilov-Cherenkov and Synchrotron Radiation: Foundations and Applications.
Series: Fundamental Theories of Physics, Vol. @article{osti_, title = {Recording the synchrotron radiation by a picosecond streak camera for bunch diagnostics in cyclic accelerators}, author = {Vereshchagin, A K and Vorob'ev, N S and Gornostaev, P B and Kryukov, S S and Lozovoi, V I and Smirnov, A V and Shashkov, E V and Schelev, M Ya and Dorokhov, V L and Meshkov, O I and Nikiforov, D A}, abstractNote =.
Then one may suggest, the ChCR angular distribution should consist of well pronounced several broad peaks (the first of them is located just near the corresponding to λ Cited by: 1.
Featured: "Cherenkov Radiation" inside The Oxford Guide to the History of Physics (p 55, coming soon) Chapter inside of: Radioactivity Hall of Fame (p ) eBook: Vavilov-Cherenkov and Synchrotron Radiation. Article: The ondulator as a source of electromagnetic radiationAuthor: Dawn Winans.
Vavilov-Cherenkov and Synchrotron Radiation: Foundations and Applications - Fundamental Theories of Physics (Paperback), G.N. Afanasiev, Paperback.
Full text of "Cerenkov Radiation And Its Applications". Vavilov-Cherenkov and synchrotron radiation: foundations and applications / by G.N. Afanasiev.
QC A34.
Vavilov-Cherenkov and Synchrotron Radiation: Foundations and Applications. The temporal profile measurement for ultra-short electron bunches is one of the key issues for accelerator-based coherent light sources. A bunch length measurement system using Cherenkov radiation (CR) is under development at the Research Center for Electron Photon Science, Tohoku University.
This system allows for the real-time diagnostics of electron by: 1.
|
|
On associated graded rings having almost maximal depth
S. Huckaba
We generalize a recent result of Rossi and Valla, and independently Wang, about the depth of $G(m)$ where $m$ is the maximal ideal of a $d$-dimensional Cohen-Macaulay local ring $R$ having embedding dimension $e_0(m)+d-2$. The generalization removes the restriction on the embedding dimension and replaces it with the condition that $\lambda(m^3/Jm^2)\leq 1$ where $J$ is a $d$-generated minimal reduction of $m$. The main theorem also applies to $m$-primary ideals $I$ satisfying $J\cap I^2=JI$ and $\lambda(I^3/JI^2)\leq 1$, where $J$ is a $d$-generated reduction of $I$. An example of such an $I$ in a $5$-dimensional regular local ring is included as a nontrivial illustration of the theorem.
|
|
# Is it valid to calculate the magnitude, power, and phase of a real time-domain signal without converting it to the frequency-domain?
I would assume you perform the calculations the same way, but since the signal is real there would be no imaginary component, so it would be simpler:
magnitude = |x(n)|
power = |x(n)|^2
phase = 0 (for all values of n) - since there's no imaginary portion the phase equation would always result in a 0 value.
I'm trying to fully grasp and internalize some of the basic concepts of signal processing, and while I haven't ever really seen anyone try to use the equation for phase on a time-domain signal I don't see why it wouldn't be valid even if the results don't have much meaning.
• If you're going to downvote my question please provide a comment with some feedback on why it is not a good question. I am still a novice at digital signal processing so sometimes it is hard to even know why a question may be silly to ask. I did search quite a bit on google to try and find the answer to this question with no luck before posting it. Aug 11 '19 at 18:17
• I didn't downvote your question, but I have a suggestion: don't try to learn by googling. Get a good textbook (there are many excellent free as pdf online) and read it. If you do that, you'll very quickly see that your questions don't make a lot of sense.
– MBaz
Aug 12 '19 at 1:19
• @MBaz, I have Understanding Digital Signal Processing by Richard G. Lyons. The book is pretty great but this specific kind of question isn't really directly answered. He just shows how to calculate phase in the frequency domain but it's never clear whether you can have some sort of meaningful phase calculation in the time-domain. Aug 12 '19 at 2:30
• @MBaz, I guess what I was trying to get at with this question is regardless of whether you are in the time-domain or in the frequency-domain, a signal is still a signal. So to me it just seems to make sense that if magnitude and power can be calculated on any signal regardless of domain, why wouldn't you also be able to calculate phase? Aug 12 '19 at 2:39
• Actually after looking at the answer below, it looks like the book may provide this information, but much further than I have gotten. It looks like the chapter on the Hilbert Transform may clarify quite a bit! Aug 12 '19 at 3:05
No, this doesn't make much sense. What you can do in the time domain is compute the analytic signal and derive the signal's instantaneous amplitude (envelope) and its instantaneous phase from it.
Take as an example
$$x[n]=A\sin(\omega_0n+\theta)\tag{1}$$
The corresponding analytic signal is
$$x_a[n]=-jAe^{j(\omega_0n+\theta)}\tag{2}$$
Its instantaneous amplitude (envelope) is $$\big|x_a[n]\big|=A$$, and its instantaneous phase is $$\arg\{x_a[n]\}=\omega_0n+\theta-\pi/2$$.
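For anyone who wants to try this numerically, here is a minimal sketch using NumPy and SciPy (`scipy.signal.hilbert` returns the analytic signal of a real sequence); the signal parameters below are just an example:

```python
import numpy as np
from scipy.signal import hilbert

# Real sinusoid x[n] = A*sin(w0*n + theta)
A, w0, theta = 2.0, 0.2 * np.pi, 0.3
n = np.arange(400)
x = A * np.sin(w0 * n + theta)

# Analytic signal x_a[n] = x[n] + j*H{x[n]}
xa = hilbert(x)

envelope = np.abs(xa)                 # instantaneous amplitude, ~A away from the edges
inst_phase = np.unwrap(np.angle(xa))  # instantaneous phase, ~w0*n + theta - pi/2
inst_freq = np.diff(inst_phase)       # instantaneous frequency, ~w0 (rad/sample)

print(envelope[100:105])   # values close to 2.0
print(inst_freq[100:105])  # values close to 0.2*pi ~ 0.628
```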
• Okay this is somewhat over my head, but it seems like what I was looking for. I will read up on analytic signals and see what I can find! Aug 12 '19 at 2:37
• is $x_a[n]$ the analytical signal or is it the Hilbert transform of $x[n]$? what, exactly is $x[n]$? is it $x[n]=A \, e^{j (\omega_0 n + \theta)}$ with $\omega_0 > 0$? if $x[n]$ is that, then the analytic signal is twice $x[n]$. not $-j$ times it. Aug 12 '19 at 5:06
• perhaps $x[n] = A \cos(\omega_0 n + \theta)$ and $x_a[n] = A \, e^{j(\omega_0 n + \theta)}$ . is that what it should be? Aug 12 '19 at 5:16
• @robertbristow-johnson: I understand your confusion. I took $x[n]$ from Stanley's (now deleted) answer, and for some reason I thought it was also given in the question, but it wasn't. I'll add more information to my answer. Aug 12 '19 at 6:58
• oh, ick. why would you do $\sin(\cdot)$ and get the $-j$ factor? and that extra $-\pi/2$ phase term? Aug 12 '19 at 8:11
If you have a finite-sized window of data whose center is referenced to some known time stamp, position, or event (e.g. the center of an image, or a known delay after some positive zero-crossing of some stimulus), then phase in the time domain makes sense, and is an easy calculation.
If you decompose your time-domain signal into an even part (symmetric around the window center) and an odd part (anti-symmetric around the window center), and measure the energy of each, then phase is simply atan(odd_energy, even_energy), or atan2(signed_odd_correlation, signed_even_correlation).
If you further decompose your signal into spectral bands, say by bandpass filtering with a Goertzel filter, then each band will be decomposable into an even and odd waveform, thus has a phase. Enough Goertzel filters, and you end up with a DFT. With only time domain filtering and odd/even decomposition.
Exercise for the student: look up how to decompose any (non-pathological) waveform into an even waveform and an odd waveform. Then you have your phase.
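A rough numerical sketch of that idea (the window length, analysis frequency and sign convention below are illustrative choices, not the only ones):

```python
import numpy as np

def even_odd_phase(x):
    """Phase of a real window x about its centre sample, via even/odd decomposition."""
    xr = x[::-1]                       # reflect the window about its centre
    even = 0.5 * (x + xr)              # symmetric (cosine-like) part
    odd = 0.5 * (x - xr)               # anti-symmetric (sine-like) part
    n = np.arange(len(x)) - (len(x) - 1) / 2
    w0 = 2 * np.pi / len(x)            # analysis frequency: one cycle per window
    even_corr = np.sum(even * np.cos(w0 * n))
    odd_corr = np.sum(odd * np.sin(w0 * n))
    # For x ~ cos(w0*n + phase): even_corr ~ cos(phase), odd_corr ~ -sin(phase)
    return np.arctan2(-odd_corr, even_corr)

# Example: a cosine with a known phase, sampled symmetrically about the window centre
N, true_phase = 101, 0.7
n = np.arange(N) - (N - 1) / 2
x = np.cos(2 * np.pi / N * n + true_phase)
print(even_odd_phase(x), true_phase)   # the two values agree closely
```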
|
|
# What is the oxidation number of the carbonyl carbon in methanal and methanoic acid?

Assign the usual oxidation numbers H = +1 and O = −2, and let the carbonyl carbon be x in methanal (HCHO) and y in methanoic acid (HCOOH).
In HCHO: 2(+1) + x + (−2) = 0, so x = 0.
In HCOOH: 2(+1) + y + 2(−2) = 0, so y = +2.
So the carbonyl carbon has oxidation number 0 in methanal and +2 in methanoic acid.
An equivalent bond-counting view: each bond from carbon to a more electronegative atom such as oxygen contributes +1 to its oxidation state (a double bond counts twice), each bond to hydrogen contributes −1, and bonds to other carbon atoms contribute 0. On this ladder methane is −4, methanol −2, methanal 0, methanoic acid +2 and carbon dioxide +4, which is why the stepwise oxidation methane → methanol → methanal → methanoic acid → carbon dioxide raises the carbon's oxidation state by two at each stage. The same rule gives the two carbons of ethanoic acid (CH3COOH) oxidation numbers of −3 (the methyl carbon) and +3 (the carboxyl carbon), even though the molecule as a whole is neutral.
A few related points. A primary alcohol is oxidised first to an aldehyde and then to a carboxylic acid, while a secondary alcohol is oxidised to a ketone; the Baeyer-Villiger oxidation cleaves a carbon-carbon bond adjacent to a carbonyl, converting ketones to esters and cyclic ketones to lactones. Methanoic acid (formic acid, from the Latin formica, "ant") is the simplest carboxylic acid, and ethanoic acid (acetic acid, from acetum, "vinegar") is the next member of the series. Carboxylic acids are weak acids, so their solutions contain far fewer hydrogen ions than a solution of a strong acid of the same concentration. Unusually for a carboxylic acid, methanoic acid still carries an aldehyde-like H–C=O group, so it can be oxidised further, giving carbonic acid which decomposes to carbon dioxide and water.
In a related worked example, an alcohol C that is oxidised to a carboxylic acid B must contain the same number of carbon atoms as B (four each here); since C is a primary alcohol that dehydrates to but-1-ene, C is butan-1-ol.
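The bond-counting rule above is easy to mechanise; here is a small illustrative sketch (the helper function is hypothetical, not part of the original material):

```python
def carbon_oxidation_state(bonds):
    """Oxidation state of one carbon atom from its bonds.

    bonds: list of (partner_element, bond_order) pairs. Each bond to a more
    electronegative atom (O, N, halogens) counts +1 per bond order, each bond
    to H counts -1, and bonds to other carbons count 0.
    """
    contribution = {"O": +1, "N": +1, "Cl": +1, "H": -1, "C": 0}
    return sum(contribution[element] * order for element, order in bonds)

# Carbonyl carbon of methanal, HCHO: two C-H bonds and one C=O bond -> 0
print(carbon_oxidation_state([("H", 1), ("H", 1), ("O", 2)]))
# Carbonyl carbon of methanoic acid, HCOOH: C-H, C=O and C-O(H) -> +2
print(carbon_oxidation_state([("H", 1), ("O", 2), ("O", 1)]))
# Carboxyl carbon of ethanoic acid, CH3COOH: C-C, C=O and C-O(H) -> +3
print(carbon_oxidation_state([("C", 1), ("O", 2), ("O", 1)]))
```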
|
|
Definitions
# Isoelastic utility
In economics, the isoelastic function for utility, sometimes isoelastic utility function, is used to express utility in terms of consumption.
It is
$$u(c) = \frac{c^{1-\eta}}{1-\eta}$$
where $c$ is consumption, $u(c)$ the associated utility, and $\eta$ a constant (if $\eta=1$ it is conventional to use the limiting value, viz $u(c)=\log c$). Stern states that the value of $\eta$ "is essentially a value judgement".
The isoelastic function is used in the Stern review, page 52.
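A small numerical sketch of the definition (the function name is illustrative; the $\eta = 1$ branch uses the limiting value mentioned above, and because utility is only defined up to affine transformations, the check below compares utility differences):

```python
import math

def isoelastic_utility(c, eta):
    """u(c) = c**(1 - eta) / (1 - eta), with u(c) = log(c) as the conventional eta = 1 case."""
    if c <= 0:
        raise ValueError("consumption must be positive")
    if eta == 1:
        return math.log(c)
    return c ** (1 - eta) / (1 - eta)

# Utility differences under the general formula approach log differences as eta -> 1:
print(isoelastic_utility(2.0, 1.0))                                     # log(2) ~ 0.6931
print(isoelastic_utility(2.0, 0.999) - isoelastic_utility(1.0, 0.999))  # ~ 0.6934
```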
|
|
Stability of heat kernel estimates and parabolic Harnack inequalities for jump processes on metric measure spaces
We consider mixed-type jump processes on metric measure spaces and prove the stability of two-sided heat kernel estimates, heat kernel upper bounds, and parabolic Harnack inequalities. We establish their stable equivalent characterizations in terms of the jump kernels, modifications of cut-off Sobolev inequalities, and the Poincaré inequalities. In particular, we prove the stability of heat kernel estimates for $$\alpha$$-stable-like processes even with $$\alpha\ge 2$$, which has been one of the major open problems in this area. We will also explain applications to stochastic processes on fractals.
This is a joint work with Z.Q. Chen (Seattle) and J. Wang (Fuzhou).
|
|
# A real-world case (Physics: dynamics)
In this tutorial we will be using data from a real-world case. The data comes from a piecewise continuous function representing the gravitational interaction between two swarms of particles. It is of interest to represent such an interaction with a single continuous expression, albeit introducing some error. If successful, this would give some analytical insight into the qualitative stability of the resulting orbits, and would allow the use of methods requiring high-order continuity to study the resulting dynamical system.
The equation is (derived from a work by Francesco Biscani): $$a(x) = \left\{ \begin{array}{ll} \frac{x\left(x^3 - 18x+32\right)}{32} & x < 2 \\ \frac{1}{x^2} & x \ge 2 \end{array} \right.$$ The two branches agree in value and slope at $x=2$, matching the data generated below.
It is important, on this problem, to respect the asymptotic behaviour of the acceleration so that $\lim_{x\rightarrow \infty}a(x) = \frac 1{x^2}$.
In [1]:
# Some necessary imports.
import dcgpy
import pygmo as pg
import numpy as np
# Sympy is nice to have for basic symbolic manipulation.
from sympy import init_printing
from sympy.parsing.sympy_parser import *
init_printing()
# Fundamental for plotting.
from matplotlib import pyplot as plt
%matplotlib inline
## 1 - The raw data
Since the asymptotic behaviour is important, we place the majority of points in the $x>2$ region. Note that the definition of the grid (i.e. how many points and where) is fundamental and has a great impact on the search performance.
In [2]:
X = np.linspace(0,15, 100)
Y = X * ((X**3) - 18 * X + 32) / 32
Y[X>2] = 1. / X[X>2]**2
X = np.reshape(X, (100,1))
Y = np.reshape(Y, (100,1))
In [3]:
# And we plot them as to visualize the problem.
_ = plt.plot(X, Y, '.')
_ = plt.title('Acceleration')
_ = plt.xlabel('x')
_ = plt.ylabel('a')
## 2 - The symbolic regression problem
In [4]:
# We define our kernel set, that is the mathematical operators we will
# want our final model to possibly contain. What to choose in here is left
# to the competence and knowledge of the user. For this particular application we
# want to mainly look into rational functions. Note we do not include the difference
# as that can be obtained via negative constants.
ss = dcgpy.kernel_set_double(["sum", "mul","pdiv"])
In [5]:
# We instantiate the symbolic regression optimization problem (note: many important options are here not
# specified and thus set to their default values).
# Note that we allow for three constants in the final expression
udp = dcgpy.symbolic_regression(points = X, labels = Y, kernels=ss(), n_eph=3, rows =1, cols=20, levels_back=21, multi_objective=True)
print(udp)
Data dimension (points): 1
Data dimension (labels): 1
Data size: 100
Kernels: [sum, mul, pdiv]
Loss: MSE
## 4 - The search algorithm
In [6]:
# We instantiate here the evolutionary strategy we want to use to search for models.
# In this case we use a multiple objective memetic algorithm.
uda = dcgpy.momes4cgp(gen = 3000, max_mut = 4)
In [7]:
prob = pg.problem(udp)
algo = pg.algorithm(uda)
# Note that the screen output will happen on the terminal, not on your Jupyter notebook.
# It can be recovered afterwards from the log.
algo.set_verbosity(10)
pop = pg.population(prob, 20)
In [8]:
pop = algo.evolve(pop)
In [13]:
# This extracts the population individual with the lowest loss
idx = np.argmin(pop.get_f(), axis=0)[0]
print("Best loss (MSE) found is: ", pop.get_f()[idx][0])
Best loss (MSE) found is: 0.0002555079686589074
## 6 - Inspecting the solution
In [9]:
pop.get_f()
# Let's have a look at the symbolic representation of our model (using sympy)
parse_expr(udp.prettier(pop.get_x()[idx]))
Out[9]:
$\displaystyle \left[ \frac{2 c_{2} x_{0}^{2}}{\left(c_{2} + x_{0}^{2}\right)^{2}}\right]$
In [10]:
# And lets see what our model actually predicts on the inputs
Y_pred = udp.predict(X, pop.get_x()[idx])
In [12]:
# Let's compare the model prediction to the data
_ = plt.plot(X, Y_pred, 'r.')
_ = plt.plot(X, Y, '.', alpha=0.2)
_ = plt.title('Model prediction vs data')
_ = plt.xlabel('x')
_ = plt.ylabel('a')
In [14]:
print("Values for the constants: ", pop.get_x()[idx][:3])
Values for the constants: [ 1.43930499 0.63600381 -0.59788675]
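As a quick sanity check (not part of the original run) we can evaluate the recovered model well outside the training grid and compare it with the required $\frac 1{x^2}$ tail; `X_far` below is just an illustrative extended grid:

In [ ]:
# Evaluate the recovered model far outside the training grid and compare it
# with the required 1/x^2 asymptotic behaviour.
X_far = np.reshape(np.linspace(20, 200, 50), (50, 1))
Y_far = np.reshape(udp.predict(X_far, pop.get_x()[idx]), (50, 1))
print(np.max(np.abs(Y_far - 1. / X_far**2)))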
|
|
# American Institute of Mathematical Sciences
January 2017, 16(1): 253-272. doi: 10.3934/cpaa.2017012
## Higher order asymptotic for Burgers equation and Adhesion model
1 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal, Mangalore-575 025, India 2 School of Mathematical Sciences, National Institute of Science Education and Research, Bhimpur-Padanpur, Via-Jatni, Khurda-752050, Odisha, India 3 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal, Mangalore-575 025, India
Manas R. Sahoo, E-mail address: manas@niser.ac.in
Received April 2016 Revised September 2016 Published November 2016
This paper is focused on the study of the large-time asymptotics for solutions to the viscous Burgers equation and also to the adhesion model via the heat equation. Using a generalization of the truncated moment problem to a complex measure space, we construct an asymptotic N-wave approximate solution to the heat equation subject to initial data whose moments exist up to order $2n+m$ and whose $i$-th order moment vanishes, for $i=0, 1, 2\dots m-1$. We provide a different proof for a theorem given by Duoandikoetxea and Zuazua [3], which plays a crucial role in the error estimates. In addition to this we describe a simple way to construct initial data in the Schwartz class whose $m$ moments are equal to the $m$ moments of the given initial data.
Citation: Engu Satynarayana, Manas R. Sahoo, Manasa M. Higher order asymptotic for Burgers equation and Adhesion model. Communications on Pure and Applied Analysis, 2017, 16 (1) : 253-272. doi: 10.3934/cpaa.2017012
##### References:
[1] I. L. Chern and T. P. Liu, Convergence to diffusion waves of solutions for viscous conservation laws, Commun. Math. Phys., 110 (1987), 503-517.
[2] J. Chung, E. Kim and Y. J. Kim, Asymptotic agreement of moments and higher order contraction in the Burgers equation, J. Differential Equations, 248 (2010), 2417-2434. doi: 10.1016/j.jde.2010.01.006.
[3] J. Duoandikoetxea and E. Zuazua, Moments, masses de Dirac et décomposition de fonctions, C. R. Acad. Sci. Paris Ser. I Math., 315 (1992), 693-698.
[4] S. N. Gurbatov and A. I. Saichev, New approximation in the adhesion model in the description of large scale structure of the universe, Cosmic Velocity Fields; Proc. 9th IAP Astrophysics, ed. F. Bouchet and M. Lachieze-Rey, (1993), 335-340.
[5] E. Hopf, The partial differential equation $u_t + uu_x = \nu u_{xx}$, Comm. Pure Appl. Math., 3 (1950), 201-230.
[6] W. Jager and Y. G. Lu, On solutions to nonlinear reaction-diffusion-convection equations with degenerate diffusion, J. Differential Equations, 170 (2001), 1-21. doi: 10.1006/jdeq.2000.3800.
[7] K. T. Joseph, One-dimensional adhesion model for large scale structures, Electron. J. Differential Equations, 2010 (2010), 1-15.
[8] K. T. Joseph, A system of two conservation laws with flux conditions and small viscosity, J. Appl. Anal., 15 (2009), 247-267. doi: 10.1515/JAA.2009.247.
[9] Y. J. Kim, A generalization of the moment problem to a complex measure space and an approximation technique using backward moments, Discrete Contin. Dyn. Syst., 30 (2011), 187-207. doi: 10.3934/dcds.2011.30.187.
[10] J. C. Miller and A. J. Bernoff, Rates of convergence to self-similar solutions of Burgers equation, Stud. Appl. Math., 111 (2003), 29-40. doi: 10.1111/1467-9590.t01-2-00226.
[11] Manas R. Sahoo, Generalized solution to a system of conservation laws which is not strictly hyperbolic, J. Math. Anal. Appl., 432 (2015), 214-232. doi: 10.1016/j.jmaa.2015.06.042.
[12] J. Philip, Estimates of the age of a heat distribution, Ark. Mat., 7 (1968), 351-358.
[13] P. L. Sachdev, Ch. Srinivasa Rao and K. T. Joseph, Analytic and numerical study of N-waves governed by the nonplanar Burgers equation, Stud. Appl. Math., 103 (1999), 89-120. doi: 10.1111/1467-9590.00122.
[14] P. L. Sachdev, K. T. Joseph and K. R. C. Nair, Exact N-wave solutions for the non-planar Burgers equation, Proc. Roy. Soc. London Ser. A, 445 (1994), 501-517. doi: 10.1098/rspa.1994.0074.
[15] P. L. Sachdev, K. T. Joseph and B. Mayil Vaganan, Exact N-wave solutions of generalized Burgers equations, Stud. Appl. Math., 97 (1996), 349-367. doi: 10.1002/sapm1996974349.
[16] M. Oberguggenberger, Case study of a nonlinear, nonconservative, non-strictly hyperbolic system, Nonlinear Anal., 19 (1992), 53-79. doi: 10.1016/0362-546X(92)90030-I.
[17] G. B. Whitham, Linear and Nonlinear Waves, John Wiley and Sons, New York, 1974.
[18] T. P. Witelski and A. J. Bernoff, Self-similar asymptotics for linear and nonlinear diffusion equations, Stud. Appl. Math., 100 (1998), 153-193. doi: 10.1111/1467-9590.00074.
[19] T. Yanagisawa, Asymptotic behavior of solutions to the viscous Burgers equation, Osaka J. Math., 44 (2007), 99-119.
|
|
Mathematical and Physical Journal
for High Schools
Issued by the MATFUND Foundation
# Problem A. 673. (May 2016)
A. 673. Coloured pearls are placed on an $\displaystyle n\times n$ board; a square may contain more than one pearl. Altogether $\displaystyle 2n-1$ colours are used, with $\displaystyle n$ pearls of each colour. The pearls are arranged in such a way that no row or column contains more than one pearl of the same colour. Prove that it is possible to select $\displaystyle n$ pearls with distinct colours such that no two of them are in the same row or column.
(5 points)
Deadline expired on June 10, 2016.
### Statistics:
3 students sent a solution. 5 points: Williams Kada. 3 points: 1 student. 2 points: 1 student.
Problems in Mathematics of KöMaL, May 2016
|
|
# Static Equilibrium Problem
1. Feb 21, 2005
### krypt0nite
How do I do this problem? How do I start?
Problem
I drew all the forces acting on the body and used net torque = 0. I still can't find the right answer; so far none of my answers are close. I think my approach is wrong. Can someone give me any hints to help me solve it?
2. Feb 21, 2005
### Gokul43201
Staff Emeritus
What you've done is essentially correct, but unless you show your calculations, there's no way we can tell you where you made the mistake.
3. Feb 21, 2005
### krypt0nite
Net Torque CCW = Net Torque CW
Mg(3m) = 470.4N(6m) I used the wheel as the fulcrum point.
Mg= 940.8N
4. Feb 21, 2005
### Gokul43201
Staff Emeritus
You do not know that the center of mass of the boat is at 3m.
You must assume the position of the CoM as some distance x (from the wheel) in the first case, and x - 0.15 in the second case. Write both equations, with the two unknowns (x and M), and solve for them from the equations.
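For what it's worth, the two torque balances described here can be solved symbolically; below is a small sympy sketch in which F1 and F2 stand for the two scale readings, L for the 6 m lever arm used in post #3, and d for the shift of the centre of mass between the two weighings (the actual numbers from the problem attachment are not reproduced here).
```python
# Solve the two torque-balance equations about the wheel for the unknowns x and M.
# F1, F2 are the two scale readings, L the lever arm, d the shift of the centre of
# mass between the weighings; everything is left symbolic in this sketch.
import sympy as sp

x, M, F1, F2, g, L, d = sp.symbols('x M F1 F2 g L d', positive=True)
eq1 = sp.Eq(M * g * x, F1 * L)          # first weighing
eq2 = sp.Eq(M * g * (x - d), F2 * L)    # second weighing, CoM shifted by d
sol = sp.solve([eq1, eq2], [x, M], dict=True)[0]
print(sol[M])   # L*(F1 - F2)/(d*g)
print(sol[x])   # F1*d/(F1 - F2)
```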
5. Feb 22, 2005
### krypt0nite
and the answer is A=440kg right?
thx
6. Feb 22, 2005
### F|234K
H.C., how did you get the question from the test?
|
|
Ammonia, NH3, is a colourless gas, lighter than air (density 0.769 kg/m³ at STP), consisting of nitrogen and hydrogen. It is made industrially by the Haber process, N2(g) + 3H2(g) → 2NH3(g): one molecule of nitrogen reacts with three molecules of hydrogen to give two molecules of ammonia. Because it can be decomposed easily to yield hydrogen, ammonia is a convenient portable source of atomic hydrogen for welding, and because it absorbs substantial amounts of heat from its surroundings (about 327 calories per gram), it is useful as a coolant in refrigeration and air-conditioning equipment.
Ammonium chloride, NH4Cl, also known as sal ammoniac, is the salt of ammonia and hydrogen chloride and a by-product of sodium carbonate manufacture; it is used, among other things, as a fertilizer and as an electrolyte. It is prepared commercially by combining ammonia with either hydrogen chloride gas or hydrochloric acid, NH3 + HCl → NH4Cl, a textbook example of an acid-base reaction (in water, HCl releases H+ ions and ammonia acts as the base). Ammonium chloride also occurs naturally in volcanic regions, forming on volcanic rocks near fume-releasing vents (fumaroles). When heated, the white solid dissociates reversibly, NH4Cl(s) ⇌ NH3(g) + HCl(g); once the two gases reach a cooler region they recombine to form ammonium chloride again, so the reaction is an example of thermal dissociation rather than thermal decomposition.
Worked example. Ammonia, NH3(g), and hydrogen chloride, HCl(g), react to form solid ammonium chloride: NH3(g) + HCl(g) → NH4Cl(s). Two 2.50 L flasks at 35.0 °C are connected by a stopcock; one flask contains 5.20 g of NH3(g) and the other 4.20 g of HCl(g). When the stopcock is opened, the gases react until one of them is completely consumed. What mass of NH4Cl forms, and what is the final pressure of the system?
The moles of each reactant are calculated from its mass and molar mass (17.04 g/mol for NH3, 36.46 g/mol for HCl): $n_{\mathrm{NH_3}} = \dfrac{5.20\ \mathrm{g}}{17.04\ \mathrm{g\,mol^{-1}}} = 0.305\ \mathrm{mol}$ and $n_{\mathrm{HCl}} = \dfrac{4.20\ \mathrm{g}}{36.46\ \mathrm{g\,mol^{-1}}} = 0.115\ \mathrm{mol}$. Hydrogen chloride is therefore the limiting reactant. The reaction is 1 : 1 and the molar mass of NH4Cl is 53.49 g/mol, so the mass produced is $m_{\mathrm{NH_4Cl}} = 0.115\ \mathrm{mol} \times 53.49\ \mathrm{g\,mol^{-1}} \approx 6.16\ \mathrm{g}$. The product is a solid, so the final pressure depends only on the ammonia that remains (0.305 − 0.115 = 0.190 mol) in the combined 5.00 L volume: $P = \dfrac{nRT}{V} = \dfrac{0.190 \times 0.08206 \times 308.15}{5.00} \approx 0.960\ \mathrm{atm}$.
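The arithmetic in the worked example can be checked with a short Python sketch (it assumes ideal-gas behaviour and uses the molar masses quoted above):
```python
# Quick check of the flask problem above, assuming ideal-gas behaviour.
R = 0.08206            # L·atm/(mol·K)
T = 35.0 + 273.15      # K
V = 2 * 2.50           # L, total volume of the two connected flasks

n_nh3 = 5.20 / 17.04   # mol NH3
n_hcl = 4.20 / 36.46   # mol HCl -> the limiting reactant

m_nh4cl = n_hcl * 53.49                  # g of NH4Cl formed (1:1 stoichiometry)
p_final = (n_nh3 - n_hcl) * R * T / V    # only the excess NH3 stays in the gas phase

print(round(m_nh4cl, 2), "g NH4Cl")      # -> 6.16 g
print(round(p_final, 3), "atm")          # -> ~0.96 atm
```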
The reaction of ammonia with chlorine gas is a redox reaction whose products depend on which reagent is in excess: nitrogen is oxidised (from −3 to 0) and chlorine is reduced (from 0 to −1), so when balancing, the oxidation-number changes of the oxidised and reduced atoms are equalised first and the remaining atoms balanced afterwards.
With excess ammonia, ammonia and chlorine react overall in an 8 : 3 mole ratio: 8NH3(g) + 3Cl2(g) → N2(g) + 6NH4Cl(s). The reaction proceeds in two steps. First, ammonia is oxidised by chlorine, 2NH3 + 3Cl2 → N2 + 6HCl, so two moles of ammonia and three moles of chlorine are spent. Then the hydrogen chloride produced reacts with the remaining basic ammonia, NH3 + HCl → NH4Cl, which appears as a white solid smog; HCl is only an intermediate and cancels out when the two steps are added. If, for example, 6 mol of NH3 and 3 mol of Cl2 are mixed (a 6 : 3 ratio rather than 8 : 3), the first step consumes 2 mol of NH3 and all 3 mol of Cl2 and releases 6 mol of HCl, and 4 mol of that HCl then reacts with the remaining 4 mol of NH3 to give 4 mol of NH4Cl.
With excess chlorine, the product is instead nitrogen trichloride, a yellow oily liquid with a pungent odour: NH3 + 3Cl2 → NCl3 + 3HCl. NCl3 is readily hydrolysed to NH3 and HOCl.
Safety: both chlorine and ammonia are toxic if inhaled and produce respiratory reflexes such as coughing and arrest of respiration, and the hydrogen chloride vapour produced is acidic (it releases H+ ions in water). Study the material safety data sheets (MSDS) before conducting the experiment.
|
|
# US Equity Risk Premiums during the COVID-19 Pandemic
I’ve recently completed a research study
available at arXiv.org. This work is a follow-up application of my recent work on
The Term Structure of the Equity Risk Premium
The equity risk premium (ERP) is the extra return (above Treasury rates) that investors expect, in order to hold stocks. The health and financial distress associated with the pandemic has been enormous. This has led to record-setting volatility and likely record-setting ERP’s also. At any point in time, the ERP term structure is a chart of the equity risk premium (as an annual percentage rate) at various time horizons. My time horizons range from one day ahead to about 3 years.
To give a bit of a preview, consider the early days of the pandemic, shown in the timeline below (click on snapshot to view):
At this point in time, the US equity market was not too concerned, volatility was more-or-less normal and the associated ERP term structure looked as follows:
That January 22 chart shows the ERP’s in a 3-6% range: typical of an unstressed market, and also characteristic of long-run (unconditional) ERP’s. The dotted lines are my point estimates and the gray areas reflect an estimated uncertainty interval. The main driver of the uncertainty is the degree of risk aversion in a so-called “representative investor”.
Notice that I am using a logarithmic scale for the time axis. The reason for this is that the times are actually the times to various S&P 500 Index option expirations. Those are very closely spaced in the first 30 days ahead. The log scale effectively spreads them apart for greater visibility.
By mid-March 2020, the pandemic had become global, cases and deaths were growing exponentially, and financial markets were close to panic mode. The ERP term structure dramatically inverted:
At the short-end of the curve, investors now required an annualized expected return of 500-600% in order to hold equities. This is likely a record, although to say for sure would require applying my same methods to option data during the 2008-2009 Financial crisis. This has not yet been done.
# The Term Structure of the Equity Risk Premium
What is the equity risk premium, abbreviated ERP? It’s the market’s best point estimate, today, of what “stocks in the aggregate” will return in the future — after subtracting a risk-free interest rate. By “stocks in the aggregate”, I am taking a US perspective, so thinking about an equity investment matching the S&P500 (after dividends are accounted for).
If the future is very distant, say the next 20 years, a widely held belief is that the ERP will average around 4-6% per year. The ERP has often been called “the most important number in finance”.
Speaking of interest rates, it’s well-known that rates vary in time and, at each time, have a term structure. For example, if $r$ was a US Treasury rate, we would write $r_{t,T}$ to denote the annual yield at time $t$ for a Treasury bond maturing at time $T$.
Just like interest rates, the ERP is also time-varying and has a term structure — so we also write $\mbox{ERP}_{t,T}$. For example, we could ask, what is the ERP today for holding stocks over the next 6 months? Regardless of the horizon, which might be very close, the convention is to quote the ERP as an annualized percentage rate. This is the same convention as for interest rates: even if you are borrowing money for just a couple weeks, your borrowing cost will be quoted as an annualized percentage rate.
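To make the quoting convention concrete, here is a tiny numeric illustration in Python. It assumes continuous compounding for the annualization, which is one common convention but not necessarily the one used in the paper, and the 3% figure is made up.
```python
# Convert an expected excess return over a short horizon into an annualized rate.
# Assumes continuous compounding; the inputs below are illustrative only.
import math

horizon_years = 0.5        # e.g. a 6-month horizon
excess_return = 0.03       # assumed expected excess return over that horizon
erp_annualized = math.log(1.0 + excess_return) / horizon_years
print(f"{erp_annualized:.2%} per year")   # -> 5.91% per year
```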
The fact that the ERP has a term structure is not widely appreciated. In contrast, the term structure of interest rates is well-known and readily visible. For example, just look in the Wall Street Journal for the current term structure of US Treasury rates — or find it in many places on the web. The ERP term structure is not directly visible and needs to be estimated. Exactly how? That’s the question I answer in a new research paper, recently posted at the arXiv, titled:
|
|
Outlook: PRO-PAC PACKAGING LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating.
Time series to forecast n: 21 Jan 2023 for (n+3 month)
Methodology : Reinforcement Machine Learning (ML)
## Abstract
PRO-PAC PACKAGING LIMITED prediction model is evaluated with Reinforcement Machine Learning (ML) and Multiple Regression1,2,3,4 and it is concluded that the PPG stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Buy
## Key Points
1. Probability Distribution
2. Decision Making
3. Can stock prices be predicted?
## PPG Target Price Prediction Modeling Methodology
We consider the PRO-PAC PACKAGING LIMITED decision process with Reinforcement Machine Learning (ML), modelled as a Markov decision process in which A is the set of discrete actions available to PPG stock holders, S is the set of discrete states, P : S × A × S → R is the transition probability distribution, R : S × A → R is the reward function, and γ ∈ [0, 1] is a discount ("move") factor for the expectation.1,2,3,4
F(Multiple Regression)5,6,7 = $\begin{array}{cccc}{p}_{a1}& {p}_{a2}& \dots & {p}_{an}\\ & ⋮ & & \\ {p}_{j1}& {p}_{j2}& \dots & {p}_{jn}\\ & ⋮ & & \\ {p}_{k1}& {p}_{k2}& \dots & {p}_{kn}\\ & ⋮ & & \\ {p}_{n1}& {p}_{n2}& \dots & {p}_{nn}\end{array}$ × R(Reinforcement Machine Learning (ML)) × S(n) → (n+3 month), where $\stackrel{\to }{S}=\left({s}_{1},{s}_{2},{s}_{3}\right)$.
n:Time series to forecast
p:Price signals of PPG stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
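As a purely generic illustration of the kind of decision process sketched above (and not the provider's actual model), here is a minimal tabular Q-learning update for a toy MDP; the states, actions, rewards and transitions below are made-up placeholders.
```python
# Generic tabular Q-learning sketch for a toy buy/hold/sell MDP (illustrative only).
import random
from collections import defaultdict

actions = ["buy", "hold", "sell"]
alpha, gamma = 0.1, 0.95          # learning rate and discount ("move") factor
Q = defaultdict(float)            # Q[(state, action)] -> estimated return

def update(state, action, reward, next_state):
    """One Q-learning step: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Toy usage with made-up transitions and rewards.
random.seed(0)
state = "flat"
for _ in range(1000):
    action = random.choice(actions)              # purely random exploration for the sketch
    reward = random.gauss(0.0, 1.0) if action != "hold" else 0.0
    next_state = random.choice(["up", "down", "flat"])
    update(state, action, reward, next_state)
    state = next_state
```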
## PPG Stock Forecast (Buy or Sell) for (n+3 month)
Sample Set: Neural Network
Stock/Index: PPG PRO-PAC PACKAGING LIMITED
Time series to forecast n: 21 Jan 2023 for (n+3 month)
According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Buy
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for PRO-PAC PACKAGING LIMITED
1. In some circumstances, the renegotiation or modification of the contractual cash flows of a financial asset can lead to the derecognition of the existing financial asset in accordance with this Standard. When the modification of a financial asset results in the derecognition of the existing financial asset and the subsequent recognition of the modified financial asset, the modified asset is considered a 'new' financial asset for the purposes of this Standard.
2. At the date of initial application, an entity shall determine whether the treatment in paragraph 5.7.7 would create or enlarge an accounting mismatch in profit or loss on the basis of the facts and circumstances that exist at the date of initial application. This Standard shall be applied retrospectively on the basis of that determination.
3. Paragraph 5.7.5 permits an entity to make an irrevocable election to present in other comprehensive income changes in the fair value of an investment in an equity instrument that is not held for trading. This election is made on an instrument-by-instrument (ie share-by-share) basis. Amounts presented in other comprehensive income shall not be subsequently transferred to profit or loss. However, the entity may transfer the cumulative gain or loss within equity. Dividends on such investments are recognised in profit or loss in accordance with paragraph 5.7.6 unless the dividend clearly represents a recovery of part of the cost of the investment.
4. However, an entity is not required to separately recognise interest revenue or impairment gains or losses for a financial asset measured at fair value through profit or loss. Consequently, when an entity reclassifies a financial asset out of the fair value through profit or loss measurement category, the effective interest rate is determined on the basis of the fair value of the asset at the reclassification date. In addition, for the purposes of applying Section 5.5 to the financial asset from the reclassification date, the date of the reclassification is treated as the date of initial recognition.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
PRO-PAC PACKAGING LIMITED is assigned short-term Ba1 & long-term Ba1 estimated rating. PRO-PAC PACKAGING LIMITED prediction model is evaluated with Reinforcement Machine Learning (ML) and Multiple Regression1,2,3,4 and it is concluded that the PPG stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Buy
### PPG PRO-PAC PACKAGING LIMITED Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | C | B2 |
| Balance Sheet | Baa2 | Baa2 |
| Leverage Ratios | Caa2 | Baa2 |
| Cash Flow | Baa2 | B2 |
| Rates of Return and Profitability | Baa2 | Baa2 |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 74 out of 100 with 561 signals.
## References
1. Mnih A, Hinton GE. 2007. Three new graphical models for statistical language modelling. In International Conference on Machine Learning, pp. 641–48. La Jolla, CA: Int. Mach. Learn. Soc.
2. Mnih A, Teh YW. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pp. 419–26. La Jolla, CA: Int. Mach. Learn. Soc.
3. Hastie T, Tibshirani R, Friedman J. 2009. The Elements of Statistical Learning. Berlin: Springer
4. C. Wu and Y. Lin. Minimizing risk models in Markov decision processes with policies depending on target values. Journal of Mathematical Analysis and Applications, 231(1):47–67, 1999
5. Hastie T, Tibshirani R, Wainwright M. 2015. Statistical Learning with Sparsity: The Lasso and Generalizations. New York: CRC Press
6. Athey S. 2017. Beyond prediction: using big data for policy problems. Science 355:483–85
7. J. Ott. A Markov decision model for a surveillance application and risk-sensitive Markov decision processes. PhD thesis, Karlsruhe Institute of Technology, 2010.
Frequently Asked Questions
Q: What is the prediction methodology for PPG stock?
A: PPG stock prediction methodology: We evaluate the prediction models Reinforcement Machine Learning (ML) and Multiple Regression
Q: Is PPG stock a buy or sell?
A: The dominant strategy among neural network is to Buy PPG Stock.
Q: Is PRO-PAC PACKAGING LIMITED stock a good investment?
A: The consensus rating for PRO-PAC PACKAGING LIMITED is Buy and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of PPG stock?
A: The consensus rating for PPG is Buy.
Q: What is the prediction period for PPG stock?
A: The prediction period for PPG is (n+3 month)
|
|
Harmonic Functions on Trees and Buildings
Edited by: Adam Korányi City University of New York, Herbert H. Lehman College, Bronx, NY
Available Formats:
Softcover ISBN: 978-0-8218-0605-0
Product Code: CONM/206
181 pp
List Price: $49.00; MAA Member Price: $44.10; AMS Member Price: $39.20
Electronic ISBN: 978-0-8218-7797-5
Product Code: CONM/206.E
181 pp
List Price: $46.00; MAA Member Price: $41.40; AMS Member Price: $36.80
Bundle Print and Electronic Formats and Save!
This product is available for purchase as a bundle. Purchasing as a bundle enables you to save on the electronic version.
List Price: $73.50; MAA Member Price: $66.15; AMS Member Price: $58.80
• Book Details
Contemporary Mathematics
Volume: 206, 1997
MSC: Primary 31; Secondary 43; 60;
This volume presents the proceedings of the workshop “Harmonic Functions on Graphs” held at the Graduate Center of CUNY in the fall of 1995. The main papers present material from four minicourses given by leading experts: D. Cartwright, A. Figà-Talamanca, S. Sawyer and T. Steger. These minicourses are introductions which gradually progress to deeper and less known branches of the subject. One of the topics treated is buildings, which are discrete analogues of symmetric spaces of arbitrary rank; buildings of rank one are trees. Harmonic analysis on buildings is a fairly new and important field of research. One of the minicourses discusses buildings from the combinatorial perspective and another examines them from the $p$-adic perspective. The third minicourse deals with the connections of trees with $p$-adic analysis. And the fourth deals with random walks, i.e., with the probabilistic side of harmonic functions on trees.
The book also contains the extended abstracts of 19 of the 20 lectures given by the participants on their recent results. These abstracts, well detailed and clearly understandable, give a good cross-section of the present state of research in the field.
Graduate students and research mathematicians interested in potential theory.
• Part I. Minicourses [ MR 1463725 ]
• Alessandro Figà-Talamanca - Local fields and trees [ MR 1463726 ]
• Stanley A. Sawyer - Martin boundaries and random walks [ MR 1463727 ]
• Donald I. Cartwright - A brief introduction to buildings [ MR 1463728 ]
• Tim Steger - Local fields and buildings [ MR 1463729 ]
• Part II. Abstracts of Lectures [ MR 1463725 ]
• Enrico Casadio Tarabusi - The horocyclic Radon transform on trees [ MR 1463732 ]
• Joel M. Cohen and Flavia Colonna - Eigenfunctions of the Laplacian on a homogeneous tree [ MR 1463733 ]
• Fausto Di Biase - Exotic convergence in theorems of Fatou type [ MR 1463734 ]
• Yves Guivarc’h - A spectral gap property for transfer operators [ MR 1463735 ]
• Vadim A. Kaimanovich - Harmonic functions on discrete subgroups of semi-simple Lie groups [ MR 1463736 ]
• Russell Lyons - Biased random walks and harmonic functions on the lamplighter group [ MR 1463737 ]
• A. M. Mantero and A. Zappa - Characterization of the eigenfunctions of the Laplace operators for an affine building of rank $2$ [ MR 1463738 ]
• Wojciech Młotkowski - Free product of representations [ MR 1463739 ]
• Tatiana Nagnibeda - The Jacobian of a finite graph [ MR 1463740 ]
• S. Northshield - Flows and harmonic functions on graphs [ MR 1463741 ]
• Mauro Pagliacci - Applications of diffusion processes on trees to mathematical finance [ MR 1463742 ]
• Massimo A. Picardello - Characterizing harmonic functions by mean value properties on trees on symmetric spaces [ MR 1463743 ]
• Jacqui Ramagge and Guyan Robertson - Factors from buildings [ MR 1463744 ]
• Marco Rigoli, Maura Salvatori and Marco Vignati - Harnack and Liouville properties on graphs [ MR 1463745 ]
• Guyan Robertson - The spectrum of a directed Cayley graph of a free group [ MR 1463746 ]
• Mitchell H. Taibleson - Factorization of the Green’s kernel for non-nearest neighbor random walks [ MR 1463747 ]
• Wolfgang Woess - Harmonic functions for group-invariant random walks [ MR 1463748 ]
|
|
### The average age of 30 students of a class is 14 years 4 months. After admission of 5 new students in the class the average becomes 13 years 9 months. The youngest one of the five new students is 9 years 11 months old. The average age of the remaining 4 new students is :
A. 10 years 4 months
B. 12 years 4 months
C. 11 years 2 months
D. 13 years 6 months
According to the question, the total age of the 30 students is 30 × (14 years 4 months) = 30 × 43/3 years = 430 years.
The total age of the (30 + 5) students is 35 × (13 years 9 months) = 35 × 55/4 years = 1925/4 years.
So the combined age of the 5 new students is 1925/4 − 430 = 205/4 years = 51 years 3 months.
One of the five new students is 9 years 11 months old, so the remaining 4 new students together are 51 years 3 months − 9 years 11 months = 41 years 4 months old.
∴ Average age of the remaining 4 new students = (41 years 4 months) / 4 = 10 years 4 months, which is option A.
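The arithmetic can be double-checked quickly in Python by working entirely in months:
```python
# Verify the computation above, working in months throughout.
total_30 = 30 * (14 * 12 + 4)          # 30 students averaging 14 years 4 months
total_35 = 35 * (13 * 12 + 9)          # 35 students averaging 13 years 9 months
new_five = total_35 - total_30         # combined age of the 5 new students (615 months)
rest_four = new_five - (9 * 12 + 11)   # remove the youngest, 9 years 11 months
years, months = divmod(rest_four // 4, 12)
print(years, "years", months, "months")   # -> 10 years 4 months
```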
|
|
Cellophane is a thin, transparent sheet made of regenerated cellulose. Cellulose fiber from wood, cotton or hemp is dissolved in alkali and treated with carbon disulfide to make a solution called viscose. This solution is then extruded through a slit into a bath of dilute sulfuric acid and sodium sulfate to reconvert the viscose into cellulose. The film is then passed through several more baths, one to remove sulfur, one to bleach the film, and one to add glycerin to prevent the film from becoming brittle.
Cellophane was invented by the Swiss chemist Jacques E. Brandenberger. It took Brandenberger ten years to perfect his transparent film before cellophane was patented in 1912. Cellulose film has been manufactured continuously since the mid-1930s and is still used today. As well as packaging a variety of food items, it has industrial applications, such as a base for self-adhesive tapes (including Sellotape and Scotch Tape) and as a semi-permeable membrane in a certain type of battery. Cellophane sales have dwindled since the 1960s, due to alternative packaging options and to the polluting effects of carbon disulfide and other by-products of the process used to make viscose.
### Assignment
A rectangular grid is drawn on a flat surface — for example a window, or a lighted plate used by doctors to view x-ray images. The columns of the grid are numbered left to right starting from zero, and the rows from top to bottom. Some rectangular cellophane sheets are attached to the surface, with each sheet exactly covering some of the squares in the grid. These transparent sheets are either colored red or blue. If a square is covered by one or more red sheets (and no blue ones) it is colored red. If a square is covered by one or more blue sheets (and no red ones) it is colored blue. If a square is covered by at least one red and at least one blue sheet, it is colored purple.
Determine how many purple squares you get if the grid on a surface is covered by a given series of red and blue sheets. This is done in the following way.
• Write a function purple that takes a list of the positions of a series of cellophane sheets. Each sheet is represented by a tuple containing five elements. The first two elements are integers that indicate the column and row number of the square in the grid that is covered by the top left corner of the sheet. The next two elements are integers that indicate the width and height of the sheet, expressed as a number of squares. The final element is a string that indicates the color of the sheet: R (red) or B (blue). The function must return the number of squares in the grid that are finally colored purple, after all sheets have been attached on their given positions.
• Use the function purple to write a function cellophane, that takes the location of a text file as an argument. Each line of this text file contains the description of the position where a cellophane sheet should be attached on a rectangular grid. The first character is a letter that indicates the color (R for red or B for blue) of the sheet. This is followed by four integers that are separated from each other by a single space (note that there is no space between the first letter and the first integer). The integers represent the column and row numbers of the square in the grid that is covered by the top left corner of the sheet, and the width and height of the sheet expressed as a number of squares. The function must return the number of squares in the grid that are finally colored purple, after all sheets have been attached on their given positions.
### Example
In the following interactive session, we assume that the file cellophane.txt is located in the current directory. The arrangement of the cellophane sheets on the grid corresponds in both cases to the arrangement in the figure above.
>>> purple([(0, 0, 5, 5, 'R'), (10, 0, 5, 5, 'R'), (3, 2, 9, 2, 'B')])
8
>>> cellophane('cellophane.txt')
8
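Below is a minimal sketch of one possible solution in Python (not the official model solution): cells are tracked as (column, row) pairs, matching the coordinate convention in the assignment, and a cell is purple exactly when it is covered by at least one red and at least one blue sheet.
```python
def purple(sheets):
    """Return the number of grid squares covered by both a red and a blue sheet."""
    red, blue = set(), set()
    for col, row, width, height, colour in sheets:
        cells = {(c, r) for c in range(col, col + width)
                        for r in range(row, row + height)}
        (red if colour == 'R' else blue).update(cells)
    return len(red & blue)

def cellophane(path):
    """Read sheet descriptions from a text file and return the number of purple squares."""
    sheets = []
    with open(path) as lines:
        for line in lines:
            line = line.strip()
            if not line:
                continue
            colour = line[0]                               # 'R' or 'B', glued to the first number
            col, row, width, height = map(int, line[1:].split())
            sheets.append((col, row, width, height, colour))
    return purple(sheets)
```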
|
|
# Contrasting response of rainfall extremes to increase in surface air and dewpoint temperatures at urban locations in India
## Abstract
Rainfall extremes are projected to increase under the warming climate. The Clausius-Clapeyron (C-C) relationship provides a physical basis for understanding the sensitivity of rainfall extremes to warming; however, relationships between rainfall extremes and air temperature over tropical regions remain uncertain. Here, using station based observations and remotely sensed rainfall, we show that at a majority of urban locations in India, rainfall extremes show a negative scaling relationship with surface air temperature (SAT). This negative relationship can be attributed to the cooling of SAT by monsoon season rain events, suggesting that SAT alone is not a good predictor of rainfall extremes in India. In contrast, a strong (higher than the C-C rate) positive relationship between rainfall extremes and dewpoint temperature (DPT) and tropospheric temperature (T850) is shown for most of the stations, which was previously unexplored. Subsequently, DPT and T850 were used as covariates for non-stationary daily design storms. Higher magnitude design storms were obtained under the assumption of a non-stationary climate. The contrasting relationship of rainfall extremes with SAT and DPT has implications for understanding the changes in rainfall extremes in India under the projected climate.
## Introduction
Extreme rainfall events may lead to flooding, which disrupts urban transportation and often causes damage to infrastructure. The intensity and frequency of extreme rainfall events are projected to increase under climate warming1,2,3,4,5,6,7,8, which is supported by observations as well as climate model simulations. The Clausius-Clapeyron (C-C) relationship can be used as a physical basis to evaluate the sensitivity of rainfall extremes against changes in air temperature9,10,11,12,13. According to the C-C relationship, the water holding capacity of the atmosphere increases by approximately 6–7% per K increase in air temperature. Furthermore, atmospheric humidity increases at the same rate provided relative humidity remains constant14,15,16.
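For reference, the 6–7% per K figure follows directly from the slope of the saturation vapour pressure curve; using the Magnus approximation (the coefficients quoted here are one common choice, shown for illustration only),

$$e_{s}(T)\approx 6.112\,\exp\left(\frac{17.67\,T}{T+243.5}\right)\ \mathrm{hPa}, \qquad \frac{1}{e_{s}}\frac{de_{s}}{dT}=\frac{17.67\times 243.5}{(T+243.5)^{2}}\approx 0.058\text{–}0.067\ \mathrm{K}^{-1}$$

for T between about 10 and 30 °C, i.e. the C-C rate of roughly 6–7% per K.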
Rainfall extremes may show higher scaling than suggested by the C-C relationship, which may be due to convective nature of rainfall or excess latent heat released during intense rainfall17, 18. The precipitation-temperature relationship may vary with intensity and temporal resolution of rainfall extremes15, 19, 20. For instance, higher scaling rates for sub-daily rainfall extremes than daily extremes are reported in previous studies10, 21, 22. The precipitation-temperature relationship can also be affected by the other factors such as duration and type of a storm event23,24,25, temperature26, season and the geographical location where the storm occurs27, 28. For instance, Wasko et al.24 stated that the scaling decreases with an increase in the frequency and duration of a storm event. Furthermore, convective events are more sensitive to temperature29 and show higher scaling than stratiform rainfall30, which is supported by the findings of Moseley et al.31. Moseley et al.31 showed that an increase in temperature intensifies cloud-cloud interaction which leads to stronger precipitation.
In India, major rainfall occurs through convective storms, which have a high temperature dependency32. Therefore, scaling of rainfall extremes with surface air temperature (SAT) may not be a good indicator of climatic change33. Moreover, the high SAT during the pre-monsoon (March–May) season most often cools down due to rain events, which results in a negative relationship between SAT and rainfall during the monsoon season. For example, Vittal et al.34 used the relationship between extreme rainfall events and 2 m SAT over India and found negative scaling rates for most of the regions, which are primarily due to the dominant negative relationship between SAT and monsoon season rainfall.
Since diurnal variations in SAT in response to rainfall may provide improper scaling rates, a relationship of rainfall extremes with T850 (the air temperature at 850 hPa in the lower troposphere), which is at a height sufficient to avoid these variations, may be more robust10. Furthermore, the relationship between rainfall and humidity may be a good predictor for analysing rainfall extremes under a warmer climate15. Trenberth et al.16 hypothesized that rainfall intensity increases at about the same rate as atmospheric moisture, and moisture availability becomes the dominant driver of extreme precipitation at higher temperatures (299 K)35. Relative humidity and dewpoint temperature (DPT) are related, as DPT corresponds to the air temperature at which the air is completely saturated with water (i.e. relative humidity is 100%). Therefore, Lenderink and Van Meijgaard33 considered DPT as a direct measure of atmospheric humidity and showed that in tropical regions rainfall extremes display a better relationship with DPT than with SAT.
Urban areas in India face frequent flooding caused by extreme rainfall events. The large built-up and impervious fraction in urban areas leads to increased sensible heat, which, in turn, can raise temperatures by 2–10 °C relative to the surrounding non-urban areas35. Urban stormwater infrastructure designs are based on intensity duration frequency (IDF) curves, which are usually developed in India using an annual maximum rainfall series under the assumption of stationary conditions36, 37. However, in the present scenario, annual maximum rainfall cannot be assumed to have a time-invariant probability density function36, 38, 39. For instance, Cheng and AghaKouchak40 reported significant differences between stationary and nonstationary intensity duration frequency (IDF) curves estimated for short durations at a few stations in the USA. Similarly, Verdon-Kidd and Kiem41 showed the potential role of non-stationary conditions on IDF curves in Australia.
Despite the need for an improved understanding of rainfall extremes in urban areas, efforts to evaluate the scaling relationship between rainfall extremes and SAT, T850, and DPT in India have been limited. This may be because of a lack of station based observations of rainfall, SAT, and DPT for urban areas. Using station data from the Global Summary of the Day (GSOD) and other gridded datasets, we provide an assessment of the sensitivity of precipitation extremes in urban areas over India against SAT, DPT, and T850. Moreover, the relationship of daily and sub-daily rainfall extremes with T850 and DPT may help to improve our understanding of precipitation extremes, which has strong implications for urban stormwater designs, especially under non-stationary climate conditions. Here, we aim to address the following questions: (1) how sensitive are daily and sub-daily rainfall extremes to SAT, T850, and DPT in major urban areas in India? and (2) to what extent do nonstationary atmospheric conditions, with DPT and T850 as covariates, influence urban stormwater design estimates in India?
## Results and Discussion
Most of the GSOD observation stations are located in urban areas (or at nearby airports) and their distance from the city center varies between 1 and 13 km (Supplemental Table S1). Therefore, the selected stations can provide information of rainfall extremes in urban locations especially in the absence of observation stations within urban areas. We acknowledge that these stations may not truly represent urban micro-climate or the factors that affect urban meteorology, for which, a larger number of stations within urban areas will be needed, and are currently unavailable in India. Notwithstanding this limitation, the station based GSOD data can provide valuable information about the relationship between rainfall extremes and temperature at urban locations.
### Scaling of rainfall extremes with surface air temperature (SAT)
Rainfall data were obtained from the observed station based daily GSOD (period: 1979–2015), the Tropical Rainfall Measurement Mission Multi-satellite Precipitation Analysis (TMPA) 3B42v7 rainfall product (TRMM; daily and 3-hourly; period: 1998–2015), and the Climate Hazards Group Infra-Red Precipitation with Station data (CHIRPS; daily; period: 1981–2015). Station based observations of SAT were obtained from GSOD data for 1979–2015. Since the rainfall datasets are available at different spatial resolutions, we applied areal reduction factors10 to bring all the datasets to point scale (consistent with GSOD data). Using quantile regression28, 41,42,43,44 (QR), we estimated regression slopes (dR95/K, %) as the change (%) in the 95th percentile of rainfall (magnitude greater than or equal to 1 mm) with respect to change in daily mean SAT. To check whether the regression slopes obtained from the QR method are robust, we also estimated regression slopes using a binning technique (BT)10, 15, 21, 33, 45. We distributed the rainfall data and their corresponding daily mean SAT into 20 bins of the same size, sorted from the lowest to the highest daily mean SAT. The 95th percentile of rainfall (R95) and the median of daily SAT were then estimated for each bin, and a linear regression of the logarithm of R95 on SAT was performed. Regression slopes were then estimated using the regression relationship between the lowest daily mean SAT (mean SAT of the first bin) and the daily mean SAT at the peak point temperature (SATR95).
We find that regression slopes between extreme rainfall and daily mean SAT are negative at most of the locations for all the rainfall datasets (Fig. 1b–e). For instance, regression slopes between daily rainfall and mean SAT from GSOD data are negative for 21 locations out of the total 23. Moreover, we find a relatively stronger negative relationship between daily and sub-daily rainfall extremes and mean SAT for the locations in southern India (Fig. 1). The negative relationship between rainfall extremes and daily mean SAT46 can be attributed to rainfall-induced cooling of surface air temperature in India47 (Fig. S7). The pre-monsoon season (March to May) is the warmest in India, and surface air temperature declines after rain events, which is clearly reflected by the strong negative relationship shown by our results (Fig. S7). Moreover, we estimated scaling relationships between extreme rainfall and daily maximum SAT45, and daily mean SAT 1 and 3 days prior to rainfall, to further understand the role of SAT in rainfall extremes (Fig. S8). The relationship between rainfall extremes and daily maximum SAT was largely negative at most of the locations45 (Fig. S8b). However, for daily mean SAT 1 and 3 days prior to the rain event, the relationship was positive at a few stations, indicating the role of surface temperature prior to a rain event in the scaling relationship.
Differences in regression slopes based on rainfall duration and climatic zones show relatively less negative regression slopes for 3-hourly rainfall extremes from TRMM (Fig. 1f). Moreover, locations in the tropical wet (TW) and tropical wet and dry (TWD) zones show more negative regression slopes than those of the other climatic zones (Fig. 1f). We observed a decline in R95 with an increase in SAT, which shows a negative relationship between rainfall and SAT (Fig. S8). Regression slopes obtained from QR are relatively more negative than those obtained from BT, since regression slopes are estimated only up to the peak point temperature in the latter method (Fig. S6). We also notice that the peak point temperature may vary (by up to 2.5 K) within a climatic zone, which might be related to the geographical and climatic settings of the urban areas or the location of the GSOD stations (Fig. S6j). The robustness of the scaling results from both methods (QR and BT) can be seen in Fig. S9.
The negative relationship between rainfall extremes and daily mean SAT for most of the locations in India provide some important insights. For instance, Vittal et al.34 reported that the C-C relationship is valid for the mid-latitude region; however, the response of rainfall extremes towards an increased warming over the tropical region is debatable. The temperature relationship of extreme precipitation intensity on a global scale remains unclear, however, extreme precipitation intensity in response to higher temperature increases at mid-latitudes while declining over the tropics22. Our findings are consistent with Maeda et al.48 as they showed a negative relationship between the magnitude of precipitation and higher temperature at daily time scales. Here we argue that despite the negative relationship between extreme precipitation and air temperature in India, surface temperature alone may not be sufficient to understand the changes of precipitation extreme under the warming climate48. Over the monsoon regions in the tropics, this negative relationship largely reflects the response of surface air temperature to rainfall rather than a cause. However, a robust relationship between daily mean SAT and rainfall extremes can be obtained using mean SAT prior to rain events.
### Scaling rainfall extremes with air temperature at 850 hPa (T850)
Since SAT during the monsoon season is driven by local rainfall events in India, we established a scaling relationship between daily and sub-daily rainfall extremes and tropospheric air temperature at 850 hPa (T850), as increasing tropospheric temperature can lead to higher precipitation intensities49,50,51. Similar to SAT, regression slopes between daily and 3-hourly rainfall extremes and daily mean T850 were obtained using the QR and BT methods for the 23 locations in India (Fig. 2). Daily rainfall extremes from GSOD, TRMM, and CHIRPS data showed regression slopes higher than 7%/K (super C-C scaling) at 16 out of 23 locations. We found a consistent super C-C scaling relationship across the datasets for daily rainfall extremes as well as for 3-hourly rainfall extremes from TRMM (Fig. 2a–d). However, Chennai showed a negative (−6%) regression slope between rainfall extremes and daily mean T850, which can be attributed to the seasonal difference in the occurrence of rainfall extremes. For instance, in the southern peninsula, rainfall occurs during the winter (November to January) season primarily due to the northeast monsoon. Therefore, in south India, the relationship between T850 and rainfall may not be as strongly positive as that obtained in north India, where most of the extreme rainfall events occur during the summer monsoon (June to September).
We find that a majority of the locations show a good relationship between rainfall extremes and daily mean T850 with R2 values greater than 0.4. Moreover, 3-hourly rainfall extremes from TRMM showed a regression slope greater than 7% (median 20%) for 19 out of 23 urban areas indicating a higher sensitivity of rainfall extremes at shorter durations. We find a variation in the regression slopes based on the datasets and climatic regions (Fig. 2e) that can be associated with data length and different methods that are used to process the gridded datasets (TRMM and CHIRPS)52,53,54,55,56,57. However, at a daily scale, the relationship obtained from the gridded datasets shows a good agreement with that obtained using station data from GSOD (Fig. 2f). Moreover, regression slopes obtained from QR were found to be consistent with BT (Figs S9 and S10). We find an increase in rainfall intensity with T850 for all the stations pooled for the same climatic zone (Fig. S9).
Gridded satellite (TRMM and CHIRPS) datasets complement sparse rain-gauge networks for sub-regional applications; however, due to seasonal and climatic dependence, they may have uncertainties at the local scale58. For example, TRMM data may miss finer details of local rain compared to the IMD gridded data over India, yet show a higher correlation with rain-gauge-based estimates than GPCP (Global Precipitation Climatology Project) and GSMaP (Global Satellite Mapping of Precipitation)59, 60. Nair et al.61 also compared gridded TRMM data with rain-gauge observations over the Western Ghats in India and found that TRMM gives accurate rainfall estimates in regions of moderate rainfall and inaccurate (overestimated) estimates in regions of sharp rainfall gradients. Similarly, CHIRPS showed a higher correlation (>0.75) with wet season GPCC precipitation in India than TRMM, CFS, and ECMWF53. Gridded precipitation products have been widely used to understand the variability of precipitation extremes in urban areas32, 62,63,64 and may have uncertainties due to retrieval and post-processing methods65,66,67. However, the scaling relationships obtained from station and satellite-based datasets for urban locations in India demonstrate the robustness of our results.
Since tropospheric air temperature was obtained from reanalysis datasets, we evaluated the robustness of our results by comparing the scaling relationships obtained using T850 from three reanalysis datasets (ERA-Interim, MERRA-2, and CFSR). We find a consistent relationship between rainfall extremes and daily mean T850 from all three reanalysis products and for both the QR and BT methods (Figs S11–S14). We also developed mean sea level pressure (SLP) and T850 composites to understand their variability during extreme rainfall events at the selected urban locations, suggesting a role of tropospheric temperature anomalies in rainfall extremes (see Supplemental text and Figs S2–S4).
### Scaling of rainfall extremes with daily dewpoint temperature (DPT)
Since the relationship between rainfall extremes and humidity may be a good predictor for analysing rainfall extremes under a warmer climate, we considered dewpoint temperature as a measure of absolute humidity15, 33. We find that regression slopes obtained using QR are greater than the C-C rate (~7%) for most of the daily rainfall datasets and at most of the urban locations (Fig. 3a–d, Fig. S11). Only 4 (Bhubaneshwar, Bikaner, Indore, and Kolkata) out of the total 23 locations showed regression slopes lower than the C-C rate. Moreover, 3-hourly rainfall extremes from TRMM showed a super C-C relationship (median 22%) for all 23 locations, and higher regression slopes than those of daily rainfall extremes for most of the climatic zones (Fig. 3e). We find a good agreement between regression slopes obtained from the gridded datasets and those obtained from the station data from GSOD (Fig. 3f), and for both the QR and BT methods (Figs S15 and S9i–l). We note that 3-hourly data from TRMM provide valuable information on the sensitivity of rainfall extremes against DPT; however, long-term station data at sub-daily durations are desirable for a robust estimation of the scaling relationship.
To further evaluate the robustness of the scaling relationship, we obtained observed hourly rainfall data for two stations: Hyderabad (1979–2013) and Chennai (2008–2013) (Fig. 4). For both stations with hourly rainfall observations, we performed quantile regression (QR) on 1-hour, 3-hour, and daily rainfall durations at the 95th percentile. Hourly rainfall extremes show a negative scaling relationship with SAT and a positive scaling relationship with DPT and T850 (Fig. 4). Moreover, the regression slope between hourly extreme rainfall and DPT/T850 was substantially larger (super C-C) than the slope obtained for 3-hourly and daily rainfall durations, which is consistent with previous studies15, 21. We also find that our scaling results obtained from TRMM and CHIRPS for 3-hourly and daily durations are consistent with station based observations (Fig. 4).
We observed regression slopes higher than the C-C rate for daily and 3-hourly rainfall extremes against DPT and T850 for most urban locations, which is consistent with previous studies15, 33, 68. Hardwick-Jones et al.45 showed that the scaling relationship increases with temperature until temperature reaches 25 °C and declines afterward; a similar variation of rainfall extremes with air temperature was observed in the tropical regions by Utsumi et al.22. Since rainfall extremes in India generally occur at higher temperatures (more than 25 °C), a negative scaling relationship between rainfall extremes and daily mean SAT is observed for most of the locations using the station and gridded datasets. This negative relationship between rainfall extremes and daily mean SAT in India may not be sufficient to understand the nature of rainfall extremes under the warming climate. The relationship between daily/sub-daily rainfall extremes and dewpoint temperature is more robust and can be used to understand the changes in rainfall intensity under the warming climate15, 33, 69, in contrast to the findings based on the relationship with SAT as reported in Vittal et al.34.
It remains unclear whether the super C-C relationship exhibited by daily and sub-daily extremes in India is linked with the convective nature of rainfall. For instance, Haerter and Berg17 argued that precipitation extremes can show the super C-C relationship due to a shift from stratiform to convective precipitation. Moreover, Pall et al.11 also reported super C-C feedback on convective precipitation due to the release of latent heat during rain events. To evaluate whether the majority of rainfall extremes over India occur due to convective precipitation, we used convective rainfall data (CON_RAIN) from the ERA-Interim reanalysis and found that convective rainfall contributes around 80% of total rainfall (TOT_RAIN; obtained from ERA-Interim) at most of the locations for 1979–2015 (Fig. S16a). Moreover, after determining that a majority of rainfall extremes are convective in nature, we established the relationship between convective rainfall extremes and DPT/T850. Our results show that the scaling rates obtained using both methods (QR and BT) for CON_RAIN (from ERA-Interim) and TOT_RAIN (from GSOD and ERA-Interim) are similar, indicating that most of the extreme rainfall events are driven by convective storms (Figs S16 and S17). The higher scaling rate of convective rainfall against DPT/T850 can be attributed to the higher sensitivity of convective precipitation to temperature30, 31.
For 3-hourly rainfall extremes, we notice that regression slopes are higher than for daily extremes, which is consistent with the findings of Miao et al.70, who reported that sub-daily rainfall extremes increase at three times the C-C rate with air temperature over the tropical regions of China. Scaling of rainfall extremes with DPT provides more robust results for the sensitivity of rainfall extremes under the warming climate in India, where convection is the major rainfall-generating mechanism15, 69,70,71,72. Our results show that SAT may not be the most appropriate variable to evaluate the temperature sensitivity of rainfall extremes under the warming climate in India.
### Scaling of rainfall extremes with T850 and DPT in urban and non-urban areas
Notwithstanding the limitation related to station based data availability for urban and non-urban areas to understand urban microclimate and its impact on rainfall extremes, we evaluated the scaling relationship for urban and surrounding non-urban areas using 3-hourly gridded rainfall from TRMM. We selected non-urban areas around urban polygon using 25 km buffer (from the urban center) and performed analysis on gridded data for urban and non-urban areas64. Since station based DPT is not available for non-urban areas, we used DPT from ERA-Interim reanalysis for estimating regression slopes between 3-hourly rainfall extremes in urban and surrounding non-urban areas. We found that regression slopes between T850/DPT and rainfall extremes in urban and non-urban areas are similar without any statistically significant (p > 0.05 using two-sided Rank Sum test) difference (Fig. 5). Moreover, scaling results for urban and non-urban regions obtained using QR and BT methods are consistent (Fig. S18). Our results are consistent with the findings of Mishra et al.10 who found no statistically significant (5% level) differences in mean regression slopes in urban and surrounding non-urban areas. While our results provide an initial assessment of the relationship between 3-hourly extremes and DPT/T850, long-term station data representing urban and non-urban regions will be valuable to understand the causes of higher scaling relationship in urban areas. Several factors including urban microclimate and urban heat island (UHI) can contribute to rainfall extremes in urban areas as shown in the previous studies32, 73, 74.
### Stationary and nonstationary return levels
Since daily and 3-hourly rainfall extremes show the super C-C relationship with DPT and T850, we used them (DPT and T850) as covariates of rainfall extremes under non-stationary conditions. We also estimated nonstationary design estimates using the DPT and T850 covariates separately (Fig. S21). The two covariates were found to have a correlation coefficient (r) less than 0.1 for all the urban areas for 1979–2015, indicating that they can be used together. Rainfall extremes in India showed mixed trends, as reported in previous studies3, 64, 75,76,77. However, it is important to note that nonstationarity may not be evaluated merely on the basis of trends in a time series. For instance, Yilmaz and Perera78 did not observe differences between stationary and nonstationary GEV models despite the presence of significant trends in extreme rainfall in Australia. We conducted the Priestley-Subba Rao (PSR) test to examine nonstationarity in the rainfall time series for all 23 locations, which indicated the nonstationary nature of rainfall extremes at all the locations (Table S4).
We estimated differences between stationary and nonstationary design estimates of rainfall maxima for the 50 and 100 year return periods using daily annual maximum rainfall (AMR) from GSOD data for 1979–2015. We evaluated the differences between design estimates obtained from the annual block maxima (ABM) approach and the peak over threshold (POT) approach (taking the top 35 independent rainfall events so that the number of events in both analyses is nearly the same) using the GSOD data (Fig. S19). We find that the bias (%) in the design estimates obtained using the POT and ABM approaches is within ±10%. Therefore, we used the ABM approach for estimating stationary and nonstationary return levels. However, we acknowledge that a longer record would be helpful to understand nonstationarity in a hydroclimatic time series79. Daily mean DPT and T850 were used as covariates to represent nonstationary conditions in a nonstationary GEV model. The improvement of the nonstationary GEV model for estimating design values over a simple stationary GEV model was evaluated using the Chi-Square test at the 5% significance level on negative log likelihood (nlh) estimates. We find that the Deviance Statistic (D) calculated using nlh estimates obtained from the stationary and nonstationary GEV models is greater than 3.84 [$$\chi_1^2(0.05) = 3.84$$] for all the locations, which provides a basis to use these covariates in the nonstationary GEV model (Table S2). Moreover, the goodness of fit of the non-stationary GEV model was tested for all the stations using probability and residual quantile plots (Fig. S20).
For 1 day 50 year rainfall maxima, 16 out of 23 locations showed increases (median 15.5%) under the non-stationary conditions with DPT and T850 as covariates, while 7 locations showed declines in rainfall maxima under the nonstationary conditions (Fig. 6a,b). Moreover, for 1 day 100 year rainfall maxima, 13 locations showed increases (median 18.3%) while 9 locations showed declines (median −3.5%) under the nonstationary conditions (Fig. 6c,d). The mean percentage change in 1 day 50 and 100 year rainfall maxima was positive for four out of the five climatic zones under the nonstationary conditions (Fig. 6e,f). The change in 1 day 50 and 100 year rainfall maxima was also estimated considering the stationary and nonstationary conditions using three sets of covariates: DPT and T850, DPT only, and T850 only. Using DPT and T850 together as covariates, the percentage changes in rainfall maxima are higher than when using them separately (Fig. S21).
## Conclusions
Based on our findings the following conclusions can be made:
• The scaling relationship between rainfall extremes and SAT was negative at the majority of urban locations, which can be attributed to the negative relationship between air temperature and rainfall during the monsoon season. However, a super C-C scaling relationship between rainfall extremes and DPT/T850 was shown for the majority of urban locations in India, which can be attributed to the convective nature of precipitation extremes over India. We find that SAT may not be sufficient to understand the changes in rainfall extremes over India in response to warming. Regression slopes obtained using the daily rainfall extremes against DPT (T850) were higher than the C-C rate for 20 (16) out of the total 23 locations. Moreover, 3-hourly rainfall extremes from TRMM showed higher regression slopes with DPT and T850 (the super C-C relationship at 19 out of 23 locations), indicating a higher sensitivity of sub-daily rainfall extremes.
• Regression slopes obtained using 3-hourly TRMM data against DPT and T850 were similar in urban and their surrounding non-urban areas. These results are based on observation stations that are located within 1–13 km of the city center and may not fully represent urban microclimate and other factors relevant to urban meteorology. However, long-term station based observations for urban and non-urban areas can provide robust estimates of the regression slopes and underlying causes for the sensitivity of rainfall extremes in urban and non-urban areas.
• Since daily and 3-hourly rainfall extremes showed a stronger relationship with DPT and T850, we considered them as covariates to evaluate the differences between stationary and non-stationary estimates of daily design storm intensities. We estimated the differences in the stationary and nonstationary rainfall maxima for 50 and 100 year return periods using daily GSOD data with DPT and T850 as covariates. We found that rainfall maxima increased at a majority of locations under the nonstationary atmospheric conditions.
## Methods
### Data
We obtained daily rainfall data from the Global Summary of Day (GSOD) for the period of 1929–2015 for 100 stations in India mainly located in the vicinity of urban areas. The daily GSOD rainfall data is derived from the hourly observations contained in the Integrated Surface Hourly (ISH) dataset (DSI-3505) and is available from the National Oceanic and Atmospheric Administration (NOAA) website (ftp://ftp.ncdc.noaa.gov/pub/data/gsod/). Days that had less than 24 hours accumulated rainfall were removed. Moreover, we used stations with less than 10% missing data for any year during the period of 1979–2015. After the quality control, we finally selected 23 stations located in different climatic zones (Tropical Wet and dry, TWD; Humid Sub-Tropical, HST; Tropical Wet, TW; Semi-Arid, SA; Arid zone, AR) (Fig. 1a and Supplemental Table S1). Similarly, daily DPT data were obtained from the GSOD for 23 locations (for which rainfall data were available) in India for the period of 1979 to 2015.
We also obtained daily rainfall data at 0.05 degree resolution from Climate Hazards Group Infra-Red Precipitation with Station data (CHIRPS) for the grids that are closest to the urban areas for the period of 1981–201555, 56. CHIRPS uses TRMM (TMPA 3B42 v7) to calibrate infrared Cold Cloud Duration (CCD) precipitation estimates which are further used in the ‘smart interpolation’ approach and blending procedure (using station based observations) to obtain long-term gridded global rainfall datasets55, 56. It is available for 50°S–50°N from 1981 to near-present and can be downloaded from ftp://ftp.chg.ucsb.edu/pub/org/chg/products/CHIRPS-2.0/. Katsanos et al.80 compared CHIRPS data with station data over Mediterranean basin and found a good correlation between them.
We obtained 3-hourly rainfall data at 0.25 degree resolution for the selected urban areas from the TRMM 3B42V7 product (TRMM) for the period of 1998–201552. TRMM is a gridded satellite product which uses passive microwave (PMW) data where available and infrared (IR) data elsewhere, since microwave radiance has a stronger relationship with precipitation than IR52, 57. TRMM is available from January 1998 onwards over 50°S–50°N and 180°W–180°E and can be downloaded from http://disc.sci.gsfc.nasa.gov/SSW/. We used TRMM 3B42V7 because sub-daily station data are unavailable and it is more reliable than other available multi-satellite rainfall products over India54. Moreover, TRMM 3B42V7 (hereafter TRMM) captures spatial and temporal features of rainfall well against rain gauge measurements, as shown by Shah and Mishra81.
We used daily air temperature data at 850 hPa (T850) obtained from the latest global atmospheric ERA-Interim reanalysis data which is produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) with the Integrated Forecast System at a T255 spectral resolution on 60 vertical levels which reaches from the surface up to 0.1 hPa82. The data is available from January 1979 to present and can be downloaded from http://apps.ecmwf.int/datasets/data/interim-full-daily/levtype=pl/. The data was regridded at 0.25 degree resolution using bilinear interpolation. We used T850 (roughly 1.5 km) sufficiently above the boundary layer of the atmosphere instead of SAT in order to avoid the effects of ground geographical features (like the sea) on air temperature. Moreover, SAT data are strongly correlated with the monsoon season rainfall, which might introduce bias in the scaling process.
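As an illustration of the regridding step, the sketch below bilinearly interpolates a 2-D reanalysis field onto a finer grid (the paper does not describe its implementation; function and variable names are illustrative, and the input coordinates are assumed to be in ascending order):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def regrid_bilinear(field, lats, lons, new_lats, new_lons):
    """Bilinearly interpolate a 2-D field (e.g. daily mean T850) onto a finer grid.

    field has shape (len(lats), len(lons)); lats and lons must be ascending.
    """
    interp = RegularGridInterpolator((lats, lons), field, method="linear")
    lat2d, lon2d = np.meshgrid(new_lats, new_lons, indexing="ij")
    points = np.column_stack([lat2d.ravel(), lon2d.ravel()])
    return interp(points).reshape(lat2d.shape)
```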
Since reanalysis products have uncertainties, we also used air temperature at 850 hPa from two other reanalysis products. The first was obtained from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR). CFSR is a high resolution (0.5 degree) global dataset developed using a coupled atmosphere-ocean-land-surface system83. It is available at 64 levels extending from the surface to 0.26 hPa. CFSR is available from 1979 onwards and can be downloaded from http://rda.ucar.edu/datasets/ds093.1/. The second dataset for T850 was obtained from the Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA-2). MERRA-2 is a global hourly dataset at 0.5 × 0.625 degree resolution, available from 1980 onwards, and can be downloaded from https://gmao.gsfc.nasa.gov/reanalysis/MERRA-2/. MERRA-2 has improvements over the MERRA dataset because it assimilates modern hyperspectral radiance and microwave observations along with GPS radio occultation datasets84. The hourly values in a day were averaged to obtain the daily mean temperature at 850 hPa.
### Analysis
We extracted the datasets (rainfall and DPT) for the 23 urban areas from the daily GSOD data. Gridded datasets for the urban locations were selected such that the centre of the grid cell lies within an urban area. Moreover, since the datasets were at different spatial resolutions, we applied areal reduction factors (ARF) based on the U.S. Weather Bureau 1975 method85, 86 (TP-29) to bring them to point scale, which is given by
$$ARF_{TP\text{-}29}=\frac{\frac{1}{n}\sum_{j=1}^{n}\hat{R}_{j}}{\frac{1}{k}\sum_{i=1}^{k}\left(\frac{1}{n}\sum_{j=1}^{n}R_{ij}\right)} \qquad (1)$$
where $$\hat{R}_{j}$$ is the annual maximum areal rainfall for year j, $$R_{ij}$$ is the annual maximum point rainfall for year j at station i, k is the number of stations in the area, and n is the number of years85, 86. The choice of this ARF among different ARFs is discussed in the Supplemental Information.
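A direct transcription of Eq. (1) is sketched below (variable names are illustrative; the arrays are assumed to contain no missing values):

```python
import numpy as np

def arf_tp29(point_annual_max, areal_annual_max):
    """Areal reduction factor of Eq. (1).

    point_annual_max: array of shape (n_years, k_stations) holding R_ij, the annual
    maximum point rainfall for year j at station i; areal_annual_max: length n_years
    array holding the annual maximum areal rainfall R_hat_j.
    """
    point_annual_max = np.asarray(point_annual_max, dtype=float)
    areal_annual_max = np.asarray(areal_annual_max, dtype=float)
    numerator = areal_annual_max.mean()                 # (1/n) sum_j R_hat_j
    denominator = point_annual_max.mean(axis=0).mean()  # (1/k) sum_i (1/n) sum_j R_ij
    return numerator / denominator
```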
Regression slopes (scaling rates) were estimated by both the binning technique (BT) and quantile regression (QR). In the binning technique (BT), rainfall data are matched to temperature data for each day and classified into bins of increasing temperature, either of equal temperature width or of equal sample size10, 15, 21, 33, 45. However, the bin size and outlying data in each bin may affect the scaling estimates27. Therefore, we also used quantile regression, which is a more robust and flexible method and does not require such assumptions28, 43, 44.
In quantile regression (QR), for a set of data pairs $$(x_i, y_i)$$ for i = 1, 2, …, n, the quantile regression for a given percentile p (the 95th percentile in our study) is expressed as

$$y_{i}=\beta_{0}^{(p)}+\beta_{1}^{(p)}x_{i} \qquad (2)$$

where $$y_i$$ is the logarithmically transformed rainfall45, $$x_i$$ is the corresponding temperature (SAT/T850/DPT), and the regression slope of rainfall with temperature in percentage (dR95/K, %) is estimated using an exponential transformation of the regression coefficient $$\beta_{1}^{(p)}$$ 28:

$$dR95(\%)/K=100\,(e^{\beta_{1}^{(p)}}-1) \qquad (3)$$
More information on this method can be obtained from Koenker and Bassett43. We carried out the quantile regression analysis using the 'quantreg' package in the statistical programming language R87, 88. Since this package does not give an R2 value to estimate goodness of fit, we used the pseudo R2 value described by Koenker and Machado89.
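The analysis was carried out with the 'quantreg' package in R; the sketch below reproduces the same calculation (Eqs (2)–(3)) with Python's statsmodels, purely for illustration:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def qr_scaling(rain_mm, temperature, quantile=0.95, wet_threshold=1.0):
    """Estimate dR95/K (%) by quantile regression of log rainfall on temperature.

    rain_mm and temperature are paired daily series; only wet days
    (rainfall >= wet_threshold mm) enter the regression, following Eqs (2)-(3).
    """
    rain_mm = np.asarray(rain_mm, dtype=float)
    temperature = np.asarray(temperature, dtype=float)
    wet = rain_mm >= wet_threshold
    y = np.log(rain_mm[wet])               # logarithmically transformed rainfall
    X = sm.add_constant(temperature[wet])  # design matrix [1, T]
    beta1 = QuantReg(y, X).fit(q=quantile).params[1]
    return 100.0 * (np.exp(beta1) - 1.0)   # Eq. (3): % change in R95 per K
```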
To check the robustness of our results, we also estimated regression slopes using the binning technique (BT). We used the method of Mishra et al.10 to establish the scaling relationship of extreme rainfall with SAT, DPT, and T850; the same method has been used in many previous studies20, 22, 33, 68. For each station, we extracted wet events (rainfall >= 1 mm) for all days in a year and their corresponding daily mean DPT from the GSOD dataset. The data were then placed into 20 temperature bins (based on daily mean DPT) of approximately the same size, sorted from the lowest to the highest temperature values. Further, for each temperature bin, we estimated the 95th percentile of rainfall (R95) and the daily mean dewpoint temperature (DPT). Then, we fitted a linear regression on the logarithm of R95 and DPT. The percentage change in R95 (dR95%/K) with respect to change in DPT (hereafter regression slopes) was estimated using the regression relationship between the lowest (mean dewpoint temperature of the first bin) and the highest dewpoint temperature (DPTR95, the peak point temperature) at which the R95 maximum occurred. Similarly, we scaled extreme rainfall from the other datasets (GSOD, TRMM, and GPM), available at daily, 3-hourly, and half-hourly resolutions, with daily mean SAT, DPT, and T850.
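A minimal sketch of this binning procedure is given below (one reading of the method, assuming bins of equal sample size and a regression fitted up to the bin where R95 peaks):

```python
import numpy as np
from scipy import stats

def bt_scaling(rain_mm, temperature, n_bins=20, wet_threshold=1.0):
    """Binning-technique estimate of the scaling rate (dR95/K, %).

    Wet-day rainfall is sorted by its paired temperature, split into n_bins bins of
    roughly equal sample size, and log(R95) is regressed on the bin-median temperature
    up to the bin where R95 peaks (the "peak point temperature").
    """
    rain_mm = np.asarray(rain_mm, dtype=float)
    temperature = np.asarray(temperature, dtype=float)
    wet = rain_mm >= wet_threshold
    order = np.argsort(temperature[wet])
    rain_sorted = rain_mm[wet][order]
    temp_sorted = temperature[wet][order]
    bins = np.array_split(np.arange(rain_sorted.size), n_bins)
    r95 = np.array([np.percentile(rain_sorted[b], 95) for b in bins])
    t_med = np.array([np.median(temp_sorted[b]) for b in bins])
    peak = max(int(np.argmax(r95)) + 1, 2)  # keep at least two bins for the fit
    slope = stats.linregress(t_med[:peak], np.log(r95[:peak])).slope
    return 100.0 * (np.exp(slope) - 1.0)
```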
We obtained polygons of selected urban areas using urban extent map of global cities which are based on MODIS 1 km land cover data90 and a non-urban area around urban polygon was selected using 25 km buffer64. 3-hourly TRMM data was extracted for urban and non-urban areas so that their grid centres lie within respective polygons and scaling was done against T850 and DPT.
In order to examine nonstationarity of rainfall time series we used the Priestley-Subba Rao (PSR) test91 using the “fractal” package in the statistical programming language ‘R’. This test is based on an evolutionary spectral analysis which examines the homogeneity of evolutionary spectra with time, and a p-value for T decides stationarity/nonstationarity of a time series. We used the Generalised Extreme Value (GEV) distribution to estimate stationary and nonstationary return levels (design values) of extreme rainfall events in the urban areas. The GEV distribution has three parameters: location parameter (µ), scale parameter (σ) and shape parameter (k) for the extremes time series10, 64. We used a block maxima approach (ABM; annual maximum rainfall time series and their corresponding dew point temperature and air temperature) to fit the GEV distribution using maximum likelihood estimates (MLE) with the help of gev.fit function from “ismev” package in ‘R’66, 92, 93. We also used a peak over threshold (POT) approach to obtain design estimates since there are chances of missing rainfall extremes using ABM approach. For this, we used the top 35 rain events during the entire period (and their corresponding dew point temperature and air temperature) and used generalized pareto distribution94 (GPD) to estimate design estimates using the gpd.fit function in MATLAB.
Under the stationary assumption no covariate was considered; hence, all three distribution parameters were constant for the entire period of analysis. Under the nonstationary assumption, DPT and T850 were taken as covariates, and the location parameter (µ) was allowed to vary linearly with both covariates while the remaining two parameters (σ and k) were kept constant92, 95. We evaluated the improvement of the nonstationary model over the simple stationary model using the Deviance Statistic, D = 2{l 1(M 1) − l 0(M 0)}, calculated from the maximised log-likelihoods of the nonstationary (l 1(M 1)) and stationary (l 0(M 0)) models. If D > 3.84 (i.e. the chi-square test at the 5% significance level), then the nonstationary model can be accepted, which also justifies the use of covariates in the nonstationary model88. More details on this method can be obtained from Katz et al.96 and at http://www.ral.ucar.edu/~ericg/softextreme.php.
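The models were fitted with the gev.fit function of the 'ismev' package in R; the sketch below illustrates the same stationary-versus-nonstationary comparison with scipy (a simplified illustration rather than the authors' code; the optimizer start values and the sign convention for the shape parameter are assumptions):

```python
import numpy as np
from scipy import optimize, stats

def gev_nll(params, x, covariates=None):
    """Negative log-likelihood of a GEV whose location may depend on covariates."""
    if covariates is None:
        mu, sigma, xi = params
    else:
        k = covariates.shape[1]
        mu = params[0] + covariates @ params[1:1 + k]
        sigma, xi = params[1 + k], params[2 + k]
    if sigma <= 0:
        return np.inf
    # scipy's genextreme uses shape c = -xi relative to the usual hydrological convention
    return -np.sum(stats.genextreme.logpdf(x, c=-xi, loc=mu, scale=sigma))

def deviance_statistic(annual_max, covariates):
    """D = 2*(l_nonstationary - l_stationary); D > 3.84 favours the covariate model."""
    annual_max = np.asarray(annual_max, dtype=float)
    covariates = np.asarray(covariates, dtype=float)  # columns: e.g. DPT and T850
    # stationary fit (scipy returns shape, loc, scale)
    c0, loc0, scale0 = stats.genextreme.fit(annual_max)
    l_stat = np.sum(stats.genextreme.logpdf(annual_max, c0, loc=loc0, scale=scale0))
    # nonstationary fit: mu = mu0 + mu1*DPT + mu2*T850, with sigma and xi held constant
    k = covariates.shape[1]
    start = np.concatenate([[loc0], np.zeros(k), [scale0, -c0]])
    res = optimize.minimize(gev_nll, start, args=(annual_max, covariates),
                            method="Nelder-Mead")
    l_nonstat = -res.fun
    return 2.0 * (l_nonstat - l_stat)
```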
## References
1. 1.
Solomon, S. Climate change 2007-the physical science basis: Working group I contribution to the fourth assessment report of the IPCC. 4 (Cambridge University Press, 2007).
2. 2.
Dash, S. K., Makarand, A. K., Mohanty, U. C. & Prasad, K. Changes in the characteristics of rain events in India. Journal of Geophysical Research: Atmospheres 114, no. D10 (2009).
3. 3.
Vittal, H., Karmakar, S. & Ghosh, S. Diametric changes in trends and patterns of extreme rainfall over India from pre-1950 to post-1950. Geophys. Res. Lett. 40, 3253–3258, doi:10.1002/grl.50631 (2013).
4. 4.
Singh, P. & Borah, B. Indian summer monsoon rainfall prediction using artificial neural network. Stoch. Environ. Res. Risk Assess. 27, 1585–1599, doi:10.1007/s00477-013-0695-0 (2013).
5. 5.
Singh, D., Tsiang, M., Rajaratnam, B. & Diffenbaugh, N. S. Observed changes in extreme wet and dry spells during the South Asian summer monsoon season. Nat. Clim. Change 4, 456–461, doi:10.1038/nclimate2208 (2014).
6. 6.
Vinnarasi, R. & Dhanya, C. T. Changing characteristics of extreme wet and dry spells of Indian monsoon rainfall. J. Geophys. Res. Atmospheres 121, 2146–2160, doi:10.1002/2015JD024310 (2016).
7. 7.
Wasko, C. & Sharma, A. Steeper temporal distribution of rain intensity at higher temperatures within Australian storms. Nat. Geosci. 8, 527–529, doi:10.1038/ngeo2456 (2015).
8. 8.
Wasko, C., Sharma, A. & Westra, S. Reduced spatial extent of extreme storms at higher temperatures. Geophys. Res. Lett. 43, 4026–4032, doi:10.1002/2016GL068509 (2016).
9. 9.
Min, S.-K., Zhang, X., Zwiers, F. W. & Hegerl, G. C. Human contribution to more-intense precipitation extremes. Nature 470, 378–381, doi:10.1038/nature09763 (2011).
10. 10.
Mishra, V., Wallace, J. M. & Lettenmaier, D. P. Relationship between hourly extreme precipitation and local air temperature in the United States. Geophys. Res. Lett. 39, no. 16 (2012).
11. 11.
Pall, P., Allen, M. R. & Stone, D. A. Testing the Clausius–Clapeyron constraint on changes in extreme precipitation under CO2 warming. Clim. Dyn. 28, 351–363, doi:10.1038/nature09762 (2007).
12. 12.
Kao, S.-C. & Ganguly, A. R. Intensity, duration, and frequency of precipitation extremes under 21st-century warming scenarios. J. Geophys. Res. Atmospheres 116, no. D16, doi:10.1029/2010JD015529 (2011).
13. 13.
Wasko, C., Parinussa, R. M. & Sharma, A. A quasi‐global assessment of changes in remotely sensed rainfall extremes with temperature. Geophys. Res. Lett. 43, 12,659–12,668, doi:10.1002/2016GL071354 (2016).
14. 14.
Kharin, V. V., Zwiers, F. W., Zhang, X. & Hegerl, G. C. Changes in temperature and precipitation extremes in the IPCC ensemble of global coupled model simulations. J. Clim. 20, 345–357, doi:10.1175/JCLI4066.1 (2007).
15. 15.
Lenderink, G., Mok, H. Y., Lee, T. C. & van Oldenborgh, G. J. Scaling and trends of hourly precipitation extremes in two different climate zones – Hong Kong and the Netherlands. Hydrol. Earth Syst. Sci. 15, 3033–3041, doi:10.5194/hess-15-3033-2011 (2011).
16. 16.
Trenberth, K. E., Dai, A., Rasmussen, R. M. & Parsons, D. B. The Changing Character of Precipitation. Bull. Am. Meteorol. Soc. 84, 1205–1217, doi:10.1175/BAMS-84-9-1205 (2003).
17. 17.
Haerter, J. O. & Berg, P. Unexpected rise in extreme precipitation caused by a shift in rain type? Nat. Geosci. 2, 372–373, doi:10.1038/ngeo523 (2009).
18. 18.
Fujibe, F., Yamazaki, N., Katsuyama, M. & Kobayashi, K. The increasing trend of intense precipitation in Japan based on four-hourly data for a hundred years. Sola 1, 41–44, doi:10.2151/sola.2005-012 (2005).
19. 19.
Haerter, J. O., Berg, P. & Hagemann, S. Heavy rain intensity distributions on varying time scales and at different temperatures. J. Geophys. Res. Atmospheres 115, no. D17 (2010).
20. 20.
Yu, R. & Li, J. Hourly rainfall changes in response to surface air temperature over eastern contiguous China. J. Clim. 25, 6851–6861, doi:10.1175/JCLI-D-11-00656.1 (2012).
21. 21.
Lenderink, G. & Van Meijgaard, E. Increase in hourly precipitation extremes beyond expectations from temperature changes. Nat. Geosci. 1, 511–514, doi:10.1038/ngeo262 (2008).
22. 22.
Utsumi, N., Seto, S., Kanae, S., Maeda, E. E. & Oki, T. Does higher surface temperature intensify extreme precipitation? Geophys. Res. Lett. 38, no. 16 (2011).
23. 23.
Panthou, G., Mailhot, A., Laurence, E. & Talbot, G. Relationship between Surface Temperature and Extreme Rainfalls: A Multi-Time-Scale and Event-Based Analysis. J. Hydrometeorol. 15, 1999–2011, doi:10.1175/JHM-D-14-0020.1 (2014).
24. 24.
Wasko, C., Sharma, A. & Johnson, F. Does storm duration modulate the extreme precipitation-temperature scaling relationship? Geophys. Res. Lett. 42, 8783–8790, doi:10.1002/2015GL066274 (2015).
25. 25.
Molnar, P., Fatichi, S., Gaál, L., Szolgay, J. & Burlando, P. Storm type effects on super Clausius–Clapeyron scaling of intense rainstorm properties with air temperature. Hydrol Earth Syst Sci 19, 1753–1766, doi:10.5194/hess-19-1753-2015 (2015).
26. 26.
Westra, S. et al. Future changes to the intensity and frequency of short-duration extreme rainfall. Rev. Geophys. 52, 522–555, doi:10.1002/2014RG000464 (2014).
27. 27.
Berg, P., Haerter, J. O., Thejll, P., Piani, C., Hagemann, S. & Christensen, J. H. Seasonal characteristics of the relationship between daily precipitation intensity and surface temperature. Journal of Geophysical Research: Atmospheres 114, no. D18 (2009).
28. 28.
Wasko, C. & Sharma, A. Quantile regression for investigating scaling of extreme precipitation with temperature. Water Resour. Res. 50, 3608–3614, doi:10.1002/2013WR015194 (2014).
29. 29.
Meredith, E. P., Semenov, V. A., Maraun, D., Park, W. & Chernokulsky, A. V. Crucial role of Black Sea warming in amplifying the 2012 Krymsk precipitation extreme. Nat. Geosci. 8, 615–619, doi:10.1038/ngeo2483 (2015).
30. 30.
Berg, P., Moseley, C. & Haerter, J. O. Strong increase in convective precipitation in response to higher temperatures. Nat. Geosci. 6, 181–185, doi:10.1038/ngeo1731 (2013).
31. 31.
Moseley, C., Hohenegger, C., Berg, P. & Haerter, J. O. Intensification of convective extremes driven by cloud-cloud interaction. Nat. Geosci. 9, 748–752, doi:10.1038/ngeo2789 (2016).
32. 32.
Kishtawal, C. M., Niyogi, D., Tewari, M., Pielke, R. A. & Shepherd, J. M. Urbanization signature in the observed heavy rainfall climatology over India. Int. J. Climatol. 30, 1908–1916, doi:10.1002/joc.v30:13 (2010).
33. 33.
Lenderink, G. & Van Meijgaard, E. Linking increases in hourly precipitation extremes to atmospheric temperature and moisture changes. Environ. Res. Lett. 5, 025208, doi:10.1088/1748-9326/5/2/025208 (2010).
34. 34.
Vittal, H., Ghosh, S., Karmakar, S., Pathak, A. & Murtugudde, R. Lack of Dependence of Indian Summer Monsoon Rainfall Extremes on Temperature: An Observational Evidence. Sci. Rep. 6, 31039, doi:10.1038/srep31039 (2016).
35. 35.
Marshall Shepherd, J., Pierce, H. & Negri, A. J. Rainfall modification by major urban areas: Observations from spaceborne rain radar on the TRMM satellite. J. Appl. Meteorol. 41, 689–701, doi:10.1175/1520-0450(2002)041<0689:RMBMUA>2.0.CO;2 (2002).
36. 36.
Milly, P. C. D. et al. Stationarity is dead. Ground Water. News Views 4, 6–8 (2007).
37. 37.
Salas, J. D. & Obeysekera, J. Revisiting the concepts of return period and risk for nonstationary hydrologic extreme events. J. Hydrol. Eng. 19, 554–568, doi:10.1061/(ASCE)HE.1943-5584.0000820 (2013).
38. 38.
Craig, R. K. ‘Stationarity is Dead’-Long Live Transformation: Five Principles for Climate Change Adaptation Law. Harv. Environ. Law Rev 34, 9–75 (2010).
39. 39.
Lins, H. F. & Cohn, T. A. Stationarity: wanted dead or alive? Journal of the American Water Resources Association (JAWRA) 47, 475–480, doi:10.1111/j.1752-1688.2011.00542.x (2011).
40. 40.
Cheng, L. & AghaKouchak, A. Nonstationary Precipitation Intensity-Duration-Frequency Curves for Infrastructure Design in a Changing Climate. Sci. Rep. 4, 7093, doi:10.1038/srep07093 (2014).
41. 41.
Verdon-Kidd, D. C. & Kiem, A. S. Non–stationarity in annual maxima rainfall across Australia – implications for Intensity–Frequency–Duration (IFD) relationships. Hydrol. Earth Syst. Sci. Discuss 12, 3449–3475, doi:10.5194/hessd-12-3449-2015 (2015).
42. 42.
Koenker, R. Quantile regression. (Cambridge university press, 2005).
43. 43.
Koenker, R. & Bassett, G., Jr. Regression quantiles. Econometrica 46, 33–50 (1978).
44. 44.
Tan, M. H. Monotonic quantile regression with Bernstein polynomials for stochastic simulation. Technometrics 58, 180–190, doi:10.1080/00401706.2015.1027066 (2016).
45. 45.
Hardwick Jones, R., Westra, S. & Sharma, A. Observed relationships between extreme sub-daily precipitation, surface temperature, and relative humidity. Geophys. Res. Lett. 37, no. 22 (2010).
46. 46.
Rajeevan, M., Pai, D. S. & Thapliyal, V. Spatial and temporal relationships between global land surface air temperature anomalies and Indian summer monsoon rainfall. Meteorol. Atmospheric Phys. 66, 157–171, doi:10.1007/BF01026631 (1998).
47. 47.
Wasko, C. & Sharma, A. Continuous rainfall generation for a warmer climate using observed temperature sensitivities. Journal of Hydrology 544, 575–590, doi:10.1016/j.jhydrol.2016.12.002 (2017).
48. 48.
Maeda, E. E., Utsumi, N. & Oki, T. Decreasing precipitation extremes at higher temperatures in tropical regions. Nat. Hazards 64, 935–941, doi:10.1007/s11069-012-0222-5 (2012).
49. 49.
Semenov, V. & Bengtsson, L. Secular trends in daily precipitation characteristics: greenhouse gas simulation with a coupled AOGCM. Clim. Dyn. 19, 123–140, doi:10.1007/s00382-001-0218-4 (2002).
50. 50.
Wentz, F. J., Ricciardulli, L., Hilburn, K. & Mears, C. How much more rain will global warming bring? Science 317, 233–235, doi:10.1126/science.1140746 (2007).
51. 51.
Bengtsson, L. The global atmospheric water cycle. Environ. Res. Lett. 5, 025202, doi:10.1088/1748-9326/5/2/025202 (2010).
52. 52.
Huffman, G. J. et al. The TRMM Multisatellite Precipitation Analysis (TMPA): Quasi-Global, Multiyear, Combined-Sensor Precipitation Estimates at Fine Scales. J. Hydrometeorol. 8, 38–55, doi:10.1175/JHM560.1 (2007).
53. 53.
Huffman, G. J., Adler, R. F., Stocker, E. F., Bolvin, D. T. & Nelkin, E. J. Analysis of TRMM 3-hourly multi-satellite precipitation estimates computed in both real and post-real time. Preprints, 12th Conf. on Satellite Meteorology and Oceanography, Long Beach, CA, Amer. Meteor. Soc., P4.11 (2003).
54. 54.
Prakash, S., Mitra, A. K., Pai, D. S. & AghaKouchak, A. From TRMM to GPM: How well can heavy rainfall be detected from space? Adv. Water Resour. 88, 1–7, doi:10.1016/j.advwatres.2015.11.008 (2016).
55. 55.
Funk, C., Peterson, P., Landsfeld, M., Pedreros, D., Verdin, J., Shukla, S., Husak, G., Rowland, J., Harrison, L., Hoell, A. & Michaelsen, J. The climate hazards infrared precipitation with stations—a new environmental record for monitoring extremes. Sci. Data 2, 150066, doi:10.1038/sdata.2015.66 (2015).
56. 56.
Dembélé, M. & Zwart, S. J. Evaluation and comparison of satellite-based rainfall products in Burkina Faso, West Africa. Int. J. Remote Sens. 37, 3995–4014, doi:10.1080/01431161.2016.1207258 (2016).
57. 57.
Krishnamurti, T. N. et al. Real-time multianalysis-multimodel superensemble forecasts of precipitation using TRMM and SSM/I products. Mon. Weather Rev. 129, 2861–2883, doi:10.1175/1520-0493(2001)129<2861:RTMMSF>2.0.CO;2 (2001).
58. 58.
Kimani, M., Hoedjes, J. & Su, Z. Uncertainty Assessments of Satellite Derived Rainfall Products, doi:10.20944/preprints201611.0019.v1 (2016).
59. 59.
Krishnamurti, T. N., Mishra, A. K., Simon, A. & Yatagai, A. Use of a dense rain-gauge network over India for improving blended TRMM products and downscaled weather models. 87, 393–412, doi:10.2151/jmsj.87A.393 (2009).
60. 60.
Prakash, S., Mitra, A. K., Rajagopal, E. N. & Pai, D. S. Assessment of TRMM-based TMPA-3B42 and GSMaP precipitation products over India for the peak southwest monsoon season. Int. J. Climatol. 36, 1614–1631, doi:10.1002/joc.2016.36.issue-4 (2015).
61. 61.
Nair, S., Srinivasan, G. & Nemani, R. Evaluation of Multi-Satellite TRMM Derived Rainfall Estimates over a Western State of India. J. Meteorol. Soc. Jpn. Ser II 87, 927–939, doi:10.2151/jmsj.87.927 (2009).
62. 62.
## Acknowledgements
The authors acknowledge data availability from GSOD, ERA-Interim, MERRA, CFSR, TRMM, and CHIRPS. The work was partially funded by the MHRD fellowship to the first author and a Ministry of Earth Sciences (BELMONT FOURAM) grant to the second author. The authors thank Dr. Umamahesh (NIT Warangal) and Dr. Sivananda Pai (IMD) for providing rainfall data.
## Author information
Authors
### Contributions
V.M. and H.A. designed the study. H.A. analyzed the data. H.A. and V.M. contributed in discussion of the results and wrote the manuscript.
### Corresponding author
Correspondence to Vimal Mishra.
## Ethics declarations
### Competing Interests
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Ali, H., Mishra, V. Contrasting response of rainfall extremes to increase in surface air and dewpoint temperatures at urban locations in India. Sci Rep 7, 1228 (2017). https://doi.org/10.1038/s41598-017-01306-1
• New hourly extreme precipitation regions and regional annual probability estimates for the UK. Motasem M. Darwish, Mari R. Tye, Andreas F. Prein, Hayley J. Fowler, Stephen Blenkinsop, Murray Dale & Duncan Faulkner. International Journal of Climatology (2021)
• Application of the non-stationary peak-over-threshold methods for deriving rainfall extremes from temperature projections. Okjeong Lee, Inkyeong Sim & Sangdan Kim. Journal of Hydrology (2020)
• Uncertainty in nonstationary frequency analysis of South Korea's daily rainfall peak over threshold excesses associated with covariates. Okjeong Lee, Jeonghyeon Choi, Jeongeun Won & Sangdan Kim. Hydrology and Earth System Sciences (2020)
• Diametrically Opposite Scaling of Extreme Precipitation and Streamflow to Temperature in South and Central Asia. Sarosh Alam Ghausi & Subimal Ghosh. Geophysical Research Letters (2020)
• Changes in Precipitation Extremes across Vietnam and Its Relationships with Teleconnection Patterns of the Northern Hemisphere. Quang Van Do, Hong Xuan Do, Nhu Cuong Do & An Le Ngo. Water (2020)
|
|
# Is this argument valid?
• Oct 1st 2009, 08:57 PM
GreenDay14
Is this argument valid?
I have to find out whether this argument is valid or invalid:
p -> q
(q v ([not]r)) -> (p [and] s)
s->(r v q)
Hope that makes sense, and also I appreciate any help that can be offered. Thanks.
• Oct 4th 2009, 08:41 AM
Jones
Quote:
Originally Posted by GreenDay14
I have to find out whether this argument is valid or invalid:
p -> q
(q v ([not]r)) -> (p [and] s)
s->(r v q)
Hope that makes sense, and also I appreciate any help that can be offered. Thanks.
If I'm not mistaken, it's valid.
You can check for yourself: if you can find values for s, r, p and q that make the conclusion false but the premises true, then the argument is invalid.
• Oct 4th 2009, 09:22 AM
Hello GreenDay14
Quote:
Originally Posted by GreenDay14
I have to find out whether this argument is valid or invalid:
p -> q
(q v ([not]r)) -> (p [and] s)
s->(r v q)
Hope that makes sense, and also I appreciate any help that can be offered. Thanks.
I've drawn up a truth table for the proposition
$((p \rightarrow q)\land((q\lor \neg r)\rightarrow(p\land s)))\rightarrow(s\rightarrow (r\lor q))$
- see attachment.
The columns are evaluated in order (1) through (8), with (8) being the final output. Since this shows TRUE in every row, then yes, the argument is valid.
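For completeness, the same truth-table check can be automated. The short script below (not part of the original thread) enumerates all 16 assignments of p, q, r, s and looks for a counterexample, i.e. an assignment that makes both premises true and the conclusion false:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Premises: p -> q and (q v ~r) -> (p ^ s); conclusion: s -> (r v q).
counterexamples = [
    (p, q, r, s)
    for p, q, r, s in product([False, True], repeat=4)
    if implies(p, q)
    and implies(q or (not r), p and s)
    and not implies(s, r or q)
]

# The argument is valid iff no assignment makes all premises true and the conclusion false.
print("valid" if not counterexamples else f"invalid, e.g. {counterexamples[0]}")  # prints: valid
```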
|
|
# Specify scanning exclusions
## Standard naming conventions
Sophos Anti-Virus validates the paths and file names of scanning exclusion items against standard Windows naming conventions. For example, a folder name may contain spaces but may not contain only spaces.
## Multiple file extensions
File names with multiple extensions are treated as if the last extension is the extension and the rest are part of the file name:
MySample.txt.doc = file name MySample.txt + extension .doc.
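For instance (an illustrative Python check, not part of the product), a standard extension split gives the same result:

```python
import os

# Only the last suffix counts as the extension; the rest stays in the file name.
name, ext = os.path.splitext("MySample.txt.doc")
print(name, ext)  # MySample.txt .doc
```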
## Excluding specific files, folders, processes, or drives
| Item | How to specify it | Examples | Notes |
| --- | --- | --- | --- |
| Specific file | Specify both the path and file name to exclude a specific file. The path can include a drive letter or network share name. | `C:\Documents\CV.doc`, `\\Server\Users\Documents\CV.doc` | To make sure that exclusions are always applied correctly, add both the long and 8.3-compliant file and folder names, for example `C:\Program Files\Sophos\Sophos Anti-Virus` and `C:\Progra~1\Sophos\Sophos~1`. |
| Specific process | Specify both the path and the file name to exclude a specific executable file (process). | `C:\Windows\notepad.exe` | You must specify the full path. |
| All files with the same name | Specify a file name without a path to exclude all files with that name wherever they are located in the file system. | `spacer.gif` | |
| Everything on a drive or network share | Specify a drive letter or network share name to exclude everything on that drive or network share. | `C:`, `\\Server\<sharename>\` | When you specify a network share, include a trailing slash after the share name. |
| Specific folder | Specify a folder path including a drive letter or network share name to exclude everything in that folder and below. | `D:\Tools\logs\` | Include a trailing slash after the folder name. |
| All folders with the same name | Specify a folder path without a drive letter or network share name to exclude everything from that folder and below on any drive or network share. | `\Tools\logs\` (excludes `C:\Tools\logs\` and `\\Server\Tools\logs\`) | You must specify the entire path up to the drive letter or network share name. In this example, specifying `\logs\` would not exclude any files. |
## Wildcards
You can use the wildcards shown in this table.
Note Only * and ? can be used on Windows Server 2003.
* (Star) Zero or more of any character except \ or /. For example:
c:\*\*.txt excludes all files named *.txt in the top level folders on C:\.
Note You cannot use * to exclude a folder.
** (Star Star) Zero or more of any characters including \ and /, when bracketed by \ or / characters or used at the start or end of an exclusion.
Any other use of ** is treated as a single * and matches zero or more of any character except \ or /.
For example:
• c:\foo\**\bar matches: c:\foo\bar, c:\foo\more\bar, c:\foo\even\more\bar
• **\bar matches c:\foo\bar
• c:\foo\** matches c:\foo\more\bar
• c:\foo**bar matches c:\foomorebar but NOT c:\foo\more\bar
\ (Backslash) Either \ or /.
/ (Forward slash) Either / or \.
? (Question mark) One single character, unless at the end of a string where it can match zero characters.
. (Period) A period OR the empty string at the end of a filename, if the pattern ends in a period and the filename does not have an extension. Note that:
• *.* matches all files
• *. matches all files without an extension
• "foo." matches "foo" and" "foo."
Examples
Here are some examples of the use of wildcards.
| Expression | Interpreted as | Description |
| --- | --- | --- |
| `foo` | `**\foo` | Exclude any file named foo (in any location). |
| `foo\bar` | `**\foo\bar` | Exclude any file named bar in a folder named foo (in any location). |
| `*.txt` | `**\*.txt` | Exclude all files named *.txt (in any location). |
| `C:` | `C:` | Exclude drive C: from scanning (including the drive's master boot record). |
| `C:\` | `C:\` | Exclude all files on drive C: from scanning (but scan the drive's master boot record). |
| `C:\foo\` | `C:\foo\` | Exclude all files and folders underneath C:\foo, including C:\foo itself. |
| `C:\foo\*.txt` | `C:\foo\*.txt` | Exclude all files contained in C:\foo named *.txt. |
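As a rough illustration of how such patterns behave (this is a hypothetical sketch, not the product's actual matching engine), the wildcard rules above can be approximated by translating an exclusion into a regular expression:

```python
import re

def exclusion_to_regex(pattern: str) -> re.Pattern:
    """Translate an exclusion pattern into a regex, approximately.

    Illustrative only: it handles *, **, ? and the two separators as
    described above, but skips some edge cases (for example the
    trailing-period rule).
    """
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if c == '*':
            if i + 1 < len(pattern) and pattern[i + 1] == '*':
                # ** crosses \ and / only when bracketed by separators
                # or used at the start or end of the exclusion.
                prev_ok = i == 0 or pattern[i - 1] in '\\/'
                next_ok = i + 2 == len(pattern) or pattern[i + 2] in '\\/'
                out.append('.*' if (prev_ok and next_ok) else r'[^\\/]*')
                i += 2
                continue
            out.append(r'[^\\/]*')                       # * : anything except \ or /
        elif c == '?':
            # One character, or zero characters at the end of the pattern.
            out.append(r'[^\\/]?' if i == len(pattern) - 1 else r'[^\\/]')
        elif c in '\\/':
            out.append(r'[\\/]')                         # \ and / are interchangeable
        else:
            out.append(re.escape(c))
        i += 1
    return re.compile('^' + ''.join(out) + '$', re.IGNORECASE)

print(bool(exclusion_to_regex(r'c:\*\*.txt').match(r'C:\docs\notes.txt')))        # True
print(bool(exclusion_to_regex(r'c:\foo\**\bar').match(r'c:\foo\even\more\bar')))  # True
print(bool(exclusion_to_regex(r'c:\foo**bar').match(r'c:\foo\more\bar')))         # False
```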
## Variables for exclusions
You can use variables when you set up scanning exclusions.
The table below shows the variables and examples of the locations they correspond to on each operating system.
| Variable | Windows 7 or later, Windows Server 2008 or later | Windows Server 2003, Windows XP, Windows Vista |
| --- | --- | --- |
| `%allusersprofile%\` | `C:\ProgramData\` | `C:\Documents and Settings\All Users\` |
| `%appdata%\` | `C:\Users\*\AppData\Roaming\` | `C:\Documents and Settings\*\Application Data\` |
| `%commonprogramfiles%\` | `C:\Program Files\Common Files\` | `C:\Program Files\Common Files\` |
| `%commonprogramfiles(x86)%\` | `C:\Program Files (x86)\Common Files\` | `C:\Program Files (x86)\Common Files\` |
| `%localappdata%\` | `C:\Users\*\AppData\Local\` | `C:\Documents and Settings\*\Local Settings\Application Data\` |
| `%programdata%\` | `C:\ProgramData\` | `C:\Documents and Settings\All Users\Application Data\` |
| `%programfiles%\` | `C:\Program Files\` | `C:\Program Files\` |
| `%programfiles(x86)%\` | `C:\Program Files (x86)\` | `C:\Program Files (x86)\` |
| `%systemdrive%\` | `C:` | `C:` |
| `%systemroot%\` | `C:\Windows\` | `C:\Windows\` |
| `%temp%\` or `%tmp%\` | `C:\Users\*\AppData\Local\Temp\` | `C:\Documents and Settings\*\Local Settings\Temp\` |
| `%userprofile%\` | `C:\Users\*\` | `C:\Documents and Settings\*\` |
| `%windir%\` | `C:\Windows\` | `C:\Windows\` |
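If a tool needed to resolve such variables itself, a simple substitution like the hypothetical helper below is one way to do it. The mapping shown is just part of the Windows 7+ column of the table, and the helper is not part of Sophos Anti-Virus:

```python
# Hypothetical mapping from variables to concrete paths (Windows 7+ column above).
VARIABLES = {
    "%appdata%": r"C:\Users\*\AppData\Roaming",
    "%programdata%": r"C:\ProgramData",
    "%windir%": r"C:\Windows",
}

def expand(exclusion: str) -> str:
    # Replace each known variable with its concrete path before matching.
    for var, path in VARIABLES.items():
        exclusion = exclusion.replace(var, path)
    return exclusion

print(expand(r"%appdata%\SomeApp\cache"))  # C:\Users\*\AppData\Roaming\SomeApp\cache
```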
|
|
# Tag Info
## When to Use this Tag
Use when discussing the origin or action of classical forces, i.e. quantities causing an acceleration of a body. If you already know all forces relevant to your question and are now interested in the dynamics of the problem, use instead. You might also want to tag the question as one of the other tags mentioned in . If you know the source of a (classical) force, tag the question as , , etc., too.
For a quantum field theoretic approach to forces, use either and/or .
## Introduction
Newton’s Laws are based on the idea of forces, quantities that cause the body they act on to accelerate proportional to their magnitude and parallel to their direction. Forces can have many different origins, but most of these can be traced back to four fundamental forces: gravity, electromagnetism, the weak interaction and the strong interaction. Only gravity and electromagnetism can be described in a classical setting.
The most important property of the forces acting on a body is that they sum up. That is, to find the net force acting on a body (the $\vec F$ in Newton’s second law), one can sum over all external forces $\vec F_i$ acting on said body:
$$\tag{1} \sum_i \vec F_i = m \vec a \quad.$$
It is often difficult to find the forces relevant to a problem at hand. Lagrangian mechanics solves this by deriving the equations of motion (such as (1)) from the variational principle.
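As a trivial numerical illustration of equation (1) (this example is not part of the tag description), summing two hypothetical external forces on a 2 kg body gives the net force and the resulting acceleration:

```python
# Hypothetical values: two external forces (in newtons) acting on a 2 kg body.
mass = 2.0
forces = [(3.0, 0.0), (1.0, 4.0)]

# Net force is the component-wise sum of all external forces, as in equation (1).
net_force = tuple(sum(c) for c in zip(*forces))        # (4.0, 4.0) N
acceleration = tuple(f / mass for f in net_force)      # (2.0, 2.0) m/s^2
print(net_force, acceleration)
```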
|
|
International Association for Cryptologic Research
IACR News Central
Here you can see all recent updates to the IACR webpage. These updates are also available:
Now viewing news items related to:
23 May 2017
Machine learning models hosted in a cloud service are increasingly popular but risk privacy: clients sending prediction requests to the service need to disclose potentially sensitive information. In this paper, we explore the problem of privacy-preserving predictions: after each prediction, the server learns nothing about clients' input and clients learn nothing about the model.
We present MiniONN, the first approach for transforming an existing neural network to an oblivious neural network supporting privacy-preserving predictions with reasonable efficiency. Unlike prior work, MiniONN requires no change to how models are trained. To this end, we design oblivious protocols for commonly used operations in neural network prediction models. We show that MiniONN outperforms existing work in terms of response latency and message sizes. We demonstrate the wide applicability of MiniONN by transforming several typical neural network models trained from standard datasets.
The goal of leakage-resilient cryptography is to construct cryptographic algorithms that are secure even if the adversary obtains side-channel information from the real world implementation of these algorithms. Most of the prior works on leakage-resilient cryptography consider leakage models where the adversary has access to the leakage oracle before the challenge-ciphertext is generated (before-the-fact leakage). In this model, there are generic compilers that transform any leakage-resilient CPA-secure public key encryption (PKE) scheme to its CCA-2 variant using Naor-Yung type of transformations. In this work, we give an efficient generic compiler for transforming a leakage-resilient CPA-secure PKE to leakage-resilient CCA-2 secure PKE in presence of after-the-fact split-state (bounded) memory leakage model, where the adversary has access to the leakage oracle even after the challenge phase. The salient feature of our transformation is that the leakage rate (defined as the ratio of the amount of leakage to the size of secret key) of the transformed after-the-fact CCA-2 secure PKE is same as the leakage rate of the underlying after-the-fact CPA-secure PKE, which is $1-o(1)$. We then present another generic compiler for transforming an after-the-fact leakage-resilient CCA-2 secure PKE to a leakage-resilient authenticated key exchange (AKE) protocol in the bounded after-the-fact leakage-resilient eCK (BAFL-eCK) model proposed by Alawatugoda et al. (ASIACCS'14). To the best of our knowledge, this gives the first compiler that transform any leakage-resilient CCA-2 secure PKE to an AKE protocol in the leakage variant of the eCK model.
An emerging direction for authenticating people is the adoption of biometric authentication systems. Biometric credentials are becoming increasingly popular as a means of authenticating people due to the wide range of advantages that they provide with respect to classical authentication methods (e.g., password-based authentication). The most characteristic feature of this authentication method is the naturally strong bond between a user and her biometric credentials. This very same advantageous property, however, raises serious security and privacy concerns in case the biometric trait gets compromised. In this article, we present the most challenging issues that need to be taken into consideration when designing secure and privacy-preserving biometric authentication protocols. More precisely, we describe the main threats against privacy-preserving biometric authentication systems and give directions on possible countermeasures in order to design secure and privacy-preserving biometric authentication protocols.
Many block ciphers use permutations defined over the finite field $\mathbb{F}_{2^{2k}}$ with low differential uniformity, high nonlinearity, and high algebraic degree to provide confusion. Due to the lack of knowledge about the existence of almost perfect nonlinear (APN) permutations over $\mathbb{F}_{2^{2k}}$, which have lowest possible differential uniformity, when $k>3$, constructions of differentially 4-uniform permutations are usually considered. However, it is also very difficult to construct such permutations together with high nonlinearity; there are very few known families of such functions, which can have the best known nonlinearity and a high algebraic degree. At Crypto'16, Perrin et al. introduced a structure named butterfly, which leads to permutations over $\mathbb{F}_{2^{2k}}$ with differential uniformity at most 4 and very high algebraic degree when $k$ is odd. It is posed as an open problem in Perrin et al.'s paper and solved by Canteaut et al. that the nonlinearity is equal to $2^{2k-1}-2^k$. In this paper, we extend Perrin et al.'s work and study the functions constructed from butterflies with exponent $e=2^i+1$. It turns out that these functions over $\mathbb{F}_{2^{2k}}$ with odd $k$ have differential uniformity at most 4 and algebraic degree $k+1$. Moreover, we prove that for any integer $i$ and odd $k$ such that $\gcd(i,k)=1$, the nonlinearity equality holds, which also gives another solution to the open problem proposed by Perrin et al. This greatly expands the list of differentially 4-uniform permutations with good nonlinearity and hence provides more candidates for the design of block ciphers.
We devise a virtual black-box (VBB) obfuscator for querying whether set elements are stored within Bloom filters, with security based on the Ring Learning With Errors (RLWE) problem and strongly universal hash functions. Our construction uses an abstracted encoding scheme that we instantiate using the Gentry, Gorbunov and Halevi (GGH15) multilinear map, with an explicit security reduction to RLWE. This represents an improvement on the functionality and security guarantees compared with the conjunction obfuscator introduced by Brakerski et al. (ITCS 2016), where security follows from a non-standard RLWE variant. Immediate applications of our work arise from any common usage of Bloom filters, such as efficient set intersection testing. Our obfuscated program allows this functionality to be executed in a non-interactive manner whilst preventing the natural leakage that occurs when providing offline access to a Bloom filter. Compared to more general obfuscators for evasive functions, we demonstrate a significant asymptotic reduction in size and required computation for obfuscating set intersection queries. The obfuscator of Wichs and Zirdelis (EPRINT 2017) requires $$O(4^{n \log n})$$ encodings for obfuscating circuits computing the intersection of sets of size $$n$$, requiring the usage of additional primitives such as FHE to allow sets of polynomial size. Our construction requires only $$O(kn)$$ total encodings and operations for evaluation, where $$k << n$$. Moreover, the size of our obfuscator is independent of the size of the elements that are contained in the set. Our results, alongside recent and concurrent work, can be seen as another step forward in obfuscating wider classes of evasive functions using standard assumptions and models.
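For context, the sketch below is not the construction from this abstract; it is just a plain, unprotected version of the functionality the obfuscator protects. A Bloom filter answers set-membership queries with possible false positives but no false negatives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter supporting insertion and probabilistic membership queries."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item: bytes):
        # Derive k bit positions by hashing the item with k different prefixes.
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: bytes) -> bool:
        # May return a false positive, never a false negative.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add(b"alice@example.com")
print(b"alice@example.com" in bf)  # True
print(b"bob@example.com" in bf)    # False (with high probability)
```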
The mechanism of traditional Searchable Symmetric Encryption (SSE) is pay-then-use: if a user wants to search for documents that contain particular keywords, he first needs to pay the server, and only then can he use the search service. Under this arrangement, undesirable things can happen. After the user pays the service fee, the server may disappear (for example because of poor management) or return nothing, in which case the money the user paid cannot be recovered quickly. Alternatively, the server may return incorrect document sets to the user in order to save its own cost. Once such events happen, an arbitration institution is needed to mediate, which takes a long time; besides, to settle the dispute the user has to pay the arbitration institution as well. Ideally, we would like the user, on realizing that the server has a tendency to cheat in the search task, to be able to withdraw his money immediately and automatically to safeguard his rights. However, the existing SSE protocols cannot satisfy this demand.
To resolve this dilemma, we find a compromise by introducing the blockchain into SSE. Our scheme achieves the three goals stated below. First, if the server returns nothing to the user after obtaining the search token, the user receives compensation from the server, because the server can infer some important information from the index and this token; in addition, the user does not pay the service charge. Second, if the documents that the server returns are false, the server receives no service fee and is, moreover, punished. Finally, the user, having received some bitcoin from the server at the beginning, might terminate the protocol; in that situation the server is a victim, so to prevent such a thing from happening the server broadcasts a transaction to redeem his pledge after an appointed time.
Contract signing protocols have been proposed and analyzed for more than three decades now. One of the main problems that appeared while studying such schemes is the impossibility of achieving both fairness and guaranteed output delivery. As workarounds, cryptographers have put forth three main categories of contract signing schemes: gradual release, optimistic and concurrent or legally fair schemes. Concurrent signature schemes or legally fair protocols do not rely on trusted arbitrators and, thus, may seem more attractive for users. Boosting user trust in such manner, an attacker may cleverly come up with specific applications. Thus, our work focuses on embedding trapdoors into contract signing protocols. In particular, we describe and analyze various SETUP (Secretly Embedded Trapdoor with Universal Protection) mechanisms which can be injected in concurrent signature schemes and legally fair protocols without keystones.
ePrint Report: Practical Strongly Invisible and Strongly Accountable Sanitizable Signatures, by Michael Till Beck, Jan Camenisch, David Derler, Stephan Krenn, Henrich C. Pöhls, Kai Samelin, Daniel Slamanig
Sanitizable signatures are a variant of digital signatures where a designated party (the sanitizer) can update admissible parts of a signed message. At PKC’17, Camenisch et al. introduced the notion of invisible sanitizable signatures that hides from an outsider which parts of a message are admissible. Their security definition of invisibility, however, does not consider dishonest signers. Along the same lines, their signer-accountability definition does not prevent the signer from falsely accusing the sanitizer of having issued a signature on a sanitized message by exploiting the malleability of the signature itself. Both issues may limit the usefulness of their scheme in certain applications. We revise their definitional framework, and present a new construction eliminating these shortcomings. In contrast to Camenisch et al.’s construction, ours requires only standard building blocks instead of chameleon hashes with ephemeral trapdoors. This makes this, now even stronger, primitive more attractive for practical use. We underpin the practical efficiency of our scheme by concrete benchmarks of a prototype implementation.
Crowdsourcing systems have gained considerable interest and adoption in recent years. They coordinate the human intelligence of individuals and businesses from all over the world to solve complex tasks. However, these central systems are subject to the weaknesses of the trust-based model of traditional financial institutions, such as a single point of failure, high service fees and privacy disclosure. In this paper, we conceptualize a blockchain-based decentralized framework for crowdsourcing, in which a requester's task can be solved by a crowd of workers without relying on central crowdsourcing systems or requiring users to register their true identities in order to access services. In particular, we present the architecture of our proposed framework and separate CrowdBC into three layers: an application layer, a blockchain layer and a storage layer. Users can register, post or receive a task securely under this structure. We enhance the scalability of crowdsourcing by expressing complex crowdsourcing logic with smart contracts. Moreover, we give a detailed scheme for the whole process of crowdsourcing and also discuss the security of the decentralized crowdsourcing framework. Finally, we implement a software prototype on Ethereum to show the validity and effectiveness of our proposed framework design for crowdsourcing.
A memory-hard function (MHF) $f_n$ with parameter $n$ can be computed in sequential time and space $n$. Simultaneously, a high amortized parallel area-time complexity (aAT) is incurred per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general purpose device). The most prevalent examples of such sensitive functions are Key Derivation Functions and password hashing algorithms where rate limits help mitigate off-line dictionary attacks. As the honest users' inputs to these functions are often (low-entropy) passwords special attention is given to a class of side-channel resistant MHFs called iMHFs.
Essentially all iMHFs can be viewed as some mode of operation (making $n$ calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called "depth-robustness") which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, up till now, all known very depth-robust DAGs are impractically complicated and little is known about their exact (i.e. non-asymptotic) depth-robustness both in theory and in practice.
In this work we build and analyze (both formally and empirically) several exceedingly simple and efficient to navigate practical DAGs for use in iMHFs and other applications. For each DAG we:
- Prove that its depth-robustness is asymptotically maximal.
- Prove bounds at least $3$ orders of magnitude better on its exact depth-robustness compared to known bounds for other practical iMHFs.
- Implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks. We find that, against all attacks, the new DAGs perform significantly better in practice than Argon2i, the most widely deployed iMHF in practice.
Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, for the best performing of the new DAGs we implement an iMHF using the Argon2i round function and code base and show that on a standard off-the-shelf CPU the new iMHF can actually be evaluated slightly faster than Argon2i (despite seemingly enjoying significantly higher aAT).
Argon2i is a data-independent memory hard function that won the password hashing competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that
- An Argon2i DAG is $\left(e,O\left(n^3/e^3\right)\right)$-reducible.
- The cumulative pebbling cost for Argon2i is at most $O\left(n^{1.768}\right)$. This improves upon the previous best upper bound of $O\left(n^{1.8}\right)$ [Alwen and Blocki, EURO S&P 2017].
- Argon2i DAG is $\left(e,\tilde{\Omega}\left(n^3/e^3\right)\right)$-depth robust. By contrast, analysis of [Alwen et al., EUROCRYPT 2017] only established that Argon2i was $\left(e,\tilde{\Omega}\left(n^3/e^2\right)\right)$-depth robust.
- The cumulative pebbling complexity of Argon2i is at least $\tilde{\Omega}\left( n^{1.75}\right)$. This improves on the previous best bound of $\Omega\left( n^{1.66}\right)$ [Alwen et al. EUROCRYPT 2017] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing.
We also show that Argon2i has high fractional depth-robustness which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
Job Posting: Post Doc in Embedded System Security, Laboratoire Hubert Curien, University of Lyon, Saint-Etienne, France
The main objective of the research in the Embedded System Security Group is to propose efficient and robust hardware architectures aimed at applied cryptography and telecom that are resistant to passive and active cryptographic attacks. Currently, the central theme of this research consists in designing architectures for secure embedded systems implemented in logic devices such as FPGAs and ASICs.
For a new project which addresses the problem of security and privacy in MPSoC architectures, we propose a Post-Doc position to work on the security evaluation of heterogeneous MPSoCs. We are looking for candidates with an outstanding Ph.D. in hardware security and a strong publication record in this field. Strong knowledge of side-channel attacks and countermeasures, as well as of digital system design (VHDL, FPGA), would be appreciated. Knowledge of French is not mandatory.
The Post-Doc position will start in September or October 2017 (flexible starting date); it is funded for 13 months.
To apply please send your detailed CV (with publication list), motivation for applying (1 page) and names of at least two people who can provide reference letters (e-mail).
Closing date for applications: 30 June 2017
Contact: Prof. Lilian BOSSUET lilian.bossuet(at)univ-st-etienne.fr
Job Posting: Post Doc in Hardware Security, Laboratoire Hubert Curien, University of Lyon, Saint-Etienne, France
The main objective of the research in the Embedded System Security Group is to propose efficient and robust hardware architectures aimed at applied cryptography and telecom that are resistant to passive and active cryptographic attacks. Currently, the central theme of this research consists in designing architectures for secure embedded systems implemented in logic devices such as FPGAs and ASICs.
For a new project which addresses the problem of the security of TRNGs against fault injection attacks, we are looking for candidates with an outstanding Ph.D. in hardware security and a strong publication record in this field. Strong knowledge of laser fault injection attacks and of VLSI design would be appreciated. Knowledge of French is not mandatory.
The Post-Doc position will start in September or October 2017; it is funded for 34 months.
To apply please send your detailed CV (with publication list), motivation for applying (1 page) and names of at least two people who can provide reference letters (e-mail).
Closing date for applications: 30 June 2017
Contact: Prof. Lilian BOSSUET lilian.bossuet(at)univ-st-etienne.fr
22 May 2017
ePrint Report: New Approach to Practical Leakage-Resilient Public-Key Cryptography, by Suvradip Chakraborty, Janaka Alawatugoda, C. Pandu Rangan
We present a new approach to construct several leakage-resilient cryptographic primitives, including public-key encryption (PKE) schemes, authenticated key exchange (AKE) protocols and low-latency key exchange (LLKE) protocols. To this end, we develop a new primitive called leakage-resilient non-interactive key exchange (LR-NIKE). We introduce a new generic security model for LR-NIKE protocols, which can be instantiated in both the bounded- and continuous-memory leakage settings. We then show a secure construction of an LR-NIKE protocol in the bounded-memory leakage setting that achieves an optimal leakage rate, i.e., $1-o(1)$. We then show how to construct the aforementioned leakage-resilient primitives from such an LR-NIKE. In particular,
We show how to construct a leakage-resilient IND-CCA-2-secure PKE scheme in the bounded-memory leakage setting from the LR-NIKE protocol. Our construction differs from the state-of-the-art constructions of leakage-resilient IND-CCA-2-secure PKE, which use hash proof techniques to achieve leakage resiliency. Moreover, our transformation preserves the leakage rate of the underlying LR-NIKE and admits a more efficient construction than the previous such PKE constructions.
We introduce a new leakage model for AKE protocols, in the bounded-memory leakage setting. We show how to construct a leakage-resilient AKE protocol starting from LR-NIKE protocol.
We introduce the first-ever leakage model for LLKE protocols, in the bounded-memory leakage setting, and the first construction of such a leakage-resilient LLKE from LR-NIKE protocol.
ePrint Report: Cryptographic Security Analysis of T-310, by Nicolas T. Courtois, Klaus Schmeh, Jörg Drobick, Jacques Patarin, Maria-Bristena Oprisanu, Matteo Scarlata, Om Bhallamudi
T-310 is an important Cold War cipher. It was the principal encryption algorithm used to protect various state communication lines in Eastern Germany throughout the 1980s. The cipher seems to be quite robust, and until now, no cryptography researcher has proposed an attack on T-310. In this paper we provide a detailed analysis of T-310 in the context of modern cryptography research and other important or similar ciphers developed in the same period. We introduce new notations which show the peculiar internal structure of this cipher in a new light. We point out a number of significant strong and weak properties of this cipher. Finally we propose several new attacks on T-310.
We study the problem of securely building single-commodity multi-market auction mechanisms. We introduce a novel greedy algorithm and its corresponding privacy-preserving implementation using secure multi-party computation. More specifically, we determine the quantity of supply and demand bids maximizing welfare. Each bid is attached to a specific market, but exchanges between different markets are allowed up to some upper limit. The general goal is for the players to bid their intended valuations without concerns about what the other players can learn. This problem is inspired by day-ahead electricity markets, where there is substantial transmission capacity between the different markets, but applies to other commodity markets like gas. Furthermore, we provide computational results with a specific C++ implementation of our algorithm and the necessary MPC primitives. We can solve problems of 1945 bids and 4 markets in 1280 seconds when online/offline phases are considered. Finally, we report on possible set-ups, workload distributions and possible trade-offs for real-life applications of our results based on this experimentation and prototyping.
Lattice-based cryptography is one of the most promising areas within post-quantum cryptography, and offers versatile, efficient, and high performance security services. The aim of this paper is to verify the correctness of the discrete Gaussian sampling component, one of the most important modules within lattice-based cryptography. In this paper, the GLITCH software test suite is proposed, which performs statistical tests on discrete Gaussian sampler outputs. An incorrectly operating sampler, for example due to hardware or software errors, has the potential to leak secret-key information and could thus be a potential attack vector for an adversary. Moreover, statistical test suites are already common for use in pseudo-random number generators (PRNGs), and as lattice-based cryptography becomes more prevalent, it is important to develop a method to test the correctness and randomness for discrete Gaussian sampler designs. Additionally, due to the theoretical requirements for the discrete Gaussian distribution within lattice-based cryptography, certain statistical tests for distribution correctness become unsuitable, therefore a number of tests are surveyed. The final GLITCH test suite provides 11 adaptable statistical analysis tests that assess the exactness of a discrete Gaussian sampler, and which can be used to verify any software or hardware sampler design.
In the implementation of many public key schemes, there is a need to implement modular arithmetic. Typically this consists of addition, subtraction, multiplication and (occasionally) division with respect to a prime modulus. To resist certain side-channel attacks it helps if implementations are "constant time". As the calculations proceed there is potentially a need to reduce the result of an operation to its remainder modulo the prime modulus. However often this reduction can be delayed, a process known as "lazy reduction". The idea is that results do not have to be fully reduced at each step, that full reduction takes place only occasionally, hence providing a performance benefit. Here we extend the idea to determine the circumstances under which reduction can be delayed to the very end of a particular public key operation.
In this paper we investigate weak keys of universal hash functions (UHFs) from their combinatorial properties. We find that any UHF has a general class of keys which makes the combinatorial properties totally disappear, and even compromises the security of UHF-based schemes, such as the Wegman-Carter scheme, the UHF-then-PRF scheme, etc. Through this class of keys, we actually get a general method to search for weak-key classes of UHFs, which is able to derive all previous weak-key classes of UHFs found by intuition or experience. Moreover, we give a weak-key class of the BRW polynomial function, which was once believed to have no weak-key issue, and exploit such weak keys to implement a distinguishing attack and a forgery attack against DTC, a BRW-based authenticated encryption scheme. Furthermore, in Grain-128a, with the linear structure revealed by weak-key classes of its UHF, we can recover any first $(32+b)$ bits of the UHF key, spending no more than $1$ encryption and $(2^{32} + b)$ decryption queries.
ePrint Report: Analyzing Multi-Key Security Degradation, by Atul Luykx, Bart Mennink, Kenneth G. Paterson
The multi-key, or multi-user, setting challenges cryptographic algorithms to maintain high levels of security when used with many different keys, by many different users. Its significance lies in the fact that in the real world, cryptography is rarely used with a single key in isolation. A folklore result, proved by Bellare, Boldyreva, and Micali for public-key encryption in EUROCRYPT 2000, states that the success probability in attacking any one of many independently keyed algorithms can be bounded by the success probability of attacking a single instance of the algorithm, multiplied by the number of keys present. Although sufficient for settings in which not many keys are used, once cryptographic algorithms are used on an internet-wide scale, as is the case with TLS, the effect of multiplying by the number of keys can drastically erode security claims. We establish a sufficient condition on cryptographic schemes and security games under which multi-key degradation is avoided. As illustrative examples, we discuss how AES and GCM behave in the multi-key setting, and prove that GCM, as a mode, does not have multi-key degradation. Our analysis allows limits on the amount of data that can be processed per key by GCM to be significantly increased. This leads directly to improved security for GCM as deployed in TLS on the Internet today.
ePrint Report: FourQ on embedded devices with strong countermeasures against side-channel attacks, by Zhe Liu, Patrick Longa, Geovandro Pereira, Oscar Reparaz, Hwajeong Seo
This work deals with the energy-efficient, high-speed and high-security implementation of elliptic curve scalar multiplication, elliptic curve Diffie-Hellman (ECDH) key exchange and elliptic curve digital signatures on embedded devices using FourQ and incorporating strong countermeasures to thwart a wide variety of side-channel attacks. First, we set new speed records for constant-time curve-based scalar multiplication, DH key exchange and digital signatures at the 128-bit security level with implementations targeting 8, 16 and 32-bit microcontrollers. For example, our software computes a static ECDH shared secret in 7.0 million cycles (or 0.9 seconds @8MHz) on a low-power 8-bit AVR microcontroller which, compared to the fastest Curve25519 and genus-2 Kummer implementations on the same platform, offers 2x and 1.4x speedups, respectively. Similarly, it computes the same operation in 559 thousand cycles on a 32-bit ARM Cortex-M4 microcontroller, achieving a factor-2.5 speedup when compared to the fastest Curve25519 implementation targeting the same platform. A similar speed performance is observed in the case of digital signatures. Second, we engineer a set of side-channel countermeasures taking advantage of FourQ's rich arithmetic and propose a secure implementation that offers protection against a wide range of sophisticated side-channel attacks, including differential power analysis (DPA). Despite the use of strong countermeasures, the experimental results show that our FourQ software is still efficient enough to outperform implementations of Curve25519 that only protect against timing attacks. Finally, we perform a differential power analysis evaluation of our software running on an ARM Cortex-M4, and report that no leakage was detected with up to 10 million traces. These results demonstrate the potential of deploying FourQ on low-power applications such as protocols for the Internet of Things.
ePrint Report: Two-Message Witness Indistinguishability and Secure Computation in the Plain Model from New Assumptions, by Saikrishna Badrinarayanan, Sanjam Garg, Yuval Ishai, Amit Sahai, Akshay Wadia
We study the feasibility of two-message protocols for secure two-party computation in the plain model, for functionalities that deliver output to one party, with security against malicious parties. Since known impossibility results rule out polynomial-time simulation in this setting, we consider the common relaxation of allowing super-polynomial simulation.

We first address the case of zero-knowledge functionalities. We present a new construction of two-message zero-knowledge protocols with super-polynomial simulation from any (sub-exponentially hard) game-based two-message oblivious transfer protocol, which we call Weak OT. As a corollary, we get the first two-message WI arguments for NP from (sub-exponential) DDH. Prior to our work, such protocols could only be constructed from assumptions that are known to imply non-interactive zero-knowledge protocols (NIZK), which do not include DDH.

We then extend the above result to the case of general single-output functionalities, showing how to construct two-message secure computation protocols with quasi-polynomial simulation from Weak OT. This implies protocols based on sub-exponential variants of several standard assumptions, including the Decisional Diffie-Hellman (DDH), Quadratic Residuosity, and Nth Residuosity assumptions. Prior works on two-message protocols either relied on some trusted setup (such as a common reference string) or were restricted to special functionalities such as blind signatures. As a corollary, we get three-message protocols for two-output functionalities, which include coin-tossing as an interesting special case. For both types of functionalities, the number of messages (two or three) is optimal.

Finally, motivated by the above, we further study the Weak OT primitive. On the positive side, we show that Weak OT can be based on any semi-honest 2-message OT with a short second message. This simplifies a previous construction of Weak OT from the Nth Residuosity Assumption. We also present a construction of Weak OT from Witness Encryption (WE) and injective one-way functions, implying the first construction of two-message WI arguments from WE. On the negative side, we show that previous constructions of Weak OT do not satisfy simulation-based security even if the simulator can be computationally unbounded.
Linear cryptanalysis makes use of statistical models that consider linear approximations over block cipher and random permutation as binary random variables. In this note we show that linear and statistical independence are equivalent properties for linear approximations of the random permutations and the block ciphers with independent pre- and post-whitening keys.
ePrint Report: Understanding RUP Integrity of COLM, by Nilanjan Datta, Atul Luykx, Bart Mennink, Mridul Nandi
The authenticated encryption scheme COLM is a third-round candidate in the CAESAR competition. Much like its antecedents COPA, ELmE, and ELmD, COLM consists of two parallelizable encryption layers connected by a linear mixing function. While COPA uses plain XOR mixing, ELmE, ELmD, and COLM use a more involved invertible mixing function. In this work, we investigate the integrity of the COLM structure when unverified plaintext is released, and demonstrate that its security highly depends on the choice of mixing function. Our results are threefold. First, we discuss the practical nonce-respecting forgery by Andreeva et al. (ASIACRYPT 2014) against COPA's XOR mixing. Then we present a nonce-misusing forgery against arbitrary mixing functions with practical time complexity. Finally, by using significantly larger queries, we can extend the previous forgery to be nonce-respecting.
ePrint Report: Improving TFHE: faster packed homomorphic operations and efficient circuit bootstrapping, by Ilaria Chillotti, Nicolas Gama, Mariya Georgieva, Malika Izabachène
In this paper, we present several methods to improve the evaluation of homomorphic functions, both for fully and for leveled homomorphic encryption. We propose two packing methods, in order to decrease the expansion factor and optimize the evaluation of look-up tables and random functions in TRGSW-based homomorphic schemes. We also extend the automata logic, introduced in [19, 12], to the efficient leveled evaluation of weighted automata, and present a new homomorphic counter called TBSR, that supports all the elementary operations that occur in a multiplication. These improvements speed-up the evaluation of most arithmetic functions in a packed leveled mode, with a noise overhead that remains additive. We finally present a new circuit bootstrapping that converts TLWE into low-noise TRGSW ciphertexts in just 137ms, which makes the leveled mode of TFHE composable, and which is fast enough to speed-up arithmetic functions, compared to the gate-by-gate bootstrapping given in [12]. Finally, we propose concrete parameter sets and timing comparison for all our constructions.
ePrint Report: Strengthening Access Control Encryption, by Christian Badertscher, Christian Matt, Ueli Maurer
Access control encryption (ACE) was proposed by Damgård et al. to enable the control of information flow between several parties according to a given policy specifying which parties are, or are not, allowed to communicate. By involving a special party, called the sanitizer, policy-compliant communication is enabled while policy-violating communication is prevented, even if sender and receiver are dishonest. To allow outsourcing of the sanitizer, the secrecy of the message contents and the anonymity of the involved communication partners is guaranteed.
This paper shows that in order to be resilient against realistic attacks, the security definition of ACE must be considerably strengthened in several ways. A new, substantially stronger security definition is proposed, and an ACE scheme is constructed which provably satisfies the strong definition under standard assumptions.
Three aspects in which the security of ACE is strengthened are as follows. First, CCA security (rather than only CPA security) is guaranteed, which is important since senders can be dishonest in the considered setting. Second, the revealing of an (unsanitized) ciphertext (e.g., by a faulty sanitizer) cannot be exploited to communicate more in a policy-violating manner than the information contained in the ciphertext. We illustrate that this is not only a definitional subtlety by showing how in known ACE schemes, a single leaked unsanitized ciphertext allows for an arbitrary amount of policy-violating communication. Third, it is enforced that parties specified to receive a message according to the policy cannot be excluded from receiving it, even by a dishonest sender.
In 1996, Jackson and Martin proved that a strong ideal ramp scheme is equivalent to an orthogonal array. However, there was no good characterization of ideal ramp schemes that are not strong. Here we show the equivalence of ideal ramp schemes to a new variant of orthogonal arrays that we term augmented orthogonal arrays. We give some constructions for these new kinds of arrays, and, as a consequence, we also provide parameter situations where ideal ramp schemes exist but strong ideal ramp schemes do not exist.
Using whitening keys is a well-understood means of increasing the key length of any given cipher. Especially as it has been known ever since Grover's seminal work that the effective key length is reduced by a factor of two when considering quantum adversaries, it seems tempting to use this simple and elegant way of extending the key length of a given cipher to increase the resistance against quantum adversaries. However, as we show in this work, using whitening keys does not increase the security in the quantum-CPA setting significantly. For this we present a quantum algorithm that breaks the construction with whitening keys in essentially the same time complexity as Grover's original algorithm breaks the underlying block cipher. Technically, this result is based on combining the quantum algorithms of Grover and Simon for the first time in the cryptographic setting, which might well have other applications.
Previously I proposed fully homomorphic public-key encryption (FHPKE) based on the discrete logarithm problem, which is vulnerable to quantum computer attacks. In this paper I propose FHPKE based on the multivariate discrete logarithm assumption and the multivariate computational Diffie–Hellman assumption. This encryption scheme is thought to withstand quantum computer attacks. Though I can construct this scheme over many non-commutative rings, I will adopt the FHPKE scheme based on the octonion ring as the typical example for showing how this scheme is constructed. The multivariate discrete logarithm problem (MDLP) is defined such that, given f(x), g(x), h(x) and a prime q, the final goal is to find integers m and n such that h(x) = g^(-n)(f^m(g^(n)(x))) mod q over the octonion ring.
ePrint Report: Card-Based Protocols Using Unequal Division Shuffle, by Akihiro Nishimura, Takuya Nishida, Yu-ichi Hayashi, Takaaki Mizuki, Hideaki Sone
Card-based cryptographic protocols can perform secure computation of Boolean functions. Cheung et al. presented an elegant protocol that securely produces a hidden AND value using five cards; however, it fails with a probability of 1/2. The protocol uses an unconventional shuffle operation called unequal division shuffle; after a sequence of five cards is divided into a two-card portion and a three-card portion, these two portions are randomly switched. In this paper, we first show that the protocol proposed by Cheung et al. securely produces not only a hidden AND value but also a hidden OR value (with a probability of 1/2). We then modify their protocol such that, even when it fails, we can still evaluate the AND value. Furthermore, we present two five-card copy protocols using unequal division shuffle. Because the most efficient copy protocol currently known requires six cards, our new protocols improve upon the existing results. We also design a general copy protocol that produces multiple copies using unequal division shuffle.
|
|
44 views
Suppose host A is sending a large file to host B over a TCP connection.
The two end hosts are 10msec apart (20msec RTT) connected by a 1Gbps link.
Assume that they are using a packet size of 1000 bytes to transmit the file.
Also assume for simplicity that ACK packets are extremely small and can be ignored.
At least how big would the window size (in packets) have to be for the channel utilization to be greater than 80%?
I am getting the answer as 2001.
$\eta \gt 0.8$
My final calculation gave me
$W_s \gt 2000.8$ (window size)
and hence my answer came to 2001. Is it correct?
0
I don't know about the correctness, but I'm also getting the same.
0
By your approach you are getting Ws > 2000.8; but suppose the question had asked for efficiency equal to 80% instead of greater than 80%, then what would your window size be?
+1 vote
Bandwidth-delay product from A to B = 10^9 * 20 * 10^-3 = 2 * 10^7 bits.
For 80% utilization, that is 2 * 10^7 * 0.8 = 1.6 * 10^7 bits.
So the number of packets = (1.6 * 10^7) / (8 * 1000) = 2000 packets, plus 1 more because the question asks for efficiency strictly greater than 80%, giving 2001 packets. (A quick script to check this is shown below.)
edited
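A quick numerical check (illustrative Python, using the standard sliding-window utilization formula $\eta = W \cdot T_x / (T_x + RTT)$, which reproduces the 2000.8 figure above):

```python
import math

# Values from the question.
bandwidth_bps = 1e9           # 1 Gbps link
rtt_s = 20e-3                 # 20 ms round-trip time
packet_bits = 1000 * 8        # 1000-byte packets
target = 0.8                  # required utilization (strictly greater than)

tx_s = packet_bits / bandwidth_bps              # transmission time of one packet: 8 us
window_for_full_link = (tx_s + rtt_s) / tx_s    # packets needed for 100% utilization: 2501
min_window = target * window_for_full_link      # ~2000.8

# Utilization must be strictly greater than 80%, so round up to the next integer.
print(math.floor(min_window) + 1)               # 2001
```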
0
Efficiency has to be strictly greater than $80\%$, so the inequality has to be maintained. Hence the answer must be 2001.
0
It will be 2001. (The question asks for the least window size, so we take the ceiling here.)
|
|
# ThoughtWorks Tech Radar : Mechanical Sympathy
My take on Mechanical Sympathy (from the ThoughtWorks Technology Radar), which I presented at the Sheraton Bangalore, is based off the content below.
“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.” – Henry Petroski
“Premature optimisation is the root of all evil.” – Donald Knuth
The Hibernian Express is the first transatlantic fiber-optic communications cable to be laid in 10 years, at a cost of \$300 million. The current speed record for establishing transatlantic communication is 65 milliseconds. The Hibernian Express will reduce that. By all of 6 milliseconds. If that isn’t a very expensive optimisation, I do not know what is.
In almost all cases, the code that we write is abstracted away from the internals of the hardware. This is a desirable and necessary thing. However, particular domains require applications to operate under a set of exacting constraints. Recent interest in Ultra-Low Latency Trading in the HFT arena typically requires order volumes of over 5000 orders a second with order and execution report round trip times of 100 microseconds. In such cases, tailoring your architecture to handle concurrency is no longer an idle option, it is a necessity. Even for more prosaic applications, it is not uncommon to need low latency data structures.
Usually, requiring low latency boils down to minimising time spent in concurrency management with respect to actual logic processing. Today’s programming languages provide a variety of constructs to model concurrent operations. Locks, mutexes, memory barriers, to name a few. Even at the opcode level, you may use CAS operations, which are cheaper than locks. However, to move to the upper end of the curve, to get to really low latency, many designers eschew all of these constructs.
One good example is the Disruptor, which is a high performance concurrency framework for Java. In a series of excellent articles, Martin Thompson, one of the authors of the Disruptor framework, discusses techniques to reduce latency by write combining, writing lock free algorithms, and the Single Writer principle.
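To make the Single Writer principle concrete, here is a toy single-producer/single-consumer ring buffer (a sketch of mine in Python, purely illustrative: the Disruptor itself is Java and relies on cache-line padding and memory barriers, whereas this sketch leans on CPython's GIL for the atomicity of its index updates).

```python
class SpscRingBuffer:
    """Toy single-producer / single-consumer ring buffer (illustration only)."""

    def __init__(self, size=1024):
        assert size & (size - 1) == 0, "size must be a power of two"
        self._mask = size - 1
        self._slots = [None] * size
        self._head = 0   # next slot the single writer will fill
        self._tail = 0   # next slot the single reader will take

    def offer(self, item):
        if self._head - self._tail > self._mask:   # buffer full
            return False
        self._slots[self._head & self._mask] = item
        self._head += 1                            # publish only after the write
        return True

    def poll(self):
        if self._tail == self._head:               # buffer empty
            return None
        item = self._slots[self._tail & self._mask]
        self._tail += 1
        return item
```

Because exactly one thread ever advances `_head` and exactly one ever advances `_tail`, neither side needs a lock; that is the whole trick.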
Even if lock contention is an issue, there are other ways of reducing latency. One example is a team working to increase the performance of their custom JMS implementation, who wrote their own implementation of the JDK Executor interface – the Executor interface is responsible for firing off Runnable jobs, by the way. This resulted in an improvement by a factor of 10.
One of the more explicit forms of mechanical sympathy is when you rewrite software to execute on specially designed hardware. GPUs and FPGAs are commonplace in financial computing.
Indirectly, this form of thinking also seems to have influenced the design of single-threaded servers with asynchronous I/O. In a multi-threaded server, you, or rather the server, are faced constantly with having to switch contexts between threads. With a single thread model, latency is greatly reduced.
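A minimal illustration of that single-threaded, asynchronous style (my own sketch using Python's asyncio, not any particular server mentioned above): one thread, one event loop, many connections.

```python
import asyncio

async def handle(reader, writer):
    # One coroutine per connection, all multiplexed on a single thread:
    # no context switches between OS threads, no locks around shared state.
    data = await reader.read(1024)
    writer.write(data)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```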
# ThoughtWorks Tech Radar : Agile Analytics
My take on Agile Analytics from the ThoughtWorks Technology Radar, which I presented at the Sheraton Bangalore today, is based off the following document.
Patient: Will I survive this risky operation?
Surgeon: Yes, I’m absolutely sure that you will survive the operation.
Patient: How can you be so sure?
Surgeon: Well, 9 out of 10 patients die in this operation, and yesterday my ninth patient died.
Andrew Lang, a Scottish writer and collector of folk tales, once remarked that many people use statistics as a drunken man uses lamp-posts…for support rather than illumination. Even so, we have come a long way from the 9th century, when Al Kindi used statistics to decipher encrypted messages and developed the first code breaking algorithm, in Baghdad – incidentally, he was instrumental in introducing the base 10 Indian numeral system to the Islamic and the Christian world.
1654 – Pascal and Fermat create the mathematical theory of probability,
1761 – Thomas Bayes proves Bayes’ theorem,
1948 – Shannon’s Mathematical Theory of Communication defines capacity of communication channels in terms of probabilities. Bit of a game changer, that one. All our designs of communication networks and error-correction algorithms stem from insights found in that work.
Today, we realise that the pace at which we collect data far exceeds our capability to make sense of it. Data is everywhere, *literally*. The blood cells in your body trying to determine whether that molecule is an oxygen molecule or not? That is data. Your build breaking? That is data. You’re running a static analysis tool to check your test coverage? Yeah, that is data analysis.
Unfortunately, we are at that point where our opinions about whether a piece of data is relevant to analysis, form far too slowly. How slowly? Well, human reflexes take milliseconds, while CPUs and GPUs function on the order of nanoseconds. That is six orders of magnitude. And that is how slow we are.
This, we cannot afford to be. In the past century, data collection was the bottleneck. Datasets larger than a few kilobytes were unheard of. Now, we are playing in gigabyte territory. When I was consulting with a telecommunications company, a few months back, all calls through their network would generate upwards of 600 MB of data per day.
Volume is not the only dimension of this deluge of data. The rate of flow of incoming data gives us pause too. Think of the stock markets, imagine having to make decisions based on data, which within a few minutes (or even a few seconds), will become obsolete. Analytics is not a goal in itself. It is merely an aid to decision-making. Given the speed at which new data is collected, and the speed at which old data fades into obsolescence, we must be prepared to deal with incomplete, fast-flowing data.
Think of it as a stream from which you scoop a handful of water to determine the level of bacteria in the water. You only have limited information from a single sample, but, if you sample from multiple points upstream and downstream, you’ll finally get a fairly correct answer to your question.
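That "scoop a handful from the stream" idea has a standard algorithmic counterpart, reservoir sampling, which keeps a fixed-size uniform sample of a stream whose length you do not know in advance. A minimal sketch (mine, not from the talk):

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            # Item i is kept with probability k / (i + 1).
            j = random.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# e.g. a 100-item sample drawn from a million readings
print(reservoir_sample(range(1_000_000), 100)[:5])
```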
Agile Analytics conjures up images of iterations, collaborating with customers, and fast feedback when working on DW/BI projects. Indeed, this is what Ken Collier talks about in his book Agile Analytics. However, I wish to tackle a different angle. Hal Varian, Chief Economist at Google, believes that the dream job of this decade is that of a statistician. Everyone has data. It’s harder to get opinions about the data. It’s harder to, as he says, “tell a story about this data”.
We’re at a moment in the software industry where lots of things have begun to intersect with our field of interest. Statistics is one of them. Assume you are a software engineer, and have more than a peripheral interest in this field. What do you do?
Learn classical statistics. Learn Bayesian statistics. You probably hated those textbooks, so don’t use them; there are tons of more useful educational resources on the Web. Get into machine learning. Understand that machine learning is not some super-exotic field of study. I’ll risk a limb and say that Machine Learning is just More Statistics under a trendy name.
Get acquainted with a few languages and libraries: R, NumPy, Julia. In fact, I’m super-excited by Julia because it offers native building blocks for distributed computation. Read a few papers on real-world distributed systems.
I do not talk about this because you’ll be building a distributed analytics engine from scratch (though you could). You will, through study of the subjects above, gain a much deeper understanding of why you should be analysing something, and also how such systems are built. You’re all, regardless of your previous background, engineers.
You will also encounter a lot of literature concerning visualisation while doing this. Visualisation is one of those things we don’t really pay much attention to, until we really need it. Bars, graphs, colours: anything in lieu of numbers that can give us some visual indication of what’s going on. Health check pages, for example, are a useful way of integrating diagnostic information of a system.
# ThoughtWorks Tech Radar : GPGPU
My take on GPGPUs from the ThoughtWorks Technology Radar, which I presented at the Sheraton Bangalore today, is based off the following document.
Seth Lloyd, a professor of mechanical engineering at MIT, once asked what the fastest computer in the universe would look like. Throwing aside concerns of fabrication, a circuit is only as fast as the speed at which you can flip a bit from 0 to 1, or vice versa. The faster the circuit, the more energy it consumes. Plugging in theoretical numbers, Lloyd came to the conclusion that a reasonably sized computer running at that speed would not look like one of the contraptions in front of you. In fact, it would become, not to put too fine a point on it, a black hole.
Well, we are somewhat far away from that realisation, but the fact is that most of us do not realise the potential that exists on each and every one of our laptops and desktops. Allow me an example. The ATI Radeon 5870 GPU, codenamed Osprey, packs enough processing units to support 30,000 threads. Not to put too fine a point on it, I can take this room, and all of you in it, and replicate it 3,000 times, and have each of you do a calculation, and this chunk of sand and solder would still be faster. And smaller.
We are at the point that vendors have begun to release GPUs which are specifically not designed for graphics processing. I hesitate to term such units as GPUs; take for example the NVidia Tesla. The Tesla is not even capable of outputting to video, by default. Yet, it powers the Tianhe-1, the second fastest supercomputer on the planet. Again, take the Titan supercomputer. Running on close to 20,000 Tesla GPUs, it is a public access supercomputer, meaning that should you feel the need to do some terascale research, you can log onto it, right now.
Today, there are multiple streams of vendor-specific GPU technology. They are all based on the venerable C99 standard, with language extensions. In case of NVidia, it is CUDA; in case of AMD, it is the Stream Processing SDK. However, the portable option which works across GPUs, as well as CPUs, and is gaining traction is OpenCL.
Computing on the GPU requires a programming model not unlike the well-known MapReduce model, that is Stream Processing. It requires you to create a computational kernel, in effect a function, which is then applied to blocks of data. There are other constraints on the kernel code that you can write. Essentially, to take advantage of stream processing, look for problems which involve high compute intensity, and near-total data parallelism.
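To make the model concrete, here is a CPU-bound stand-in (my own sketch in Python/NumPy; on a real GPU the kernel would be written in OpenCL C or CUDA and launched over thousands of work-items): a kernel function applied independently to blocks of data, with no dependencies between blocks.

```python
import numpy as np

def kernel(block):
    # The "computational kernel": applied independently to each block of data.
    # No block depends on any other, which is the data-parallel part.
    return np.sqrt(block) * 2.5 + 1.0

data = np.random.rand(1_000_000)
blocks = np.array_split(data, 64)                     # the blocks of data
result = np.concatenate([kernel(b) for b in blocks])  # a GPU would do these in parallel
```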
Bioinformatics, Computational Finance, Medical Imaging, Molecular Dynamics, Weather and Climate Forecasting…anywhere you have a ton of data waiting to be crunched, GPU computing is a perfect fit. Even Hadoop has support for CUDA at this moment. GPUs are now ubiquitous, I’d probably risk calling them commodity hardware at this point. They sit in almost all of your machines, powering your displays, rendering your games. Never have developers been privy to so much power within so little space. And it’s not even a black hole yet. So, go forth and compute!
# Two-phase commit : Indistinguishable state failure scenario
I’ll review the most interesting failure scenario for the 2PC protocol. There are excellent explanations of 2PC out there, and I won’t bother too much with the basic explanation. The focus of this post is a walkthrough of the indistinguishable state scenario, where neither a global commit, nor a global abort command can be issued.
# Parallelisation : Writing a linear matrix algorithm for Map-Reduce
There are multiple ways to skin matrix multiplication. If you begin to think about it, there are probably 4 or 5 ways in which you could approach matrix multiplication. In this post, we look at another, easier, way of multiplying two matrices, and attempt to build a MapReduce version of the algorithm. Before we dive into the code itself, we’ll quickly review the actual algebraic process we’re trying to parallelise.
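I do not know exactly which formulation the post settles on, but the standard single-pass MapReduce formulation gives the flavour (a toy, in-memory sketch of mine): the map phase emits each matrix entry keyed by every output cell it contributes to, and the reduce phase sums the partial products per cell.

```python
from collections import defaultdict

# Toy mock of the map and reduce phases for C = A x B.
# Matrices are dicts {(row, col): value}; A is m x n, B is n x p.

def map_phase(A, B, m, n, p):
    for (i, j), a in A.items():
        for k in range(p):
            yield (i, k), ("A", j, a)      # A[i][j] is needed by every C[i][k]
    for (j, k), b in B.items():
        for i in range(m):
            yield (i, k), ("B", j, b)      # B[j][k] is needed by every C[i][k]

def reduce_phase(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    C = {}
    for (i, k), values in grouped.items():
        a_row = {j: v for tag, j, v in values if tag == "A"}
        b_col = {j: v for tag, j, v in values if tag == "B"}
        C[(i, k)] = sum(a_row[j] * b_col[j] for j in a_row if j in b_col)
    return C

A = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
B = {(0, 0): 5, (0, 1): 6, (1, 0): 7, (1, 1): 8}
print(reduce_phase(map_phase(A, B, 2, 2, 2)))
# C = {(0, 0): 19, (0, 1): 22, (1, 0): 43, (1, 1): 50} (key order may vary)
```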
# Parallelisation : Refactoring a recursive block matrix algorithm for Map-Reduce
I’ve recently gotten interested in the parallelisation of algorithms in general; specifically, the type of algorithm design compatible with the MapReduce model of programming. Given that I’ll probably be dealing with bigger quantities of data in the near future, it behooves me to start thinking about parallelisation, actively. In this post, I will look at the matrix multiplication algorithm which uses block decomposition, to recursively compute the product of two matrices. I have spoken of the general idea here; you may want to read that first for the linear algebra groundwork, before continuing on with this post.
# A Story about Data, Part 2: Abandoning the notion of normality
Continuing on with my work, I was just about to conclude that the distribution of the data was non-normal. However, I remembered reading about different transformations that can be applied to data to make it more normal. Are any such transformations likely to have any effect on the normality (or the lack thereof) of the score data?
I’d read about the Box-Cox family of transformations: essentially proceeding through powers and their inverses, in the quest to improve normality. I decided to try it, using the Jarque-Bera statistic as a measure of the normality of the data.
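For what it is worth, the same experiment is only a few lines with SciPy today (a sketch under the assumption that the scores are strictly positive, which Box-Cox requires; the file name is a placeholder, and this is not the code the analysis was actually done with):

```python
import numpy as np
from scipy import stats

scores = np.loadtxt("scores.csv")          # hypothetical file of pre-scores

jb_before, p_before = stats.jarque_bera(scores)
transformed, lmbda = stats.boxcox(scores)  # Box-Cox needs strictly positive data
jb_after, p_after = stats.jarque_bera(transformed)

print(f"lambda = {lmbda:.3f}")
print(f"Jarque-Bera before: {jb_before:.1f}, after: {jb_after:.1f}")
```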
# A Story about Data, Part 1: The shape of the data
Note about the visualisations: All of the plotting was done with Basis-Processing. You’ll find its source here.
The current dataset that I’m working with comes from the education domain. Roughly, there are 29,000 records, and each record lists the following:
• Location of the student’s school
• Language of the student
• Student’s score before intervention
• Student’s score after intervention
# Interacting with Graphs : Mouse-over and lambda-queuer
In the previous post, I described how I’d put together a basic system to drive data selection/exploration through a queue. While generating more graphs, it became evident that the code for mouseover interaction followed a specific pattern. More importantly, using Basis to plot stuff mandated that I look at the inverse problem; namely, determining the original point from the point under the mouse pointer. In this case, it was pretty simple, since I’m only dealing with 2D points. Here’s a video of what it looks like. The example shows the exploration of a covariance matrix.
# Driving data visualisation over a queue using RabbitMQ and lambda-queuer
One of the things which has bothered me ever since I took the dive into visualisation is the problem of interactivity. The aim of interacting with a visualisation is to drill down or explore areas of the visualisation which are (or seem) interesting. Put another way, we are essentially filtering the data from a visual standpoint. In most cases, mouse interactions may be sufficient. But what if I wanted to be able to filter the data programmatically and have the result reflected in the visualisation?
One way is to simply re-run the code which generates the visualisation each time we use a different filter. This is the simplest, and, in many cases, enough. In this case, the modification to the code is made in an offline fashion. What if we wanted to do the same, but while the program is running? This describes my attempt at one such implementation. Albeit still somewhat primitive, we’ll see where it ends up. For the purposes of demonstration, I used the Parallel Coordinates visualisation, which is available on GitHub. I’ll continue using Processing through Ruby-Processing for this description.
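The broker-facing half of such a setup is small. Here is a rough sketch of what the consumer side can look like with the pika client (the queue name, message format, and apply_filter hook are placeholders of mine, not lambda-queuer's actual API):

```python
import json
import pika

def apply_filter(expression):
    # Placeholder: re-filter the dataset and trigger a redraw in the sketch.
    print("filtering with:", expression)

def on_message(channel, method, properties, body):
    apply_filter(json.loads(body)["filter"])

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="viz-filters")
channel.basic_consume(queue="viz-filters", on_message_callback=on_message, auto_ack=True)
channel.start_consuming()
```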
# A jQuery-based build radiator for Jenkins
Partly to play around with the Jenkins Remote API, and partly to kickstart a build radiator setup at my current consulting engagement, I wrote a quick radiator page which might serve as a foundation for further experiment by the team. I’ve seen several build radiators built using multiple technologies – Java, Python, etc.; I specifically chose HTML/jQuery for this effort because a modern browser seems to be the only piece of software universally available on any machine one might care to set up.
The code is up at GitHub, and here is the obligatory screenshot.
It uses JSONP so as to allow cross-site GET requests, else Chrome’s Same-Origin policy complains like a spoiled brat.
# Installing the Basis gem (updated for v0.5.1+)
You can use Ruby-Processing in two ways: either with the jruby-complete.jar that Ruby-Processing ships (the Gems-in-a-Jar approach), or with a conventional JRuby installation. In the Gems-in-a-Jar mode, all gems you install will be packaged as part of the JAR.
If you’re following the first approach, first head to the location where the jruby-complete.jar is located, for Ruby-Processing. There, do this:
java -jar jruby-complete.jar -S gem install basis-processing --user-install
Alternatively, if you’re using a conventional JRuby installation, do this:
sudo jruby -S gem install basis-processing
# A guide to using Basis (updated for v0.6.0+)
This is a quick tour of Basis. Find the source for Basis on GitHub. Installing Basis is pretty simple; just grab it as a gem for your JRuby installation. Brief notes on the installation can be found here.
UPDATE: Starting from version 0.6.0, Basis allows you to specify axis labels. Additionally, you can specify arrays of points instead of plotting points one at a time. When you do this, you can also specify a corresponding legend string, which will show up in a legend guide. See below for more details.
UPDATE: Starting from version 0.5.9, you can turn grid lines on or off. Additionally, the matrix operations implementation has been ported to use the Matrix class in Ruby’s stdlib.
UPDATE: Starting from version 0.5.8, you can customise axis labels, draw arbitrary shapes/text/plot custom graphics at any point in your coordinate system. See below for more details.
UPDATE: With version 0.5.7, experimental support has been added for drawing objects which aren’t points. Interactions with such objects is currently not supported. Additional support for drawing markers/highlighting in custom bases is now in.
UPDATE: Starting from version 0.5.1, Basis has been ported to Ruby 1.9.2, because of the kd-tree library dependency. Currently, there are no plans of maintaining Basis compatibility with Ruby 1.8.x. As an aside, I personally recommend using RVM to manage the mess of Ruby/JRuby installations that you’re likely to have on your machine.
UPDATE: Basis has hit version 0.5.0 with experimental support for mouseover interactivity. More work is incoming, but the demo code below is up-to-date, for now. The code below should be the same as demo.rb on GitHub.
# Basis: Plotting arbitrary coordinate systems in Ruby-Processing
One of the first things I realised while working on visualisations in Processing is that a lot of the work required in setting up coordinate systems and plotting them is somewhat of a chore. Specifically, for things like parallel coordinates, multiple axes, each with its own scaling, I initially ended up with some pretty ugly custom code for each case. I did look around in the Libraries section of the Processing website, but didn’t find anything specific to manipulating and plotting coordinate axes.
# Data interactions in parallel coordinates: 40x-60x speedup
This is an update on the visualisation post on parallel coordinates. Understanding the Processing model made me realise that it probably wasn’t a good idea to draw all the samples each time draw() was called. Of course, every refreshed call of draw() does not clear away the previous frame’s graphics, so that just makes it easier. In the end, I went and explicitly drew only the samples which were under the current mouse position.
The speedup is obvious and massive: whereas the previous version worked well with only 300 samples, the current one processes 18000 samples without breaking a sweat. At 29,000 samples, there is a bit of a slowdown, but only just a bit, you wait 1 second for the highlighting instead of 6-7.
Here’s the new video, using 18k samples. Notice how much denser the mesh is.
# Data interactions in parallel coordinates
Processing is growing on me. Inspired by the different and (very) interesting data visualisation examples I’ve seen, I decided to take a shot at interacting with the parallel coordinates that I generated here. Of course, I had to reduce the number of samples for this demonstration; it’d slow to an unholy crawl otherwise. For this video, I’ve taken 300 samples. The interaction is essentially a mouse-hover highlighting of any sample(s) under it. I fiddled with the colors a bit, but decided that a white-on-greyscale scheme would show up better.
Of course, I still haven’t gotten around to labeling the axes. This I’ll probably pick up next. But as the video demonstrates, there’s more to Processing than meets the eye.
PS: By the way, the actual demonstration ends around the halfway mark; I was trying to figure out how to stop the bloody recorder.
# Getting ActiveRecord to behave nicely with Ruby-Processing in JRuby
Really, all I wanted to do was use Processing from Ruby. jashkenas has kindly written a gem which does just that. There was only a slight wrinkle: I wanted to pull my data from MySQL through ActiveRecord. Well, JRuby makes this process slightly more interesting than usual, so I document the process here. To start off with, install the gem with:
sudo gem install ruby-processing
Go into the directory where the ruby-processing gem is installed, and from there into {ruby-processing.gem.dir}/lib/core. In my case, this was /usr/lib/ruby/gems/1.8/gems/ruby-processing-1.0.9/lib/core.
Once inside there, you’ll find a file named jruby-complete.jar. Get rid of it, because we’ll be replacing it with a fresh (and different) version of jruby-complete.jar. Download the 1.6.3 version of JRuby-complete jar file. Rename it to jruby-complete.jar and put it in place of the jarfile we just deleted.
One step remains: this jarfile does not contain the activerecord-jdbcmysql-adapter gem. Install that with:
java -jar jruby-complete.jar -S gem install activerecord-jdbcmysql-adapter --user-install
You’re good to go now. One more thing, just remember to replace the ActiveRecord adapter string with “jdbcmysql” and allow usage of that gem in your code with:
require 'arjdbc'
# Parallel Coordinate visualisation of 28k, 5-dimensional data
This is the visualisation of the same dataset that I’ve been working on for a while, for exploring different data mining and visualisation techniques. Currently, the axes aren’t labelled, and the color coding is for different categories. Looks like a really interesting way to explore the data.
# A detour through data visualisation
I should have seen it coming. Text communicates well – up to a point. All the current analyses I’ve been working on, starting from Self-Organising Maps to Decision Trees, are very well served by good, solid visualisation. My current need is a way to visualise data structures effectively, even if it is merely a bunch of nodes which can be expanded/collapsed to show more information. Additionally, it would be nice (not necessary) for this to happen interactively, but I don’t mind a command line-driven approach. In fact, I prefer the command line; makes it easier to drive it through a scripting language like Ruby.
So far, I’ve looked at Processing, d3.js and ProtoVis. I like the idea of d3.js: the data-driven approach makes a lot of sense, but I think I need to refresh quite a bit of CSS-fu to take advantage of its capabilities. Apart from visualisation derived from data mining algorithms, showing the data as-is in an aesthetic manner is also a worthy goal at this point. In particular, the parallel coordinates visualisation caught my eye.
Oh well, at least I know what I’ll be doing for the next few days.
# Decision Trees
Continuing on with my exploration of the data mining landscape, I extracted a decision tree out of the data under scrutiny. It is the same data (20,000 samples, 56 dimensions), but the dimensions I’ve chosen for partitioning are a bit different from the raw attributes. I’ve conflated the 56 dimensions into a single number, since this is a test score we’re talking about, and I’m not sure modelling the individual responses for constructing the tree would be the best idea. I’m not really looking for close fits; buckets or bins should be adequate for discretising the response space.
Accordingly, I’ve partitioned the pre-scores as EXCELLENT, GOOD, AVERAGE, etc. The attribute that I’m attempting to predict is also a score, but this is the post-score. Well, not really the score itself, the improvement in score is a more sensible metric to attempt to guess.
I had a problem with trying to visualise the data, but I’ve been able to make do with indenting the different levels of decision nodes; this should be fine till I really need to use a visualisation library. I think the code could use a bit of work – probability notation does not lend itself easily to elegant variable naming. I’ll probably write a lot more on this topic once I’m into Bayesian Nets.
# Matrix Theory: An essential proof for eigenvector computations
I’ve avoided proofs unless absolutely necessary, but the relation between the same eigenvector expressed in two different bases, is important.
Given that AS is the linear transformation matrix in standard basis S, and AB is its counterpart in basis B, we can write the relation between them as:
$A_B = C^{-1}A_SC, \qquad A_S = CA_BC^{-1}$
where C is the similarity transformation. We’ve seen this relation already; check here if you’ve forgotten about it.
# Matrix Theory: Diagonalisation and Eigenvector Computation
$A = \left( \begin{array}{cc} 2 & 0 \\ 0 & 5 \end{array} \right)$
The operation it performed on basis vectors of the standard basis S was one of scaling, and scaling only. When operated on by a linear transformation matrix, if a vector is only scaled as a consequence, that vector is an eigenvector of the matrix, and the scalar is the corresponding eigenvalue. This is just the definition of an eigenvector, which I rewrite below:
$A.x = \lambda x$
# Matrix Theory: Basis change and Similarity transformations
### Basis Change
Understand that there is nothing extremely special about the standard basis vectors [1,0] and [0,1]. All 2D vectors may be represented as linear combinations of these vectors. Thus, the vector [7,24] may be written as:
$\left( \begin{array}{c} 7 \\ 24 \end{array} \right) = 7 \left( \begin{array}{c} 1 \\ 0 \end{array} \right) + 24 \left( \begin{array}{c} 0 \\ 1 \end{array} \right)$
# Matrix Theory: Linear transformations and Basis vectors
### Symmetric Matrices
A symmetric matrix looks like this:
$A = \left( \begin{array}{cccc} a & d & n & w \\ d & b & h & e \\ n & h & c & i \\ w & e & i & d \end{array} \right)$
Notice how the values are reflected across the diagonal a-b-c-d; this holds true for any symmetric matrix.
# Eigenvector algorithms for symmetric matrices: Introduction
My main aim in this series of posts is to describe the kernel — or the essential idea — behind some of the simple (and not-so-simple) eigenvector algorithms. If you’re manipulating or mining datasets, chances are you’ll be dealing with matrices a lot. In fact, if you’re starting out with matrix operations of any sort, I highly recommend following Professor Gilbert Strang’s lectures on Linear Algebra, particularly if your math is a bit rusty.
I have several reasons for writing this series. My chief motivation behind trying to understand these algorithms has stemmed from trying to do PCA (Principal Components Analysis) on a medium size dataset (20000 samples, 56 dimensional). I felt (and still feel) pretty uncomfortable about calling LAPACK routines and walking away with the output without trying to understand what goes on inside the code that I just called. Of course, one cannot really dive into the thick of things without understanding some of the basics: in my case, after watching a couple of the lectures, I began to wish that I had better mathematics teachers in school.
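As a taste of the simplest member of that family, here is a power iteration sketch (my own illustration, not from the series itself): it recovers the dominant eigenvector of a symmetric matrix by repeated multiplication and normalisation.

```python
import numpy as np

def power_iteration(A, iterations=1000):
    """Dominant eigenvalue/eigenvector of a (symmetric) matrix by power iteration."""
    v = np.random.rand(A.shape[0])
    for _ in range(iterations):
        w = A @ v
        v = w / np.linalg.norm(w)         # renormalise to avoid overflow
    eigenvalue = v @ A @ v                # Rayleigh quotient (v has unit length)
    return eigenvalue, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(power_iteration(A))                 # largest eigenvalue is (5 + sqrt(5)) / 2, about 3.618
```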
# Guiding MapReduce-based matrix multiplications with Quadtree Segmentation
I’ve been following the Linear Algebra series of lectures from MIT’s OpenCourseWare site. While watching Lecture 3 (I’m at Lecture 6 now), Professor Strang enumerates 5 methods of matrix multiplication. Two of those provided insights I wish my school teachers had provided me, but it was the fifth method which got me thinking.
The method is really a meta-method, and is a way of breaking down multiplication of large matrices in a recursive fashion. To demonstrate the idea, here’s a simple multiplication between two 2×2 matrices.
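A small sketch of the recursive idea (my own illustration, not the post's original working): split each matrix into quadrants, combine block products, and recurse until the blocks are small enough to multiply directly.

```python
import numpy as np

def block_multiply(A, B, cutoff=64):
    """Recursive block (quadrant) matrix multiplication; n must be a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                       # base case: plain multiplication
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    C = np.empty_like(A)
    C[:h, :h] = block_multiply(A11, B11, cutoff) + block_multiply(A12, B21, cutoff)
    C[:h, h:] = block_multiply(A11, B12, cutoff) + block_multiply(A12, B22, cutoff)
    C[h:, :h] = block_multiply(A21, B11, cutoff) + block_multiply(A22, B21, cutoff)
    C[h:, h:] = block_multiply(A21, B12, cutoff) + block_multiply(A22, B22, cutoff)
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(block_multiply(A, B, cutoff=16), A @ B))   # True
```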
For my graduation project, I’d written a machine vision/2D image algorithms system called IRIS. We’d used it to drive robots around, integrating sonar and visual data. However, that’s not what reawakened my interest in taking a re-look at the IRIS code. Currently, playing around with data sets has had me rifling through books and equations I liked looking at in college. It is almost like a second education, and I think it only right that I get IRIS up and running, if only to steal some code from it (even though it is in C++, and I’m currently doing my investigations using Ruby).
With that said, I dug into my old SourceForge account, where (to my somewhat irrational surprise) the code was still untouched. However, that code will probably not compile as-is. Even though it had been compiled under Linux, it had dependencies on drivers for hardware like the sonar systems and the webcam. I’m still not exactly sure if I want to get all those dependencies resolved; they aren’t my primary focus at this point. So I stripped off whatever was not required and pushed the clean, compiling source to GitHub here.
# Tests increase our Knowledge of a System: A Probabilistic Proof
This was an old proof that was up on my old blog, but since I’m no longer posting to that, I’m reposting it here for posterity. Also, rewriting the equations in LaTeX, now that I have installed a plugin for that.
I present a simple mathematical device to prove that tests improve our understanding of code. It does not really matter if this is code written by the test author himself or is legacy. To do this, some simplification of the situation is necessary.
# Playing around with Self Organising Maps
(Click the image to see the evolution of the SOM)
The image above was generated off 200 samples of a large data set. Sample vectors were 56-dimensional bit strings. The similarity measure used was the Hamming Distance. Brighter green represents values at a higher Hamming Distance with respect to zero.
The (very dirty) code is up at Github here.
Unrelated: I’ve been watching Leonard Susskind’s lectures on Statistical Mechanics; they’re a tour de force.
# A pipeline for adaptive bitrate video encoding
I’ve been working on something unusual lately, namely, building a pipeline for encoding video files into formats suitable for HTTP Live Streaming. The actual job of encoding into different formats at different bit rates and resolutions is done using a combination of ffmpeg and x264. To me, the interesting part lies in how we have tried to speed up the process, using the venerable Map-Reduce approach. Before I dive into the details, here’s a quick review of the basic idea of HLS.
Put very simply, adaptive streaming serves video content in multiple qualities, allowing the streaming client a choice in selecting which quality to use depending upon the bandwidth constraint on the consumer side. This choice is not a one-time choice; depending upon the encode cut duration, the client can switch to higher or lower resolutions dynamically throughout the entire playback of the video stream.
How is this accomplished?
# Filesystems: my current reading list
Stuff I’m reading now specific to filesystems…reading Linux kernel source requires a stout heart if you’ve never done it before. And a bit of a shift in mindset: it’s not all objects anymore
# iOS AppDev Patterns: Linked Content Cursors
I’ve already talked about the Content Cursor pattern. This post is an extension of that idea to increase the flexibility of layout across sections.
To understand the problem, let us revisit a page from our hypothetical iPad magazine.
Here’s the layout of the page in portrait mode.
…and here is the same page in landscape mode.
The first important thing to notice here is that the two Politics sections have changed in position and/or size. More specifically, the upper Politics section has morphed into a tall rectangle, while the lower one has stretched horizontally.
# iOS AppDev Patterns: Content Cursor
iOS offers only the most barebones approach to placing content in a view, namely by specifying absolute coordinates. Of course, one can use autoresizing to make sure the positions of these contents are modified proportionally, but the initial positioning of a content block needs specification of the exact x- and y-coordinate of the top left of this ‘box’. This can render the layout inflexible, tedious and brittle. Every small change in position of a block has ripple effects on the position of succeeding blocks.
Content Cursor solves this problem.
# iOS AppDev Patterns: Asynchronous Image Loader
In a content-heavy application (a news or a magazine app for example), textual content takes precedence over images in terms of loading/rendering. An acceptable solution is to load/render text and request images from the server/cache in a concurrent fashion. I use the term ‘server’ in a very loose fashion, a more appropriate term is probably ‘content source’, since we can retrieve this information from anything ranging from our own servers to Twitter/RSS feeds.
There are a few considerations when implementing a solution like this:
Load request throttling: You’re likely to have several images spread across pages. It is not prudent to let 50 concurrent requests fire for 50 images. You want to throttle your requests to a reasonable number. A simple example of throttling your request is shown later.
Memory management: You want to gracefully handle the situation in which the loader is able to retrieve the image, but the frame on which it is supposed to display the image (however it is implemented) has already been deallocated (for whatever reason).
# Facial Expressions as Volumetric Deformation and Displacements
I was working through The Artist’s Complete Guide to Facial Expression by Gary Faigin. Pretty thorough book, and the facial landmarks for different expressions are well detailed. Except that I was still having problems inventing expressions. I mean, yeah, expressions can vary a lot, but I think what I was looking for was a system using which one could ‘generate’ expressions on demand, without having to copy from something.
The problem I see with copying is that I’d end up copying the facial characteristics of the subject I’m copying as well, which hardly bodes well for the (imaginary) face I’ve cooked up. And yes, it wouldn’t affect my drawing that much, but I was still unsatisfied with that idea.
This afternoon, an idea struck me: what if I used the same system I use to generate poses to generate expressions? If you really think about it, the face is just a bunch of muscles with fat over them with the layering of skin. If I could figure out the forms at the appropriate resolution, these forms could be blobs which could be displaced/deformed at will to generate expressions. Boundaries of jostling blobs basically become potential creases and wrinkles. Blobs which get squeezed on all sides by other forms bulge out.
The idea seems to be working.
# Inventing poses
It’s not easy imagining them. But it can get better with practice. And that’s what I’ve been doing. An excellent accompanying read is Force : Dynamic Life Drawing for Animators. It’s one of those books that’s perfect for kicking you out of a rut, inspiring you to loosen your strokes. It’s worth several rereads; I’ve only skimmed through some parts of it, and they are immediately useful.
# Unforgotten
Yeah, I’m on a holiday now. And suddenly, it seems that there’s a ton of things to do.
Focusing a lot on improving my drawing/shading technique. I’ve gone back to pencils, sworn off digital painting till I get some experience working with colors in real life. Not that that prevents me from messing around with Painter. Been doing a few drawing sessions with people at work, after work; so far, the results seem encouraging, i.e., no rotten tomatoes yet.
I used to hate having to get to work to scan my drawings; now I completely detest it. Firstly, I’m too lazy to take a stroll down there while I’m on vacation. Secondly…well, I think the first reason is good enough. So I went over to the local Croma store and snagged an HP Deskjet J610a. Primarily using it for scanning, but taking a printout should come in handy too. Installation was a snap, though the scans of the white areas of the paper have a slight sepia tone. Nothing a little color correction can’t handle.
As a side effect of this increased drawing output, I went ahead and cleaned up the site after a (very) long time. Installed Plogger today to give the new stuff I’ve been drawing a better presentation, threw away the old home page, et cetera. Well, at least this evening wasn’t a total waste…
Also found the time to run all my books through the MyBookDroid app: not bad so far. The books I’m currently reading are:
• Figure Drawing: Design and Invention – Michael Hampton
• Castles: Their Construction and History – Sidney Toy
Continuing work on Exo next week; I want to try out an easier way of weaving IL without tacking in IL for every aspect that is specified.
Been trying out this very funny game called Magicka. I highly recommend giving it a try, it’s hilarious, drops fantasy cliches left, right and center; and the narrator keeps insisting that the player’s mentor ‘Vlad’ is by no means a vampire.
Checked out the new Incarna character creation system in EVE Online; they did a nice job of it; should be fun when they introduce in-station ambulation.
## Proposition: Binomial Distribution
We repeat a Bernoulli experiment $$n$$ times. Each time, we observe whether an event $$A$$ occurred or not. Let the probability of $$A$$ occurring be a constant $$p:=p(A)$$ during each repetition of the experiment. Let $$X$$ be the random variable counting the number $$k$$ of realizations of $$A$$. Clearly, $$A$$ can occur $$k$$ times with $$0\le k\le n$$. The probability mass function of $$A$$ occurring exactly $$k$$ times is given by
$p(X = k)=\begin{cases} \binom nk p^k(1-p)^{n-k}&\text{for }k=0,1,\ldots n\\\\ 0&\text{else.}\end{cases}$
The binomial distribution (i.e. the probability distribution of the random variable $$X$$) is given by
$$\begin{array}{rcll} p(X \le x)&=&0&\text{for }x < 0\\ p(X \le x)&=&\sum_{k=0}^{k=x}\binom nk p^k(1-p)^{n-k}&\text{for }0\le x < n\\ p(X \le x)&=&1&\text{for }x \ge n\\ \end{array}$$
In the following interactive you can change the probability $$p$$ of observing an event $$A$$ in a Bernoulli experiment repeated $$20$$ times. Changing this probability will change the probability mass function for different values of $$X$$ (red) of observing $$A$$ from $$0$$ to $$20$$ times and the probability distribution (blue):
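A short computational sketch of the proposition (mine, not part of the original page), evaluating the mass function and the distribution for $$n=20$$:

```python
from math import comb

def binomial_pmf(k, n, p):
    # p(X = k) from the proposition above
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def binomial_cdf(x, n, p):
    # p(X <= x): 0 below 0, a partial sum up to x, and 1 from n onwards
    if x < 0:
        return 0.0
    return sum(binomial_pmf(k, n, p) for k in range(0, min(int(x), n) + 1))

n, p = 20, 0.3
print(binomial_pmf(6, n, p))    # probability of exactly 6 occurrences
print(binomial_cdf(6, n, p))    # probability of at most 6 occurrences
print(binomial_cdf(25, n, p))   # 1.0, since x >= n
```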
# Tag Info
0
Maybe you should think about superconducting materials! They can possibly get up to 8 T, and it is very usual to use (very big) electromagnets for different applications, which can give you up to 2 T. But you didn't specify what kind of field you are thinking of: dipole, quadrupole?
-1
The "articles" a and and an has its correct usage, dear! "A" usually follows a word that starts with a consonant while "An" follows a word starting with a vowel. There are some exceptions to this, though, as its usage may also depend on the sound of the word as it is pronounced.
0
Just blindly multiplying the overall answer by some factor isn't the way to go about it. I have an alternative proposal which may work well as a zeroth-order approximation at least. You already have the expression for the 2-layer case, and if I observe correctly, you are only concerned with the reflected part, not the transmitted part. So, a smaller ...
3
Yes, it is already a reality. Permanent magnets from rare earth alloys https://en.wikipedia.org/wiki/Rare-earth_magnet can exceed 1.4 tesla while ferrite and ceramic ones only have 0.5-1.0 tesla. Making the material as big as a meter or anything you want is just a matter of accumulating a larger amount of the material (or adding smaller magnets). At ...
0
Hopefully this should dispel some of your confusion. In general, the fields $\textbf{B}$ and $\textbf{H}$ are related by $$\textbf{H} \equiv \frac{1}{\mu_0}(\textbf{B} - \textbf{M})$$ This is always true, regardless of the materials involved. We define linear media as materials whose fields satisfy $$\textbf{B} = \frac{1}{\chi}\textbf{M}.$$ In this ...
4
From a physics perspective, the fundamental reason for this is something called the bandwidth theorem (and also the Fourier limit, bandwidth limit, and even the Heisenberg uncertainty principle). In essence, it says that the bandwidth $\Delta\omega$ of a pulse of signal and its duration $\Delta t$ are related: $$\Delta\omega\,\Delta t\gtrsim 2\pi.$$ A ...
3
Note: Emilio Pisanty wrote an answer that is probably a better fit for the question and site, but I'm leaving this answer around because I feel it can contribute to an understanding of how this works in practice. For one thing, you'd need to be able to differentiate between the signals inside the frequency band. As an example, I'm going to use a Morse ...
0
See, you are required to find volume current density $J_\phi$. Though its name is volume current density, you know it is the current flowing per unit surface area. Now the subscript $\phi$ in $J_\phi$ denotes it is flowing in the $\hat{\phi}$ direction. Now in the spherical polar co-ordinate the infinitesimal length elements along the direction ...
2
I think a ferrite rod antenna in a radio receiver is an example where the magnetic component of an EM-field is picked up.
1
An electron's magnetic field is a dipole field - that is, the field strength is given by the dipole expression in this source: http://www2.ph.ed.ac.uk/~playfer/EMlect4.pdf In this expression, $m$ is the magnetic dipole moment. For an electron, this has the value $$m=-928.476377 \times 10^{-26}\ \mathrm{J/T}$$ The magnetic permeability is $$\mu_0=4\pi\cdot 10^{-7}\frac{V\cdot s}{ A \cdot m}$$ Note - ...
0
We do not use just the electric component; this is not possible. EM radiation is an oscillation, in a medium or in vacuum, of electric and magnetic fields. As described by Maxwell's equations, the electric field oscillation generates the magnetic one and vice versa. So it is a kind of electric wave and a magnetic one, co-dependent on each other. It is true that we ...
0
Here's the Feynman response I read in his opening paragraphs in his Feynman lectures: The reason a proton and electron simply don't crash into each other is that if they did, we would exactly know their position-assuming one of them is stable, which one is (the proton). If we knew their position, we would be highly unaware of the momentum, meaning it could ...
0
This phenomenon is not special to light or Maxwell's equations: it's a simple consequence of detector nonlinearity and I should think that this would have been pretty clear to any bright experimentalist who thought carefully about how his or her kit makes its measurements. In the simplest case, the detector's response $y(t)$ as a function of time $t$ is ...
2
If you note that $$\delta(\cos\theta)=\frac{\delta(\theta-\pi/2)}{\sin\theta}$$ Then you can see that the sine terms actually cancel out.
1
This is just a complement to the previous answers which give the correct response. If you want to think about it in an intuitive way, imagine that the interaction between electrons and photons becomes weaker. In the limit when it becomes nearly zero, the light will be almost not scattered at all and will continue in a straight path.
2
EDITED ANSWER: The delta distribution $\delta(x)$ is not unique. It is invariant under transformations of the form $\delta(x) \to f(x)\delta(x)$ where $f(0) = 1$. This is because it is really a distribution and not a function. It is mathematically improper to talk about $\delta(x)$ instead of $\int \delta(x)dx$. Derivations of the term you're interested in ...
2
It is momentum that defines the incoming direction and momentum transfer the outgoing one. The photons, quantum mechanically carry momentum equal to p=h*nu/c . Momentum is a vector and defines directions. An electromagnetic field is an emergent classical quantity built up by innumerable photons. There exists also a momentum defined for the classical field ...
1
Let me offer you a slightly modified version of your question to illustrate a way of re-formulating your thought process. How does a pool ball know from which direction the cue ball hit it? The answer is the same in the sense that "the particle" does not know all by itself; "the system"1 has certain invariant quantities (like momentum and energy) ...
0
While both the answers given in some sense are correct, the true reason has to do with energetic considerations. It is a matter of what is stronger and can be phrased as the following question: Will the wavefunction alter itself to accommodate the flux, or will the flux quantize itself because the wavefunction is trying to remain single valued? As an ...
0
If you're asking about how the mathematical model we use to explain electromagnetic interactions predicts EM radiation, this involves a bunch of math that you can look up on Wikipedia. It sounds like you are looking for a philosophical explanation, however. "Mechanism" may not be the right word to use to describe electromagnetic field theory. We have some ...
-1
Magnetic forces represent the relativistic correction to Electric forces (they are the same phenomenon!). The germ of relativity is that when objects move, information about their positions travel at the speed of light. If that information moved instantaneously or objects never moved, there would be no Magnetic forces. Accordingly, the Magnetic force on an ...
13
There is a sort of analog called gravitomagnetism (or gravitoelectromagnetism), but it is not discussed that often because it applies only in a special case. It is an approximation of general relativity (i.e. the Einstein Field Equations) in the case where: The weak field limit applies. The correct reference frame is chosen (it's not entirely clear to me ...
4
There is a gravitational analogue of the magnetic field. See gravitoelectromagnetism and frame dragging on Wikipedia.
-1
You must know the impedance of the medium. For the vacuum: $$\eta_0 = \sqrt{\dfrac{\mu_0}{\varepsilon_0}}$$ Then the relation between the fields is: $$\vert\mathbf{H}\vert = \dfrac{\vert\mathbf{E}\vert}{\eta_0}$$ in vacuum, of course.
0
You are probably looking for Maxwell equations.
1
The thing which is "vibrating" is the electromagnetic field, namely its $\vec E$ and $\vec B$ vectors. The animations here show precisely this. Of course, it's not that some particles vibrate in this case. The electromagnetic wave can exist without any matter at all — all it needs is the field, which is present everywhere. But, if we have some charges ...
1
For low-frequency radiation, it's quite simple: there's some electronic circuit that works (simple case) analogous to a tuning fork, but instead of building up mechanical tension it charges a capacitor and instead of the inertia in the fork's arms it has a magnetic field in a solenoid. You can measure the voltage against time, count the oscillations in one ...
1
$\mathcal{E} = -{{d\Phi_B} \over dt} = -8t$ (Faraday's law of induction) $E = V/d = \mathcal{E}/d = \frac{-8t}{d} = \frac{-8*3}{15\times 10^{-3}} V/m = -1600 V/m$
8
It's a quirk of units: notice that the conversion between them is dimensionful, and has the value $3\times 10^8\,\mathrm{m/s}$, which is the speed of light. In the CGS system both fields have the same units, and field-squared is an energy density. In SI units, the energy density for a configuration of fields is given by \begin{align} \frac{dU}{dV} &= ...
27
As you already indicated, physical units need to be considered. When working in SI units, the ratio of electric field strength over magnetic field strength in EM radiation equals 299 792 458 m/s, the speed of light $c$. However, the numerical value for $c$ depends on the units used. When working in units in which the speed of light $c=1$, one would ...
1
I think the best explanation of electromagnets/Ferro magnetism is in the Feynman lectures Vol II. (chapter 36.) He makes up his own units so that might be a bit confusing. But work through the electromagnet problem. (and even use a simple linear relationship B = uH) That helped me a lot.
3
It seems entirely symmetrical. But it isn't symmetrical. For the voltmeter on the right, the 'outer' loop that encloses the magnetic field consists of the voltmeter, leads, and a $100\Omega$ resistor For the voltmeter on the left, the outer loop that encloses the magnetic field consists of the voltmeter, leads, and a $900\Omega$ resistor. Moreover, ...
0
Your error is that you cannot understand ferromagnetism while neglecting hysteresis and nonlinearity. Consider the case of some iron cooled from above the Curie temperature in the absence of any magnetic field. As the iron crystallizes, it forms strongly magnetized domains — a classic case of spontaneously broken symmetry. Because there is no external ...
1
The definition of $\mathbf{H}$ is $$\mathbf{H} = \frac{\mathbf{B}}{\mu_0} - \mathbf{M}$$ where $\mathbf{B}$ is the magnetic field in which the object is immersed in and $\mathbf{M}$ is the magnetisation of the object, i.e. the "field" ($\propto$ to a field) caused by the internal magnetic properties of the object. If there are no free currents, then ...
1
Interesting setup! I assume that you understand that for a short time there is a (induced) current flowing in the loop with the two resistors. You can get the same current by imagining a battery that is in series with the loop (you can put it at A, for example). With that battery in place, it is easy to see that the voltage across the 900r resistor will be ...
0
For problem solving, the method of images comes in handy in the following cases: a conducting sphere, a conducting cylinder, a conducting ellipsoid, and a conducting plane. Another example is two regions of dielectrics with different $\epsilon$ (permittivity). And that is it. This general rule should save you some time. So, the first problem you mentioned can be ...
2
I solved my problem numerically, using the diffusion equation $\frac{\partial V}{\partial t} = -k\nabla^2 V$, with the following boundary conditions: the voltage at point D is fixed at 1.0; the voltage along the vertical line halfway between points A and D is fixed at 0.5 (the voltage at point A is 0.0; use symmetry so we don't have to simulate the left half of the ...
0
Because when you take the sine of 0º to 360º and plot the graph of these values, AC current behaves the same way. It can also be represented by a cosine function, but in this case we assume that the initial value of the AC current shouldn't be zero.
-1
To the question "What is the electric field outside a cylindrical solenoid when a magnetic field is turned on inside", the answer is that an electric field exists outside. That means that the fringe shift in the double-slit experiment with electrons could be explained with electromagnetic fields, and it is not necessary (but of course possible) to explain it ...
0
Now since the coil moves through this region of space, it should therefore possesses an induced electric field as well. I don't follow your reasoning here. The electric and magnetic fields are reference frame dependent; the fields 'mix' in a certain way via the Lorentz transformation. In the reference frame in which the magnet is at rest, the ...
0
In Faraday's experiment, the relative velocity between the coil and the magnet define whether a change of induced magnetic field flux occurs around the coil or not. So if you consider them at rest, and when you consider them moving at the same velocity relative to one another, then there's no change of magnetic field flux felt by the coil. Be careful there ...
1
To get a better understand of what is going on, take a look at the plot below, also linked here: http://en.wikipedia.org/wiki/File:Dispersion_Relationship.gif What the author meant by "letting the speed of light go to infinity" is that the we let the slope of the blue line become infinite. In that case, the solid red line would not curve as shown below, ...
0
By a Cavendish torsion balance, similar to the measurement of the gravitational constant: http://en.wikipedia.org/wiki/Cavendish_experiment
2
Here is a compass, a small dipole magnet in the shape of a needle: as the person holding it turns around it moves pointing to the geographic north in the location, then the directions are all defined on the face of the compass. So that is the way a human can see/sense simply the magnetic field of the earth. Semantics on the compass is a bit complicated ...
0
Aluminum is not magnetic without an external magnetic field; however, when an external magnetic field is applied, or in the presence of one, aluminum becomes "slightly" magnetic as its electrons align to the magnetic field. However, due to thermal motion, as described by Vintage, the alignment of the electrons within the material (aluminium) is randomized, thus its net ...
3
To good approximation, a magnetic dipole in a uniform field feels a torque, but not a force. The length scale for variations in the field produced by the dynamo in the earth's core is comparable to the size of the earth's core: many hundreds of miles. For a magnetic dipole the size of an animal (any animal) the earth's natural field is uniform, and you feel ...
3
Optical conductivity and AC electric conductivity experiments are indeed quite similar. However, they operate in different frequency regimes and measure slightly different quantities. Optical conductivity refers to an experiment using light, such as a reflectivity measurement and then using a Kramers-Kronig transform to deduce the real part of the ...
2
A strongly paramagnetic material has a magnetic susceptibility of a few hundred parts per million; that is, the field strength inside of a paramagnet is different from the field strength in vacuum starting in the fourth decimal place. This is comparable to the ratio between the earth's natural field (typically about 0.5 gauss) and the surface field of a good ...
4
Radioactivity results from a confluence of special relativity and quantum mechanics. Special relativity introduces the generalized energy, $E=mc^2$, which allows energy conservation to include in the sum the rest masses of the particles which comprise a nucleus. In this relativistic energy conservation we find some nuclear isotopes which are at a ...
0
No, the electron cannot be inside the solenoid. The Aharonov-Bohm effect is intrinsically a topological effect caused by the solenoid effectively removing a line from $\mathbb{R}^3$, making it homotopic to the circle $S^1$. It has nothing to do with the electron "acting like a wave" or "acting like a particle" (which are notions one should not use anyway), ...
Newton Raphson, given derivatives
I'm trying to calculate the value of $f(x_1)$ with Newton Raphson's method. The following information is given:
• $x_0=0$
• $x_1=1$
• $f(x_0)=2$
• $f'(x_0)=0$
• $f'(x_1)=0$
• $f''(x_0)=0$
• $f''(x_1)=0$
• $f[x_0,x_0,x_1,x_1] = 1$ (third degree differentiation)
It is given that we don't know whether the function $f(x)$ is a polynomial or not. The divided difference has something to do with a derivative. This is the hint that is given:
$\lim \limits_{x \to y} f[x,y] = \lim \limits_{x \to y} \frac{f(y)-f(x)}{y-x} = f'(y)$
This is the hint where I am stuck:
$f[x_0,x_0,x_1,x_1]$ = $\lim \limits_{x_2 \to x_0, x_3 \to x_1}f[x_2,x_0,x_1,x_3]$
Can someone explain the last hint to me?
This is the answer but I don't understand it:
• What is initial point? – Michael Galuza Aug 21 '15 at 7:35
• Does f have one independent variable or two? – Narasimham Aug 21 '15 at 7:41
• @MichaelGaluza I think you may take $x_0$ as initial point. – Stanko Aug 21 '15 at 7:48
• @Dongo, no, $f'(x_0)=0$ – Michael Galuza Aug 21 '15 at 7:49
• @Narasimham I don't know. – Stanko Aug 21 '15 at 7:59
I assumed that $$f[x_1, x_2] = \frac{f(x_1) - f(x_2)}{x_1 - x_2}$$ $$f[x_1, x_2, x_3] = \frac{f[x_1,x_2] - f[x_2,x_3]}{x_1 - x_3}$$ $$f[x_1, x_2, x_3, x_4] = \frac{f[x_1,x_2,x_3] - f[x_2,x_3,x_4]}{x_1 - x_4}$$
Let $x_2\to x_1$, $x_4\to x_3$. Then $$f[x_1, x_1, x_3, x_3] = \frac{f[x_1,x_1,x_3] - f[x_1,x_3,x_3]}{x_1 - x_3}.$$ Now $$f[x_1, x_1, x_3] = \frac{f[x_1, x_1] - f[x_1, x_3]}{x_1-x_3} = \frac{f'(x_1) - f[x_1, x_3]}{x_1-x_3},\\ f[x_1, x_3, x_3] = \frac{f[x_1, x_3] - f[x_3, x_3]}{x_1-x_3} = \frac{f[x_1, x_3]-f'(x_3)}{x_1-x_3},$$ and $$f[x_1, x_1, x_3, x_3] = \frac{f'(x_1) - 2f[x_1, x_3]+f'(x_3)}{(x_1 - x_3)^2}.$$
Let's substitute $x_1=0$, $x_3=1$: $$f[0, 0, 1, 1] = f'(0) - 2f[0, 1]+f'(1)=-2f[0,1]=-2(f(1)-f(0))=1\\\Longrightarrow f(1)=f(0)-\frac12=\frac32$$
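A quick numeric confirmation of this answer (my own sketch, not part of the original thread): build the divided-difference table for the repeated nodes, using $f[x,x]=f'(x)$ for coincident nodes, and check that $f(1)=\frac32$ reproduces $f[x_0,x_0,x_1,x_1]=1$.

```python
def divided_difference_table(nodes, values, derivs):
    """Divided differences with (possibly repeated) nodes.

    For a coincident pair the first-order difference is the derivative f'(x);
    this sketch only handles single repetitions, which is all we need here.
    """
    n = len(nodes)
    table = [values[:]]
    for order in range(1, n):
        prev, row = table[-1], []
        for i in range(n - order):
            if nodes[i + order] == nodes[i]:
                row.append(derivs[nodes[i]])
            else:
                row.append((prev[i + 1] - prev[i]) / (nodes[i + order] - nodes[i]))
        table.append(row)
    return table

x0, x1 = 0.0, 1.0
f = {x0: 2.0, x1: 1.5}           # candidate value f(1) = 3/2
fprime = {x0: 0.0, x1: 0.0}

nodes = [x0, x0, x1, x1]
values = [f[x] for x in nodes]
table = divided_difference_table(nodes, values, fprime)
print(table[3][0])               # -> 1.0, matching f[x0, x0, x1, x1] = 1
```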
# Chapter 1 - Section 1.4 - Properties of Real Numbers and Algebraic Expressions - Exercise Set - Page 39: 65
$.1d$
#### Work Step by Step
We know that one dime is equal to .1 dollars. Therefore, we know that the expression $.1d$ tells us the amount of money (in dollars) that $d$ dimes represents. For example, if we have $d=60$ dimes, the expression tells us that we will have $.1\times d=.1\times60=6$ dollars.
How do I evaluate $\ln{e^{-5}}$?
TrainingOptionsRMSProp
Training options for RMSProp optimizer
Description
Training options for RMSProp (root mean square propagation) optimizer, including learning rate information, L2 regularization factor, and mini-batch size.
Creation
Create a `TrainingOptionsRMSProp` object using `trainingOptions` and specifying `'rmsprop'` as the first input argument.
Properties
Plots and Display
Plots to display during network training, specified as one of the following:
• `'none'` — Do not display plots during training.
• `'training-progress'` — Plot training progress. The plot shows mini-batch loss and accuracy, validation loss and accuracy, and additional information on the training progress. The plot has a stop button in the top-right corner. Click the button to stop training and return the current state of the network. You can save the training plot as an image or PDF. For more information on the training progress plot, see Monitor Deep Learning Training Progress.
Indicator to display training progress information in the command window, specified as `1` (true) or `0` (false).
The verbose output displays the following information:
Classification Networks
FieldDescription
`Epoch`Epoch number. An epoch corresponds to a full pass of the data.
`Iteration`Iteration number. An iteration corresponds to a mini-batch.
`Time Elapsed`Time elapsed in hours, minutes, and seconds.
`Mini-batch Accuracy`Classification accuracy on the mini-batch.
`Validation Accuracy`Classification accuracy on the validation data. If you do not specify validation data, then the function does not display this field.
`Mini-batch Loss`Loss on the mini-batch. If the output layer is a `ClassificationOutputLayer` object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes.
`Validation Loss`Loss on the validation data. If the output layer is a `ClassificationOutputLayer` object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. If you do not specify validation data, then the function does not display this field.
`Base Learning Rate`Base learning rate. The software multiplies the learn rate factors of the layers by this value.
Regression Networks
FieldDescription
`Epoch`Epoch number. An epoch corresponds to a full pass of the data.
`Iteration`Iteration number. An iteration corresponds to a mini-batch.
`Time Elapsed`Time elapsed in hours, minutes, and seconds.
`Mini-batch RMSE`Root-mean-squared-error (RMSE) on the mini-batch.
`Validation RMSE`RMSE on the validation data. If you do not specify validation data, then the software does not display this field.
`Mini-batch Loss`Loss on the mini-batch. If the output layer is a `RegressionOutputLayer` object, then the loss is the half-mean-squared-error.
`Validation Loss`Loss on the validation data. If the output layer is a `RegressionOutputLayer` object, then the loss is the half-mean-squared-error. If you do not specify validation data, then the software does not display this field.
`Base Learning Rate`Base learning rate. The software multiplies the learn rate factors of the layers by this value.
When training stops, the verbose output displays the reason for stopping.
To specify validation data, use the `ValidationData` training option.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`
Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer. This option only has an effect when the `Verbose` training option is `1` (true).
If you validate the network during training, then `trainNetwork` also prints to the command window every time validation occurs.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Mini-Batch Options
Maximum number of epochs to use for training, specified as a positive integer.
An iteration is one step taken in the gradient descent algorithm towards minimizing the loss function using a mini-batch. An epoch is the full pass of the training algorithm over the entire training set.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights.
If the mini-batch size does not evenly divide the number of training samples, then `trainNetwork` discards the training data that does not fit into the final complete mini-batch of each epoch.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Option for data shuffling, specified as one of the following:
• `'once'` — Shuffle the training and validation data once before training.
• `'never'` — Do not shuffle the data.
• `'every-epoch'` — Shuffle the training data before each training epoch, and shuffle the validation data before each network validation. If the mini-batch size does not evenly divide the number of training samples, then `trainNetwork` discards the training data that does not fit into the final complete mini-batch of each epoch. To avoid discarding the same data every epoch, set the `Shuffle` training option to `'every-epoch'`.
Validation
Data to use for validation during training, specified as `[]`, a datastore, a table, or a cell array containing the validation predictors and responses.
You can specify validation predictors and responses using the same formats supported by the `trainNetwork` function. You can specify the validation data as a datastore, table, or the cell array `{predictors,responses}`, where `predictors` contains the validation predictors and `responses` contains the validation responses.
For more information, see the `images`, `sequences`, and `features` input arguments of the `trainNetwork` function.
During training, `trainNetwork` calculates the validation accuracy and validation loss on the validation data. To specify the validation frequency, use the `ValidationFrequency` training option. You can also use the validation data to stop training automatically when the validation loss stops decreasing. To turn on automatic validation stopping, use the `ValidationPatience` training option.
If your network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training (mini-batch) accuracy.
The validation data is shuffled according to the `Shuffle` training option. If `Shuffle` is `'every-epoch'`, then the validation data is shuffled before each network validation.
If `ValidationData` is `[]`, then the software does not validate the network during training.
Frequency of network validation in number of iterations, specified as a positive integer.
The `ValidationFrequency` value is the number of iterations between evaluations of validation metrics. To specify validation data, use the `ValidationData` training option.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Patience of validation stopping of network training, specified as a positive integer or `Inf`.
`ValidationPatience` specifies the number of times that the loss on the validation set can be larger than or equal to the previously smallest loss before network training stops. If `ValidationPatience` is `Inf`, then the values of the validation loss do not cause training to stop early.
The returned network depends on the `OutputNetwork` training option. To return the network with the lowest validation loss, set the `OutputNetwork` training option to `"best-validation-loss"`.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Network to return when training completes, specified as one of the following:
• `'last-iteration'` – Return the network corresponding to the last training iteration.
• `'best-validation-loss'` – Return the network corresponding to the training iteration with the lowest validation loss. To use this option, you must specify the `ValidationData` training option.
Solver Options
Initial learning rate used for training, specified as a positive scalar.
If the learning rate is too low, then training can take a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Settings for the learning rate schedule, specified as a structure. `LearnRateScheduleSettings` has the field `Method`, which specifies the type of method for adjusting the learning rate. The possible methods are:
• `'none'` — The learning rate is constant throughout training.
• `'piecewise'` — The learning rate drops periodically during training.
If `Method` is `'piecewise'`, then `LearnRateScheduleSettings` contains two more fields:
• `DropRateFactor` — The multiplicative factor by which the learning rate drops during training
• `DropPeriod` — The number of epochs that passes between adjustments to the learning rate during training
Specify the settings for the learning schedule rate using `trainingOptions`.
Data Types: `struct`
Factor for L2 regularization (weight decay), specified as a nonnegative scalar. For more information, see L2 Regularization.
You can specify a multiplier for the L2 regularization for network layers with learnable parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Decay rate of squared gradient moving average for the RMSProp solver, specified as a nonnegative scalar less than `1`.
Typical values of the decay rate are `0.9`, `0.99`, and `0.999`, corresponding to averaging lengths of `10`, `100`, and `1000` parameter updates, respectively.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
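The averaging lengths quoted above follow from the usual effective-window heuristic for an exponential moving average: a decay rate $\beta$ averages over roughly $1/(1-\beta)$ updates, i.e. $1/(1-0.9)=10$, $1/(1-0.99)=100$, and $1/(1-0.999)=1000$.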
Denominator offset for the RMSProp solver, specified as a positive scalar.
The solver adds the offset to the denominator in the network parameter updates to avoid division by zero. The default value works well for most tasks.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Option to reset input layer normalization, specified as one of the following:
• `1` (true) — Reset the input layer normalization statistics and recalculate them at training time.
• `0` (false) — Calculate normalization statistics at training time when they are empty.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`
Mode to evaluate the statistics in batch normalization layers, specified as one of the following:
• `'population'` – Use the population statistics. After training, the software finalizes the statistics by passing through the training data once more and uses the resulting mean and variance.
• `'moving'` – Approximate the statistics during training using a running estimate given by update steps
$\mu^{*} = \lambda_{\mu}\hat{\mu} + (1-\lambda_{\mu})\mu, \qquad (\sigma^{2})^{*} = \lambda_{\sigma^{2}}\hat{\sigma}^{2} + (1-\lambda_{\sigma^{2}})\sigma^{2}$
where $\mu^{*}$ and $(\sigma^{2})^{*}$ denote the updated mean and variance, respectively, $\lambda_{\mu}$ and $\lambda_{\sigma^{2}}$ denote the mean and variance decay values, respectively, $\hat{\mu}$ and $\hat{\sigma}^{2}$ denote the mean and variance of the layer input, respectively, and $\mu$ and $\sigma^{2}$ denote the latest values of the moving mean and variance values, respectively. After training, the software uses the most recent value of the moving mean and variance statistics. This option supports CPU and single GPU training only.
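A minimal illustration of the `'moving'` update above (a Python sketch, not MATLAB code from this page; the decay values are assumed for the example):

```python
# Running estimate of batch-normalization statistics ('moving' mode), per the
# update equations above. Decay values are assumed for illustration.
def update_moving_stats(batch_mean, batch_var, mu, var, decay_mean=0.1, decay_var=0.1):
    mu_new = decay_mean * batch_mean + (1 - decay_mean) * mu
    var_new = decay_var * batch_var + (1 - decay_var) * var
    return mu_new, var_new

mu, var = 0.0, 1.0                       # current moving estimates
mu, var = update_moving_stats(0.5, 2.0, mu, var)
print(mu, var)                           # 0.05 1.1
```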
Gradient threshold, specified as `Inf` or a positive scalar. If the gradient exceeds the value of `GradientThreshold`, then the gradient is clipped according to the `GradientThresholdMethod` training option.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following:
• `'l2norm'` — If the L2 norm of the gradient of a learnable parameter is larger than `GradientThreshold`, then scale the gradient so that the L2 norm equals `GradientThreshold`.
• `'global-l2norm'` — If the global L2 norm, L, is larger than `GradientThreshold`, then scale all gradients by a factor of `GradientThreshold/`L. The global L2 norm considers all learnable parameters.
• `'absolute-value'` — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than `GradientThreshold`, then scale the partial derivative to have magnitude equal to `GradientThreshold` and retain the sign of the partial derivative.
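As an illustration of the `'l2norm'` and `'absolute-value'` rules above, here is a minimal language-agnostic sketch (Python/NumPy, not MATLAB code from this page):

```python
import numpy as np

def clip_gradient(grad, threshold, method="l2norm"):
    """Illustrative sketch of two of the clipping rules described above."""
    if method == "l2norm":
        # Scale the whole gradient so that its L2 norm equals the threshold.
        norm = np.linalg.norm(grad)
        if norm > threshold:
            grad = grad * (threshold / norm)
    elif method == "absolute-value":
        # Clip each partial derivative to [-threshold, threshold], keeping its sign.
        grad = np.clip(grad, -threshold, threshold)
    return grad

g = np.array([3.0, -4.0])                          # L2 norm = 5
print(clip_gradient(g, 1.0))                       # [ 0.6 -0.8]
print(clip_gradient(g, 1.0, "absolute-value"))     # [ 1. -1.]
```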
Sequence Options
Option to pad, truncate, or split input sequences, specified as one of the following:
• `"longest"` — Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the network.
• `"shortest"` — Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.
• Positive integer — For each mini-batch, pad the sequences to the length of the longest sequence in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. If the specified sequence length does not evenly divide the sequence lengths of the data, then the mini-batches containing the ends of those sequences have length shorter than the specified sequence length. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the `MiniBatchSize` option to a lower value.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `char` | `string`
Direction of padding or truncation, specified as one of the following:
• `"right"` — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.
• `"left"` — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.
Because recurrent layers process sequence data one time step at a time, when the recurrent layer `OutputMode` property is `'last'`, any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the `SequencePaddingDirection` option to `"left"`.
For sequence-to-sequence networks (when the `OutputMode` property is `'sequence'` for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the `SequencePaddingDirection` option to `"right"`.
Value by which to pad input sequences, specified as a scalar.
The option is valid only when `SequenceLength` is `"longest"` or a positive integer. Do not pad sequences with `NaN`, because doing so can propagate errors throughout the network.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Hardware Options
Hardware resource for training network, specified as one of the following:
• `'auto'` — Use a GPU if one is available. Otherwise, use the CPU.
• `'cpu'` — Use the CPU.
• `'gpu'` — Use the GPU.
• `'multi-gpu'` — Use multiple GPUs on one machine, using a local parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts a parallel pool with pool size equal to the number of available GPUs.
• `'parallel'` — Use a local or remote parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts one using the default cluster profile. If the pool has access to GPUs, then only workers with a unique GPU perform training computation. If the pool does not have GPUs, then training takes place on all available CPU workers instead.
For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.
`'gpu'`, `'multi-gpu'`, and `'parallel'` options require Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.
To see an improvement in performance when training in parallel, try scaling up the `MiniBatchSize` and `InitialLearnRate` training options by the number of GPUs.
The `'multi-gpu'` and `'parallel'` options do not support networks containing custom layers with state parameters or built-in layers that are stateful at training time.
Parallel worker load division between GPUs or CPUs, specified as one of the following:
• Scalar from `0` to `1` — Fraction of workers on each machine to use for network training computation. If you train the network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.
• Positive integer — Number of workers on each machine to use for network training computation. If you train the network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.
• Numeric vector — Network training load for each worker in the parallel pool. For a vector `W`, worker `i` gets a fraction `W(i)/sum(W)` of the work (number of examples per mini-batch). If you train a network using data in a mini-batch datastore with background dispatch enabled, then you can assign a worker load of 0 to use that worker for fetching data in the background. The specified vector must contain one value per worker in the parallel pool.
If the parallel pool has access to GPUs, then workers without a unique GPU are never used for training computation. The default for pools with GPUs is to use all workers with a unique GPU for training computation, and the remaining workers for background dispatch. If the pool does not have access to GPUs and CPUs are used for training, then the default is to use one worker per machine for background data dispatch.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Flag to enable background dispatch (asynchronous prefetch queuing) to read training data from datastores, specified as `0` (false) or `1` (true). Background dispatch requires Parallel Computing Toolbox.
`DispatchInBackground` is only supported for datastores that are partitionable. For more information, see Use Datastore for Parallel Training and Background Dispatching.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Checkpoints
Path for saving the checkpoint networks, specified as a character vector or string scalar.
• If you do not specify a path (that is, you use the default `""`), then the software does not save any checkpoint networks.
• If you specify a path, then `trainNetwork` saves checkpoint networks to this path and assigns a unique name to each network. You can then load any checkpoint network and resume training from that network.
If the folder does not exist, then you must first create it before specifying the path for saving the checkpoint networks. If the path you specify does not exist, then `trainingOptions` returns an error.
The `CheckpointFrequency` and `CheckpointFrequencyUnit` options specify the frequency of saving checkpoint networks.
Data Types: `char` | `string`
Frequency of saving checkpoint networks, specified as a positive integer.
If `CheckpointFrequencyUnit` is `'epoch'`, then the software saves checkpoint networks every `CheckpointFrequency` epochs.
If `CheckpointFrequencyUnit` is `'iteration'`, then the software saves checkpoint networks every `CheckpointFrequency` iterations.
This option only has an effect when `CheckpointPath` is nonempty.
Data Types: `single` | `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`
Checkpoint frequency unit, specified as `'epoch'` or `'iteration'`.
If `CheckpointFrequencyUnit` is `'epoch'`, then the software saves checkpoint networks every `CheckpointFrequency` epochs.
If `CheckpointFrequencyUnit` is `'iteration'`, then the software saves checkpoint networks every `CheckpointFrequency` iterations.
This option only has an effect when `CheckpointPath` is nonempty.
Output functions to call during training, specified as a function handle or cell array of function handles. `trainNetwork` calls the specified functions once before the start of training, after each iteration, and once after training has finished. `trainNetwork` passes a structure containing information in the following fields:
FieldDescription
`Epoch`Current epoch number
`Iteration`Current iteration number
`TimeSinceStart`Time in seconds since the start of training
`TrainingLoss`Current mini-batch loss
`ValidationLoss`Loss on the validation data
`BaseLearnRate`Current base learning rate
`TrainingAccuracy` Accuracy on the current mini-batch (classification networks)
`TrainingRMSE`RMSE on the current mini-batch (regression networks)
`ValidationAccuracy`Accuracy on the validation data (classification networks)
`ValidationRMSE`RMSE on the validation data (regression networks)
`State`Current training state, with a possible value of `"start"`, `"iteration"`, or `"done"`.
If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.
You can use output functions to display or plot progress information, or to stop training. To stop training early, make your output function return `1` (true). If any output function returns `1` (true), then training finishes and `trainNetwork` returns the latest network. For an example showing how to use output functions, see Customize Output During Deep Learning Network Training.
Data Types: `function_handle` | `cell`
Examples
Create a set of options for training a neural network using the RMSProp optimizer. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Specify the learning rate and the decay rate of the moving average of the squared gradient. Turn on the training progress plot.
```options = trainingOptions("rmsprop", ... InitialLearnRate=3e-4, ... SquaredGradientDecayFactor=0.99, ... MaxEpochs=20, ... MiniBatchSize=64, ... Plots="training-progress")```
```options = TrainingOptionsRMSProp with properties: SquaredGradientDecayFactor: 0.9900 Epsilon: 1.0000e-08 InitialLearnRate: 3.0000e-04 LearnRateSchedule: 'none' LearnRateDropFactor: 0.1000 LearnRateDropPeriod: 10 L2Regularization: 1.0000e-04 GradientThresholdMethod: 'l2norm' GradientThreshold: Inf MaxEpochs: 20 MiniBatchSize: 64 Verbose: 1 VerboseFrequency: 50 ValidationData: [] ValidationFrequency: 50 ValidationPatience: Inf Shuffle: 'once' CheckpointPath: '' CheckpointFrequency: 1 CheckpointFrequencyUnit: 'epoch' ExecutionEnvironment: 'auto' WorkerLoad: [] OutputFcn: [] Plots: 'training-progress' SequenceLength: 'longest' SequencePaddingValue: 0 SequencePaddingDirection: 'right' DispatchInBackground: 0 ResetInputNormalization: 1 BatchNormalizationStatistics: 'population' OutputNetwork: 'last-iteration' ```
Version History
Introduced in R2018a
|
|
# Display¶
Back to Scenes
Displays ndarrays as images, but is easier to use and more flexible than matplotlib’s imshow.
display(*imgs, **kwargs)[source]
Display 2D and 3D ndarrays as images with matplotlib.
The ndarrays must either be 2D, or 3D with 1 or 3 bands. If they are 3D masked arrays, the mask will be used as an alpha channel.
Unlike matplotlib’s imshow, arrays can be any dtype; internally, each is normalized to the range [0..1].
Parameters:
• *imgs (1 or more ndarrays) – When multiple images are given, each is displayed on its own row.
• bands_axis (int, default 0) – Axis which contains bands in each array.
• title (str, or sequence of str; optional) – Title for each image. If a sequence, must be the same length as imgs.
• size (int, default 10) – Length, in inches, to display the longer side of each image.
• robust (bool, default True) – Use the 2nd and 98th percentiles to compute color limits. Otherwise, the minimum and maximum values in each array are used.
• interpolation (str, default "bilinear") – Interpolation method for matplotlib to use when scaling images for display. Bilinear is the default, since it produces smoother results when scaling down continuously-valued data (i.e. images). For displaying discrete data, however, choose 'nearest' to prevent values not existing in the input from appearing in the output. Acceptable values are 'none', 'nearest', 'bilinear', 'bicubic', 'spline16', 'spline36', 'hanning', 'hamming', 'hermite', 'kaiser', 'quadric', 'catrom', 'gaussian', 'bessel', 'mitchell', 'sinc', 'lanczos'.
• colormap (str, default None) – The name of a Colormap registered with matplotlib. Some commonly used built-in options are 'plasma', 'magma', 'viridis', 'inferno'. See https://matplotlib.org/users/colormaps.html for more options. To use a Colormap, the input images must have a single band. The Colormap will be ignored for images with more than one band.
Example
In [1]: import descarteslabs as dl
In [2]: import numpy as np
In [3]: a = np.arange(20*15).reshape((20, 15))
In [4]: b = np.tan(a)
In [5]: dl.scenes.display(a, b, size=4)
Raises: ImportError – If matplotlib is not installed.
|
|
## Found 45 Documents (Results 1–45)
### Two third-order explicit integration algorithms with controllable numerical dissipation for second-order nonlinear dynamics. (English)Zbl 07532557
MSC: 65-XX 76-XX
Full Text:
### Simple numerical methods of second- and third-order convergence for solving a fully third-order nonlinear boundary value problem. (English)Zbl 1483.34026
MSC: 34A45 34B15 65L10
Full Text:
### A third-order accurate wave propagation algorithm for hyperbolic partial differential equations. (English)Zbl 1476.65206
MSC: 65M08 76M12
Full Text:
MSC: 65M06
Full Text:
### One-step multi-derivative methods for backward stochastic differential equations. (English)Zbl 1463.60092
MSC: 60H35 65C30
Full Text:
### A third-order unconditionally positivity-preserving scheme for production-destruction equations with applications to non-equilibrium flows. (English)Zbl 1444.35125
MSC: 35Q31 76V05 76M20
Full Text:
### High order finite-volume WENO scheme for five-equation model of compressible two-fluid flow. (English)Zbl 1442.65205
MSC: 65M08 76M12
Full Text:
Full Text:
### Optimization and its application for GM (1,1) model based on the third and the fourth order Runge-Kutta method. (Chinese. English summary)Zbl 1363.93037
MSC: 93A30 62M20 93C41 65L06
### Stable and accurate pressure approximation for unsteady incompressible viscous flow. (English)Zbl 1307.76029
MSC: 76D05 35Q30 65M15 65M60 76M25
Full Text:
MSC: 65D25
### Compact third-order multidimensional upwind scheme for Navier-Stokes simulations. (English)Zbl 1006.76062
MSC: 76M10 76N15
Full Text:
### An unstructured $$hp$$ finite-element scheme for fluid flow and heat transfer in moving domains. (English)Zbl 0995.76043
MSC: 76M10 76D05 80A20 80M10
Full Text:
### Third-order accurate finite volume schemes for Euler computations on curvilinear meshes. (English)Zbl 0995.76055
MSC: 76M12 76N15
Full Text:
### A residual-based compact scheme for the compressible Navier-Stokes equations. (English)Zbl 1005.76069
MSC: 76M20 76N15
Full Text:
### On numerical investigation of transonic turbulent flows near a wing by implicit higher-order schemes. (Russian. English summary)Zbl 0967.76071
MSC: 76M20 76H05 76F40
### A finite volume method with a modified ENO scheme using a Hermite interpolation to solve advection-diffusion equations. (English)Zbl 1008.76049
MSC: 76M12 76R99
Full Text:
### Numerical simulation of turbulent flows in a pipe of square cross-section. (English. Russian original)Zbl 0948.76601
Phys.-Dokl. 42, No. 3, 158-162 (1997); translation from Dokl. Akad. Nauk, Ross. Akad. Nauk 353, No. 3, 338-342 (1997).
MSC: 76M25 76M20 76F10
### An implicit multistage integration method including projection for the numerical simulation of constrained multibody systems. (English)Zbl 0916.70003
MSC: 70-08 70E15 65L06
Full Text:
### Constructing oscillation preventing scheme for advection equation by rational function. (English)Zbl 0921.76118
MSC: 76M25 76R99 65D07
Full Text:
MSC: 62G09
Full Text:
### Extension to three-dimensional problems of the upwind finite element scheme based on the choice of up- and downwind points. (English)Zbl 0854.76051
MSC: 76M10 76D05
Full Text:
### Third order asymptotic models: likelihood functions leading to accurate approximations for distribution functions. (English)Zbl 0831.62016
MSC: 62E17 62E20
### The analysis of unsteady incompressible flows by a three-step finite element method. (English)Zbl 0772.76036
MSC: 76M10 76D05 65M12
Full Text:
### A three-step finite element method for unsteady incompressible flows. (English)Zbl 0771.76038
MSC: 76M10 76D05
Full Text:
### Explicit three-point three-level FDMS for the one-dimensional constant- coefficient advection-diffusion equation. (English)Zbl 0748.76079
MSC: 76M20 76R50 65M06
Full Text:
### A spectral method for a nonlinear equation arising in fluidized bed modelling. (English)Zbl 0734.65081
Numerical treatment of differential equations, Sel. Pap. 5th Int. Semin., NUMDIFF-5, Halle/Ger. 1989, Teubner-Texte Math. 121, 285-292 (1991).
Reviewer: L.Abia
MSC: 65M70 65M12 35K55
### A second-order implicit particle mover with adjustable damping. (English)Zbl 0701.76121
MSC: 76X05 76M20
Full Text:
### A one-step method of third accuracy order for the solution of implicit systems of ordinary differential equations. (Russian)Zbl 0726.65075
Model. Mekh. 3(20), No. 4, 90-101 (1989).
MSC: 65L05 34A34
### An algorithm for the numerical solution of ordinary differential equations of second and higher orders. (English. Russian original)Zbl 0719.65064
U.S.S.R. Comput. Math. Math. Phys. 29, No. 6, 100-101 (1989); translation from Zh. Vychisl. Mat. Mat. Fiz. 29, No. 11, 1740-1741 (1989).
MSC: 65L05 34A34
Full Text:
### An algorithm for numerical solution of ordinary differential equations of second and higher orders. (Russian)Zbl 0686.65036
Reviewer: M.Bartušek
MSC: 65L05 34A34
### Simple high-accuracy resolution program for convective modelling of discontinuities. (English)Zbl 0667.76125
MSC: 76R05 76M99
Full Text:
### A Taylor-series approach to numerical accuracy and a third-order scheme for strong convective flows. (English)Zbl 0631.76100
MSC: 76R05 76M99
Full Text:
### Least-squares schemes for time integration of thermal problems. (English)Zbl 0621.65117
Reviewer: C.A.de Moura
Full Text:
### Analysis of the stability of a difference boundary value problem using a set of analytical calculations. (English. Russian original)Zbl 0612.65098
Sov. Math. 29, No. 10, 70-77 (1985); translation from Izv. Vyssh. Uchebn. Zaved., Mat. 1985, No. 10(281), 55-61 (1985).
Reviewer: L.G.Vulkov
MSC: 65R20 45K05
### New higher-order upwind scheme for incompressible Navier-Stokes equations. (English)Zbl 0587.76042
Numerical methods in fluid dynamics, 9th int. Conf., Gif-sur- Yvette/France 1984, Lect. Notes Phys. 218, 291-295 (1985).
### Computation of steady laminar flow over a circular cylinder with third- order boundary conditions. (English)Zbl 0503.76043
MSC: 76D05 76M99
Full Text:
MSC: 76B25
Full Text:
### Difference schemes with uniform second and third order accuracy and reduced smoothing. (English)Zbl 0409.65039
MSC: 65M06 65N06 86A10
Full Text:
|
|
# Different-Distance Sets in a Graph
Document Type : Original paper
Authors
1 Wingate University
2 Department of Mathematics, University of Johannesburg, Auckland Park, South Africa
3 Clemson University
4 Erasmus Universiteit
Abstract
A set of vertices $S$ in a connected graph $G$ is a different-distance set if, for any vertex $w$ outside $S$, no two vertices in $S$ have the same distance to $w$.
The lower and upper different-distance numbers of a graph are the orders of a smallest and a largest maximal different-distance set, respectively.
We prove that a different-distance set induces either a special type of path or an independent set. We present properties of different-distance sets, and consider the different-distance numbers of paths, cycles, Cartesian products of bipartite graphs, and Cartesian products of complete graphs. We conclude with some open problems and questions.
Keywords
Main Subjects
|
|
# Maximizing ratio volume/diameter^n by an affinity
Suppose we have a convex compact body $D\subset \mathbb R^n$. We can try to apply an affine transformation that keeps the volume and decreases the diameter of $D$.
It is clear that there is a constant $\lambda_n$ such that for any $D$ there is an affinity $F$ such that $\mathrm{diameter}(F(D))^n\leq\lambda_n \,\mathrm{Volume}(F(D))$. I'm interested in the optimal value of $\lambda_n$.
The article "On the thinnest non-separable lattice of convex bodies" (E. Makai, p. 23) gives the estimate $\lambda_n\leq \binom{2n}{n} n^{n/2}/k_n$, where $k_n$ is the volume of the convex hull of the unit sphere and the points $(\pm\sqrt{n},0,\ldots,0)$; $\lambda_2$ is also computed there.
I cannot find any better estimate for $\lambda_3$ in the literature, but it seems that I can prove a slightly better estimate by elementary methods: if there is no affinity decreasing the diameter, that means that $D$ contains many diameters which would increase under any infinitesimal affinity, and from this we get an estimate. I'm sure I'm not the first to apply such a simple idea to this problem. Do you know any other references concerning this problem?
A closely related problem, considering the $(n-1)$-dimensional surface area instead of the diameter of the body, has been solved by Keith Ball, Volume ratios and a reverse isoperimetric inequality. J. London Math. Soc. (2) 44 (1991), no. 2, 351–359 [MR1136445]. The extreme case, as expected, is the $n$-dimensional simplex. Ball's proof may also work for the diameter. The article is available on the arXiv, see http://arxiv.org/abs/math/9201205 . The 2-dimensional case was solved long before, with a very short, elementary proof.
A quick estimate of the diameter can be obtained by taking the minimum volume ellipsoid containing the body $K$ and using the known (best possible, by the way) estimate of the ellipsoid's volume. An affine transformation that turns the ellipsoid into a ball yields a bound on the diameter - perhaps not the best possible, but a fairly good one.
|
|
# A Fervent Defense of Frequentist Statistics
[Highlights for the busy: de-bunking standard "Bayes is optimal" arguments; frequentist Solomonoff induction; and a description of the online learning framework. Note: cross-posted from my blog.]
Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be “Myth 5″ below, which describes the online learning framework as a response to the claim that frequentist methods need to make strong modeling assumptions. Among other things, online learning allows me to perform the following remarkable feat: if I’m betting on horses, and I get to place bets after watching other people bet but before seeing which horse wins the race, then I can guarantee that after a relatively small number of races, I will do almost as well overall as the best other person, even if the number of other people is very large (say, 1 billion), and their performance is correlated in complicated ways.
If you’re only going to understand two points, then also read about the frequentist version of Solomonoff induction, which is described in “Myth 6″.
Main article. I’ve already written one essay on Bayesian vs. frequentist statistics. In that essay, I argued for a balanced, pragmatic approach in which we think of the two families of methods as a collection of tools to be used as appropriate. Since I’m currently feeling contrarian, this essay will be far less balanced and will argue explicitly against Bayesian methods and in favor of frequentist methods. I hope this will be forgiven as so much other writing goes in the opposite direction of unabashedly defending Bayes. I should note that this essay is partially inspired by some of Cosma Shalizi’s blog posts, such as this one.
This essay will start by listing a series of myths, then debunk them one-by-one. My main motivation for this is that Bayesian approaches seem to be highly popularized, to the point that one may get the impression that they are the uncontroversially superior method of doing statistics. I actually think the opposite is true: I think most statisticians would for the most part defend frequentist methods, although there are also many departments that are decidedly Bayesian (e.g. many places in England, as well as some U.S. universities like Columbia). I have a lot of respect for many of the people at these universities, such as Andrew Gelman and Philip Dawid, but I worry that many of the other proponents of Bayes (most of them non-statisticians) tend to oversell Bayesian methods or undersell alternative methodologies.
If you are like me from, say, two years ago, you are firmly convinced that Bayesian methods are superior and that you have knockdown arguments in favor of this. If this is the case, then I hope this essay will give you an experience that I myself found life-altering: the experience of having a way of thinking that seemed unquestionably true slowly dissolve into just one of many imperfect models of reality. This experience helped me gain more explicit appreciation for the skill of viewing the world from many different angles, and of distinguishing between a very successful paradigm and reality.
If you are not like me, then you may have had the experience of bringing up one of many reasonable objections to normative Bayesian epistemology, and having it shot down by one of many “standard” arguments that seem wrong but not for easy-to-articulate reasons. I hope to lend some reprieve to those of you in this camp, by providing a collection of “standard” replies to these standard arguments.
I will start with the myths (and responses) that I think will require the least technical background and be most interesting to a general audience. Toward the end, I deal with some attacks on frequentist methods that I believe amount to technical claims that are demonstrably false; doing so involves more math. Also, I should note that for the sake of simplicity I’ve labeled everything that is non-Bayesian as a “frequentist” method, even though I think there’s actually a fair amount of variation among these methods, although also a fair amount of overlap (e.g. I’m throwing in statistical learning theory with minimax estimation, which certainly have a lot of overlap in ideas but were also in some sense developed by different communities).
The Myths:
• Bayesian methods are optimal.
• Bayesian methods are optimal except for computational considerations.
• We can deal with computational constraints simply by making approximations to Bayes.
• The prior isn’t a big deal because Bayesians can always share likelihood ratios.
• Frequentist methods need to assume their model is correct, or that the data are i.i.d.
• Frequentist methods can only deal with simple models, and make arbitrary cutoffs in model complexity (aka: “I’m Bayesian because I want to do Solomonoff induction”).
• Frequentist methods hide their assumptions while Bayesian methods make assumptions explicit.
• Frequentist methods are fragile, Bayesian methods are robust.
• Frequentist methods are responsible for bad science.
• Frequentist methods are unprincipled/hacky.
• Frequentist methods have no promising approach to computationally bounded inference.
Myth 1: Bayesian methods are optimal. Presumably when most people say this they are thinking of either Dutch-booking or the complete class theorem. Roughly what these say are the following:
Dutch-book argument: Every coherent set of beliefs can be modeled as a subjective probability distribution. (Roughly, coherent means “unable to be Dutch-booked”.)
Complete class theorem: Every non-Bayesian method is worse than some Bayesian method (in the sense of performing deterministically at least as poorly in every possible world).
Let’s unpack both of these. My high-level argument regarding Dutch books is that I would much rather spend my time trying to correspond with reality than trying to be internally consistent. More concretely, the Dutch-book argument says that if for every bet you force me to take one side or the other, then unless I’m Bayesian there’s a collection of bets that will cause me to lose money for sure. I don’t find this very compelling. This seems analogous to the situation where there’s some quant at Jane Street, and they’re about to run code that will make thousands of dollars trading stocks, and someone comes up to them and says “Wait! You should add checks to your code to make sure that no subset of your trades will lose you money!” This just doesn’t seem worth the quant’s time, it will slow down the code substantially, and instead the quant should be writing the next program to make thousands more dollars. This is basically what dutch-booking arguments seem like to me.
Moving on, the complete class theorem says that for any decision rule, I can do better by replacing it with some Bayesian decision rule. But this injunction is not useful in practice, because it doesn't say anything about which decision rule I should replace it with. Of course, if you hand me a decision rule and give me infinite computational resources, then I can hand you back a Bayesian method that will perform better. But it still might not perform well. All the complete class theorem says is that every local optimum is Bayesian. To be a useful theory of epistemology, I need a prescription for how, in the first place, I am to arrive at a good decision rule, not just a locally optimal one. And this is something that frequentist methods do provide, to a far greater extent than Bayesian methods (for instance by using minimax decision rules such as the maximum-entropy example given later). Note also that many frequentist methods do correspond to a Bayesian method for some appropriately chosen prior. But the crucial point is that the frequentist told me how to pick a prior I would be happy with (also, many frequentist methods don't correspond to a Bayesian method for any choice of prior; they nevertheless often perform quite well).
Myth 2: Bayesian methods are optimal except for computational considerations. We already covered this in the previous point under the complete class theorem, but to re-iterate: Bayesian methods are locally optimal, not globally optimal. Identifying all the local optima is very different from knowing which of them is the global optimum. I would much rather have someone hand me something that wasn't a local optimum but was close to the global optimum, than something that was a local optimum but was far from the global optimum.
Myth 3: We can deal with computational constraints simply by making approximations to Bayes. I have rarely seen this borne out in practice. Here's a challenge: suppose I give you data generated in the following way. There are a collection of vectors ${x_1}$, ${x_2}$, ${\ldots}$, ${x_{10,000}}$, each with ${10^6}$ coordinates. I generate outputs ${y_1}$, ${y_2}$, ${\ldots}$, ${y_{10,000}}$ in the following way. First I globally select ${100}$ of the ${10^6}$ coordinates uniformly at random, then I select a fixed vector ${u}$ such that those ${100}$ coordinates are drawn from i.i.d. Gaussians and the rest of the coordinates are zero. Now I set ${y_n = u^{\top}x_n}$ (i.e. ${y_n}$ is the dot product of ${u}$ with ${x_n}$). You are given ${x}$ and ${y}$, and your job is to infer ${u}$. This is a completely well-specified problem, the only task remaining is computational. I know people who have solved this problem using Bayesian methods with approximate inference. I have respect for these people, because doing so is no easy task. I think very few of them would say that "we can just approximate Bayesian updating and be fine". (Also, this particular problem can be solved trivially with frequentist methods.)
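An aside not in the original essay: the "trivial" frequentist solution alluded to here is presumably $\ell_1$-regularized least squares (the Lasso). A scaled-down sketch of the challenge, assuming NumPy and scikit-learn are available:

```python
# Scaled-down version of the sparse-recovery challenge above (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_features, n_nonzero = 500, 5000, 20

u = np.zeros(n_features)
support = rng.choice(n_features, size=n_nonzero, replace=False)
u[support] = rng.standard_normal(n_nonzero)        # sparse "true" vector

X = rng.standard_normal((n_samples, n_features))
y = X @ u                                           # y_n = u . x_n

model = Lasso(alpha=0.01, max_iter=50_000).fit(X, y)
recovered = np.flatnonzero(np.abs(model.coef_) > 1e-3)
# With a reasonable choice of alpha, the recovered support typically matches
# (or nearly matches) the true support.
print(np.sort(support), recovered)
```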
A particularly egregious example of this is when people talk about “computable approximations to Solomonoff induction” or “computable approximations to AIXI” as if such notions were meaningful.
Myth 4: the prior isn’t a big deal because Bayesians can always share likelihood ratios. Putting aside the practical issue that there would in general be an infinite number of likelihood ratios to share, there is the larger issue that for any hypothesis ${h}$, there is also the hypothesis ${h'}$ that matches ${h}$ exactly up to now, and then predicts the opposite of ${h}$ at all points in the future. You have to constrain model complexity at some point, the question is about how. To put this another way, sharing my likelihood ratios without also constraining model complexity (by focusing on a subset of all logically possible hypotheses) would be equivalent to just sharing all sensory data I’ve ever accrued in my life. To the extent that such a notion is even possible, I certainly don’t need to be a Bayesian to do such a thing.
Myth 5: frequentist methods need to assume their model is correct or that the data are i.i.d. Understanding the content of this section is the most important single insight to gain from this essay. For some reason it’s assumed that frequentist methods need to make strong assumptions (such as Gaussianity), whereas Bayesian methods are somehow immune to this. In reality, the opposite is true. While there are many beautiful and deep frequentist formalisms that answer this, I will choose to focus on one of my favorite, which is online learning.
To explain the online learning framework, let us suppose that our data are ${(x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)}$. We don’t observe ${y_t}$ until after making a prediction ${z_t}$ of what ${y_t}$ will be, and then we receive a penalty ${L(y_t, z_t)}$ based on how incorrect we were. So we can think of this as receiving prediction problems one-by-one, and in particular we make no assumptions about the relationship between the different problems; they could be i.i.d., they could be positively correlated, they could be anti-correlated, they could even be adversarially chosen.
As a running example, suppose that I’m betting on horses and before each race there are ${n}$ other people who give me advice on which horse to bet on. I know nothing about horses, so based on this advice I’d like to devise a good betting strategy. In this case, ${x_t}$ would be the ${n}$ bets that each of the other people recommend, ${z_t}$ would be the horse that I actually bet on, and ${y_t}$ would be the horse that actually wins the race. Then, supposing that ${y_t = z_t}$ (i.e., the horse I bet on actually wins), ${L(y_t, z_t)}$ is the negative of the payoff from correctly betting on that horse. Otherwise, if the horse I bet on doesn’t win, ${L(y_t, z_t)}$ is the cost I had to pay to place the bet.
If I’m in this setting, what guarantee can I hope for? I might ask for an algorithm that is guaranteed to make good bets — but this seems impossible unless the people advising me actually know something about horses. Or, at the very least, one of the people advising me knows something. Motivated by this, I define my regret to be the difference between my penalty and the penalty of the best of the ${n}$ people (note that I only have access to the latter after all ${T}$ rounds of betting). More formally, given a class ${\mathcal{M}}$ of predictors ${h : x \mapsto z}$, I define
$\displaystyle \mathrm{Regret}(T) = \frac{1}{T} \sum_{t=1}^T L(y_t, z_t) - \min_{h \in \mathcal{M}} \frac{1}{T} \sum_{t=1}^T L(y_t, h(x_t))$
In this case, ${\mathcal{M}}$ would have size ${n}$ and the ${i}$th predictor would just always follow the advice of person ${i}$. The regret is then how much worse I do on average than the best expert. A remarkable fact is that, in this case, there is a strategy such that ${\mathrm{Regret}(T)}$ shrinks at a rate of ${\sqrt{\frac{\log(n)}{T}}}$. In other words, I can have an average score within ${\epsilon}$ of the best advisor after ${\frac{\log(n)}{\epsilon^2}}$ rounds of betting.
One reason that this is remarkable is that it does not depend at all on how the data are distributed; the data could be i.i.d., positively correlated, negatively correlated, even adversarial, and one can still construct an (adaptive) prediction rule that does almost as well as the best predictor in the family.
To be even more concrete, if we assume that all costs and payoffs are bounded by ${\$1}$ per round, and that there are ${1,000,000,000}$ people in total, then an explicit upper bound is that after ${28/\epsilon^2}$ rounds, we will be within ${\epsilon}$ dollars on average of the best other person. Under slightly stronger assumptions, we can do even better, for instance if the best person has an average variance of ${0.1}$ about their mean, then the ${28}$ can be replaced with ${4.5}$.
It is important to note that the betting scenario is just a running example, and one can still obtain regret bounds under fairly general scenarios; ${\mathcal{M}}$ could be continuous and ${L}$ could have quite general structure; the only technical assumption is that ${\mathcal{M}}$ be a convex set and that ${L}$ be a convex function of ${z}$. These assumptions tend to be easy to satisfy, though I have run into a few situations where they end up being problematic, mainly for computational reasons. For an ${n}$-dimensional model family, typically ${\mathrm{Regret}(T)}$ decreases at a rate of ${\sqrt{\frac{n}{T}}}$, although under additional assumptions this can be reduced to ${\sqrt{\frac{\log(n)}{T}}}$, as in the betting example above. I would consider this reduction to be one of the crowning results of modern frequentist statistics.
Yes, these guarantees sound incredibly awesome and perhaps too good to be true. They actually are that awesome, and they are actually true. The work is being done by measuring the error relative to the best model in the model family. We aren't required to do well in an absolute sense, we just need to not do any worse than the best model. So as long as at least one of the models in our family makes good predictions, we will as well. This is really what statistics is meant to be doing: you come up with everything you imagine could possibly be reasonable, and hand it to me, and then I come up with an algorithm that will figure out which of the things you handed me was most reasonable, and will do almost as well as that. As long as at least one of the things you come up with is good, then my algorithm will do well. Importantly, due to the ${\log(n)}$ dependence on the dimension of the model family, you can actually write down extremely broad classes of models and I will still successfully sift through them.
Let me stress again: regret bounds are saying that, no matter how the ${x_t}$ and ${y_t}$ are related, no i.i.d. assumptions anywhere in sight, we will do almost as well as any predictor ${h}$ in ${\mathcal{M}}$ (in particular, almost as well as the best predictor).
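To make this concrete, here is a minimal sketch (not from the original essay) of the exponential-weights ("Hedge") strategy, one standard algorithm that achieves regret of order $\sqrt{\log(n)/T}$ against the best of $n$ experts, assuming losses bounded in $[0,1]$:

```python
import numpy as np

def hedge(expert_losses):
    """Exponential weights over n experts; expert_losses has shape (T, n), entries in [0, 1].
    Returns the algorithm's average (expected) loss."""
    T, n = expert_losses.shape
    eta = np.sqrt(8 * np.log(n) / T)      # standard learning rate when the horizon T is known
    log_w = np.zeros(n)                   # log-weights, for numerical stability
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                      # current distribution over experts
        total += p @ expert_losses[t]     # loss suffered this round (in expectation)
        log_w -= eta * expert_losses[t]   # downweight experts that did badly
    return total / T

# Example: 1000 experts each predict a fixed probability of heads for a biased coin.
rng = np.random.default_rng(0)
T, n = 5000, 1000
guesses = rng.random(n)
outcomes = (rng.random(T) < 0.7).astype(float)
losses = np.abs(guesses[None, :] - outcomes[:, None])   # shape (T, n), values in [0, 1]
print(hedge(losses), losses.mean(axis=0).min())          # close to the best expert's average loss
```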
Myth 6: frequentist methods can only deal with simple models and need to make arbitrary cutoffs in model complexity. A naive perusal of the literature might lead one to believe that frequentists only ever consider very simple models, because many discussions center on linear and log-linear models. To dispel this, I will first note that there are just as many discussions that focus on much more general properties such as convexity and smoothness, and that can achieve comparably good bounds in many cases. But more importantly, the reason we focus so much on linear models is because we have already reduced a large family of problems to (log-)linear regression. The key insight, and I think one of the most important insights in all of applied mathematics, is that of featurization: given a non-linear problem, we can often embed it into a higher-dimensional linear problem, via a feature map ${\phi : X \rightarrow \mathbb{R}^n}$ (${\mathbb{R}^n}$ denotes ${n}$-dimensional space, i.e. vectors of real numbers of length ${n}$). For instance, if I think that ${y}$ is a polynomial (say cubic) function of ${x}$, I can apply the mapping ${\phi(x) = (1, x, x^2, x^3)}$, and now look for a linear relationship between ${y}$ and ${\phi(x)}$.
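A minimal sketch of the cubic example (not from the original essay; NumPy assumed available):

```python
# Featurization: fit a cubic relationship with ordinary linear least squares
# on the feature map phi(x) = (1, x, x^2, x^3). Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 1 - 2 * x + 0.5 * x**3 + 0.1 * rng.standard_normal(200)   # "unknown" cubic plus noise

phi = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)       # featurize
w, *_ = np.linalg.lstsq(phi, y, rcond=None)                    # linear regression in feature space
print(np.round(w, 2))                                          # roughly [ 1.  -2.   0.   0.5]
```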
This insight extends far beyond polynomials. In combinatorial domains such as natural language, it is common to use indicator features: features that are ${1}$ if a certain event occurs and ${0}$ otherwise. For instance, I might have an indicator feature for whether two words appear consecutively in a sentence, whether two parts of speech are adjacent in a syntax tree, or for what part of speech a word has. Almost all state of the art systems in natural language processing work by solving a relatively simple regression task (typically either log-linear or max-margin) over a rich feature space (often involving hundreds of thousands or millions of features, i.e. an embedding into ${\mathbb{R}^{10^5}}$ or ${\mathbb{R}^{10^6}}$).
A counter-argument to the previous point could be: “Sure, you could create a high-dimensional family of models, but it’s still a parameterized family. I don’t want to be stuck with a parameterized family, I want my family to include all Turing machines!” Putting aside for a second the question of whether “all Turing machines” is a well-advised model choice, this is something that a frequentist approach can handle just fine, using a tool called regularization, which after featurization is the second most important idea in statistics.
Specifically, given any sufficiently quickly growing function ${\psi(h)}$, one can show that, given ${T}$ data points, there is a strategy whose average error is at most ${\sqrt{\frac{\psi(h)}{T}}}$ worse than any estimator ${h}$. This can hold even if the model class ${\mathcal{M}}$ is infinite dimensional. For instance, if ${\mathcal{M}}$ consists of all probability distributions over Turing machines, and we let ${h_i}$ denote the probability mass placed on the ${i}$th Turing machine, then a valid regularizer ${\psi}$ would be
$\displaystyle \psi(h) = \sum_i h_i \log(i^2 \cdot h_i)$
If we consider this, then we see that, for any probability distribution over the first ${2^k}$ Turing machines (i.e. all Turing machines with description length ${\leq k}$), the value of ${\psi}$ is at most ${\log((2^k)^2) = k\log(4)}$. (Here we use the fact that ${\psi(h) \geq \sum_i h_i \log(i^2)}$, since ${h_i \leq 1}$ and hence ${h_i\log(h_i) \leq 0}$.) This means that, if we receive roughly ${\frac{k}{\epsilon^2}}$ data, we will achieve error within ${\epsilon}$ of the best Turing machine that has description length ${\leq k}$.
Let me note several things here:
• This strategy makes no assumptions about the data being i.i.d. It doesn’t even assume that the data are computable. It just guarantees that it will perform as well as any Turing machine (or distribution over Turing machines) given the appropriate amount of data.
• This guarantee holds for any given sufficiently smooth measurement of prediction error (the update strategy depends on the particular error measure).
• This guarantee holds deterministically, no randomness required (although predictions may need to consist of probability distributions rather than specific points, but this is also true of Bayesian predictions).
Interestingly, in the case that the prediction error is given by the negative log probability assigned to the truth, then the corresponding strategy that achieves the error bound is just normal Bayesian updating. But for other measurements of error, we get different update strategies. Although I haven’t worked out the math, intuitively this difference could be important if the universe is fundamentally unpredictable but our notion of error is insensitive to the unpredictable aspects.
Myth 7: frequentist methods hide their assumptions while Bayesian methods make assumptions explicit. I’m still not really sure where this came from. As we’ve seen numerous times so far, a very common flavor among frequentist methods is the following: I have a model class ${\mathcal{M}}$, I want to do as well as any model in ${\mathcal{M}}$; or put another way:
Assumption: At least one model in ${\mathcal{M}}$ has error at most ${E}$.
Guarantee: My method will have error at most ${E + \epsilon}$.
This seems like a very explicit assumption with a very explicit guarantee. On the other hand, an argument I hear is that Bayesian methods make their assumptions explicit because they have an explicit prior. If I were to write this as an assumption and guarantee, I would write:
Assumption: The data were generated from the prior.
Guarantee: I will perform at least as well as any other method.
While I agree that this is an assumption and guarantee of Bayesian methods, there are two problems that I have with drawing the conclusion that “Bayesian methods make their assumptions explicit”. The first is that it can often be very difficult to understand how a prior behaves; so while we could say “The data were generated from the prior” is an explicit assumption, it may be unclear what exactly that assumption entails. However, a bigger issue is that “The data were generated from the prior” is an assumption that very rarely holds; indeed, in many cases the underlying process is deterministic (if you’re a subjective Bayesian then this isn’t necessarily a problem, but it does certainly mean that the assumption given above doesn’t hold). So given that that assumption doesn’t hold but Bayesian methods still often perform well in practice, I would say that Bayesian methods are making some other sort of “assumption” that is far less explicit (indeed, I would be very interested in understanding what this other, more nebulous assumption might be).
Myth 8: frequentist methods are fragile, Bayesian methods are robust. This is another one that’s straightforwardly false. First, since frequentist methods often rest on weaker assumptions, they are more robust when the assumptions don’t quite hold. Second, there is an entire area of robust statistics, which focuses on being robust to adversarial errors in the problem data.
Myth 9: frequentist methods are responsible for bad science. I will concede that much bad science is done using frequentist statistics. But this is true only because pretty much all science is done using frequentist statistics. I’ve heard arguments that using Bayesian methods instead of frequentist methods would fix at least some of the problems with science. I don’t think this is particularly likely, as I think many of the problems come from mis-application of statistical tools or from failure to control for multiple hypotheses. If anything, Bayesian methods would exacerbate the former, because they often require more detailed modeling (although in most simple cases the difference doesn’t matter at all). I don’t think being Bayesian guards against multiple hypothesis testing. Yes, in some sense a prior “controls for multiple hypotheses”, but in general the issue is that the “multiple hypotheses” are never written down in the first place, or are written down and then discarded. One could argue that being in the habit of writing down a prior might make practitioners more likely to think about multiple hypotheses, but I’m not sure this is the first-order thing to worry about.
Myth 10: frequentist methods are unprincipled / hacky. One of the most beautiful theoretical paradigms that I can think of is what I would call the “geometric view of statistics”. One place that does a particularly good job of showcasing this is Shai Shalev-Shwartz’s PhD thesis, which was so beautiful that I cried when I read it. I’ll try (probably futilely) to convey a tiny amount of the intuition and beauty of this paradigm in the next few paragraphs, although focusing on minimax estimation, rather than online learning as in Shai’s thesis.
The geometric paradigm tends to emphasize a view of measurements (i.e. empirical expected values over observed data) as “noisy” linear constraints on a model family. We can control the noise by either taking few enough measurements that the total error from the noise is small (classical statistics), or by broadening the linear constraints to convex constraints (robust statistics), or by controlling the Lagrange multipliers on the constraints (regularization). One particularly beautiful result in this vein is the duality between maximum entropy and maximum likelihood. (I can already predict the Jaynesians trying to claim this result for their camp, but (i) Jaynes did not invent maximum entropy; (ii) maximum entropy is not particularly Bayesian (in the sense that frequentists use it as well); and (iii) the view on maximum entropy that I’m about to provide is different from the view given in Jaynes or by physicists in general [edit: EHeller thinks this last claim is questionable, see discussion here].)
To understand the duality mentioned above, suppose that we have a probability distribution ${p(x)}$ and the only information we have about it is the expected value of a certain number of functions, i.e. the information that ${\mathbb{E}[\phi(x)] = \phi^*}$, where the expectation is taken with respect to ${p(x)}$. We are interested in constructing a probability distribution ${q(x)}$ such that no matter what particular value ${p(x)}$ takes, ${q(x)}$ will still make good predictions. In other words (taking ${\log p(x)}$ as our measurement of prediction accuracy) we want ${\mathbb{E}_{p'}[\log q(x)]}$ to be large for all distributions ${p'}$ such that ${\mathbb{E}_{p'}[\phi(x)] = \phi^*}$. Using a technique called Lagrangian duality, we can both find the optimal distribution ${q}$ and compute its worst-case accuracy over all ${p'}$ with ${\mathbb{E}_{p'}[\phi(x)] = \phi^*}$. The characterization is as follows: consider all probability distributions ${q(x)}$ that are proportional to ${\exp(\lambda^{\top}\phi(x))}$ for some vector ${\lambda}$, i.e. ${q(x) = \exp(\lambda^{\top}\phi(x))/Z(\lambda)}$ for some ${Z(\lambda)}$. Of all of these, take the ${q(x)}$ with the largest value of ${\lambda^{\top}\phi^* - \log Z(\lambda)}$. Then ${q(x)}$ will be the optimal distribution and the accuracy for all distributions ${p'}$ will be exactly ${\lambda^{\top}\phi^* - \log Z(\lambda)}$. Furthermore, if ${\phi^*}$ is the empirical expectation given some number of samples, then one can show that ${\lambda^{\top}\phi^* - \log Z(\lambda)}$ is proportional to the log likelihood of ${q}$, which is why I say that maximum entropy and maximum likelihood are dual to each other.
This is a relatively simple result but it underlies a decent chunk of models used in practice.
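As a concrete illustration of the duality, here is a minimal numerical sketch, assuming numpy and scipy (the six-sided die and the target expectation 4.5 are my own choices for the example): we maximize ${\lambda^{\top}\phi^* - \log Z(\lambda)}$ for the single feature ${\phi(x) = x}$, then read off the exponential-family (maximum-entropy) distribution ${q(x)}$.

```python
import numpy as np
from scipy.optimize import minimize

xs = np.arange(1, 7, dtype=float)   # the outcome space (faces of a die)
phi_star = 4.5                      # the only information we have: E[phi(x)] = 4.5

def neg_dual(lam):
    # We maximize lambda * phi_star - log Z(lambda), i.e. minimize its negative.
    log_Z = np.log(np.sum(np.exp(lam[0] * xs)))
    return -(lam[0] * phi_star - log_Z)

res = minimize(neg_dual, x0=[0.0])
lam = res.x[0]
q = np.exp(lam * xs)
q /= q.sum()                        # q(x) proportional to exp(lambda * phi(x))

print("lambda:", lam)
print("q:", q.round(4))
print("E_q[phi(x)]:", (q * xs).sum())     # matches phi_star at the optimum
print("guaranteed log-score:", -res.fun)  # lambda * phi_star - log Z(lambda)
```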
Myth 11: frequentist methods have no promising approach to computationally bounded inference. I would personally argue that frequentist methods are more promising than Bayesian methods at handling computational constraints, although computationally bounded inference is a very cutting edge area and I’m sure other experts would disagree. However, one point in favor of the frequentist approach here is that we already have some frameworks, such as the “tightening relaxations” framework discussed here, that provide quite elegant and rigorous ways of handling computationally intractable models.
References
(Myth 3) Sparse recovery: Sparse recovery using sparse matrices
(Myth 5) Online learning: Online learning and online convex optimization
(Myth 8) Robust statistics: see this blog post and the two linked papers
(Myth 10) Maximum entropy duality: Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
125 comments
I would love to know which parts of this post Eliezer disagrees with, and why.
Don't have time for a real response. Quickly and ramblingly:
1) The point of Bayesianism isn't that there's a toolbox of known algorithms like max-entropy methods which are supposed to work for everything. The point of Bayesianism is to provide a coherent background epistemology which underlies everything; when a frequentist algorithm works, there's supposed to be a Bayesian explanation of why it works. I have said this before many times but it seems to be a "resistant concept" which simply cannot sink in for many people.
2) I did initially try to wade into the math of the linear problem (and wonder if I'm the only one who did so, unless others spotted the x-y inversion but didn't say anything), trying to figure out how I would solve it even though that wasn't really relevant for reasons of (1), but found that the exact original problem specified may be NP-hard according to Wikipedia, much as my instincts said it should be. And if we're allowed approximate answers then yes, throwing a standard L1-norm algorithm at it is pretty much what I would try, though I might also try some form of expectation-maximization using the standard Bayesian L2 technique and repeatedly truncating the small coefficients and then trying to predict the residual error. I have no idea how long that would take in practice. It doesn't actually matter, because see (1). I could go on about how for any given solution I can compute its Bayesian likelihood assuming Gaussian noise, and so again Bayes functions well as a background epistemology which gives us a particular minimization problem to be computed by whatever means, and if we have no background epistemology then why not just choose a hundred random 1s, etc., but lack the time for more than rapid rambling here. Jacob didn't say what he thought an actual frequentist or Bayesian approach would be, he just said the frequentist approach would be easy and that the Bayesian one was hard.
(3) Having made a brief effort to wade into the math and hit the above bog, I did not attempt to go into Jacob's claim that frequentist statistics can transcend i.i.d. But considering the context in which I originally complained about the assumptions made by frequentist guarantees, I should very much like to see explained concretely how Jacob's favorite algorithm would handle the case of "You have a self-improving AI which turns out to maximize smiles, in all previous cases it produced smiles by making people happy, but once it became smart enough it realized that it ought to preserve your bad generalization and faked its evidence, and now that it has nanotech it's going to tile the universe with tiny smileyfaces." This is the Context Change Problem I originally used to argue against trying for frequentist-style guarantees based on past AI behavior being okay or doing well on other surface indicators. I frankly doubt that Jacob's algorithm is going to handle it. I really really doubt it. Very very roughly, my own notion of an approach here would be a Bayesian-viewpoint AI which was learning a utility function and knew to explicitly query model ambiguity back to the programmers, perhaps using a value-of-info calculation. I should like to hear what a frequentist viewpoint on that would sound like.
(4) Describing the point of likelihood ratios in science would take its own post. Three key ideas are (a) instead of "negative results" we have "likelihood ratios favoring no effect over 5% effect" and so it's now conceptually simpler to get rid of positive-result bias in publication; (b) if we compute likelihood ratios on all the hypotheses which are actually in play then we can add up what many experiments tell us far more easily and get far more sensible answers than with present "survey" methods; and (c) having the actual score be far below expected log score for the best hypothesis tells us when some of our experiments must be giving us bogus data or having been performed under invisibly different conditions, a huge problem in many cases and something far beyond the ability of present "survey" methods to notice or handle.
EDIT: Also everything in http://lesswrong.com/lw/mt/beautiful_probability/
when a frequentist algorithm works, there's supposed to be a Bayesian explanation of why it works. I have said this before many times but it seems to be a "resistant concept" which simply cannot sink in for many people.
Perhaps the reason this is not sinking in for many people is because it is not true.
Bayes assumes you can write down your prior, your likelihood and your posterior. That is what we need to get Bayes theorem to work. If you are working with a statistical model where this is not possible*, you cannot really use the standard Bayesian story, yet there still exist ways of attacking the problem.
(*) Of course, "not possible in principle" is different from "we don't know how to yet." In either case, I am not really sure what the point of an official Bayesian epistemology explanation would be.
This idea that there is a standard Bayesian explanation for All The Things seems very strange to me. Andrew Gelman has a post on his blog about how to define "identifiability" if you are a Bayesian:
This is apparently a tricky (or not useful) concept to define within that framework. Which is a little weird, because it is both a very useful concept, and a very clear one to me.
Gelman is a pretty prominent Bayesian. Either he is confused, or I am confused, or his view on the stuff causal folks like me work on is so alien that it is not illuminating. The issue to me seems to me to be cultural differences between frameworks.
Do you have a handy example of a frequentist algorithm that works, for which there is no Bayesian explanation?
I wouldn't say "no Bayesian explanation," but perhaps "a Bayesian explanation is unknown to me, nor do I see how this explanation would illuminate anything." But yes, I gave an example elsewhere in this thread. The FCI algorithm for learning graph structure in the non-parametric setting with continuous valued variables, where the correct underlying model has the following independence structure:
A is independent of B and C is independent of D (and nothing else is true).
Since I (and to my knowledge everyone else) do not know how to write the likelihood for this model, I don't know how to set up the standard Bayesian story here.
Eliezer,
The point of Bayesianism is to provide a coherent background epistemology which underlies everything; when a frequentist algorithm works, there's supposed to be a Bayesian explanation of why it works. I have said this before many times but it seems to be a "resistant concept" which simply cannot sink in for many people.
First, I object to the labeling of Bayesian explanations as a "resistant concept". I think it's not only uncharitable but also wrong. I started out with exactly the viewpoint that everything should be explained in terms of Bayes (see one of my earliest and most-viewed blog posts if you don't believe me). I moved away from this viewpoint slowly as the result of accumulated evidence that this is not the most productive lens through which to view the world.
More to the point: why is it that you think that everything should have a Bayesian explanation? One of the most-cited reasons why Bayes should be an epistemic ideal is the various "optimality" / Dutch book theorems, which I've already argued against in this post. Do you accept the rebuttals I gave, or disagree with them?
My guess is that you would still be in favor of Bayes as a normative standard of epistemology even if you rejected Dutch book arguments, and the reason why you like it is because you feel like it has been useful for solving a large number of problems. But frequentist statistics (not to mention pretty much any successful paradigm) has also been useful for solving a large number of problems, some of which Bayesian statistics cannot solve, as I have demonstrated in this post. The mere fact that a tool is extremely useful does not mean that it should be elevated to a universal normative standard.
but found that the exact original problem specified may be NP-hard according to Wikipedia, much as my instincts said it should be
We've already discussed this in one of the other threads, but I'll just repeat here that this isn't correct. With overwhelmingly high probability a Gaussian matrix will satisfy the restricted isometry property, which implies that appropriately L1-regularized least squares will return the exact solution.
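For what it's worth, here is a toy sketch of the recovery claim, assuming numpy and scipy (dimensions and sparsity level are my own choices): a random Gaussian measurement matrix, noiseless measurements of a sparse vector, and the basis-pursuit form of the L1 problem (min ||x||_1 subject to Ax = y) solved as a linear program. Recovery is exact up to solver tolerance.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                        # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # Gaussian matrix: satisfies RIP w.h.p.
y = A @ x_true                              # noiseless measurements

# LP over z = [x, u] with |x_i| <= u_i: minimize sum(u) subject to Ax = y.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("max recovery error:", np.max(np.abs(x_hat - x_true)))
```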
I could go on about how for any given solution I can compute its Bayesian likelihood assuming Gaussian noise, and so again Bayes functions well as a background epistemology
The point of this example was to give a problem that, from a modeling perspective, was as convenient for Bayes as possible, but that was computationally intractable to solve using Bayesian techniques. I gave other examples (such as in Myth 5) that demonstrate situations where Bayes breaks down. And I argued indirectly in Myths 1, 4, and 8 that the prior is actually a pretty big deal and has the capacity to cause problems in ways that frequentists have ways of dealing with.
I should very much like to see explained concretely how Jacob's favorite algorithm would handle the case of "You have a self-improving AI which turns out to maximize smiles, in all previous cases it produced smiles by making people happy, but once it became smart enough it realized that it ought to preserve your bad generalization and faked its evidence, and now that it has nanotech it's going to tile the universe with tiny smileyfaces."
I think this is a very bad testing ground for how good a technique is, because it's impossible to say whether something would solve this problem without going through a lot of hand-waving. I think your "notion of how to solve it" is interesting but has a lot of details to fill in, and it's extremely unclear how it would work, especially given that even for concrete problems that people work on now, an issue with Bayesian methods is overconfidence in a particular model. I should also note that, as we've registered earlier, I don't think that what you call the Context Change Problem is actually a problem that an intelligent agent would face: any agent that is intelligent enough to behave at all functionally close to the level of a human would be robust to context changes.
However, even given all these caveats, I'll still try to answer your question on your own terms. Short answer: do online learning with an additional action called "query programmer" that is guaranteed to always have some small negative utility, say -0.001, that is enough to outweigh any non-trivial amount of uncertainty but will eventually encourage the AI to act autonomously. We would need some way of upper-bounding the regret of other possible actions, and of incorporating this utility constraint into the algorithm, but I don't think the amount of fleshing out is any more or less than that required by your proposal.
[WARNING: The rest of this comment is mostly meaningless rambling.]
I want to stress again that the above paragraph is only a (sketch of) an answer to the question as you posed it. But I'd rather sidestep the question completely and say something like: "OK, if we make literally no assumptions, then we're completely screwed, because moving any speck of dust might cause the universe to explode. Being Bayesian doesn't make this issue go away, it just ignores it.
So, what assumptions can we be reasonably okay with making that would help us solve the problem? Maybe I'd be okay assuming that the mechanism that takes in my past actions and returns a utility is a Turing machine of description length less than 10^15. But unfortunately that doesn't help me much, because for every Turing machine M, there's one of not that much longer description length that behaves identically to M up until I'm about to make my current decision, and then penalizes my current decision with some extraordinary large amount of disutility. Note that, again, being Bayesian doesn't deal with this issue, it just assigns it low prior probability.
I think the question of exactly what assumptions one would be willing to make, that would allow one to confidently reason about actions with potentially extremely discontinuous effects, is an important and interesting one, and I think one of the drawbacks of "thinking like a Bayesian" is that it draws attention away from this issue by treating it as mostly solved (via assigning a prior)."
My guess is that you would still be in favor of Bayes as a normative standard of epistemology even if you rejected Dutch book arguments, and the reason why you like it is because you feel like it has been useful for solving a large number of problems.
Um, nope. What it would really take to change my mind about Bayes is seeing a refutation of Dutch Book and Cox's Theorem and Von Neumann-Morgenstern and the complete class theorem, combined with seeing some alternative epistemology (e.g. Dempster-Shafer) not turn out to completely blow up when subjected to the same kind of scrutiny as Bayesianism (the way DS brackets almost immediately go to [0-1] and fuzzy logic turned out to be useless etc.)
Neural nets have been useful for solving a large number of problems. It doesn't make them good epistemology. It doesn't make them a plausible candidate for "Yes, this is how you need to organize your thinking about your AI's thinking and if you don't your AI will explode".
some of which Bayesian statistics cannot solve, as I have demonstrated in this post.
I am afraid that your demonstration was not stated sufficiently precisely for me to criticize. This seems like the sort of thing for which there ought to be a standard reference, if there were such a thing as a well-known problem which Bayesian epistemology could not handle. For example, we have well-known critiques and literature claiming that nonconglomerability is a problem for Bayesianism, and we have a chapter of Jaynes which neatly shows that they all arise from misuse of limits on infinite problems. Is there a corresponding literature for your alleged reductio of Bayesianism which I can consult? Now, I am a great believer in civilizational inadequacy and the fact that the incompetence of academia is increasing, so perhaps if this problem was recently invented there is no more literature about it. I don't want to be a hypocrite about the fact that sometimes something is true and nobody has written it up anyway, heaven knows that's true all the time in my world. But the fact remains that I am accustomed to somewhat more detailed math when it comes to providing an alleged reductio of the standard edifice of decision theory. I know your time is limited, but the real fact is that I really do need more detail to think that I've seen a criticism and be convinced that no response to that criticism exists. Should your flat assertion that Bayesian methods can't handle something and fall flat so badly as to constitute a critique of Bayesian epistemology, be something that I find convincing?
We've already discussed this in one of the other threads, but I'll just repeat here that this isn't correct. With overwhelmingly high probability a Gaussian matrix will satisfy the restricted isometry property, which implies that appropriately L1-regularized least squares will return the exact solution.
Okay. Though I note that you haven't actually said that my intuitions (and/or my reading of Wikipedia) were wrong; many NP-hard problems will be easy to solve for a randomly generated case.
Anyway, suppose a standard L1-penalty algorithm solves a random case of this problem. Why do you think that's a reductio of Bayesian epistemology? Because the randomly generated weights mean that a Bayesian viewpoint says the credibility is going as the L2 norm on the non-zero weights, but we used an L1 algorithm to find which weights were non-zero? I am unable to parse this into the justifications I am accustomed to hearing for rejecting an epistemology. It seems like you're saying that one algorithm is more effective at finding the maximum of a Bayesian probability landscape than another algorithm; in a case where we both agree that the unbounded form of the Bayesian algorithm would work.
What destroys an epistemology's credibility is a case where even in the limit of unbounded computing power and well-calibrated prior knowledge, a set of rules just returns the wrong answer. The inherent subjectivity of p-values as described in http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/ is not something you can make go away with a better-calibrated prior, correct use of limits, or unlimited computing power; it's the result of bad epistemology. This is the kind of smoking gun it would take to make me stop yammering about probability theory and Bayes's rule. Showing me algorithms which don't on the surface seem Bayesian but find good points on a Bayesian fitness landscape isn't going to cut it!
Eliezer, I included a criticism of both complete class and Dutch book right at the very beginning, in Myth 1. If you find them unsatisfactory, can you at least indicate why?
Your criticism of Dutch Book is that it doesn't seem to you useful to add anti-Dutch-book checkers to your toolbox. My support of Dutch Book is that if something inherently produces Dutch Books then it can't be the right epistemological principle because clearly some of its answers must be wrong even in the limit of well-calibrated prior knowledge and unbounded computing power.
The complete class theorem I understand least of the set, and it's probably not very much entwined with my true rejection so it would be logically rude to lead you on here. Again, though, the point that every local optimum is Bayesian tells us something about non-Bayesian rules producing intrinsically wrong answers. If I believed your criticism, I think it would be forceful; I could accept a world in which for every pair of a rational plan with a world, there is an irrational plan which does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan - the plans which are equivalent of "Just buy the winning lottery ticket, and you'll make more money!" I can imagine being shown that the complete class theorem demonstrates only an "unfair" superiority of this sort, and that only frequentist methods can produce actual outputs for realistic situations even in the limit of unbounded computing power. But I do not believe that you have leveled such a criticism. And it doesn't square very much with my current understanding that the decision rules being considered are computable rules from observations to actions. You didn't actually tell me about a frequentist algorithm which is supposed to be realistic and show why the Bayesian rule which beats it is beating it unfairly.
If you want to hit me square in the true rejection I suggest starting with VNM. The fact that our epistemology has to plug into our actions is one reason why I roll my eyes at the likes of Dempster-Shafer or frequentist confidence intervals that don't convert to credibility distributions.
I could accept a world in which for every pair of a rational plan with a world, there is an irrational plan which does better in that world, but no plausible way for a cognitive algorithm to output that irrational plan
We already live in that world.
(The following is not evidence, just an illustrative analogy) Ever seen Groundhog Day? Imagine him skipping the bulk of the movie and going straight to the last day. It is straight wall to wall WTF but it's very optimal.
One of the criticisms I raised is that merely being able to point to all the local optima is not a particularly impressive property of an epistemological theory. Many of those local optima will be horrible! (My criticism of VNM is essentially the same.)
Many frequentist methods, such as minimax, also provide local optima, but they provide local optima which actually have certain nice properties. And minimax provides a complete decision rule, not just a probability distribution, so it plugs directly into actions.
FYI, there are published counterexamples to Cox's theorem. See for example Joseph Halpern's at http://arxiv.org/pdf/1105.5450.pdf.
You need to not include the period in your link, like so.
Short answer: do online learning with an additional action called "query programmer" that is guaranteed to always have some small negative utility, say -0.001, that is enough to outweigh any non-trivial amount of uncertainty but will eventually encourage the AI to act autonomously.
This short answer is too short for me to understand, unfortunately. Do you think there is enough potential merit in this idea to try to understand it better or further develop it? (I've been learning about online learning recently in an effort to understand/evaluate Paul Christiano's recent "AI control" ideas. If you have your own ideas also based on online learning, I'd love to try to understand them while the online learning stuff is fresh in my mind.)
We've already discussed this in one of the other threads, but I'll just repeat here that this isn't correct. With overwhelmingly high probability a Gaussian matrix will satisfy the restricted isometry property, which implies that appropriately L1-regularized least squares will return the exact solution.
I do wonder if it would have been better to include something along the lines of "with probability 1" to the claim that non-Bayesian methods can solve it easily. Compressed sensing isn't magic, even though it's very close.
any agent that is intelligent enough to behave at all functionally close to the level of a human would be robust to context changes.
Humans get tripped up by context changes very frequently. It's not obvious to me where you think this robustness would come from.
Compressed sensing isn't even magic, if you're halfway versed in signal processing. I understood compressed sensing within 30 seconds of hearing a general overview of it, and there are many related analogs in many fields.
Compressed sensing isn't even magic
The convex optimization guys I know are all rather impressed by compressed sensing- but that may be because they specialize in doing L1 and L2 problems, and so compressed sensing makes the things they're good at even more important.
(c) having the actual score be far below expected log score for the best hypothesis tells us when some of our experiments must be giving us bogus data or having been performed under invisibly different conditions, a huge problem in many cases and something far beyond the ability of present "survey" methods to notice or handle.
The standard meta-analysis toolkit does include methods of looking at the heterogeneity in effect sizes. (This is fresh in my mind because it actually came up at yesterday's CFAR colloquium regarding some academic research that we were discussing.)
I do not know how the frequentist approach compares to the Bayesian approach in this case.
Thanks!
Seconded.
I don't have a technical basis for thinking this, but I'm beginning to suspect that as time goes on, more and more frequentist methods will be proven to be equivalent or good approximations to the ideal Bayesian approach. If that happens, (Edit: Hypothetical) Bayesians who refused to use those methods on ideological grounds would look kind of silly in hindsight, as if relativistic physics came first and a bunch of engineers refused to use Newtonian equations for decades until someone proved that they approximate the truth well at low speeds.
Who are these mysterious straw Bayesians who refuse to use algorithms that work well and could easily turn out to have a good explanation later? Bayes is epistemological background not a toolbox of algorithms.
After a careful rereading of http://lesswrong.com/lw/mt/beautiful_probability/, the 747 analogy suggests that, once you understand the difference between an epistemological background and a toolbox, it might be a good idea to use the toolbox. But I didn't really read it that way the first time, so I imagine others might have made a similar mistake. I'll edit my post to make the straw Bayesians hypothetical, to make it clear that I'm making a point to other LW readers rather than criticizing a class of practicing statisticians.
I'd actually forgotten I'd written that. Thank you for reminding me!
Bayes is epistemological background not a toolbox of algorithms.
I disagree: I think you are lumping two things together that don't necessarily belong together. There is Bayesian epistemology, which is philosophy, describing in principle how we should reason, and there is Bayesian statistics, something that certain career statisticians use in their day to day work. I'd say that frequentism does fairly poorly as an epistemology, but it seems like it can be pretty useful in statistics if used "right". It's nice to have nice principles underlying your statistics, but sometimes ad hoc methods and experience and intuition just work.
Yes, but the sounder the epistemology is, the harder it is to [ETA: accidentally] misuse the tools. Cue all the people misunderstanding what p-values mean...
The fundamental confusion going on here comes from peculiar terminology.
jsteinhardt writes:
Also, I should note that for the sake of simplicity I’ve labeled everything that is non-Bayesian as a “frequentist” method
So every algorithm that isn't obviously Bayesian is labeled Frequentist, while in fact what we have are two epistemological frameworks, and a zillion and one algorithms that we throw at data that don't neatly fit into either framework.
Great post! It would be great if you had cites for various folks claiming myth k. Some of these sound unbelievable!
"Frequentist methods need to assume their model is correct."
This one is hilarious. Does anyone say this? Multiply robust methods (Robins/Rotnitzky/et al) aren't exactly Bayesian, and their entire point is that you can get a giant piece of the likelihood arbitrarily wrong and your estimator is still consistent.
Great post!
I thought you might enjoy it :). Your comments on LessWrong provided me with some of the motivation for writing it.
I am confused about your use of the word optimal. In particular in the sentences
Bayesian methods are optimal (except for computational considerations).
and
Bayesian methods are locally optimal, not global optimal.
are you talking about the same sort of 'optimal'? From wikipedia (here and here) I found the rigorous definition of the word 'optimal' in the second sentence, which can be written in terms of your utility function (a decision rule is optimal if there is no other decision rule which will always give you at least as much utility and in at least one world will give you more utility).
Also I agree with many of your myths, namely 3, 8, 9 and 11. I was rather surprised to see that these things even needed to be mentioned: I don't see why making good trade-offs between truth and computation time should be 'simple' (3); as you mentioned, the frequentist tests are chosen precisely with robustness in mind (8); bad science is more than getting your statistics wrong (9) (small sidenote: while it might be true that scientists can get confused by frequentist statistics, which might corrupt their science, I don't think the problem would be smaller when using a different form of statistics, and I therefore think it would not be correct to attribute this bad science to frequentism); and we know from practice that Bayesianism (not frequentism) is the method which has the most problems with computational bounds (11).
However, I think it is important to make a distinction between the validity of Bayesianism and the application of Bayesianism. I recall reading on lesswrong (although I cannot find the post at this moment) that the relation between Bayesianism and frequentism should be seen like the relation between Quantum Mechanics and classical physics (although QM has lots of experimental data to support it, so it is rightfully more accepted than Bayesianism). Like QM, Bayesianism is governed by simple mathematical rules (Schrodinger's equation and Bayes' theorem), which will give the right answer when supplied with the correct initial conditions. However, to fly a plane we do not invoke QM, and similarly we will in most practical instances to estimate a parameter not invoke Bayes. Instead we use approximations (classical physics/frequentism), which will not give the exact answer but will give a good approximation thereof (as you mention: a method close to the global optimum, although I am still unclear what we are optimising for there). The key point is that these approximations are correct only insofar as they approximate the answer that would be given by the correct theory. If classical physics and QM disagree then QM was correct. Similarly if we have a parameter estimation obtained by a Bayesian algorithm, and one using a frequentist algorithm, the Bayesian one is going to give the correct subjective probability. But the correct algorithms are (nearly?) impossible to implement, so we stick with the approximations. This is why physicists still use and teach classical physics, and why I personally endorse many frequentist tools. The difference between validity and application seems to be lost in myths 4-7 and 10:
• 4: Strictly speaking the only way to truly share your arguments for having a certain degree of belief in a hypothesis would be to share all sensory data that is dependent on the hypothesis (after all, this is how evidence works). This is clearly not feasible, but it would be the correct thing to do if we only care about being correct. You explain in this myth that this does not lead to a simple and quick algorithm. But this is not an argument against validity, it is an argument against a possible application.
• 5: Again this whole myth deals with application. The myth you debunk states that the approximations made when turning degrees of belief into an actual strategy must be bad, and you debunk this by giving an algorithm that gets very good results. But this is not an argument that distinguishes between Bayesianism and frequentism, it merely states that there are easy-to-compute (in a relative sense) algorithms that get close to the correct answer, which we know how to find in theory but not in practice. (In case you are wondering: the approximation takes place in the step where you simplify your utility function to the Regret function, and your prior is whatever decision rule you use for the first horse.)
• 6: This myth hinges on the word 'simple'. Frequentist methods can deal with many complicated problems, and a lot of high quality work has been done to increase the scope of the tools of frequentism. Saying that only simple models can be dealt with would be an insult. However, as mentioned above, these methods are all approximations, and each method is valid only if the approximations made are satisfied. So while frequentist methods can deal with many complicated models, it is important to realise that the scope of each method is limited.
• 7+10: Myth 10 seems to be a case of confusion by the people using the tools. Frequentist methods (derived from approximations) come with boundaries, such as limitations on the type of model that can be distilled from data or limitations on the meaning of the outcome of the algorithm (it might answer a different question than the one you hoped to answer). If you break one of these limitations it is not surprising that the results are wacky. This is not a problem of frequentism, provided the tools are explained properly. If the tools are not explained properly then problems arise. Your explanation, we have a class M and a solution E, and we look for a simple approximation which will give E+epsilon, is very clear. Problems arise when the class M is not specified, or the existence of E is unclear. I would like to classify this as an error of the practitioners of frequentism, rather than an error of the method.
Lastly I would like to make a small note that the example on myth 10 is very similar to something called the Boltzmann distribution from statistical physics, discovered in the 19th century. Here the function phi is the energy divided by the temperature.
Edit: during the writing of this post it seems that other people have already made this remark on myth 10. I agree that physicists would probably not interpret this as a game played between nature and the predictor.
Thanks for your comments. One thing you say a few times throughout your comment is "frequentist methods are an approximation to Bayes". I wouldn't agree with this. I think Bayesian and frequentist methods are often trying to do different things (although in many practical instances their usage overlaps). In what sense do you believe that Bayes is the "correct" answer?
At the beginning of your comment, I would have used "admissible" rather than "optimal" to describe the definition you gave:
a decision rule is optimal if there is no other decision rule which will always give you at least as much utility and in at least one world will give you more utility
I don't see how the online learning algorithm in myth 5 can be interpreted as an approximation to Bayes. The guarantee I'm getting just seems way better and more awesome than what Bayes provides. I also don't think it's right to say that "regret is an approximation to utility". Regret is an alternative formulation to utility that happens to lead to a set of very fruitful results, one of which I explained under myth 5.
While writing this answer I realised I forgot an important class of exceptions, namely the typical school example of hypothesis testing. My explanation now consists of multiple parts.
To answer the first question: the Bayesian method gives the "correct" answer in the sense that it optimises the expectation of your utility function. If you choose a utility function like log(p) this means that you will find your subjective probabilities. I also think Bayesianism is "correct" in the philosophical sense (which is a property of the theory), but I believe there are many posts on lesswrong that can explain this better than I can.
• The approximation made can often be rewritten in terms of a particular choice of utility function (or risk function, which is more conventional according to wikipedia). As you mentioned choosing the Regret function for cost and a non-silly prior (for example whichever one you are using) will yield a Bayesian algorithm to your problem. Unfortunately I haven't looked at the specific algorithm in detail, but if admissible solutions are Bayesian algorithms, why would a Bayesian approach using your data not outperform (and therefore produce at least as good asymptotic behaviour) the frequentist algorithm? Also I would like to leave open the possibility that the algorithm you mention actually coincides with a Bayesian algorithm. Sometimes a different approach (frequentism/Bayesianism) can lead to the same conclusion (method).
• Suppose I find myself in a situation in which I have several hypotheses and a set of data. The thing I'm interested in is the probability of each hypothesis given the data (in other words, finding out which hypothesis is correct). In frequentism there is no such thing as a 'probability of the hypothesis', after all a hypothesis is either true or false and we don't know which. So as a substitution frequentists consider the other conditional probability, the probability of seeing this data or worse provided the hypothesis is true, where worse must be defined beforehand. I'd say this is a wrong approach, a very very wrong approach. My opinion is that frequentists have adopted an incorrect worldview which leads them to dismiss and answer the wrong questions in this case. Here I expect pure conflict rather than some Bayesian approach which will coincide with frequentist methods.
I hope this explains how Bayesian and frequentist methods overlap and seem to disagree sometimes, and how many instances of frequentist algorthms should be compared to Bayesian algorithms with a properly chosen utility function.
Say I am interested in distinguishing between two hypotheses for p(a,b,c,d) (otherwise unrestricted):
hypothesis 1: "A is independent of B, C is independent of D, and nothing else is true"
hypothesis 2: "no independences hold"
Frequentists can run their non-parametric marginal independence tests. What is the (a?) Bayesian procedure here? As far as I can tell, for unrestricted densities p(a,b,c,d) no one knows how to write down the likelihood for H1. You can do a standard Bayesian setup here in some cases, e.g. if p(a,b,c,d) is multivariate normal, in which case H1 corresponds to a (simple) Gaussian ancestral graph model. Maybe one can do some non-parametric Bayes thing (???). It's not so simple to set up the right model sometimes, which is what Bayesian methods generally need.
You should check out chapter 20 of Jaynes' Probability Theory, which talks about Bayesian model comparison.
We wish to calculate P[H1 | data] / P[H2 | data] = P[data | H1] / P[data | H2] * P[H1] / P[H2].
For Bayesians, this problem does not involve "unrestricted densities" at all. We are given some data and presumably we know the space from which it was drawn (e.g. binary, categorical, reals...). That alone specifies a unique model distribution. For discrete data, symmetry arguments mandate a Dirichlet model prior with the categories given by all possible outcomes of {A,B,C,D}. For H2, the Dirichlet parameters are updated in the usual fashion and P[data | H2] calculated accordingly.
For H1, our Dirichlet prior is further restricted according to the independencies. The resulting distribution is not elegant (as far as I can tell), but it does exist and can be updated. For example, if the variables are all binary, then the Dirichlet for H2 has 16 categories. We'll call the 16 frequencies X0000, X0001, X0010, ... with parameters a0000, a0001, ... where the XABCD are the probabilities which the model given by X assigns to each outcome. Already, the Dirichlet for H2 is constrained to {X | sum(X) = 1, X > 0} within R^16. The Dirichlet for H1 is exactly the same function, but further constrained to the space {X | sum(X) = 1, X > 0, X00.. / X10.. = X01.. / X11.., X..00 / X..10 = X..01 / X..11} within R^16. This is probably painful to work with (analytically at the very least), but is fine in principle.
So we have P[data | H1] and P[data | H2]. That just leaves the prior probabilities for each model. At first glance, it might seem that H1 has zero prior, since it corresponds to a measure-zero subset of H2. But really, we must have SOME prior information lending H1 a nonzero prior probability or we wouldn't bother comparing the two in the first place. Beyond that, we'd have to come up with reasonable probabilities based on whatever prior information we have. Given no other information besides the fact that we're comparing the two, it would be 50/50.
Of course this is all completely unscalable. Fortunately, we can throw away information to save computation. More specifically, we can discretize and bin things much like we would for simple marginal independence tests. While it won't yield the ideal Bayesian result, it is still the ideal result given only the binned data.
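A small sketch of the tractable half of this computation, assuming numpy/scipy and hypothetical binned counts: P[data | H2] under the symmetric Dirichlet has the closed form below (the multinomial coefficient is omitted, since it cancels in the Bayes factor); P[data | H1] would need the same prior restricted to the independence constraints, for which there is no closed form here.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
counts = rng.multinomial(500, np.full(16, 1 / 16))  # joint counts over binary (A, B, C, D)
alpha = np.ones(16)                                  # symmetric Dirichlet prior for H2

def log_marginal_h2(counts, alpha):
    # log integral of prod_i X_i^{n_i} under Dirichlet(alpha): the Dirichlet-multinomial form.
    return (gammaln(alpha.sum()) - gammaln(alpha.sum() + counts.sum())
            + gammaln(alpha + counts).sum() - gammaln(alpha).sum())

print("log P[data | H2]:", log_marginal_h2(counts, alpha))
```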
I am a bit curious about the non-parametric tests used for H1. I am familiar with tests for whether A and B are independent, and of course they can be applied between C and D, but how does one test for independence between both pairs simultaneously without assuming that the events (A independent of B) and (C independent of D) are independent? It is precisely this difficulty which makes the Bayesian likelihood calculation of H1 such a mess, and I am curious how frequentist methods approach it.
My apologies for the truly awful typesetting, but this is not the evening on which I learn to integrate tex in lesswrong posts.
Thanks for this post.
The resulting distribution is not elegant (as far as I can tell).
In the binary case, the saturated model can be parameterized by p(S = 0) for S any non-empty subset of { a,b,c,d }. The submodel corresponding to H1 is just one where p({a,b} = 0) = p({a}=0)p({b}=0), and p({c,d} = 0) = p({c}=0)p({d}=0).
For Bayesians, this problem does not involve "unrestricted densities" at all.
I am sorry, Bayesians do not get to decide what my problem is. My problem involves unrestricted densities by definition. I don't think you get to keep your "fully general formalism" chops if you suddenly start redefining my problem for me.
how does one test for independence between both pairs simultaneously without assuming that the events (A independent of B) and (C independent of D) are independent?
This is a good question. I don't know a good answer to this that does not involve dealing with the likelihood in some way.
Sorry, I didn't mean to be dismissive of the general densities requirement. I mean that data always comes with a space, and that restricts the density. We could consider our densities completely general to begin with, but as soon as you give me data to test, I'm going to look at it and say "Ok, this is binary?" or "Ok, these are positive reals?" or something. The space gives the prior model. Without that information, there is no Bayesian answer.
I guess you could say that this isn't fully general because we don't have a unique prior for every possible space, which is a very valid point. For the spaces people actually deal with we have priors, and Jaynes would probably argue that any space of practical importance can be constructed as the limit of some discrete space. I'd say it's not completely general, because we don't have good ways of deriving the priors when symmetry and maximum entropy are insufficient. The Bayesian formalism will also fail in cases where the priors are non-normalizable, which is basically the formalism saying "Not enough information."
On the other hand, I would be very surprised to see any other method which works in cases where the Bayesian formalism does not yield an answer. I would expect such methods to rely on additional information which would yield a proper prior.
Regarding that ugly distribution, that parameterization is basically where the constraints came from. Remember that the Dirichlets are distributions on the p's themselves, so it's an hierarchical model. So yes, it's not hard to write down the subspace corresponding to that submodel, but actually doing an update on the meta-level distribution over that subspace is painful.
I mean that data always comes with a space, and that restricts the density.
Sorry I am confused. Say A,B,C,D are in [0,1] segment of the real line. This doesn't really restrict anything.
For the spaces people actually deal with we have priors.
I deal with this space. I even have a paper in preparation that deals with this space! So do lots of people that worry about learning graphs from data.
On the other hand, I would be very surprised to see any other method which works in cases where the Bayesian formalism does not yield an answer.
People use variations of the FCI algorithm, which from a Bayesian point of view is a bit of a hack. The asymptopia version of FCI assumes a conditional independence oracle, and then tells you what the model is based on what the oracle says. In practice, rather than using an oracle, people do a bunch of hypothesis tests for independence.
Regarding that ugly distribution
You are being so mean to that poor distribution. You know, H1 forms a curved exponential family if A,B,C,D are discrete. That's sort of the opposite of ugly. I think it's beautiful! H1 is an instance of Thomas Richardson's ancestral graph models, with the graph:
A <-> B <-> C <-> D <-> A
Oh, saying A,B,C,D are in [0,1] restricts quite a bit. It eliminates distributions with support over all the reals, distributions over R^n, distributions over words starting with the letter k, distributions over Turing machines, distributions over elm trees more than 4 years old in New Hampshire, distributions over bizarre mathematical objects that I can't even think of... That's a LOT of prior information. It's a continuous space, so we can't apply a maximum entropy argument directly to find our prior. Typically we use the beta prior for [0,1] due to a symmetry argument, but that admittedly is not appropriate in all cases. On the other hand, unless you can find dependencies after running the data through the continuous equivalent of a pseudo-random number generator, you are definitely utilizing SOME additional prior information (e.g. via smoothness assumptions). When the Bayesian formalism does not yield an answer, it's usually because we don't have enough prior info to rule out stuff like that.
I think we're still talking past each other about the distributions. The Bayesian approach to this problem uses an hierarchical distribution with two levels: one specifying the distribution p[A,B,C,D | X] in terms of some parameter vector X, and the other specifying the distribution p[X]. Perhaps the notation p[A,B,C,D ; X] is more familiar? Anyway, the hypothesis H1 corresponds to a subset of possible values of X. The beautiful distribution you talk about is p[A,B,C,D | X], which can indeed be written quite elegantly as an exponential family distribution with features for each clique in the graph. Under that parameterization, X would be the lambda vector specifying the exponential model. Unfortunately, p[X] is the ugly one, and that elegant parameterization for p[A,B,C,D | X] will probably make p[X] even uglier.
It is much prettier for DAGs. In that case, we'd have one beta distribution for every possible set of inputs to each variable. X would then be the set of parameters for all those beta distributions. We'd get elegant generative models for numerical integration and life would be sunny and warm. So the simple use case for FCI is amenable to Bayesian methods. Latent variables are still a pain, though. They're fine in theory, just integrate over them when calculating the posterior, but it gets ugly fast.
Oh, saying A,B,C,D are in [0,1] restricts quite a bit. It eliminates distributions with support over all the reals
???
There are easy to compute bijections from R to [0,1], etc.
The Bayesian approach to this problem uses an hierarchical distribution with two levels: one specifying the distribution p[A,B,C,D | X] in terms of some parameter vector X, and the other specifying the distribution p[X]
Yes, parametric Bayes does this. I am giving you a problem where you can't write down p(A,B,C,D | X) explicitly and then asking you to solve something frequentists are quite happy solving. Yes I am aware I can do a prior for this in the discrete case. I am sure a paper will come of it eventually.
Latent variables are still a pain, though.
The whole point of things like the beautiful distribution is you don't have to deal with latent variables. By the way the reason to think about H1 is that it represents all independences over A,B,C,D in this latent variable DAG:
A <- u1 -> B <- u2 -> C <- u3 -> D <- u4 -> A
where we marginalize out the ui variables.
which can indeed be written quite elegantly as an exponential family distribution with features for each clique in the graph
I think you might be confusing undirected and bidirected graph models. The former form linear exponential families and can be parameterized via cliques, the latter form curved exponential families, and can be parameterized via connected sets.
There are easy to compute bijections from R to [0,1], etc.
This is not true, there are bijections between R and (0,1), but not the closed interval.
Anyway, there are more striking examples; for example, if you know that A, B, C, D are in a discrete finite set, it restricts your choices quite a lot.
Did you mean to say continuous bijections? Obviously adding two points wouldn't change the cardinality of an infinite set, but "easy to compute" might change.
You're right, I meant continuous bijections, as the context was a transformation of a probability distribution.
This is not true, there are bijections between R and (0,1), but not the closed interval.
No.
This is not true, there are bijections between R and (0,1), but not the closed interval.
You are right, apologies.
In frequentism there is no such thing as a 'probability of the hypothesis', after all a hypothesis is either true or false and we don't know which. So as a substitution frequentists consider the other conditional probability, the probability of seeing this data or worse provided the hypothesis is true, where worse must be defined beforehand. I'd say this is a wrong approach, a very very wrong approach.
That's not a substitution, and it's the probability of seeing the data provided the hypothesis is false, not true.
It gives the upper bound on the risk that you're going to believe in a wrong thing if you follow the strategy of "do experiments, believe the hypothesis if confirmed".
Mostly we want to update all probabilities until they're very close to 0 or to 1, because the uncertainty leads to loss of expected utility in the future decision making.
In frequentism there is no such thing as a 'probability of the hypothesis'
Yeah, and in Bayesianism, any number between 0 and 1 will do - there's still no such thing as a specific "probability of the hypothesis", merely a change to an arbitrary number.
edit: it's sort of like arguing that worst-case structural analysis of a building or a bridge is a "very very wrong approach", and contrast it with some approach where you make up priors about the quality of concrete, and end up shaving a very very small percent off the construction cost, while building a weaker bridge which bites you in the ass eventually anyway when something unexpected happens to the bridge.
However, I think it is important to make a distinction between the validity of Bayesianism and the application of Bayesianism. I recall reading on lesswrong (although I cannot find the post at this moment) that the relation between Bayesianism and frequentism should be seen like the relation between Quantum Mechanics and classical physics
Quantum Mechanics isn't consistent with General Relativity, our best explanation of gravity. Despite decades of trying, neither can be formulated as an approximation of the other. Even if one day physicists finally figure out a "Theory of Everything", it would still be a model. It would be epistemically incorrect to claim it was "exact".
Curiously, there is one interpretation of QM known as Quantum Bayesianism, which holds that wavefunctions are subjective and they are the fundamental concepts for reasoning about the world, and subjective probability distributions arise as approximations of wavefunctions under decoherence. That is, Bayesianism itself might be an approximation of a "truer" epistemic theory!
My humble opinion is that there is no ultimately "true" epistemic theory. They are all just models of what humans do to gain knowledge of the world. Some models can work better than others, often within certain bounds, but none of them is perfect.
I am very interested in Quantum Bayesianism (in particular Leifer's work) because one of the things we have to do to be "quantum Bayesians" is figure out a physically neutral description of quantum mechanics, that is, a description of quantum mechanics that doesn't use physical jargon like 'time.' In particular, physicists I believe describe spacelike and timelike separated entanglement differently.
That is, a Bell inequality violation system (that is where B and C are space separated) has this graph
A -> B <-> C <- D
(where famously, due to Bell inequality violation, there is no hidden variable corresponding to the bidirected arc connecting B and C).
But the same system can arise in a temporally sequential model which looks like this:
A -> B -> D -> C, with B <-> C
where an appropriate manipulation of the density matrix corresponding to this system ought to give us the Bell system above. In classical probability we can do this. In other words, in classical probability the notion of "probabilistic dependence" is abstracted away from notions like time and space.
Also we have to figure out what "conditioning" even means. Can't be Bayesian if we don't condition, now can we!
where an appropriate manipulation of the density matrix corresponding to this system ought to give us the Bell system above. In classical probability we can do this. In other words, in classical probability the notion of "probabilistic dependence" is abstracted away from notions like time and space.
Yes, but the notion of Bayesian inference, where you start with a prior and build a sequence of posteriors, updating as evidence accumulates, has an intrinsic notion of time. I wonder if that's enough for Quantum Bayesianism (I haven't read the original works, so I don't really know much about it).
The temporal order for sequential computation of posteriors is just our interpretation, it is not a part of the formalism. If we get pieces of evidence e1, e2, ..., ek in temporal order, we could do Bayesian updating in the temporal order, or the reverse of the temporal order, and the formalism still works (that is our overall posterior will be the same, because all the updates commute). And that's because Bayes theorem says nothing about time anywhere.
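To make the point concrete, here is a small self-contained check (my own illustration, not part of the discussion above) using a Beta-Bernoulli model: processing the evidence in temporal order or in reverse order yields exactly the same posterior, because the updates commute.

```python
# Minimal check that Bayesian updates commute: a Beta(1, 1) prior on a coin's
# bias, updated on Bernoulli observations in two different orders.
def update(alpha, beta, observation):
    # Conjugate Beta-Bernoulli update: a head increments alpha, a tail increments beta.
    return (alpha + 1, beta) if observation == 1 else (alpha, beta + 1)

evidence = [1, 0, 1, 1, 0, 1]          # e1, ..., ek in temporal order

forward = (1, 1)
for e in evidence:
    forward = update(*forward, e)

backward = (1, 1)
for e in reversed(evidence):
    backward = update(*backward, e)

assert forward == backward             # same posterior either way
print("posterior Beta parameters:", forward)
```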
My humble opinion is that there is no ultimately "true" epistemic theory. They are all just models of what humans do to gain knowledge of the world. Some models can work better than others, often within certain bounds, but none of them is perfect.
Exactly!
I've been thinking about what program, exactly, is being defended here, and I think a good name for it might be "prior-less learning". To me, all procedures under the prior-less umbrella have a "minimax optimality" feel to them. Some approaches search for explicitly minimax-optimal procedures; but even more broadly, all such approaches aim to secure guarantees (possibly probabilistic) that the worst-case performance of a given procedure is as limited as possible within some contemplated set of possible states of the world. I have a couple of things to say about such ideas.
First, for the non-probabilistically guaranteed methods: these are relatively few and far between, and for any such procedure it must be ensured that the loss that is being guaranteed is relevant to the problem at hand. That said, there is only one possible objection to them, and it is the same as one of my objections to prior-less probabilistically guaranteed methods. That objection applies generically to the minimaxity of the prior-less learning program: when strong prior information exists but is difficult to incorporate into the method, the results of the method can "leave money on the table", as it were. Sometimes this can be caught and fixed, generally in a post hoc and ad hoc way; sometimes not.
For probabilistically-guaranteed methods, there is an epistemic gap -- in principle -- in going from the properties of such procedures in classes of repeating situations (i.e., pre-data claims about the procedure) to well-warranted claims in the cases at hand (i.e., post-data claims about the world). But it's obvious that this is merely an in-principle objection -- after all, many such techniques can be and have been successfully applied to learn true things about the world. The important question is then: does the heretofore implicit principle justifying the bridging of this gap differ significantly from the principle justifying Bayesian learning?
Thanks a lot for the thoughtful comment. I've included some of my own thoughts below / also some clarifications.
First, for the non-probabilistically guaranteed methods: these are relatively few and far between
Do you think that online learning methods count as an example of this?
when strong prior information exists but is difficult to incorporate into the method, the results of the method can "leave money on the table", as it were
I think this is a valid objection, but I'll make two partial counter-arguments. The first is that, arguably, there may be some information that is not easy to incorporate as a prior but is easy to incorporate under some sort of minimax formalism. So Bayes may be forced to leave money on the table in the same way.
A more concrete response is that, often, an appropriate regularizer can incorporate similar information to what a prior would incorporate. I think the regularizer that I exhibited in Myth 6 is one example of this.
For probabilistically-guaranteed methods...
I think it's important to distinguish between two (or maybe three) different types of probabilistic guarantees; I'm not sure whether you would consider all of the below "probabilistic" or whether some of them count as non-probabilistic, so I'll elaborate on each type.
The first, which I presume is what you are talking about, is when the probability is due to some assumed distribution over nature. In this case, if I'm willing to make such an assumption, then I'd rather just go the full-on Bayesian route, unless there's some compelling reason like computational tractability to eschew it. And indeed, there exist cases where, given distributional assumptions, we can infer the parameters efficiently using a frequentist estimation technique, while the Bayesian analog runs into NP-hardness obstacles, at least in some regimes. But there are other instances where the Bayesian method is far cheaper computationally than the go-to frequentist technique for the same problem (e.g. generative vs. discriminative models for syntactic parsing), so I only mean to bring this up as an example.
The second type of guarantee is in terms of randomness generated by the algorithm, without making any assumptions about nature (other than that we have access to a random number generator that is sufficiently independent from what we are trying to predict). I'm pretty happy with this sort of guarantee, since it requires fairly weak epistemic commitments.
The third type of guarantee is somewhat in the middle: it is given by a partial constraint on the distribution. As an example, maybe I'm willing to assume knowledge of certain moments of the distribution. For sufficiently few moments, I can estimate them all accurately from empirical data, and I can even bound the error to within high probability, making no assumption other than independence of my samples. In this case, as long as I'm okay with making the independence assumption, then I consider this guarantee to be pretty good as well (as long as I can bound the error introduced into the method by the inexact estimation of the moments, which there are good techniques for doing). I think the epistemic commitments for this type of method are, modulo making an independence assumption, not really any stronger than those for the second type of method, so I'm also fairly okay with this case.
there may be some information that is not easy to incorporate as a prior but is easy to incorporate under some sort of minimax formalism
If you can cook up examples of this, that would be helpful.
I assume you mean $y_n = u^T x_n$ in the "infer u" problem? Or am I missing something?
Also, is there a good real-world problem which this reflects?
Yes, I mixed up x and y, good catch. It's not trivial for me to fix this while maintaining wordpress-compatibility, but I'll try to do so in the next few days.
This problem is called the "compressed sensing" problem and is most famously used to speed up MRI scans. However it has also had a multitude of other applications, see here: http://en.wikipedia.org/wiki/Compressed_sensing#Applications.
Many L1 constraint-based algorithms (for example the LASSO) can be interpreted as producing maximum a posteriori Bayesian point estimates with Laplace (= double exponential) priors on the coefficients.
Yes, but in this setting maximum a posteriori (MAP) doesn't make any sense from a Bayesian perspective. Maximum a posteriori is supposed to be a point estimate of the posterior, but in this case, the MAP solution will be sparse, whereas the posterior given a laplacian prior will place zero mass on sparse solutions. So the MAP estimate doesn't even qualitatively approximate the posterior.
Ah, good point. It's like the prior, considered as a regularizer, is too "soft" to encode the constraint we want.
A Bayesian could respond that we rarely actually want sparse solutions -- in what situation is a physical parameter identically zero? -- but rather solutions which have many near-zeroes with high probability. The posterior would satisfy this I think. In this sense a Bayesian could justify the Laplace prior as approximating a so-called "slab-and-spike" prior (which I believe leads to combinatorial intractability similar to the fully L0 solution).
Also, without L0 the frequentist doesn't get fully sparse solutions either. The shrinkage is gradual; sometimes there are many tiny coefficients along the regularization path.
[FWIW I like the logical view of probability, but don't hold a strong Bayesian position. What seems most important to me is getting the semantics of both Bayesian (= conditional on the data) and frequentist (= unconditional, and dealing with the unknowns in some potentially nonprobabilistic way) statements right. Maybe there'd be less confusion -- and more use of Bayes in science -- if "inference" were reserved for the former and "estimation" for the latter.]
Also, without L0 the frequentist doesn't get fully sparse solutions either. The shrinkage is gradual; sometimes there are many tiny coefficients along the regularization path.
See this comment. You actually do get sparse solutions in the scenario I proposed.
Cool; I take that back. Sorry for not reading closely enough.
Okay, I'm somewhat leaving my expertise here and going on intuition, but I would be somewhat surprised if the problem exactly as you stated it turned out to be solvable by a compressed-sensing algorithm as roughly described on Wikipedia. I was trying to figure out how I'd approach the problem you stated, using techniques I already knew about, but it seemed to me more like a logical constraint problem than a stats problem, because we had to end up with exactly 100 nonzero coefficients and the 100 coefficients had to exactly fit the observations y, in what I assume to be an underdetermined problem when treated as a linear problem. (In fact, my intuitions were telling me that this ought to correspond to some kind of SAT problem and maybe be NP-hard.) Am I wrong? The Wikipedia description talks about using L1-norm style techniques to implement an "almost all coefficients are 0" norm, aka "L0 norm", but it doesn't actually say the exact # of coefficients are known, nor that the observations are presumed to be noiseless.
You minimize the L1-norm consistently with correct prediction on all the training examples. Because of the way the training examples were generated, this will yield at most 100 non-zero coefficients.
It can be proved that the problem is solvable in polynomial time via a reduction to linear programming:
let m = 10,000
$$\begin{aligned} & \text{minimize} & & \sum_{j=1}^m |u_j| \\ & \text{subject to} & & u^T x_i = y_i, \; i=1,\dots,n \end{aligned}$$

You can further manipulate it to get rid of the absolute value. For each coefficient introduce two variables, $\overset{+}{u}_j$ and $\overset{-}{u}_j$:

$$\begin{aligned} & \text{minimize} & & \sum_{j=1}^m \overset{+}{u}_j + \overset{-}{u}_j \\ & \text{subject to} & & u^T x_i = y_i, \; i=1,\dots,n\\ &&& u_j = \overset{+}{u}_j - \overset{-}{u}_j, \; j=1,\dots,m\\ &&& \overset{+}{u}_j \geq 0, \; j=1,\dots,m\\ &&& \overset{-}{u}_j \geq 0, \; j=1,\dots,m \end{aligned}$$
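As a concrete (and deliberately down-scaled) illustration of this reduction, here is a sketch that solves the split LP with scipy.optimize.linprog; the problem sizes, the Gaussian measurement matrix, and the noiseless setup are assumptions made for the example, and exact recovery is only what one typically expects in this regime, not a guarantee.

```python
# Sparse recovery by L1 minimization, reduced to a linear program as above.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 200, 100, 8          # coefficients, observations, non-zeros (assumed sizes)

u_true = np.zeros(m)
u_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)
X = rng.normal(size=(n, m))    # measurement vectors x_i as rows
y = X @ u_true                 # noiseless observations y_i = u^T x_i

# Variables are [u_plus; u_minus] >= 0 with u = u_plus - u_minus.
c = np.ones(2 * m)                         # minimize sum(u_plus + u_minus) = ||u||_1
A_eq = np.hstack([X, -X])                  # enforce X (u_plus - u_minus) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))

u_hat = res.x[:m] - res.x[m:]
print("max reconstruction error:", np.max(np.abs(u_hat - u_true)))
```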
|
|
Lemma 68.13.3 (Artin-Rees). Let $S$ be a scheme. Let $X$ be a Noetherian algebraic space over $S$. Let $\mathcal{F}$ be a coherent sheaf on $X$. Let $\mathcal{G} \subset \mathcal{F}$ be a quasi-coherent subsheaf. Let $\mathcal{I} \subset \mathcal{O}_ X$ be a quasi-coherent sheaf of ideals. Then there exists a $c \geq 0$ such that for all $n \geq c$ we have
$\mathcal{I}^{n - c}(\mathcal{I}^ c\mathcal{F} \cap \mathcal{G}) = \mathcal{I}^ n\mathcal{F} \cap \mathcal{G}$
Proof. Choose an affine scheme $U$ and a surjective étale morphism $U \to X$ (see Properties of Spaces, Lemma 65.6.3). Then $U$ is a Noetherian scheme (by Morphisms of Spaces, Lemma 66.23.5). The equality of the lemma holds if and only if it holds after restricting to $U$. Hence the result follows from the case of schemes, see Cohomology of Schemes, Lemma 30.10.3. $\square$
|
|
## Geometry: Common Core (15th Edition)
Find the coordinates of each city. Brooklyn: (8,2); Charleston: (4,5) Use the distance formula: $d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$ $d=\sqrt{(4-8)^2+(5-2)^2}$ $d=\sqrt{(-4)^2+3^2}$ $d=\sqrt{16+9}$ $d=\sqrt{25}$ $d=5$
|
|
It's hard to understand what's written there, but if it is what I think, then that series converges absolutely: $\sum^\infty_{n=0}\frac{\cos 2n}{2^n-5}$, since $\left|\frac{\cos 2n}{2^n-5}\right|\leq\frac{1}{2^n-5}$ for all sufficiently large $n$ (the first few terms do not affect convergence) ...
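A quick numerical sanity check of this claim (my own addition, not part of the original answer): the partial sums settle to a fixed value after only a few terms.

```python
# Partial sums of sum_{n>=0} cos(2n) / (2^n - 5); the denominator is never zero
# for integer n, and the tail is dominated by 1 / (2^n - 5) once 2^n > 5.
from math import cos

total = 0.0
for n in range(60):
    total += cos(2 * n) / (2 ** n - 5)
    if n in (5, 10, 20, 59):
        print(n, total)        # the printed partial sums agree to many decimals
```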
|
|
# A possible disk mechanism for the 23d QPO in Mkn 501
J.H. Fan, F.M. Rieger, T.X. Hua, U. C. Joshi, J. Li, Y.X. Wang, J.L. Zhou, Y.H. Yuan, J.B. Su, Y.W. Zhang
1. Center for Astrophysics, Guangzhou University, Guangzhou 510400, China
2. UCD School of Mathematical Sciences, University College Dublin, Belfield, Dublin 4, Ireland
3. Physical Research Laboratory, Ahmedabad 380 009, India
4. College of Science and Trade, Guangzhou University, Guangzhou 511442
###### Abstract
Optically thin two-temperature accretion flows may be thermally and viscously stable, but acoustically unstable. Here we propose that the O-mode instability of a cooling-dominated optically thin two-temperature inner disk may explain the 23-day quasi-periodic oscillation (QPO) period observed in the TeV and X-ray light curves of Mkn 501 during its 1997 high state. In our model the relativistic jet electrons Compton upscatter the disk soft X-ray photons to TeV energies, so that the instability-driven X-ray periodicity will lead to a corresponding quasi-periodicity in the TeV light curve and produce correlated variability. We analyse the dependence of the instability-driven quasi-periodicity on the mass (M) of the central black hole, the accretion rate ($\dot{M}$) and the viscous parameter ($\alpha$) of the inner disk. We show that in the case of Mkn 501 the first two parameters are constrained by various observational results, so that for the instability occurring within a two-temperature disk where , the quasi-period is expected to lie within the range of 8 to 100 days, as is indeed the case. In particular, for the observed 23-day QPO period our model implies a viscosity coefficient , a sub-Eddington accretion rate and a transition radius to the outer standard disk of , and predicts a period variation due to the motion of the instability region.
###### keywords:
Active Galactic Nuclei: Black Hole – Accretion Disk – Periodicity – Individual: Mkn 501
thanks: Corresponding author: J.H. Fan, email: fjh@gzhu.edu.cn
## 1 Introduction
Strong variability is one of the common observational properties of TeV emitting Active Galactic Nuclei (AGNs) [10]. In many cases, the highly variable high-energy gamma-rays and the X-rays appear to be correlated with no time delays evident on day-scales, suggesting that the -rays may result from inverse Compton upscattering of lower energy photons. The first TeV blazar, for example, that was observed simultaneously in multiple bands from radio to TeV gamma-rays is Mrk 421. The first campaign, conducted in 1994 [39] on Mrk 421, revealed correlated variability between the keV X-ray and TeV gamma-ray emission. The gamma-ray flux varied by an order of magnitude on a timescale of 2 days and the X-ray flux was observed to double in 12 hr. On the other hand, the high-energy gamma-ray flux observed by EGRET, as well as the radio and UV fluxes, showed less variability than the keV or TeV bands. Another multiwavelength campaign on Mrk 421 performed in 1995 revealed another coincident keV/TeV flare [6, 63]. The UV and optical bands also showed correlation during the flares. With the detection of TeV emission from Mrk 501 [52], several multiband campaigns were organized on Mrk 501 to verify whether such a behavior is a general property of TeV-emitting blazars or whether it is unique to Mrk 421.
The gamma-ray blazar Mkn 501, detected as a strong TeV emitter in 1995 [52], is one of the closest () and brightest BL Lacertae objects. Historically (i.e., prior to 1997), its spectral energy distribution (SED) resembles that of X-ray selected BL Lac objects, having a peak in the extreme UV–soft X-ray energy band [24, 57]. Earlier optical observations of Mkn 501 have shown variations of up to and polarized emission up to [20].
During its active state in 1997, where Mkn 501 was monitored in the 2-10 keV X-ray band by the all sky monitor(ASM) on board RXTE and in the TeV energy band by several Cherenkov telescope groups [1, 9, 17, 32, 51, 56], both X-rays and TeV gamma-rays increased by more than 1 order of magnitude from quiescent level [9, 48]. Analysis of the X-ray and TeV data showed, that the variations were strongly correlated between both bands, yet only marginally correlated with the optical UV band [9, 17, 32, 47]. While the synchrotron emission peaked below 0.1 keV in the quiescent state, in 1997 it peaked at 100 keV. This is the largest shift ever observed for a blazar [48]. Earlier investigations of the 1997 April flare in Mkn 501 [48], showed that its origin may be related to a variation of together with an increased luminosity and a flattening of the injected electron distribution.
During the 1997 high state, the X-ray and TeV light curves displayed a quasi-periodic signature [22, 30, 51]. A detailed periodicity analysis, based on the TeV measurements obtained with all Cherenkov Telescopes of the HEGRA-Collaboration and the X-ray data with RXTE, was performed by Kranich et al. [30, 31] and more recently also by Osone [45] including Utah TA data. The results indeed strongly suggest the existence of a 23 day periodicity, with a combined probability of in both the TeV and the X-ray light curves covering the same observational period [30, 31]. Note that no QPOs have been seen by MAGIC during 1998-2000, when the source was not very active [3], suggesting that the QPOs only occur during a very active stage. Rieger and Mannheim (2000) have shown that the origin of these QPOs may be related to the presence of a binary black hole in the center of Mkn 501. While this may well be possible, we explore here an alternative scenario, where the observed QPOs are related to accretion disk instabilities.
In a seminal paper, Shapiro et al. (1976) (SLE) have pointed out that a hot (Compton cooling-dominated) optically thin two-temperature accretion disk may be present in the inner region of the standard optically thick disk [59], whenever the SLE inequality is satisfied, where , , is the accretion rate of the disk, the mass of the central black hole, and the viscosity parameter, constrained by the model to lie within the range . Later work [49, 50] has shown that the SLE configuration might be thermally unstable (although less than the standard disk) if the simple standard viscosity prescription is employed. However, relatively small changes in the viscosity law can already ensure a stable configuration [49], in particular, if stabilizing effects of a disk wind are fully taken into account. A kind of SLE two-temperature disk structure may thus well exist in the inner region of BL Lac type objects [66, 67, 8] and provide a possible explanation for the X-ray and TeV variability phenomenon in Mkn 501. For, firstly, Compton processes in a inner two-temperature disk with electron temperature of about K (and ion temperature one or two orders of magnitude higher) can produce (steep) X-ray power-law spectra, in contrast to the standard disk model that can only produce emission up to the optical-UV band [58, 66]. Secondly, analysis of the linear stability of an optically thin two-temperature disk around a compact object shows that the disk is subject to a radial pulsational instability (inertial-acoustic mode instability) [69]. This possible mode of pulsational overstability, in which radial disk oscillations with local Keplerian frequencies become unstable against axisymmetric perturbations, occurs if the viscosity coefficient increases sufficiently upon compression [26]. In this case, thermal energy generation due to viscous dissipation increases during compression, leading to amplification of the oscillations analogous to the role played by nuclear energy generation in stellar pulsations. As we demonstrate below, the occurrence of such a type of disk oscillation may well account for some quasi-periodic time variability in AGNs in a way similar to Galactic black hole candidates [5, 23, 40, 70].
## 2 Model and Results
### 2.1 Model
Due to the Lightman & Eardley secular instability [36, 37] of the standard geometrically thin, optically thick disk model [59], a two-temperature disk configuration is likely to be present in the inner region of the standard disk. As shown by Shapiro et al. [58], the outer boundary of such a two-temperature disk is determined by
$$r_{0*}^{21/8}\,\zeta^{-2}(r_0) \approx 3.85\times 10^{6}\,\alpha^{1/4}\,M_7^{-7/4}\,\dot{M}_{24}^{2}. \qquad (1)$$
where $r_0$ is the radius of the outer boundary (with $r_{0*} = r_0/r_g$ and $r_g = GM/c^2$), $M_7$ is the mass of the central black hole in units of $10^7\,M_{\odot}$, $\dot{M}_{24}$ is the accretion rate in units of $10^{24}$ g s$^{-1}$, $\alpha$ is the viscosity parameter constrained to be within the range of 0.05 to 1.0, and $\zeta(r) = 1-(6/r_*)^{1/2}$ expresses the boundary condition that the viscous stress must vanish at the inner edge of the disk.
The total luminosity of a two-temperature disk, integrated from the inner edge to , is of order
$$L_{r_0,44} = \frac{3}{4}\left[1 + 2\left(\frac{r_{0*}}{6}\right)^{-3/2} - \frac{18}{r_{0*}}\right]\dot{M}_{24} \qquad (2)$$
where $L_{r_0,44}$ is the luminosity in units of $10^{44}$ erg s$^{-1}$. Combining relations (1) and (2), one obtains
$$\left[1 + 2\left(\frac{r_{0*}}{6}\right)^{-3/2} - \frac{18}{r_{0*}}\right] r_{0*}^{21/16}\left[1-\left(\frac{r_{0*}}{6}\right)^{-1/2}\right]^{-1} = 2.62\times 10^{3}\,L_{r_0,44}\,\lambda \qquad (3)$$
where $\lambda = \alpha^{1/8} M_7^{-7/8}$. Obviously, since $\lambda$ is relatively insensitive to $\alpha$, the outer radius and the accretion rate can be uniquely determined, once the central black hole mass and the observed radiative output are known.
Following Shapiro et al. [58] (see also [66]) we assume, that a hot (cooling-dominated) optically thin two-temperature region forms within the inner region of the standard disk. Hence, the classical outer and middle regions of the standard Shakura & Sunyaev (SS) disk model [59] describe the outer portions of our disk model. The inner region of the cool SS disk, whose outer boundary lies at the point where the radiation pressure matches the gas pressure , extends inwards to the radius , where . The radius marks the outer boundary of the two-temperature inner region, which then extends inward to the innermost radius for a non-rotating black hole. In short, the assumed disk configuration is assumed to be identical to the standard SS model for , but described by the two-temperature structure equations for . The disk thickness, the density, and the ratio for the equilibrium, two-temperature inner disk then satisfy the following equations:
$$h = 5.29\times 10^{11}\,M_7^{7/12}\,\dot{M}_{24}^{5/12}\,\alpha^{-7/12}\,\zeta^{5/12}\,r_*^{7/8}\ \mathrm{cm}, \qquad (4)$$
$$\rho = 1.14\times 10^{-11}\,M_7^{-3/4}\,\dot{M}_{24}^{-1/4}\,\alpha^{3/4}\,\zeta^{-1/4}\,r_*^{-9/8}\ \mathrm{g\ cm^{-3}}, \qquad (5)$$
$$h/r = 0.316\,M_7^{-5/12}\,\dot{M}_{24}^{5/12}\,\alpha^{-7/12}\,\zeta^{5/12}\,r_*^{-1/8}. \qquad (6)$$
Typically, the emission spectrum of a hot ( K) inner disk is bremstrahlung unless a soft photon source is present, in which case the spectrum may become a power-law. In our hybrid model a fraction of the optical-UV soft photons from the outer SS disk is assumed to be intercepted by the hot SLE inner disk and Comptonized to form the X-ray power-law spectrum, similar as in [58, 66]. If Comptonization is not saturated, as possible in the presence of a copious soft photon source (such as a cold SS component), the soft photons will be upscattered into a power-law distribution over some energy range. The resulting disk spectrum above the input soft energy is approximately of the form
$$F_\nu \propto \nu^{-\alpha_x}\exp(-h\nu/kT_e), \qquad (7)$$
where , , and for unsaturated Compton cooling, i.e., the spectrum resembles a power-law with index for and , and shows an exponential cut-off for [58, 66]. The corresponding SLE disk SED peaks at , i.e., usually at around some tens of keV, with being higher for smaller indices. In particular, for keV ( K) one may have for .
As first shown by Kato (1978), accretion disks can undergo (radial-azimuthal) pulsational instabilities and thereby cause quasi-periodic variability. If a fluid element in the disk is perturbed from its equilibrium position, it oscillates in the radial direction with the epicyclic frequency, as the difference between gravitational and inertial (Coriolis and centrifugal) force appears as a restoring force. Due to this property, axisymmetric perturbations can propagate in the disk as inertial acoustic waves. As shown by Kato, a viscous disk can be pulsationally unstable against this type of oscillations under certain conditions: By studying the stability properties of an optically thin disk to axisymmetric, local (wavelength radial size of disk) and nearly radial () oscillations, he found that the disk becomes unstable if the coefficient of the viscosity increases sufficiently rapidly with density and temperature, and that the frequency of oscillations essentially corresponds to the angular frequency of the disk, independent of wavelength. More recently, Wu (1997) has studied in detail the radial pulsation instability of optically thin two-temperature accretion disks, demonstrating that in a geometrically thin, cooling-dominated two-temperature disk the acoustic O-mode (outward-propagating) is always unstable. It has been argued, that pulsational overstability might explain some of the QPO phenomena observed in cataclysmic variables (CVs) and Galactic black hole systems [5, 11, 40, 43], and be important for a proper understanding of periodic variability in AGNs [23]. As we show in the the next section, Mkn 501 may indeed represent a promising AGN candidate, where pulsational overstability becomes apparent.
Numerical simulations of pulsational instability of geometrically thin, optically thick (one-temperature) disks show that inertial acoustic waves are excited and propagate both inwards and outwards periodically, immediately growing to shock waves and resulting in a time-varying local accretion rate exceeding the imposed input value by one or two orders of magnitude [23, 43]. Global nonlinear calculations of such systems indeed demonstrate that (i) the net mass flow changes sign by a significant amount according to the direction of the oscillatory flow, although the disk seems to maintain its ability to transport on average the initially imposed [27], and that (ii) the oscillations also induce small relative changes in other variables (e.g., temperature or surface density) [43]. In particular, the luminosity variations caused by overstable oscillations in such systems are expected to be (only) at the percent level, even if local variables such a mass accretion rate and radial velocity change significantly [11, 27, 43]. Unfortunately, to our knowledge, no simulation of pulsational overstability for a SLE type, optically thin, cooling-dominated two-temperature disk configuration has been performed up to now. This makes it difficult to draw solid (quantitative) conclusions for the SLE case as the simulations performed so far consider physical environments quite different from that of a hot, optically thin, gas-pressure supported two-temperature accretion disk. Global simulations of optically thin, two-temperature disk configurations are certainly required to remedy this problem. Nevertheless, it seems very likely that significant changes in local accretion rate during an oscillatory period represent a genuine qualitative feature associated with pulsational overstability. If so, then substantial quasi-periodic changes in the total disk luminosity might be expected for the SLE case (cf. [58], eqs.[1] and [21]).
In principle, the observable period of the pulsational instability in a viscous disk is of order [70]
$$P \approx (1+z)\,\alpha^{-1}\,\Omega^{-1} = 5.78\times 10^{-4}\,(1+z)\,M_7\,r_*^{3/2}\,\alpha^{-1}\ \mathrm{[days]}, \qquad (8)$$
where $\Omega^{-1}$ is the local Keplerian timescale, $M_7$ is the black hole mass in units of $10^7\,M_{\odot}$, $r_*$ is the radius (in units of $r_g$) at which the pulsation instability occurs, and $z$ is the redshift of the source. Obviously, since the disk structure depends significantly on the radius, the period of the pulsational instability is different from one radius to another. Yet, by using eq. (3) we can calculate the radius $r_0$ at which the disk becomes unstable to radial pulsation for a given luminosity, and so derive an upper limit on the range of possible periods. In fact, these upper limits may offer a very useful guide, since numerical disk simulations suggest that the oscillations may always be trapped near the boundaries of the discs [27, 46].
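For orientation, a back-of-the-envelope evaluation of Eq. (8) as reconstructed above (a sketch added here, not from the paper): the $(\alpha, r_*)$ pairs below are illustrative assumptions chosen only so the results land inside the quoted 8-100 day range, and $z = 0.034$ is assumed for Mkn 501.

```python
# Eq. (8) as reconstructed above: P ≈ 5.78e-4 (1 + z) M_7 r_*^{3/2} / alpha  [days].
def pulsation_period_days(M7, r_star, alpha, z=0.034):
    return 5.78e-4 * (1.0 + z) * M7 * r_star ** 1.5 / alpha

# Illustrative parameter choices (assumed, not the paper's fitted values):
for alpha, r_star in [(0.05, 20.0), (1.0, 700.0)]:
    P = pulsation_period_days(M7=7.9, r_star=r_star, alpha=alpha)
    print(f"alpha = {alpha:5.2f}, r_* = {r_star:6.1f}  ->  P = {P:6.1f} days")
```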
### 2.2 Application to Mkn 501
The -ray blazar Mkn 501 was the second source after Mkn 421 to be detected at TeV energies in 1995 by the Whipple observatory [52]. In 1997 the source came into much attention when it went into a remarkable state of strong and continuous flaring activity, becoming the brightest source in the sky at TeV energies [1, 2, 9, 17, 22, 51]. BeppoSAX observations during a strong outburst in April 1997 showed, that the spectrum was exceptionally hard (, ) over the range 0.1-200 keV, indicating that the X-ray power output peaked at keV or higher energies [48]. These observations implied that the peak frequency has moved up by more than two orders of magnitude (persistent over a timescale of d) and that the (apparent) bolometric luminosity of Mkn 501 has increased by a factor of compared to previous epochs [24, 32, 48, 57, 64]. Optical observations on the other hand indicated, that the source was still relatively normal and thus suggested that the variations are confined to energies above keV [47]. Further observations with BeppoSAX in April-May 1998 and May 1999 during periods of lower TeV flux showed that the peak frequency has decreased to and keV, respectively [64].
A comprehensive analysis of the temporal characteristics of the gamma-ray emission from Mkn 501 in 1997 showed that the TeV emission varied significantly on time scales of between (5-15) hours [1]. If this short time variability is via inverse Compton processes related to accretion disk phenomena (see § 3), we can estimate an upper limit for the black hole mass in Mkn 501 by taking the variability timescale to be of order of the Keplerian orbital period at the innermost stable orbit , suggesting
$$M_7 \le 7.9\left(\frac{\Delta t}{10\ \mathrm{hr}}\right)\frac{1}{1+z} \qquad (9)$$
for a non-rotating black hole where , and thus a black hole mass of for the (5-15) hour timescale observed. Note, that this mass upper limit may be somewhat higher, if a pseudo-Newtonian potential is employed, in which case one may find . Eq. (9) essentially assumes that the observed short-time variability is dominated by processes occurring close to the innermost stable orbit. This seems justified as long as the associated timescale ( sec in the lab. frame) is larger than (i) the transverse light crossing time of the source as measured in the lab. frame, i.e., , where with the bulk Lorentz factor and the distance from the central engine, cf. [15], and (ii) the characteristic shock acceleration timescale [55], where is the electron gyroradius for intrinsic magnetic field strengths G and comoving electron Lorentz factors of (see below), which seems indeed to be the case. Based on the above noted, as well as related considerations in [54], we adopt a black hole mass of , for which cm, as fiducial value in our calculations below.
Detailed observations of Mkn 501 during the 1997 active phase showed that its X-ray emission was highly variable, although the soft X-ray flux (up to a few keV) did not change dramatically [32, 34, 41, 48]. BeppoSAX observations in 1997 give (2-10) keV fluxes in the range erg cm s, with an average value of erg cm s [41], corresponding to a mean (bolometric) luminosity of erg s, assuming H = 72 km/s/Mpc and a flat Universe with . If the observed periodicity in the (2-10) keV band is caused by a pulsational disk instability, a non-negligible fraction of this X-ray luminosity has to be produced by the inner disk, the other part being provided by the relativistic jet. Each surface of the disk then produces a characteristic (2-10) keV luminosity of erg s. Simulations of pulsational instability discussed at the end of § 2.1 indicate that the oscillations are accompanied by periodic, large amplitude variations in local accretion rate (exceeding the average by over an order of magnitude) and possibly trapped near the outer edge of the disk. Simple quantitative modelling then suggests that the total luminosity of a SLE type disk may vary by up to a factor of a some few, i.e., may be as small as . Using the observed soft X-ray power-law index [33, 41, 47], we may estimate the observationally required, total integrated SLE disk luminosity by calculating the ratio of the soft X-ray to total disk luminosity from the emitted disk spectrum (see eq. [7]), which for keV gives . This suggests that the total (average) SLE disk luminosity is of order erg s.
As shown below, low values are more preferable in the hybrid SLE model and we will thus henceforth adopt a total SLE disk luminosity erg s () for explicit calculation (see however also Fig. 1 for the more general case). Given the constraints on the viscous parameter and using , we may then obtain the following results for the transition radius , the maximum pulsational period P and the ratio , namely, for we have: , P = 100 days, ; while for one finds: , P=8 days, , respectively. In both cases the accretion rate is similar, i.e., , corresponding to , and consistent with the condition (see § 1) for a SLE configuraton to exist. We note that for BL Lac objects the transition radius between an inner optically thin two-temperature and an outer standard disk has been estimated to lie within the range for [8], which seems well consistent with the values derived above. Based on these results, we can also roughly estimate the luminosity associated with the SS disk component at : Employing the thin cold disk temperature relation K with one finds that erg/s, and that the emission is maximized at frequencies around Hz. These findings imply a Compton energy enhancement factor considerably larger than .
Spectroscopic data for Mkn 501 suggest, that the normal bolometric (photo-ionization-equivalent) luminosity of the disk component driving the NLR photoionization is of order a few times erg/s, corresponding to a photon number per second that can ionize hydrogen of s, where eV/. Here, has been estimated based on the - relation, i.e., [35], with an observed (mean) narrow H line luminosity for Mkn 501 of erg/s [61], which gives erg/s (with substantial scatter). We note that this value is very close to the one estimated by Barth et al. (2002) using emission line measurements for four nearby FR I radio galaxies. Given the properties of the cold SS disk component derived above, the main part of the ionizing photons per second is expected to come from the SLE component (unless its existence is only coupled to active source stages). The number of photons emitted by the SLE inner disk per second, which can ionize hydrogen is given by , where is the specific SLE luminosity (cf. eq. [7]). Using the values adopted above, i.e., , keV and erg s ( and for the quiescent and active source stage, respectively, cf. [41]), we can find s, which for and is comparable to , and may thus qualify the choice of above.
As mentioned in § 1, detailed periodicity analysis of the X-ray and TeV light-curves of Mkn 501 during its high state in 1997 has provided strong evidence for a 23-day period in both energy bands [30, 31, 45]. As shown above, such a period falls within the range of possible pulsational periods, suggesting that it may be well explained by the pulsational instability of a two-temperature SLE disk. In particular, by using day and eqs. (3) and (7), we can place an upper limit on the allowed viscosity parameter and the expected transition radius. For the characteristic values derived above, one obtains , corresponding to = 115 and . The more general case is shown in Fig. (1), where the dependency of these results on the total SLE disk luminosity has been calculated.
The changing SLE disk luminosity will lead to a variable seed photon field for Compton upscattering to the high energy regime by the nonthermal particles in the relativistic blazar jet. The disk radiation entering a moving plasma blob from behind appears de-boosted by a factor in the comoving frame of the outflowing plasma. It will nevertheless dominate over possible (quasi-isotropic) disk photons that are scattered by diffusive gas or clouds into the direction of the source, provided the distances to the central engine is smaller than
$$z \lesssim 7\times 10^{3}\left(\frac{15}{\Gamma_b}\right)^{2}\left(\frac{R_{\rm sc}}{1\ \mathrm{pc}}\right)\left(\frac{0.001}{\tau_{\rm sc}}\right)^{1/2} r_g, \qquad (10)$$
where is the distance and the mean scattering depth of the electron scattering cloud [15]. This usally places the emission site at distances between hundred (to avoid absorption) and some thousand .
As shown by Dermer and Schlickeiser (1993), scattering of X-ray ( keV) photons entering from behind will generally take place in both the Thomson and the Klein-Nishina regime for a large range of (lab. frame) soft photon energies , i.e., for energies in the range , where is the jet bulk Lorentz factor. An full treatment of the scattering process is thus a complicated matter and beyond the scope of the paper. However, we can still get some order-of-magnitude insight into the associated photon luminosity by approximating the scattering process by its Thomson limit: The maximum value of the scattered photon energy in the lab. frame is given by [13]. Upscattering of soft X-ray photons ( keV) to the TeV regime thus requires comoving electron Lorentz factors of . Scattering will always take place in the Thomson regime for (lab.frame) soft photon energies keV [14], suggesting that our approximation is indeed not too bad if we take an analogous reduction of the Thomson cross-section by a factor of two or so into account. For photons entering from behind the single inverse Compton power in the comoving blob frame for scattering by a relativistic electron with Lorentz factor then becomes, cf. [15],
$$P_{\rm Comp} \sim \frac{c}{6}\,\sigma_T\,\gamma^2\,\Gamma_b^{2}\,\frac{L_{r_0}}{2\pi z^2 c}. \qquad (11)$$
Suppose, that electrons are continuously accelerated at a shock front into a characteristic non-thermal (isotropic) differential power law particle distribution in the comoving frame of for with roughly given by the balance between acceleration and external Compton cooling for G and . Due to , an increase of the SLE disk field will lead to a decrease in the maximum Lorentz factor. As particle are advected downstream and no longer efficiently accelerated, the distribution steepens to for momenta where particles have had sufficiently time to cool, resulting into a broken power law electron distribution with a break energy of . We may estimate by setting the (comoving) external Compton cooling time scale equal to the (comoving) dynamical time scale , which gives . For typical values , e.g., [47, 33], the associated observable Compton VHE luminosity can thus be estimated from where and is the volume scale of the emitting blob measured in the comoving frame. This gives a Compton luminosity
$$L_{\rm TeV} \sim L_{r_0}\left(\frac{z}{500\,r_g}\right)\left(\frac{\gamma_b}{10^5}\right)\left(\frac{N_{\gamma_1}}{10^4\ \mathrm{cm^{-3}}}\right), \qquad (12)$$
which is of the order of the observed VHE luminosity above TeV during the 1997 high state [1, 47]. This suggests that changes in the disk photon field driven by a pulsational instability may indeed be responsible for the observed modulation of the TeV emission. During the quiescent state on the other hand, the X-ray flux is typically about ten times smaller (cf. §2.2). Hence, even if a SLE type disk is present in the quiescent state (see however below), the expected TeV contribution due to inverse Compton scattering of disk photon is much smaller, perhaps even swamped by synchrotron self-Compton emission. The 1997 observations of Mkn 501 showed that the observed TeV spectrum gradually steepened with energy starting at around 3 TeV, approaching a logarithmic spectral () slope above 5 TeV of including systematical and statistical errors [2, 28]. However, as shown for example by Konopelko et al. (2003), this curvature at around 3 TeV is likely to be caused by gamma-ray absorption in the intergalactic infrared background field, suggesting that the peak in the intrinsic (de-absorped) spectrum due to increasing Klein-Nishina effects is around 8 TeV. In principle, it appears possible that due to the strong angle dependence of the scattered flux on the angle of the impinging soft photons, photons from the outer SS disk part, i.e. from disk radii , could make an important inverse Compton contribution as well. Redoing the calculations presented in [15] for lab.frame angles of the incident photons of gives a total photon energy density of the soft photons in the comoving frame of where is the corresponding energy density in the lab. frame. As the SLE contribution , entering from behind, appears de-boosted by a factor of [15] compared to its lab. frame value, the ratio of comoving energy densities becomes
$$\frac{u^{\rm s}_{\rm ph}}{u^{\rm SLE}_{\rm ph}} \sim 0.35\left(\frac{L_{r_0}}{L_{\rm SS}}\right)\left(\frac{z}{r_0}\right), \qquad (13)$$
using , and , indicating that the SLE part dominates during the high state even if we assume that the SS disk is not truncated at radii of . Note however, that the latter may well be the case if a binary black hole system exists in the center of Mkn 501 as argued for by several authors (see discussion).
## 3 Discussion
Monitors in both the X-rays and the TeV emission show evidence for a 23-day periodicity during the 1997 high state. As demonstrated above the 23-day period in the X-ray light curve may be caused by a pulsational instability in two-temperature accretion disk, and via the inverse Compton process result in the same periodicity in the TeV light-curve. A pulsational instability occurring in a two-temperature disk with transition radius will result in a recurrence timescale of 8 to 100 days.
Based on the observed TeV variability we have employed a characteristic black hole mass of in our calculations. We note that quite different central mass estimates for Mkn 501 have been claimed in the literature, ranging from several times (mainly based on high energy emission properties) up to (based on host galaxy observations), see e.g. [7, 12, 18, 19, 54]. However, as shown by Rieger & Mannheim [54] uncertainties associated with host galaxy observations may easily lead to an overestimate of the central black hole mass in Mkn 501 by a factor of three and thus reduce the implied central mass to , a value in fact recently confirmed by an independent analysis of central mass constraints derived from host galaxy observations [68]. Moreover, as argued by the same authors some of the apparent disagreement in central mass estimates may possibly be resolved if a binary black hole system exists in the center of Mkn 501, see also [53, 54, 65], similar as in the case of OJ 287 [60]. For example, if Mkn 501 harbours a binary system with a more massive primary black hole of and a less massive (jet-emitting) secondary black hole of , the mass ratio would be of order 0.1, which may compare well with the result estimated for OJ 287 [38]. While our characteristic black hole mass employed falls well within the above noted range, we note that the SLE pulsational instability model may still work successfully, if a higher black hole mass is used. For example, if one adopts and days, one obtains , and .
Our analysis is based on a specific disk model (SLE) which is open to questions, in particular with respect to its possible stability properties. We note however, that a relatively small change in the usually employed viscosity description may already lead to a thermally stable configuration [49]. On the other hand, it may as well be possible that the SLE configuration represents a quasi-transient phenomenon associated with those changes in accretion history that probably initiate the high states. An alternative (inner) disk configuration of interest may be represented by an optically thin two-temperature ADAF solution [44, 71]. Such a configuration can exist for accretion rates below a critical rate , where the canonical ADAF value of has been employed [44]. ADAFs are generally less luminous than a standard disk, with the typical ADAF luminosity given by [71]. Using the constraints above, the possible ADAF luminosity for Mkn 501 becomes erg/s, which is already about an order of magnitude smaller than required by the X-ray analysis. This suggests that – at least during its high state – an optically thin ADAF is not a viable option for Mkn 501.
Based on Eq. (7) we can estimate the variation rate of the period due to the motion of the instability
$$\delta P/P = \frac{3}{2}\,P\,\frac{1}{r}\frac{\partial r}{\partial t}. \qquad (14)$$
Using , and Eqs. (4) and (5), one finds
$$\delta P/P = 0.44\,\alpha^{-7/6}\,\dot{M}_{24}^{5/6}\,M_7^{-5/6}\,r_*^{-1/4}\,\zeta^{-1/6} \qquad (15)$$
For = 0.28 the relevant parameters result in . From the period analysis performed by Kranich et al. [30] a deviation in period corresponds to 6.67 days. If we take this deviation as the intrinsic variation on the periodicity (P), then a of can be estimated from the results by Kranich et al. [30]. As this estimate assumes that the deviation of the period is only affected by the motion of the instability, while it may in fact be caused by more than one effect, our theoretical should not be greater than the observational results. We conclude that the observed 23-day QPOs in Mkn 501 might be caused by the instability of a two-temperature accretion disk. The model presented here may thus offer an alternative explanation to the binary-driven helical jet model of Rieger & Mannheim (2000). Comprehensive computational modelling of the pulsational instability in a two-temperature, cooling dominated disk will be essential to verify this in more detail.
Our model predicts that a period correlation in the X-ray and -ray should always be present during an active source stage, while the period of the QPOs may vary as the instability region could change from one high state to the other.
## Acknowledgements
We would like to thank Prof. K.S. Cheng, J.M. Wang, Y. Lu, L. Zhang and Dr. D. Kranich for useful discussions, and the anonymous referees for very useful comments that helped to improve the presentation. This work is partially supported by the National 973 project (NKBRSF G19990754), the National Science Fund for Distinguished Young Scholars (10125313), the National Natural Science Foundation of China (10573005, 10633010), the Fund for Top Scholars of Guangdong Province (Q02114) and a Cosmogrid Fellowship (FMR). We also acknowledge financial support from the Guangzhou Education Bureau and Guangzhou Science and Technology Bureau.
## References
• [1] Aharonian F., Akhperjinian A.G., Barrio J.A., et al. 1999a, A&A, 342, 69
• [2] Aharonian F., Akhperjanian A.G., Barrio J.A. et al. 1999b, A&A, 349, 11
• [3] Albert J., et al. (MAGIC collaboration) 2007, ApJ in press (astro-ph/0702008)
• [4] Bloom S., Marscher A.P. 1996, ApJ, 463, 555
• [5] Blumenthal G.R., Yang L.T., Lin D.N.C. 1984, ApJ, 287, 774
• [6] Buckley, J. H.; Akerlof, C. W.; Biller, S., et al. 1996, ApJ 472 L9
• [7] Cao X. 2002, ApJ, 570, L13
• [8] Cao X. 2003, ApJ, 599, 147
• [9] Catanese M., Bradbury S.M., Breslin A.C., et al. 1997, ApJ 487, L143
• [10] Catanese M., Weekes T.C. 1999, PASP, 111, 1193
• [11] Chen X., Taam R.E. 1995, ApJ, 441, 354
• [12] De Jager O.C., Kranich D., Lorentz E., Kestel M. 1999, Proc. 26th ICRC (Salt Lake City), 3, 346
• [13] Dermer C.D., Schlickeiser R., Mastichiadis A. 1992, A&A 256, L27
• [14] Dermer C.D., Schlickeiser R. 1993, ApJ 416, 458
• [15] Dermer C.D., Schlickeiser R. 1994, ApJS, 90, 945
• [16] Dermer C.D., Sturner S.J., Schlickeiser R. 1997, ApJS, 109, 103
• [17] Djannati-Atai A., Piron F., Barrau A., et al. 1999, A&A, 350, 17
• [18] Falomo R., Kotilainen J. K., Treves A. 2002, ApJL, 569, L35
• [19] Fan J.H. 2005, A&A, 436, 799
• [20] Fan J.H., Lin R.G. 1999, ApJS, 121, 131
• [21] Ghisellini G. 1998, in: The Active X-ray Sky: Results from BeppoSAX and RXTE, ed. by L. Scarsi et al., Amsterdam, 397
• [22] Hayashida N., Hirasawa H., Ishikawa F., et al. 1998, ApJL, 504, L71
• [23] Honma F., Matsumoto R., Kato S. 1992, PASJ, 44, 529
• [24] Kataoka J., Mattox J.R., Quinn J., et al. 1999, ApJ, 514, 138
• [25] Katarzynski K., Sol. H., Kus A. 2001, A&A 367, 809
• [26] Kato S. 1978, MNRAS, 185, 629
• [27] Kley W., Papaloizou J.C.B., Lin D.N.C. 1993, ApJ, 409, 739
• [28] Konopelko A. 1999, APh, 11, 135
• [29] Konopelko A., Mastichiadis A., Kirk, J., et al. 2003, ApJ 597, 851
• [30] Kranich D., De Jager O.C., Kestel M., et al. 1999, Proc. 26th ICRC (Salt Lake City), 3, 358
• [31] Kranich D., De Jager O.C., Kestel M., et al. 2001, Proc. 27th ICRC (Hamburg), 7, 2631
• [32] Krawczynski H., Coppi P.S., Maccarone T., Aharonian F.A. 2000, A&A, 353, 97
• [33] Krawczynski H., Coppi P.S., Aharonian F. 2002, MNRAS 336, 721
• [34] Lamer G., Wagner S.J. 1998, A&A, 331, L13
• [35] Laor A. 2003, ApJ, 590, 86
• [36] Lightman A.P. 1974, ApJ, 194, 419
• [37] Lightman A.P., Eardley D. 1974, ApJ, 187, L1
• [38] Liu F.K., Wu X-B. 2002, A&A, 388, L48
• [39] Macomb, D. J., Akerlof, C. W., Aller, H. D., et al. 1995, ApJ, 449, L99
• [40] Manmoto T., Takeuchi M., Mineshige S., et al. 1996, ApJ, 464, L135
• [41] Massaro E., Perri M., Giommi P., et al. 2004, A&A, 422, 103
• [42] Mastichiadis A., Kirk J.G. 1997, A&A, 320, 19
• [43] Milsom J.A., Taam R.E. 1997, MNRAS, 286, 358
• [44] Narayan R., Mahadevan R., Quataert, E. 1998, in: Theory of Black Hole Accretion Disks, ed. by M.A. Abramowicz et al., p.148
• [45] Osone S., 2006, APh, 26, 209
• [46] Papaloizou J.C.B., Stanley G.Q.R. 1986, MNRAS, 220, 593
• [47] Petry D., Böttcher M., Connaughton V., et al 2000, ApJ, 532, 742
• [48] Pian E., Vacanti G., Tagliaferri G., et al. 1998, ApJ, 492, L17
• [49] Piran T. 1978, ApJ, 221, 652
• [50] Pringle J.E. 1976, MNRAS, 177, 65
• [51] Protheroe R.J. et al. 1997, Proc. 25th ICRC (Durban), 8, 317
• [52] Quinn J., Akerlof C.W., Biller S., et al. 1996, ApJ, 456, L83
• [53] Rieger F.M. Mannheim K. 2000, A&A, 359, 948
• [54] Rieger F.M. Mannheim K. 2003, A&A, 397, 121
• [55] Rieger F.M., Bosch-Ramon V., Duffy P. 2007, Ap&SS 309, 119
• [56] RXTE 1999, http://space.mit.edu/XTE/asmlc/ASM.html
• [57] Sambruna R., Maraschi L., Urry C.M. 1996, ApJ, 474, 639
• [58] Shapiro S.L., Lightman A.P., Eardley D.M. 1976, ApJ, 204, 187
• [59] Shakura N.I., Sunyaev R.A. 1973, A&A, 24, 337
• [60] Sillanpää A., Haarala S., Valtonen M. J., et al. 1988, ApJ, 325, 628
• [61] Stickel M., Fried J.W., Kühr H. 1993, A&AS, 98, 393
• [62] Tavecchio F., Maraschi L., & Ghisellini G. 1998, ApJ, 509, 608
• [63] Takahashi, T.; Tashiro, M.; Madejski, G. et al. 1996, ApJ, 470, L89
• [64] Tavecchio F., Maraschi L., Pian E., et al. 2001, ApJ, 554, 725
• [65] Villata M., Raiteri C. M. 1999, A&A, 347, 30
• [66] Wandel A., Liang E.P. 1991, ApJ, 380, 84
• [67] Wandel A., Urry M.C. 1991, ApJ, 367, 78
• [68] Woo, J-H., Urry, M.C., et al. 2005, ApJ, 631, 762
• [69] Wu X.B. 1997, MNRAS, 292, 113
• [70] Yang L.T., Henning T., Lu Y., Wu X.B. 1997, MNRAS, 288, 965
• [71] Yi I. 1999, in: Astrophysical Disks, ed. by J.A. Sellwood & J. Goodman, ASP Conf.Ser. 160, 279
|
|
# Cartan subalgebra and the decomposition of eigenspaces
Let $\frak g$ be a finite-dimensional semisimple Lie algebra over a field $F$, and let $\frak h$ be a Cartan subalgebra in $\frak g$. I actually need some explanation of why $$\mathfrak g = \bigoplus_{\alpha\in \frak h^{*}} \mathfrak g_{\alpha}$$ where $\mathfrak g_\alpha= \{x\in \mathfrak g; \ [x,h]=\alpha(h)x\text{ for all } h\in \mathfrak h\}$.
The proof relies on $F$ being algebraically closed. There are examples of Lie algebras which don't decompose like this when $F$ is not algebraically closed.
This is the viewpoint of Humphreys' Introduction to Lie Algebras and Representation Theory. The big picture is: the set of matrices $\{\text{ad}(h):\ h\in\mathfrak h\}$ is a bunch of diagonalizable (aka semisimple) matrices which all commute with each other. When diagonalizable matrices commute, they can be simultaneously diagonalized.
This is basically because if $A$ and $B$ are commuting matrices, then the action of $B$ preserves the $\lambda$-eigenspace for $A$. So you pick out a basis of eigenvectors for $B$ in the space of $\lambda$-eigenvectors for $A$ for every $\lambda$, then both $A$ and $B$ are diagonal with respect to this basis.
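Here is a small numerical illustration of that fact (my own addition; rather than picking bases eigenspace by eigenspace as in the argument above, it uses the generic shortcut of diagonalizing $A+B$, which works here because the combined eigenvalues happen to be distinct):

```python
# Two commuting diagonalizable matrices built from a common (hidden) eigenbasis P.
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(4, 4))                     # random invertible change of basis
A = P @ np.diag([2.0, 2.0, 5.0, 7.0]) @ np.linalg.inv(P)
B = P @ np.diag([1.0, 3.0, 3.0, 4.0]) @ np.linalg.inv(P)
assert np.allclose(A @ B, B @ A)                # they commute by construction

# Since A and B commute and the eigenvalues of A + B are distinct here,
# the eigenvectors of A + B diagonalize both matrices at once.
_, V = np.linalg.eig(A + B)
for M in (A, B):
    D = np.linalg.inv(V) @ M @ V
    off_diag = D - np.diag(np.diag(D))
    assert np.allclose(off_diag, 0, atol=1e-6)
print("A and B are simultaneously diagonal in the basis given by V")
```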
So if you are with me so far, then you are basically done — there is a basis of $\mathfrak g$ so that for any $h\in \mathfrak h$, the matrix of $\text{ad}(h)$ is diagonal. That is exactly what that decomposition means — each root space is simultaneously an eigenspace for each $h$ in $\mathfrak h$.
The fact that $\text{ad}(h)$ is always semisimple for $h\in\mathfrak h$ is a little bit subtle. The crux of it is that you can define what it means for an element $s$ in a semisimple Lie algebra to be `ad-semisimple', meaning that $\text{ad}(s)$ is a semisimple endomorphism (i.e. it is diagonalizable). In fact, every element in a semisimple Lie algebra can be decomposed uniquely into the sum of an ad-semisimple element and an ad-nilpotent element which commute with each other. This is called the abstract Jordan decomposition.
A toral subalgebra is defined to be a subalgebra consisting only of semisimple elements. A Cartan subalgebra is defined to be a maximal toral subalgebra. These are proven to exist and to be nonzero in semisimple Lie algebras, and all the properties of Cartan subalgebras can be derived from there.
It should be noted that this is just one approach to Cartan subalgebras. There are some equivalent definitions and different approaches to the theory.
|
|
## Definition
A harshad number is a positive integer which is divisible by the sum of its digits base $10$.
Hence a harshad number is a Niven number base $10$.
## Sequence of Harshad Numbers
The sequence of harshad numbers begins:
$1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 18, 20, 21, 24, 27, \ldots$
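The definition is easy to check directly. The following minimal Python sketch (not part of the original definition page) reproduces the start of this sequence:

```python
def is_harshad(n: int) -> bool:
    # A positive integer is a harshad number if it is divisible by its digit sum (base 10).
    digit_sum = sum(int(d) for d in str(n))
    return n % digit_sum == 0

# Prints 1, 2, ..., 10, 12, 18, 20, 21, 24, 27 -- matching the sequence above.
print([n for n in range(1, 28) if is_harshad(n)])
```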
## Also see
• Results about harshad numbers can be found here.
## Historical Note
Harshad numbers were first described by Dattathreya Ramchandra Kaprekar.
## Linguistic Note
The term harshad number comes from the Sanskrit हर्ष (harṣa), meaning joy, combined with da, meaning giving; a harshad number is thus a joy-giver.
|
|
### PDE and Differential Geometry Seminar: Existence of standing pulse solutions to a skew-gradient system, Jieun Lee (UConn)
Monday, October 28, 2019
2:30pm – 3:30pm
Storrs Campus
MONT 214
Reaction-diffusion systems have been primary tools for studying pattern formation. A skew-gradient system generalizes an important class of activator-inhibitor type reaction-diffusion systems that exhibit localized patterns such as fronts and pulses. In this talk we study the standing pulse solutions to a skew-gradient system of the form $u_t = du_{xx} + f(u) - v, \quad \quad v_t= v_{xx} - \gamma v - v^3 + u,$ on the domain $(-\infty, \infty)$. Using a variational approach, we establish the existence of standing pulse solutions with a sign change. In addition, we explore some qualitative properties of the standing pulse solutions.
Contact:
lihan.wang@uconn.edu
PDE and Differential Geometry Seminar (primary), UConn Master Calendar
|
|
## Algebra and Trigonometry 10th Edition
Published by Cengage Learning
# Prerequisites - P.5 - Rational Expressions - Exercises - Page 48: 10
#### Answer
The domain of the expression is the set of all real numbers except $x=\frac{1}{2}$
#### Work Step by Step
$\frac{x-4}{1-2x}$
The denominator must be different from 0:
$1-2x\ne0$
$-2x\ne-1$
$x\ne\frac{-1}{-2}$
$x\ne\frac{1}{2}$
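As a quick sanity check (a sketch, not part of the textbook solution), the excluded value can be confirmed with a computer algebra system:

```python
from sympy import symbols, solve

x = symbols('x')
# The domain excludes exactly the values where the denominator 1 - 2x vanishes.
print(solve(1 - 2*x, x))  # [1/2]
```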
|
|
# Understanding the Effectivity of Ensembles in Deep Learning
Dissecting ensembles, one at a time. Made by Sayak Paul using Weights & Biases
Sayak Paul
## Introduction
The report explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective by Stanislav Fort, Huiyi Hu, and Balaji Lakshminarayanan.
In the paper, the authors investigate the question - why do deep ensembles work better than single deep neural networks?
In their investigation, the authors figure out:
• Different snapshots of the same model (i.e., model trained after 1, 10, 100 epochs) exhibit functional similarity. Hence, their ensemble is less likely to explore the different modes of local minima in the optimization space.
• Different solutions of the same model (i.e., trained with different random initializations each time) exhibit functional dissimilarity. Hence, their ensemble is more likely to explore the different modes of local minima in the optimization space.
Inspired by their findings, in this report, we present several different insights that are useful for understanding the dynamics of deep neural networks in general.
## Revisiting the Optimization Landscape of Neural Networks
Neural networks are stochastic functions i.e., each time you train a neural network, it may not lead to the exact same solution as before. Neural networks are optimized using gradient-based learning. This optimization problem is almost always non-convex. When expressed with Greek letters, this optimization problem looks like so -
$\operatorname*{minimize}_{\theta}\; \frac{1}{m} \sum_{i=1}^{m} \ell\left(h_{\theta}\left(x_{i}\right), y_{i}\right)$
where,
• $\theta$ is the parameter vector,
• $m$ is the number of training examples,
• $h_\theta$ is the model (neural network) parameterized over $\theta$
• $\ell(h_\theta(x), y)$ is our loss function in which $x$ and $y$ correspond to an input and a label from the training dataset
Consider the figure below, which shows a sample non-convex loss landscape (typical for neural networks). As we can see, there are multiple local minima. A trained neural network ends up in only one of these local minima at a time. The same network can end up in a different minimum each time it is trained with a different random initialization, which leads to high variance in its predictions.
We can also see that these local minima lie at the same level in the loss landscape, which further suggests that if a network ends up in one of these local minima, it will yield the same kind of performance more or less.
## Mitigating the High Variance of a Single Model With Ensembling
To allow a network to cover these local minima better, we often train several versions of the same model but with different initializations. During inference, we take predictions from each of these different solutions, and we average their predictions. It works quite well in practice, and this process is referred to as ensembling. Ensembling also helps to reduce the high variance that might come from the predictions of individual models (the same network trained multiple times with different random initializations).
In order to understand why ensembles work well, we need to figure out what makes them cover the loss landscape better.
Neural networks are parameterized functions, as we saw earlier. Each time we train a network, we end up in a different region of parameter space, leading to a different optimum. The more diverse these regions, the better the coverage of the different optima. So, how do we quantify this diversity?
To investigate this systematically, the authors do the following (among other things):
• They measure the cosine similarity of the weights from different runs of the same network. Cosine similarity is a widely used metric for the similarity between two vectors. It captures orientation rather than magnitude (refer to the figure below): formally, it is the dot product of the two vectors divided by the product of their norms. The authors want to examine the functional similarity of different trajectories (weights of the same model trained with different initializations).
Practically, we can do this by training the same model with different initializations, grabbing the trainable weights (ignoring the biases), flattening the weights from each layer, and concatenating them into a single vector. We then apply the cosine similarity formula (NumPy implementation below) to each pair of models.
```python
import numpy as np
from numpy.linalg import norm
# cosine similarity of two flattened weight vectors (weights1, weights2)
cos_sim = np.dot(weights1, weights2) / (norm(weights1) * norm(weights2))
```
• They measure the extent to which the predictions from different runs disagree with each other. The authors want to see if models trained with different initializations fail on the same subset (or the complete set) of the test dataset. If the same model trained with different initializations produces different predictions on the test dataset, we can say that the predictions are a function of the initialization.
Also, the examples which tend to confuse the model across different initializations can be called intrinsically hard examples. To find these, we first compared confusion matrices epoch-wise, i.e., confusion matrices across individual epochs of the same initialization. This was followed by a solution-wise comparison, i.e., confusion matrices from different solutions (initializations) of the same model.
Practically, to compute the prediction dissimilarity, we count the test points on which the two sets of predictions agree, divide by the total number of test points, and subtract the result from 1.
```python
import numpy as np
# dissimilarity: fraction of the 10,000 CIFAR-10 test points where the two models' predictions differ
dissimilarity_score = 1 - np.sum(np.equal(preds1, preds2)) / 10000
```
Before we dive deep into the experiments mentioned above, it is essential to review our experimental setup.
## Experimental Setup
• Dataset used (primarily): CIFAR-10
• Architectures:
• Dropout: 0.1 (only applicable when using SmallCNN and MediumCNN)
• Batch size: 128
• Learning rate schedule: start at 1.6 × 10⁻³ and halve it every 10 epochs
• Data augmentation: Only when using ResNet20v1
Note: We did not exactly follow what is specified in section 3 of the paper. There are minor differences in our experimental setup and what the authors followed.
For convenience, we show below how the learning rate schedule looks and the data augmentation pipeline we followed.
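A minimal sketch of the halving schedule described above (our reconstruction as a Keras callback; the exact implementation is an assumption, not the authors' code):

```python
import tensorflow as tf

INITIAL_LR = 1.6e-3

def lr_schedule(epoch, lr):
    # Halve the initial learning rate every 10 epochs.
    return INITIAL_LR * 0.5 ** (epoch // 10)

lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule)
```

The data augmentation pipeline we used is shown next.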
```python
import tensorflow as tf

def augment(image, label):
    # Pad before cropping so the random crop below actually shifts the image (assumed step).
    image = tf.image.resize_with_crop_or_pad(image, 36, 36)
    image = tf.image.random_crop(image, size=[32, 32, 3])     # random crop back to 32x32
    image = tf.image.random_brightness(image, max_delta=0.5)  # random brightness
    image = tf.clip_by_value(image, 0., 1.)
    return image, label
```
### Full code on GitHub →
We used Google Colab for running all of our experiments.
## Dissecting the Weight Space of a (Deep) Ensemble
Going back to our experiments, we are going to present them in two different flavors:
• For each of the experiments quantifying the diversity (cosine similarity, prediction disagreement):
• Take different snapshots of a model from the same training run and perform the experiment.
• Train the model multiple times with different random initializations and perform the experiment.
Note: By snapshots, we refer to models taken from epoch 0, epoch 1, and so on from the same training run (same initialization).
## Cosine Similarity Within a Single Training Trajectory
#### Observations
• The functions (different checkpoints of the same model) in the same trajectory are similar, and it holds for all variants (small, medium, and large) of the model.
• The cosine similarity between the weights of the different snapshots of the same model starts showing a high degree of similarity between each other as it approaches convergence. Thus, there is not much change in the weight space when the trajectory is settled for a loss landscape valley.
• The checkpoints from the later stage of training differ the most from the initial stage of training, followed by mild similarity (whitish region).
## Cosine Similarity Across Different Initializations
#### Observations
• The models trained with different initialization (different trajectories) are entirely dissimilar. This holds for all three variants of the model.
• Thus, initialization decides the weight space the model will explore.
## Prediction Disagreement Within a Single Trajectory
#### Observations
• The functions (different checkpoints of the same model) in the same trajectory tend to disagree less about their predictions, further confirming that functions in the same trajectory are similar.
• From the prediction-dissimilarity plot we can see that different snapshots of the same model show an increasingly high degree of agreement as training approaches convergence (increasing epoch). Thus one can say that many examples are functionally mapped ($x \rightarrow y$) the same way once the trajectory has settled into a valley of the loss landscape.
• We also observe high dissimilarity between the predictions of checkpoints from the later stages of training and those from the very initial stage.
## Prediction Disagreement Across Different Initializations
#### Observations
• The predictions of the same model trained with different initializations, on the same dataset and with the same hyperparameters, disagree.
• Naturally, there is a subset of examples on which the models trained along different trajectories agree.
• There must be a subset of intrinsically hard examples that models trained along different trajectories misclassify in the same way. We investigate this in the next section.
## Intrinsic Hardness as a Function of Initialization
Below we see that the set of examples that confuses the model changes from epoch to epoch as optimization proceeds. We further see that this set varies when we train the model with a different initialization. We could not list results from all the different initializations due to space constraints, but feel free to check them out here. This suggests that the definition of intrinsically hard examples is relative to how a model is initialized for training. It may further suggest that the images that cause the top losses during training (epoch-wise) also change when we change the initialization of a model.
Note: You can click on the little button located at the top-left corner and play with the slider to see how the confusion matrices change with epochs.
The idea of creating an epoch-wise callback is taken from this tutorial.
## Different Initializations and Their Paths to Optimization
We talked about different initializations of the same model and observed functional dissimilarity between them. To spice it up, let's try to visualize the paths of the different trajectories. The authors do so by taking three (for simplicity) different trajectories (initializations) of the same model. They then take the softmax outputs from different checkpoints along the individual training trajectories and append them to an array of shape (num_of_trajectories, num_of_epochs, num_of_test_examples, num_classes), and compute a 2-component t-SNE embedding of this array.
The predictions from all the solutions and their individual epochs were appended to a single array because they belong to the same "space". We apply 2 component t-SNE to reduce this higher dimensional space to a two-dimensional space. Below is the result of this experiment for Small and Medium sized CNN. And wow!
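A minimal sketch of this projection step (the array below is a small random placeholder standing in for the collected softmax outputs):

```python
import numpy as np
from sklearn.manifold import TSNE

# Placeholder for the collected softmax outputs:
# (num_trajectories, num_epochs, num_test_examples, num_classes)
preds = np.random.rand(3, 20, 500, 10)

num_traj, num_epochs = preds.shape[:2]
# Flatten each checkpoint's predictions into one long feature vector.
flat = preds.reshape(num_traj * num_epochs, -1)

# Project every checkpoint to a point in the plane.
embedding = TSNE(n_components=2, perplexity=20).fit_transform(flat)
# Recover the trajectory structure for plotting one path per initialization.
embedding = embedding.reshape(num_traj, num_epochs, 2)
```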
In our opinion, and also from the plots (shown below), it is evident that the models with different initializations follow different trajectories. As training approaches convergence, each trajectory tends to cluster around its own valley in this space. Even though the models reach similar accuracy, we can clearly see evidence of multiple minima that lie on the same plane.
## Accuracy as a Function of Ensemble Size
Another interesting question the authors explore is how the ensemble size affects the overall test accuracy. Below we can see that as we keep increasing the ensemble size, the model performance improves. For SmallCNN, the improvement plateaus after a certain point. We think this might be because a small-capacity model does not produce an optimal solution on the training dataset. Ensembling predictions does help improve model performance, but after peak performance is reached, the uncertainty from multiple suboptimal models takes over the benefit of ensembling.
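A sketch of how such an accuracy-versus-ensemble-size curve can be computed, assuming the per-model softmax outputs have already been collected (the arrays below are random placeholders):

```python
import numpy as np

# Placeholders: softmax outputs of 8 independently trained models on 10,000
# test points with 10 classes, plus the true labels.
rng = np.random.default_rng(0)
probs = rng.random((8, 10000, 10))
y_true = rng.integers(0, 10, size=10000)

for k in range(1, probs.shape[0] + 1):
    # Average the predicted probabilities of the first k models.
    mean_probs = probs[:k].mean(axis=0)
    acc = (mean_probs.argmax(axis=1) == y_true).mean()
    print(f"ensemble size {k}: accuracy {acc:.4f}")
```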
This suggests that ensembles perform better because an ensemble is able to cover the optimization landscape better than a single model, and indeed that seems to be the case.
Although this behavior is interesting, in deployment-related situations using a large ensemble of very heavy models might not be practically feasible.
## Perturbing an already optimized solution space
In addition to the experiments based on checkpoints along a trajectory, the authors also explore the subspace around an individual trajectory. The subspace around a trajectory is a set of functions (solutions) that live in the function space near the explored solution and that could be reached by retraining with the same initialization. The authors use a representative set of four subspace sampling methods:
• Monte Carlo dropout
• Diagonal Gaussian approximation
• Low-rank covariance matrix Gaussian approximation
• Random subspace approximation
The authors construct their subspace around an optimized weight-space solution θ (the weights and biases of a trained neural network). Using the t-SNE plot setup described earlier, they show that the constructed subspace lies in the same valley as the optimized solution, while a different solution lies in a different valley.
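To make the simplest of the four methods concrete, here is a minimal sketch of diagonal-Gaussian subspace sampling around an optimized solution. The fixed sigma below is purely illustrative; the paper derives the perturbation scale from training statistics rather than hard-coding it.

```python
import numpy as np

def sample_diagonal_gaussian(theta, sigma, num_samples=10, seed=0):
    # Draw weight vectors from an axis-aligned Gaussian centred on the
    # optimized, flattened parameter vector theta.
    rng = np.random.default_rng(seed)
    return theta + sigma * rng.standard_normal((num_samples, theta.size))

theta_star = np.zeros(1000)  # placeholder for trained, flattened weights
subspace_solutions = sample_diagonal_gaussian(theta_star, sigma=0.01)
```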
The authors validate two hypotheses -
1. Ensembling the solutions by sub-sampling around an optimized solution provides benefits in terms of model performance. But…
2. The relative benefit of simple ensembling (shown above) is higher as it averages prediction over more diverse solutions.
The plot below summarizes these -
## Conclusion
The paper we discussed in this report gives us an excellent understanding of why (deep) ensembles are very powerful in covering the optimization landscape better with simple experiments. Below we leave you with a couple of amazing papers in case you are interested in knowing more about different aspects of deep neural networks -
## Acknowledgements
Thanks to Yannic Kilcher for his amazing explanation video of the paper which helped us pursue our experiments.
Thanks to Balaji Lakshminarayanan for providing feedback on the initial draft of the report and rectifying our mistake on the tSNE projections.
Hope you have enjoyed reading this report. For any feedback reach out to us on Twitter: @RisingSayak and @ayushthakur0.
Sayak Paul and Ayush Thakur have contributed equally to this report.
|
|
## Introduction
Ever since the invention of the GaAs laser in 1962 [1], the GaAs/AlGaAs material platform has been of significant importance for integrated optics. It is of particular interest as the high nonlinearity of the material makes it ideal for nonlinear optical processing, where the femtosecond response time can be utilized for processing signals at speeds beyond a terabit-per-second in compact, power efficient devices2. AlGaAs also offers the ability to engineer its bandgap by varying the composition of the alloy, thus providing a degree of freedom in device design. This property has been widely exploited to develop waveguides that do not suffer from two photon absorption (TPA) at telecommunication wavelengths3,4. These numerous benefits, combined with a wide transparency window ($$\sim 0.9{-}17\mu \text {m}$$ for GaAs5), make for a very versatile platform and because of that GaAs/AlGaAs waveguides have been employed in a wide range of applications such as quantum optics6, molecular sensing7 and telecommunications8. Despite the maturity and success of this platform, the low refractive index contrast between AlGaAs layers means that high aspect ratio, sub-micron scale waveguides are often needed to satisfy the phase matching conditions or dispersion profiles required for nonlinear processes. Such waveguides can be challenging to fabricate, frequently resulting in nonlinear devices with high propagation losses4,9.
In 2015 the heterogeneous integration of AlGaAs-on-insulator (AlGaAs-OI) was found to be an elegant solution to overcome the limitations of GaAs/AlGaAs waveguides10. When AlGaAs is bonded to a silica cladding layer, there is a significant increase in the vertical modal confinement ($$\Delta \text {n}\approx \text {1.82}$$), which enables the fabrication of sub-micron scale waveguides with effective nonlinearities as high as $$\text {720}~{\text {W}}^{-1}{\text {m}}^{-1}$$11. This high nonlinearity together with low loss waveguides has allowed for the fabrication of high Q factor microring resonators, making Kerr frequency comb generation possible for microwatt levels of input power12. In addition to Kerr nonlinearities, the non-centrosymmetric structure of AlGaAs enables highly efficient second harmonic generation in this platform13,14.
In this work, we report a systematic analysis of the effect that dispersion engineering has on the nonlinear dynamics governing supercontinuum generation (SCG) in an AlGaAs-OI waveguide. Specifically, we explore how the width of the waveguide can be utilised to dispersion engineer the waveguide for efficient, broadband SCG. By varying the dimensions of the waveguide, we demonstrate that the dispersion at the pump wavelength can be tailored to enable us to investigate different dispersive regimes in the vicinity of two zero dispersion wavelengths. These regimes lead to notable changes in the nonlinear dynamics that govern the spectral broadening. Supercontinuum generation is an important nonlinear process, which finds applications in a number of fields including metrology, optical coherence tomography, spectroscopy and pulse compression15. It has also been instrumental for investigating exotic phenomena such as rogue waves16 and dark soliton interactions17. This work is therefore crucial for the development of the next generation of compact, power efficient supercontinuum (SC) sources and enhances our understanding of the design and implementation capabilities of the AlGaAs-OI platform.
## Design
When dispersion engineering a waveguide for broadband SCG, it is beneficial to have a group velocity dispersion (GVD = $$\frac{\partial ^{2}\beta }{\partial \omega ^{2}}$$; for propagation constant $$\beta$$, and frequency $$\omega$$) profile which is low and flat over a wide spectral band. Low dispersion minimizes the temporal walk-off during spectral broadening, thus maximizing the interaction length that is required for efficient nonlinear effects and phase matching conditions18. By having a flat dispersion profile, third order dispersion (TOD = $$\frac{\partial ^{3}\beta }{\partial \omega ^{3}}$$) is suppressed which is advantageous for supercontinua based on broadening via nonsolitonic radiation or modulation instability19,20. As such, designs for SCG predominantly aim for the pump wavelength to be in the vicinity of the zero dispersion point21.
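As a rough illustration of how these dispersion quantities can be evaluated in practice (a sketch only; the effective-index curve below is an invented placeholder, not the modelled AlGaAs-OI data), one can differentiate the propagation constant numerically:

```python
import numpy as np

c = 299792458.0  # speed of light, m/s

# Effective index versus wavelength would come from a mode solver; the
# polynomial below is an illustrative placeholder only.
wavelengths = np.linspace(1.2e-6, 2.0e-6, 400)           # m
n_eff = 2.9 - 0.1 * (wavelengths * 1e6 - 1.56) ** 2       # dimensionless

omega = 2 * np.pi * c / wavelengths
beta = n_eff * omega / c                                   # propagation constant

# Differentiate on an increasing frequency grid.
order = np.argsort(omega)
omega_s, beta_s = omega[order], beta[order]

beta1 = np.gradient(beta_s, omega_s)   # group delay per unit length
gvd = np.gradient(beta1, omega_s)      # second derivative of beta (GVD)
tod = np.gradient(gvd, omega_s)        # third derivative of beta (TOD)
```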
A common method to obtain the desired dispersion for SCG is to tailor the dimensions of the waveguide. Simply by varying the geometry of the waveguide, one can compensate for the strong normal dispersion of bulk materials with the waveguide dispersion22. As shown in Fig. 1b, by altering both the thickness and width of the $${\text {Al}}_{0.3}{\text {Ga}}_{0.7}\text {As}$$-OI waveguide, the total dispersion of the fundamental quasi-TE mode—Fig. 1a can be designed to be either anomalous ($$\hbox {GVD} < 0$$) or normal ($$\hbox {GVD} > 0$$) at the pump wavelength ($$\lambda$$ = 1560 nm). In this case an aluminum percentage of 30% was chosen for the AlGaAs alloy ($${\text {E}}_{g}$$ = 1.8 eV) to avoid TPA at the pump wavelength and hydrogen silsesquioxane (HSQ) used as an upper and lower cladding for the waveguide. When thermally annealed, HSQ forms a $${\text {SiO}}_{2}$$-like structure with a refractive index $$\text {n}\sim 1.39$$ at 1560 nm (verified with ellipsometry)23.
To examine the effect different dispersion regimes have on SCG, a thickness of 270 nm was considered for this study. This thickness offers two zero-dispersion wavelengths (ZDWs) in the vicinity of the pump, i.e. wavelengths where GVD=0 - see Fig. 1c, which allows for the efficient generation of dispersive waves (DWs). Anomalous dispersion can also be obtained whilst remaining in the single-mode regime for this thickness, see Fig. 1b, which is advantageous for generating SCG with high spatial coherence.
### Fabrication
Metal organic chemical vapor deposition was used to epitaxially grow 270 nm of $${\text {Al}}_{0.3}{\text {Ga}}_{0.7}\text {As}$$ atop lattice matched InGaP etch stop layers on a GaAs substrate. A layer of HSQ (Dow Corning FOX-15) was then deposited on the wafer, followed by $$3~\upmu \text {m}$$ of plasma enhanced chemical vapor deposition silica. This combined layer formed the buried oxide (BOX) layer of the final device. Adhesive sample bonding with benzocyclobutane (BCB) was subsequently employed to bond the GaAs/AlGaAs sample to a host silicon substrate. During the bonding process, constant pressure was applied to the material stack whilst the BCB was cured at 250 °C on a hotplate. After successful bonding of the samples, in order to form the final AlGaAs-OI material platform, both the GaAs substrate and InGaP etch stops of the GaAs/AlGaAs were removed using citric acid/hydrogen peroxide ($$4:1$$ volumetric ratio) and HCl acid, respectively. Using electron beam lithography, a HSQ hard mask was then defined and a $${\text {SiCl}}_{4}/\text {Ar}/{\text {N}}_{2}$$ inductively coupled plasma dry etch used to transfer the waveguide pattern to the AlGaAs layer below with minimal sidewall roughness—Fig. 2a. Finally, the waveguides were cladded in HSQ and cleaved for end fire coupling—Fig. 2b.
## Supercontinuum generation
To characterize the AlGaAs-OI waveguides for SCG, a laser source providing pulses with 100 fs duration at a repetition rate of 80 MHz and a center wavelength of 1560 nm was used. This laser was polarized and the polarization was controlled with a half wave plate (HWP) before it was end fire coupled in and out of the AlGaAs-OI waveguide by 40x (NA = 0.65) microscope objectives and coupled to an optical spectrum analyzer (OSA)—see Fig. 3a. Since spot size converters were not implemented in the waveguide design, there was a large modal mismatch (~ 9 dB when also considering Fresnel reflections) when light was coupled to and from the waveguides, resulting in a measured average coupling loss of $$\sim 12$$ dB/facet. This also meant that the generated SC in the multi-mode waveguides was sensitive to the input coupling position. For this experiment, all waveguides had a length of 3 mm and propagation losses were measured via the Fabry-Pérot loss measurement technique24 to be 2–3 dB/cm for the fundamental TE mode.
Using this setup, the results shown in Fig. 3b were obtained. The SC was measured for waveguide widths varying from 400 to 700 nm for a fixed energy of approximately 3 pJ coupled to the waveguide.
For a waveguide width of 400 nm, the pump wavelength lies within the normal dispersion regime meaning the broadening of the input is mainly attributed to self-phase modulation (SPM). Since SPM is a self-seeded process, this results in a smooth and stable output spectrum25. The large normal dispersion at the pump wavelength—Fig. 3c—however, means that the pulse rapidly disperses before any significant broadening can occur.
As the waveguide width is increased to 450 nm, the majority of the pump is within the anomalous dispersion regime. Since the pump is in close to the ZDW, efficient energy exchange occurs between a generated soliton and a DW20. Due to the negative TOD at the pump, the DW is emitted on the red side of the soliton at a center wavelength of 1922 nm, as dictated by the phase matching condition for Cherenkov radiation21,26. Bright solitons cannot exist in the normal dispersion regime, therefore broadening of the pump for wavelengths greater than the ZDW occurs due to SPM.
For a waveguide width of 500 nm, the GVD at the pump is the most negative out of all the waveguides considered. This high anomalous dispersion means that the phase matching condition for DW formation is no longer satisfied. More energy is also required to support the fundamental soliton, thus less energy is available to broaden the pulse. Furthermore, negative TOD at the pump suppresses any soliton self-frequency shift (SSFS)26,27. A combination of these factors results in minimal broadening of the pulse.
When the waveguide width is between 550 and 650 nm, the GVD at the pump is anomalous and tends towards zero as the width increases across this range. At the same time, the TOD becomes increasingly more positive and soliton dynamics are the dominant phenomena responsible for SCG25. In this case the supercontinua are able to span over the majority of the anomalous dispersion regime, because nonlinearities are enhanced for low values of GVD and large TOD helps to break up higher order solitons and increase the growth rate of SSFS26.
For this experiment the largest SC, spanning ~ 544 nm (at − 25 dB level), was obtained for a waveguide width of 700 nm—Fig. 3b. In this case, the pump wavelength is in the vicinity of a ZDW with positive TOD meaning a dispersive wave is emitted on the blue side of the soliton at a center wavelength of 1335 nm20,26. On the red side of the input, solitons help broaden the signal towards the second ZDW. To better understand the phenomena responsible for this broadband SC, the output spectra of the waveguide were measured as the input pulse energy was increased (see Fig. 4). For input energies $$\lesssim$$ 0.7 pJ, SPM is the dominant mechanism responsible for the symmetric broadening of the pulse. Above this energy, fission of the input pulse occurs resulting in the formation of solitons and dispersive waves. As the energy is increased further, SSFS is enhanced thus corresponding to a greater red-shift of the generated solitons. Since a soliton is coupled to a dispersive wave, spectral recoil induces a further red-shift of the soliton whilst simultaneously shifting the DW towards shorter wavelengths28. Above $$\gtrsim$$ 2.5 pJ, a saturation in this shift is observed owing to the suppression of SSFS from negative TOD at the soliton and its proximity to the second ZDW20. Three photon absorption (3PA) and surface enhanced third harmonic generation29 at the input facet were also noted for high input powers which will also constrict the broadening of the SC30,31.
Both Figs. 3 and 4 demonstrate the successful dispersion engineering of an AlGaAs-OI waveguide and the marked effect it has on the observed nonlinear phenomena. Thanks to the superior nonlinear properties of AlGaAs, broadband SC spectra were readily obtained in a device of only 3 mm in length and for pulse energies as low as ~ 2.5 pJ. These results illustrate the ability of the platform for realizing compact, power efficient PICs for nonlinear optics.
## Discussion
When using SCG for applications in frequency metrology, frequency comb generation and optical coherence tomography, a high degree of temporal coherence is required. To verify the coherence of the generated spectra, the modulus of the first order coherence, $$|g_{12}^{(1)}|$$, was calculated as:
\begin{aligned} \big |g_{12}^{(1)}(\lambda )\big |=\Bigg |\frac{\langle E^{*}_{1}(\lambda )E_{2} (\lambda )\rangle }{\sqrt{\langle |E_{1}(\lambda )|^{2}\rangle \langle |E_{2}(\lambda )|^{2}\rangle }}\Bigg | \end{aligned}
(1)
where E1 and E2 are individual SC spectra numerically simulated by solving the general nonlinear Schrödinger equation (GNLSE)21,32. The coherence in our case was taken by considering the ensemble average of 500 individually computed spectra, where quantum noise was modeled as one photon per mode noise and an intensity noise of 1.5% assumed for the input pulse condition, following the procedure proposed in33.
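A minimal numerical sketch of Eq. (1) (the spectra below are random placeholders standing in for the GNLSE simulation outputs, and the ensemble is kept small so the pair loop runs quickly):

```python
import numpy as np

# Placeholder complex field spectra E_i(lambda) from repeated simulations
# with independent noise seeds: shape (num_runs, num_wavelengths).
rng = np.random.default_rng(1)
spectra = rng.standard_normal((100, 1024)) + 1j * rng.standard_normal((100, 1024))

# Numerator: average of E_1* E_2 over all distinct pairs of runs, per wavelength.
num = np.zeros(spectra.shape[1], dtype=complex)
count = 0
for i in range(len(spectra)):
    for j in range(i + 1, len(spectra)):
        num += np.conj(spectra[i]) * spectra[j]
        count += 1
num /= count

# Denominator: ensemble-averaged spectral intensity.
den = np.mean(np.abs(spectra) ** 2, axis=0)
g12 = np.abs(num / den)
```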
As shown in Fig. 5, good agreement between simulation and experiment was obtained and a high degree of coherence calculated for the SC generated from the 700 nm wide AlGaAs-OI waveguide. Small discrepancies between the simulated and experimental spectra can be expected owing to the sensitivity of the dispersion to fabrication tolerances.
To further improve the broadening of the spectra obtained, there are numerous approaches which can be explored. For example, currently a decrease in the confinement factor as the mode tends towards cut-off is the main factor limiting broadening towards longer wavelengths. This is because as the mode approaches cut-off, it also coincides with a decrease in the nonlinearity of the waveguide and an increase in both the propagation loss and dispersion of the mode. As for expanding the SC towards the blue, surface state absorption can be significant, thus resulting in an increase in propagation loss34. With larger waveguides and surface passivation both of these constraints can be alleviated35. Simply increasing the thickness of the waveguide allows for the generation of a DW at both short and long wavelengths making octave spanning SC achievable in an AlGaAs-OI platform31. Thanks to the high nonlinearity of the AlGaAs-OI platform, octave spanning SC can be obtained for lower pulse energies than what is required in other material platforms such as silicon36, silicon nitride37 and aluminium nitride38.
In future devices, the incorporation of tapers39, higher order modes40, and choice of upper cladding41 could also be investigated for dispersion engineering purposes, thus providing a multitude of new design parameters to expand and optimize SCG. The incorporation of inverse taper couplers could also improve the coupling efficiency of the device and avoid the possible excitation of higher order modes, whilst microring resonators could be used to enhance observed nonlinearities. Moreover, the strong $$\chi ^{(2)}$$ of AlGaAs and its corresponding second harmonic signal lend themselves to applications requiring f-2f referencing42 and highlight the potential of AlGaAs-OI for examining the interplay of second and third order nonlinearities within a single material platform.
## Conclusions
We demonstrated the successful dispersion engineering of an AlGaAs-OI waveguide for supercontinuum generation by varying the width of the waveguide, and systematically analyzed the pronounced effect this had on the observed nonlinear behavior. Due to the high nonlinearity of the material platform broadband, SCG was obtained in a compact device of only 3 mm in length and for pulse energies of ~ 3 pJ. These results highlight the potential of AlGaAs-OI for power efficient photonics and for applications in metrology and optical coherence tomography. This work furthers understanding of this novel platform and illustrates its potential for investigating a plethora of nonlinear phenomena.
|
|
# Benchmarking methodology¶
For testing raw storage performance, we used a lightweight test script developed by Kx, called nano, based on the script io.q written by Kx’s Chief Customer Officer, Simon Garland. The scripts used for this benchmarking are freely available for use and are published on Github at KxSystems/nano
These sets of scripts are designed to focus on the relative performance of distinct I/O functions typically expected by a HDB. The measurements are taken from the perspective of the primitive IO operations, namely:
| test | what happens |
| --- | --- |
| Streaming reads | One list (e.g. one column) is read sequentially into memory. We read the entire space of the list into RAM, and the list is memory-mapped into the address space of kdb+. |
| Random 1 MB reads (mapped/unmapped sequences) | 100 random-region reads of 1 MB of a single column of data are indexed and fetched into memory. Both single mappings into memory, and individual map/fetch/unmap sequences. Mapped reads are triggered by a page fault from the kernel into the mmap’d user space of kdb+. This is representative of a query that needs to read through 100 large regions of a column of data for one or more dates (partitions). |
| Random 64 KB reads (mapped/unmapped sequences) | 1600 random-region reads of 64 KB of a single column of data are indexed and fetched into memory. Both single mappings into memory, and individual map/fetch/unmap sequences. Reads are triggered by a page fault from the kernel into the mmap’d user space of kdb+. We run both fully-mapped tests and tests with map/unmap sequences for each read. |
| Write | Write rate is of less interest for this testing, but is reported nonetheless. |
| (hclose hopen) | Average time for a typical open/seek-to-end/close loop. Used by a TP log as an “append to” and whenever the database is being checked. Can be used to append data to an existing HDB column. |
| (();,;2 3) | Append data to a modest list of 128 KB; will open/stat/seek/write/close. Similar to a ticker-plant write-down. |
| (();:;2 3) | Assign bytes to a list of 128 KB; stat/seek/write/link. Similar to the initial creation of a column. |
| (hcount) | Typical open/stat/close sequence on a modest list of 128 KB, to determine size; e.g. included in read1. |
| (read1) | An atomic map/read/unmap (open/stat/seek/read/close) sequence, tested on a modest list of 128 KB. |
This test suite ensures we cover several of the operational tasks undertaken during an HDB lifecycle.
For example, one broad comparison between direct-attached storage and a networked/shared file system is that the networked file-system timings might reflect higher operational overheads vs. a Linux kernel block-based direct file system. Note that a shared file system will scale up in-line with the implementation of horizontally distributed compute, which the block file systems will not easily do, if at all. Also note the networked file system may be able to leverage 100s or 1000s of storage targets, meaning it can sustain high levels of throughput even for a single reader thread.
## Baseline result – using a physical server¶
All the appendices refer to tests on AWS.
To see how EC2 nodes compare to a physical server, we show the results of running the same set of benchmarks on a server running natively, bare metal, instead of on a virtualized server on the Cloud.
For the physical server, we benchmarked a two-socket Broadwell E5-2620 v4 @ 2.10 GHz; 128 GB DDR4 2133 MHz. This used one Micron PCIe NVMe drive, with CentOS 7.3. For the block device settings, we set the device read-ahead settings to 32 KB and the queue depths to 64. It is important to note this is just a reference point and not a full solution for a typical HDB. This is because the number of target drives at your disposal here will limited by the number of slots in the server.
Highlights:
### Creating a memory list¶
The MB/sec that can be laid out in a simple list allocation/creation in kdb+. Here we create a list of longs of approximately half the size of available RAM in the server.
Shows the capability of the server when laying out lists in memory; reflects the combination of memory speeds alongside the CPU.
### Re-read from cache¶
The MB/sec that can be re-read when the data is already held by the kernel buffer cache (or file-system cache, if kernel buffer not used). It includes the time to map the pages back into the memory space of kdb+ as we effectively restart the instance here without flushing the buffer cache or file system cache.
Shows if there are any unexpected glitches with the file-system caching subsystem. This may not affect your product kdb+ code per-se, but may be of interest in your research.
### Streaming reads¶
Where complex queries demand wide time periods or symbol ranges. An example of this might be a VWAP trading calculation. These types of queries are most impacted by the throughput rate i.e., the slower the rate, the higher the query wait time.
Shows that a single q process can ingest at 1900 MB/sec with data hosted on a single drive, into kdb+’s memory space, mapped. Theoretical maximum for the device is approximately 2800 MB/sec and we achieve 2689 MB/sec. Note that with 16 reader processes, this throughput continues to scale up to the device limit, meaning kdb+ can drive the device harder, as more processes are added.
### Random 1 MB reads¶
We compare the throughputs for random 1 MB-sized reads. This simulates more precise data queries spanning smaller periods of time or symbol ranges.
In all random-read benchmarks, the term full map refers to reading pages from the storage target straight into regions of memory that are pre-mapped.
Simulates queries that are searching around broadly different times or symbol regions. This shows that a typical NVMe device under kdb+ performs very well when we are reading smaller/random regions of one or more columns at the same time. It also shows that the device achieves similar throughput under high parallel load as threads increase, meaning more requests are queued to the device while the latency per request is sustained.
### Metadata operational latencies¶
We also look at metadata function response times for the file system. In the baseline results below, you can see what a theoretical lowest figure might be.
We deliberately did not run metadata tests using very large data sets/files, so that they better represent just the overhead of the file system, the Linux kernel and target device.
| function | latency (mSec) | function | latency (mSec) |
| --- | --- | --- | --- |
| hclose hopen | 0.006 | (();,;2 3) | 0.01 |
| hcount | 0.003 | read1 | 0.022 |
This appears to be sustained across multiple q processes, and on the whole the latencies remain in the tens of microseconds or below. kdb+ sustains good metadata performance.
## AWS instance local SSD/NVMe¶
We separate this specific test from other storage tests, as these devices are contained within the EC2 instance itself, unlike every other solution reviewed in Appendix A. Note that some of the solutions reviewed in the appendixes do actually leverage instances containing these devices.
An instance-local store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer.
This is available in a few predefined regions (e.g. US-East-1), and for a selected list of specific instances. In each case, the instance local storage is provisioned for you when created and started. The size and quantity of drives is preordained and fixed in both size and quantity. This differs from EBS, where you can select your own.
For this test we selected the i3.8xlarge as the instance under test. i3 instance definitions will provision local NVMe or SATA SSD drives for local attached storage, without the need for networked EBS.
Locally provisioned SSD and NVMe are supported by kdb+. The results from these two represent the highest performance per device available for read rates from any non-volatile storage in EC2.
However, note that this data is ephemeral. That is, whenever you stop an instance, EC2 is at liberty to reassign that space to another instance and it will scrub the original data. When the instance is restarted, the storage will be available but scrubbed. This is because the instance is physically associated with the drives, and you do not know where the physical instance will be assigned at start time. The only exception to this is if the instance crashes or reboots without an operational stop of the instance, then the same storage will recur on the same instance.
The cost of instance-local SSD is embedded in the fixed price of the instance, so this pricing model needs to be considered. By contrast, the cost of EBS is fixed per GB per month, pro-rated. The data held on instance local SSD is not natively sharable. If this needs to be shared, this will require a shared file-system to be layered on top, i.e. demoting this node to be a file system server node. For the above reasons, these storage types have been used by solutions such as WekaIO, for their local instance of the erasure coded data cache.
| function | instance-local NVMe (4 × 1.9 TB) | physical node (1 NVMe) |
| --- | --- | --- |
| metadata (hclose, hopen) | 0.0038 mSec | 0.0068 mSec |
|
|
# vapor liquid phase equilibrium
A phase is a condition or state of a system. Phase equilibrium is a condition in which the properties of the coexisting phases of a system are related and do not change with time. A system is said to be in an equilibrium state if there is no tendency for its condition to change through heat or mass transfer across the system boundary; when material is transferred between phases, the transfer continues until equilibrium is established. Equilibrium considerations therefore govern the following:
• Heterogeneous phase equilibrium between the phases of a system
• Displacement of a phase boundary
• Mass transfer across the system boundary
• Determination of phase compositions under known system conditions
In the study of thermodynamics, phase equilibrium is of great importance to science at large. In chemical engineering, the manufacture of materials involves equilibrium conditions, and biological processes such as respiration also depend on phase equilibrium. It underlies operations such as distillation, adsorption, leaching and gas chromatography; for example, in a distillation column it is assumed that equilibrium exists between the vapor and liquid on each plate. It is also important in the design and analysis of reactors.
# types of system
Homogeneous system: this is a state in which the system consists of a single phase (gas, liquid or solid). It can be open or closed.
A homogeneous closed system does not exchange material with the surroundings. It has a uniform composition, and its properties are independent of time. The enthalpy, entropy and pressure are related by
$$dH = T\,dS + V\,dP$$
For a homogeneous open system, there is an exchange of material with the surroundings.
Heterogeneous system: this is a multi-phase, multi-component system. For a closed heterogeneous system, the equilibrium relations between the $\pi$ phases are:
$$T^{(1)}=T^{(2)}=\cdots=T^{(\pi)}$$
$$P^{(1)}=P^{(2)}=\cdots=P^{(\pi)}$$
$$\mu_i^{(1)}=\mu_i^{(2)}=\cdots=\mu_i^{(\pi)}, \qquad i=1,\ldots,N$$
where the superscripts label the phases and the chemical-potential equality holds for every component $i$.
## phase rule and the Duhem equation
The related result is the Gibbs-Duhem equation, which applies to non-reacting systems. For both homogeneous and heterogeneous systems, the internal equilibrium state is described by temperature, pressure and the chemical potentials. The phase rule is used to determine the degrees of freedom of a system: the number of independent intensive variables that must be arbitrarily fixed to establish the intensive state of that system.
The phase rule is given by
$$F = 2 - \pi + N$$
where $\pi$ is the number of phases and $N$ the number of components. For example, for a binary vapor-liquid system $N = 2$ and $\pi = 2$, so $F = 2$: fixing the temperature and pressure (or the temperature and one composition) fixes the intensive state.
The Gibbs-Duhem equation is given by
$$S\,dT - V\,dP + \sum_{i} n_i\,d\mu_i = 0$$
The above relations are important in thermodynamic studies.
### Azeotropes
Azeotropes are constant-boiling mixtures: mixtures for which the vapor and the liquid have the same composition at a particular temperature. At this composition the bubble point and dew point coincide, and the azeotropic temperature remains the same until the entire liquid has vaporized.
## phase diagram
A phase diagram shows the relationship and coexistence between the vapor and liquid phases, and is a practical application of the Duhem equation and the phase rule. Consider the binary system represented below, with liquid composition $x_i$ and vapor composition $y_i$:
When a binary mixture (benzene and toluene) is heated in a closed system at constant atmospheric pressure, the plot of the more volatile component in the liquid against temperature gives the curve ABCJ, while the vapor composition is plotted as ADEJ. The line LN represents a tie line. Point C is the bubble point, the temperature at which the first bubble of vapor forms on heating; point D is the dew point, the temperature at which the first drop of liquid forms on condensation.
Therefore, an increase in temperature reduces the amount of liquid. As the temperature increases from T3 to T2 to T1, the vapor becomes richer in the more volatile component while the liquid fraction is reduced. A close observation shows that the vapor and liquid compositions are complementary: if $x_1$ is 0.4, $y_1$ is 0.6. At temperature T′ along the tie line LMN, the proportion of liquid to vapor is given by
$\frac{liquid}{vapor}=\frac{MN}{ML}$
This is commonly encountered in distillation processes.
Increasing the pressure shifts the equilibrium curves and increases the liquid composition, giving the graph below.
The relationships between the partial pressure and the phase compositions are given by Dalton's law of partial pressures, Raoult's law and Henry's law:
$$p_A = y_A P, \qquad p_A = x_A P_A^{\text{sat}}, \qquad p_A = H\,x_A$$
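A minimal numerical sketch of these relations for an ideal binary mixture (the saturation pressures below are assumed round numbers chosen for illustration, not measured data):

```python
# Ideal binary vapor-liquid equilibrium: component A (more volatile) + B.
p_sat_A = 180.0   # kPa, assumed saturation pressure of A
p_sat_B = 74.0    # kPa, assumed saturation pressure of B
x_A = 0.4         # liquid mole fraction of A

p_A = x_A * p_sat_A            # Raoult's law partial pressure of A
p_B = (1 - x_A) * p_sat_B      # Raoult's law partial pressure of B
P = p_A + p_B                  # Dalton: total pressure is the sum of partials
y_A = p_A / P                  # vapor mole fraction of A

print(f"P = {P:.1f} kPa, y_A = {y_A:.3f}")
```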
References
1. J. M. Smith, H. C. Van Ness and M. M. Abbott, Introduction to Chemical Engineering Thermodynamics, McGraw-Hill International Edition, 2001.
2. K. V. Narayanan, A Textbook of Chemical Engineering Thermodynamics, 2nd edition, PHI Learning Private Limited, 2013.
3. J. M. Prausnitz, R. N. Lichtenthaler and E. G. de Azevedo, Molecular Thermodynamics of Fluid-Phase Equilibria, Prentice Hall PTR, 1999.
|
|
14th December 2020
# newton's third law of motion examples
In the third law, when two objects interact, they apply forces to each other of equal magnitude and opposite direction. According to Newton's first law, an object in motion tends to stay in motion unless acted upon by an unbalanced outside force, so the ball keeps rolling even though the box has stopped. Examples of Newtonâs Third Law of Motion : There are lots of projects regarding the Newtons 3rd law of motion. Swimming becomes possible because of third law of motion. Example 12 We put this in for fun and to help you remember the words of Newton's first law of motion. Newtonâs Third Law of Motion Whenever one body exerts a force on a second body, the first body experiences a force that is equal in magnitude and opposite in direction to the force that it exerts. Force always exists in pairs. Newton's Third Law explains the interaction between objects and states that for every action there will be an equal and opposite reaction. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. to the force the ball exerts on the bat. Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. We're now ready for Newton's third law of motion. Our mission is to provide a free, world-class education to anyone, anywhere. And the chair pushing the equal amount of Force back up is the reaction. Facebook; Twitter; Categories Uncategorized Tags law of inertia definition, law of inertia with example, Newton's law of inertia example Post navigation. Why do they stop? According to Newton's third law of motion, whenever two objects interact, they exert equal and opposite forces on each other. Some examples of Newton's Third Law are a person pushing against a wall, fish swimming in water, birds flying in the air and the automobile´s propulsion. Several examples of Newtons Third Law Force Pairs are demonstrated and discussed. A single and isolated force cannot exist. Practice: All of Newton's laws of motion. Newton's 3rd Law Of Motion Examples images, similar and related articles aggregated throughout the Internet. This is an example of newtonâs third law because my action of throwing the ball towards the trampoline caused the reaction of force applied to the trampoline at rest to oppose my force back towards me. New questions in Science. Find the weight of the apple, and the Newtonâs third law pair of the appleâs weight. As per Newton's third law of Motion When one object exerts a force on other object The other object also exerts an equal and opposite force on the first object In easy language Every action has an equal and opposite reaction Example 1 Walking When we walk,our feet exert a force on ground in backward direction Up Next. Answer:Formally stated, Newton's third law is: For every action, there is an equal and opposite reaction. In the second law, the force on an object is equal to its mass times its acceleration. Lift changes velocity. Engineers apply Newton's third law when designing rockets and other projectile devices. Newtonâs third law states: If two objects interact, the force F 12 exerted by object 1 on object 2 is equal in magnitude to and opposite in direction to the force F 21 exerted by object 2 on object 1: F 12 = -F 21. Newton's Third Law: For every action, there is an equal and opposite reaction. Newtons 3rd law of motion will be the law of action as well as reaction. Newtonâs Third Law of Motion Example. 
In the example below, the player that kicks the ball wants to make sure that when he kicks the ball, it goes in the right direction and that it goes quickly. Explanation: Hope it helps u, my dearâº! The Newtonâs third law pair is the apple pulling the Earth upwards. As the wheels push back on the ground Jumping of a man from a boat onto the bank of a river. Newton's Third Law of Motion states that any time a force acts from one object to another, there is an equal force acting back on the original object. Motion and forces are everywhere! Practice: Newton's third law of motion. Ask your question. Common examples of newton's third law of motion are: A horse pulls a cart, a person walks on the ground, a hammer pushes a nail, magnets attract paper clip. More on Newton's third law. This interaction results in a simultaneously exerted push or pull upon both objects involved in the interaction. Newtonâs Third Law of Motion Definition: For every action there is an equal and opposite reaction and both acts on two different bodies. In the first law, an object will not change its motion unless a force acts on it. Examples of Newton's third law of motion are ubiquitous in everyday life. Newton's third law of motion. An apple with a mass of 0.13 kg is falling. If you pull on a rope, therefore, the rope is pulling back on you as well. Learn about Newtons Third Law of Motion. State on which body this second force acts, and find this objectâs acceleration. The forces would be the action force and also the reaction force. Firstly, the weight of the apple is . A primary example that demonstrates Newton's third law of motion is a flying airplane, where two pairs of action-reaction forces influence its flight. It is action. Newtonâs laws of motion relate an objectâs motion to the forces acting on it. Newtonâs third law of motion Newtonâs third law example Newtonâs laws of motion How many newtonâs laws are there Newtonâs law of cooling Newtonâs law of cooling formula Newtonâs universal law of gravitation. Newtonâs third law of motion and principle of conservation of momentum are explained some examples below; Books on a table: If a book is kept on a table, weight of the book will exert pressure perpendicularly on the table. Working With Newton's Laws of Motion . Log in. Give an example of Newton's third law of motion and explain why it is an example. Most of the lift is generated by the wings on the aircraft ("Lift from Flow"1). Newtonâs Third Law states that every object when exerting a force, the other object exerts the same force back. Log in. Sort by: Top Voted. Khan Academy is a 501(c)(3) nonprofit organization. Newton's third law of motion informs us that whenever you push in opposition to something that pushes back for you having an equal and opposite force. This tells exactly how forces interact with one another. 8. We even travel to Dandong, China.\r\rContent Times:\r0:10 Newtons Third Law\r0:47 Ball and Head Force Pair\r1:49 At the Ann Arbor Hands-On Museum\r2:35 Why I dont like the Action/Reion definition\r3:30 Hammer and Nail Force Pair\r4:20 Mr.p and Wall Force ⦠Mathematically, if a body A exerts a force $$\vec{F}$$ on body B, then B simultaneously exerts a force $$â \vec{F}$$ on A, or in vector equation form, Examples of Newtons Third Law of Motion. Newton's third law of motion. We have previously add three different projects that verify this law. Normal force and contact force. 
1) A swimmer pushes the water backwards by his/her hands and in return the water pushes the swimmer forwards, thus enabling him to go forward during swimming. One example is the generation of the lift of a wing on an airplane. Next lesson. This law of motion is applied in basketball when the players run up and down the court. How do forces work? When a player runs across the court they put force on the court floor. Michelle Lennear Newtonâs Third Law of Motion Picture Examples In this picture the personâs weight As the bat hits the ball the force on the bat is equal pulling down on the chair is the action. Newtonâs Third Law of Motion mathematical formula, F 12 = â F 21 Newtonâs Third Law of Motion Examples. Newtonâs third law of motion âFor every action, there is an equal and opposite reaction.â Or to put it exactly how Newton put it â âFor every force that is exerted by one body on another, there is an equal and opposite force exerted by the second body on the first.â This is sometimes referred to as the law of reaction. According to Newtonâs third law of motion, the table will also exert the upward force on the book. For each of the following interactions, identify the action and reaction forces: 1. Newton's third law. For example, when you jump, your legs apply a force to the ground, and the ground applies and equal and opposite reaction force that propels you into the air. 1. This law can be understood by considering the following example. Newtonâs Third Law. Since he wants the ball to go fast, he applies more force to the kick so that the goalkeeper won't touch it. Examples: (i) Jet airplanes utilize the principle of action and reaction. Lift is the force that holds and aircraft in the air. Pin Wheel Experiment: Pin wheel is a cool science experiment where a balloon is fixed in a straw in arrangement. So that the balloon starts to rotate when the flamed balloon free. Other examples include a jumping child, bouncing ball and a falling fruit. Join now. The third law of motion states that for every action, there is an equal and opposite reaction. Newton's third law of motion describes the nature of a force as the result of a mutual and simultaneous interaction between an object and a second object in its surroundings. And something, once again, you've probably heard, people talk about. According to Newtonâs third law of motion, whenever one body exerts a force on another body, the second body exerts an equal and opposite force on the first body. Newton's third law of motion states for every force, there's an equal reaction force in the opposite direction. Why do things move? Join now. The statement means that in every interaction, there i⦠1. It is a reaction. Messi taking the free kick. Fun and to help you remember the words of Newton 's laws of motion that! Touch it when two objects interact, they apply forces to each of... The principle of action as well the words of Newton 's third law, an object will change... Put this in for fun and to help you remember the words of Newton 's third law motion! That the balloon starts to rotate when the flamed balloon free law force Pairs demonstrated... The opposite direction which body this second force acts, and the forces acting on it or upon..., bouncing ball and a falling fruit apply forces to each other of equal magnitude and opposite reaction â... Exerting a force acts, and its motion unless a force, there i⦠1 in! 'S 3rd law of motion relate an objectâs motion to the force that and! 
|
|
# Sketch the set on a number line. $$\left[-3, \frac{3}{2}\right) \cap\left(\frac{3}{2}, \frac{5}{2}\right]$$
## Question
Sketch the set on a number line. $$\left[-3, \frac{3}{2}\right) \cap\left(\frac{3}{2}, \frac{5}{2}\right]$$
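One observation worth making before sketching (it is not part of the original question): the two intervals share no points, because a common point would have to be both less than and greater than $$\frac{3}{2}$$. Hence

$$\left[-3, \frac{3}{2}\right) \cap\left(\frac{3}{2}, \frac{5}{2}\right] = \varnothing ,$$

and the sketch is an empty number line.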
|
|
## CryptoDB
### Paper: Computational Indistinguishability between Quantum States and Its Cryptographic Application
Authors: Akinori Kawachi, Takeshi Koshiba, Harumichi Nishimura, Tomoyuki Yamakami

URL: http://eprint.iacr.org/2006/148

We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is "secure" against any polynomial-time quantum adversary. Our problem QSCDff is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly-used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show three cryptographic properties: (i) QSCDff has the trapdoor property; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is computationally at least as hard in the worst case as the graph automorphism problem. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen plaintext attack of a polynomial-time quantum adversary. We further discuss a generalization of QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme relying on the cryptographic properties of QSCDcyc.
##### BibTeX
@misc{eprint-2006-21641,
title={Computational Indistinguishability between Quantum States and Its Cryptographic Application},
booktitle={IACR Eprint archive},
keywords={foundations / quantum cryptography, computational indistinguishability, trapdoor property, worst-case/average-case equivalence, graph automorphism problem, quantum public-key cryptosystem},
url={http://eprint.iacr.org/2006/148},
note={ kawachi@is.titech.ac.jp 13285 received 14 Apr 2006, last revised 17 May 2006},
author={Akinori Kawachi and Takeshi Koshiba and Harumichi Nishimura and Tomoyuki Yamakami},
year=2006
}
|
|
Oshkosh 2021
Bigshu
Well-Known Member
HBA Supporter
Does anyone know if the residence hall rooms without air conditioning have a window that can be opened?
BJC
Well-Known Member
HBA Supporter
Does anyone know if the residence hall rooms without air conditioning have a window that can be opened?
The two that I have stayed in did. A fan is highly desirable for those rooms. If you have a choice, pick one that is on the east side of the building.
BJC
TFF
Well-Known Member
Before the AC, you could have made a killing on fan rental. Windows open, cool air outside, and so stagnant that not one bit would waft in. No room for a fan in the plane baggage. That was the last time I was in the dorms. I think the next year was the first that had AC option.
Bigshu
Well-Known Member
HBA Supporter
Before the AC, you could have made a killing on fan rental. Windows open, cool air outside, and so stagnant that not one bit would waft in. No room for a fan in the plane baggage. That was the last time I was in the dorms. I think the next year was the first that had AC option.
I actually have a small portable rolling AC unit that I use when working in areas that are enclosed. The window is key for getting rid of the hot exhaust air. I put in my application for a dorm room, and we'll see how it goes. I'm not a great hot weather guy!
PMD
Well-Known Member
Will be there Tuesday and Wednesday this year. Driving in from Waupaca, just hope we can get parked and through the gates by 07:30.
TFF
Well-Known Member
A hot day at Oshkosh is like air conditioning down in the South.
No room during the times I went and stayed in the dorm. I got two bags, owner of the plane had two. I started driving and tent camping after that because his tent was in my seat in his plane. You can only carry so much in a two seat plane. Four hours was a nice ride; eleven by car.
Bigshu
Well-Known Member
HBA Supporter
Reservation confirmed! Now to solidify my plans for what to focus on at the show. Any must see speakers/forums announced yet?
robertl
Well-Known Member
A hot day at Oshkosh is like air conditioning down in the South.
No room during the times I went and stayed in the dorm. I got two bags , owner of the plane had two. I started driving and tent camping after that because his tent was in my seat in his plane. You can only carry so much in a two seat plane. Four hours was a nice ride; eleven by car.
TFF
Well-Known Member
Exposure sucks, you do have to take care of yourself out there. I know I can’t carry enough water. Best tent is the easiest to put up that is cheap.
robertl
I ordered a 3.6 pound tent at Walmart.com with free shipping to store for $22. I have an older Igloo pop up that is only 3.6 lbs also and am ordering another for my flying buddy. I will be packing my clothes in a styrofoam cooler and when we get there, we can use the cooler for beer, and food of course.
Bob
Bill-Higdon
Well-Known Member
I have an older Igloo pop up that is only 3.6 lbs also and am ordering another for my flying buddy. I will be packing my clothes in a styrofoam cooler and when we get there, we can use the cooler for beer, and food of course.
Bob
Beer has nutritional value so the cooler is for "Food", "just sayen for a friend"
Bill-Higdon
Well-Known Member
It's Oshkosh, it's going to rain at some point during the week.
It's not a case of it will rain, "It's how many times?"
Kyle Boatright
Well-Known Member
HBA Supporter
I ordered a 3.6 pound tent at Walmart.com with free shipping to store for $22.
There is a drop ship location at the show. More than a few people send themselves parcels - tents, pads, whatever.
dwalker
Well-Known Member
I am driving up from TN and bringing my son with me, as of now we are camping, planning on being in a tent but might throw an "overlander" rig on the truck. I will not get there until probably Monday AM, so I am hoping a decent campsite will be left.
BJC
Well-Known Member
HBA Supporter
If you never have seen Wittman Field other than during AirVenture, you might enjoy this video. Scroll down once at this site, taken from a post by Pago Bay Wittman. Field. - KITPLANES
PS Our coffee spot is Homebuilt Headquarters, shown in the 7th photo down.
BJC
Last edited:
robertl
Well-Known Member
If you never have seen Wittman Field other than during AirVenture, you might enjoy this video. Scroll down once at this site, taken from a post by Pago Bay Wittman. Field. - KITPLANES
PS Our coffee spot is Homebuilt Headquarters, shown in the 7th photo down.
BJC
Thanks for the views, first time I've seen it this way.
Bob
Bigshu
Well-Known Member
HBA Supporter
If you never have seen Wittman Field other than during AirVenture, you might enjoy this video. Scroll down once at this site, taken from a post by Pago Bay Wittman. Field. - KITPLANES
PS Our coffee spot is Homebuilt Headquarters, shown in the 7th photo down.
BJC
Thanks for posting that. Even though it was published last September, I left a comment on the site about how much I enjoyed reading it, and the place homebuilt airplanes occupies in my life.
|
|
## Jallikattu: Some Answers
Regarding Jallikattu, I see two opinions being shared repeatedly, with a touch of mockery, in most of the media and in a few social media posts. The first is that these leaderless protests cannot win a coordinated victory. The second is that most of those protesting are driven by emotion and have no political knowledge.

In India today, it is the notion that politics means nothing but winning elections that makes people go looking for a leader. In a democratic country, why should a society's struggle for its needs be framed only in electoral terms? Those protesting now are not demanding any power for themselves. They have a few demands, and they are pressing those in power to fulfil them. That is all. If those in power turn a deaf ear, that by itself will push the protesters toward an alternative politics. Indian politics itself runs on emotion. Do most politicians themselves practise politics with political knowledge? How many DMK or AIADMK members today could speak about Dravidian principles? How many Congress members would have read at least Gandhi's 'My Experiments with Truth'? How many election manifestos are released with any intellectual rigour? In this situation, where is the fairness in demanding that only the protesting students and youth possess a clear political understanding?

Instead, why not take this as a beginning? Is it not a change that an entire state is protesting, instead of the matter being dismissed as some village's problem? Is it not an exemplary politics when, with self-imposed discipline, people collect the litter they themselves threw and dispose of it responsibly? Is it not possible that a few leaders will emerge from among them too? There is no human being without flaws; to protect our humanity, let us bear with the flaws, and learn from what we have borne.
## Manage your time: The Pomodoro technique
The best way to eat an elephant is one bite at a time!
Every new year, every new month, every new week and every new dawn, we all have certain plans to achieve or finish something. Obviously, almost none of us ever achieves things exactly as planned. Whether it is about finishing a report or finishing some code, we usually procrastinate, and at the end of the time frame we regret it and are filled with guilt and grief. The main reason for such failure is poor time management. Many techniques and methods have been proposed that are expected to increase individual productivity and satisfaction.

The Pomodoro technique is one such method, followed by many all over the world. It was proposed by Francesco Cirillo in 1994, and since then many workshops, seminars and papers have been conducted and published to spread the effectiveness of the technique. It seems many people have benefited from it. The idea is to split your work into pieces of a fixed time frame (a pomodoro), say 25 minutes, and to focus on only one piece of work in that time. After 25 minutes you take a break of 3 to 5 minutes, then start another pomodoro; the cycle continues for up to four pomodoros, after which you take a long break that can last 15 to 30 minutes.

To follow this technique, all you need is a timer, a few sheets of paper and a pencil or pen. To be more efficient, first make a timetable, which may look like: working hours 8:30 to 1:30 and 2:30 to 5:30. You should use this technique only during working hours; it is not meant for leisure-time activities. Next, time-box the to-dos that are supposed to be done within the working hours. Usually, as said earlier, a pomodoro is 25 minutes (considered the most effective), but you can vary it between 15 and 40 minutes, or even higher or lower. The rule is that once you fix your pomodoro length you should not change it from one pomodoro to another; for example, it is not valid for the first pomodoro to be 30 minutes and the second 45 minutes. The length should be constant. Now you can allot a number of pomodoros to finish a particular task.

For example, you have to finish an article and you may need 2 hours for it. So you can allot 4 pomodoros (if each is 30 minutes), and after every 30 minutes you take a break of 3 to 5 minutes. During the break you should neither think about the work you are currently doing nor use that time for other tasks like calling someone; the break period is there purely to refresh yourself. Once four pomodoros (a session) are completed, you take a break of 15 to 30 minutes. An important rule is that a task should not exceed 5 to 7 pomodoros. If it does, split it so that it fits into fewer pomodoros. For example, to complete a report you may need 20 pomodoros, so split the work in your to-do list: 4 pomodoros for the introduction, 2 for the methods, 5 for the results and conclusion, and so on. This makes you more efficient and productive. Note down the number of pomodoros required to complete each task with an 'X' mark (or some other mark that you like). For example, your to-do list may look like this:
Jan 10, 2016
1. Write introduction RP: X X X AC: X X X
2. Check for references RP: X AC: X X
3. Finalize the intro part RP: X X AC: X X
Here, RP is the required or allotted number of pomodoros, whereas AC is the actual number of pomodoros taken to complete the task. This is very useful for analyzing your productivity at the end of the day.
A pomodoro should be completed without any interruption. If it is interrupted, the pomodoro is not valid and you have to start your timer from the beginning. If your task is about to finish in a minute and the timer rings, you still stop your work and take a break. If the work gets done within the first few minutes of a pomodoro, do not stop the timer early: finish the work, then go through it again and look for improvements until the timer rings. Remember, a pomodoro should always be completed. Otherwise, if you are quite sure the work is finished, you can abandon the pomodoro, but then you should not mark an X for it in the to-do list. So take a few minutes' break (again!) and move on to the next pomodoro.
Of course, there will be distractions, either internal or external. When you start a task you may suddenly remember something else you were supposed to do, such as sending a mail to your boss. In such a case, write it down under your to-do list, then go back to the work you started and focus on it. The point is that in real life, 99.99% of the time there is no such thing as an immediate urgency. The interruptions added under the to-do list can be considered during your long breaks. If they really seem valid (most of the time they are not!), you may add them to your to-do list and allot some pomodoros to them, or even postpone them to the next day or the weekend. If there are many small tasks that cannot fill a pomodoro on their own, you can sum them up and allot a single pomodoro for all of them.
In the other case, someone may approach you for a discussion or some collaborative work. In that scenario, use the Inform, Negotiate, Call Back strategy. When you are approached, inform them that you are a bit busy and, depending on the status of your pomodoro cycle, tell them you will catch them after 30 minutes or so. The important part is that after the cycle (during the long break) you call them back and, depending on the situation, either allot them a pomodoro or schedule some other time that week (or that year!).
At the end of the day, you should analyze your pomodoros and estimate your productivity. You may find a qualitative or quantitative estimation error, i.e. your allotment may not have been enough to complete the task, or you may have had extra time. By analyzing your chart you can make better estimates. It is always better to use simple tools like paper and pen for the list and the chart. For some people it is hard to note down everything; they may try to recall the day's work in the evening and evaluate their productivity from memory.
Of course, you may look for some digital assistance, like a spreadsheet or even a simple notepad. Some software like gnome-pomodoro is available for desktops and laptops (apt install gnome-shell-pomodoro in Ubuntu GNOME), and many mobile applications like Pomodoro Timer are also available. But in the end, the idea is that you should have a timetable, you should have a to-do list, and you should split your to-dos into pomodoros. You have to mark down each and every pomodoro (even the abandoned ones) and analyze things at the end of the day. This technique can also be used effectively by a pair, a group and even by organizations. For more details look here.
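If all you want is a bare timer, a few lines of Python will do. The sketch below is only an illustration of the work/break cycle described above (the task name and the break lengths are my own placeholders), not a reference to any of the apps just mentioned; the sleep calls make it run in real time.

import time

WORK, SHORT_BREAK, LONG_BREAK = 25 * 60, 5 * 60, 20 * 60   # lengths in seconds

def countdown(label, seconds):
    print(f"{label}: {seconds // 60} min, starting now")
    time.sleep(seconds)        # a real app would tick, notify and ring here
    print(f"{label} finished")

def session(task="write introduction"):
    # One session = four pomodoros with short breaks in between, then a long break.
    for n in range(1, 5):
        countdown(f"Pomodoro {n} ({task})", WORK)
        countdown("Break", SHORT_BREAK if n < 4 else LONG_BREAK)

session()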
### Rules and Glossary
1. A Pomodoro Consists of 25 minutes Plus a Five-Minute Break.
2. After Every Four Pomodoros Comes a 15–30 Minute Break.
3. The Pomodoro Is Indivisible. There are no half or quarter Pomodoros.
4. If a Pomodoro Begins, It Has to Ring: If a Pomodoro is interrupted definitively — i.e. the interruption isn’t handled — it’s considered void, never begun, and it can’t be recorded with an X.
5. If an activity is completed once a Pomodoro has already begun, continue reviewing the same activity until the Pomodoro rings.
6. Protect the Pomodoro. Inform effectively, negotiate quickly to reschedule the interruption, call back the person who interrupted you as agreed.
7. If It Lasts More Than 5–7 Pomodoros, Break It Down. Complex activities should be divided into several activities.
8. If It Lasts Less Than One Pomodoro, Add It Up. Simple tasks can be combined.
9. Results Are Achieved Pomodoro after Pomodoro. If not, The Next Pomodoro Will Go Better.
## Best of 2016
At the end of the year, every magazine, website, blog and newspaper publishes a list of the best things of that year. So, here are my bests of 2016. The choices are based not on expertise but on my personal preference.
Best photo of the year: Anyone who has visited Thiruvanaikaval surely knows her. She is Akila, and this is my favourite picture of her, captured by Ethirajan.
Akila: Have a seat and look around
Best research article of the year: Optofluidics of Plants by Demetri Psaltis et al. (DOI: 10.1063/1.4947228). This paper explores optofluidic mechanisms that can be used to understand how plants function. Apart from its technical significance, the authors explain things in such a manner that even a graduate student with only the slightest idea of optics or plant physiology can understand most of it.
Best article of the year: Who was Ramanujan? URL An excellent article depicting the life and times of the great mathematician Srinivasa Ramanujan. In addition, the photographs of the original manuscripts make this article a fine first introduction for anyone who wishes to know about Ramanujan.
Best news media of the year: Modern media behave as news makers rather than news breakers. In such a scenario, unbiased opinions are essential to uncover the truths underneath the facts, and I feel the Wire serves that purpose best.
Best cartoon of the year: Cartoon by Surendarnath in The Hindu.
Best meme of the year: Source Unknown. Shared in Facebook.
Best social media of the year: Created by the founders of Twitter, Medium offers blogging solutions. Even though it is similar to WordPress and Blogger, the UI is awesome, with features like time-to-read and highlighting and sharing options.
Best moment of the year: Dhoni’s ultra fast stumping in ICC T20 match (March, 2016) vs Bangladesh.
Star of the year: P. V. Sindhu: The fourth Indian, and the first Indian woman, to clinch a silver at the Olympics.
Best book of the year: From Fishing Hamlet to Red Planet: India's Space Journey. Even though there is a lot of technical detail, which may tire a layman, the book portrays the stories of the hurdles and successes of an institution that grew with modern India.
Best fiction of the year: Farthest Field: An Indian Story of the Second World War by Raghu Karnad. This is the author's first book; it narrates the lost epic of India's war, in which the largest volunteer army in history fought for the British Empire even as its countrymen fought to be free of it. It carries us from Madras to Peshawar and from Egypt to Burma, unfolding the saga of a young family amazed by their swiftly changing world and swept up in its violence.
Best short story of the year: Set in Buenos Aires, the story 'Disappearances' by KJ Orr excavates the mind of a retired plastic surgeon as he observes a particular waitress in a cafe. The story was selected as the best national short story by the BBC (the author resides in London).
Best literature magazine of the year: Even though there are many literary magazines in Tamil, Thadam from the Vikatan group joined the list as a special one. Being launched by a big media house gave it a great opening, but it has earned trust by bringing to the limelight a variety of things that are hardly noticed by the mass media, such as painting, sculpture, poetry, technical overviews of cinema, Dalit literature and so on. This makes Thadam the best among the others.
Best Tamil movie of the year: Andavan Kattalai by 'Kaka Muttai' fame Manikandan. In Tamil cinema the hero is usually depicted as a middle-class man, yet he is masculine enough to challenge a state minister, and the heroine must fall in love with him even if he is a beggar. In such a scenario, telling the story of a real common man through real-life incidents deserves applause. Special congratulations to the director for showing the lead female character as bold and intellectual, which is very rare in Indian cinema.
Best Hollywood movie of the year: Don't Breathe by Fede Álvarez. With just a few characters and an excellent screenplay, the film keeps the viewer on the edge of the seat.
Best movie of the year: Kammatipaadam by Rajeev Ravi. The film centres on Kammatipaadam, a slum locality in Ernakulam, Kerala. It focuses on how the Dalit community was forced to give up its lands to real-estate mafias and how the modern urbanization of Kochi took place over the plight of the Dalits.
Best song of the year: Ae zindagi in Dear Zindagi movie composed by Amit Trivedi and Ilayaraja. Sung by Arijit Singh.
Best game app of the year: Atomas: Requiring a little chemistry knowledge, you have to build new atoms by combining the available atoms according to certain rules. Hours of entertainment with very little demand on phone resources make it awesome.
Best productivity app of the year: Google Keep: Simple sticky notes with great features like location-based reminders, handwritten notes, etc. Since it is also available as a Chrome extension, you can keep and view notes across your devices.
Best travel app of the year: Trainman: The app was initially released, and is mostly known, for PNR prediction. But it has more interesting features, such as automatic upcoming-train reminders, running-status updates and a train speed calculator.
Best e-reader app of the year: eReader Prestigio: With support for almost all text formats, this app is a complete solution for e-book reading on Android. It can read your text aloud (Indic language support is not available!), adjust the brightness to an optimum level and offers options like font style and size, text and background colour, etc.
Best tools app of the year: CamScanner: With an Android phone, conventional scanners are no longer required. This app provides professional-level scanning of documents with PDF and image options.
Best photography app of the year: Google Snapseed: Professional photo-editing tools and special filters take this app to the top position among similar apps.
Best messenger app of the year: Telegram: Probably the only such app available as open source with strong security options. The app is very fast, much faster than WhatsApp. A unique feature is that it is available for all platforms, so you can use the app (simultaneously) on all your devices, ranging from an Apple iPhone to a P4 desktop system. The only limitation is that it does not have a calling facility, but the variant of the same app, 'Voicegram', can be used for that purpose.
Best fitness app of the year: Pedometer: Among many huge fitness apps, this simple, small and smart app just measures your footsteps over a particular time and displays the number of calories burned, walking speed and distance.
Best multimedia app of the year: VLC: This is the default app for many on their laptops and desktops for multimedia needs. For Android, VLC moved from beta to a stable version this year. As usual the app is awesome: you can download subtitles on the go or watch YouTube videos. An extra feature is that it can play songs or videos directly from directories!
Best smartphone of the year: Moto G 4th Gen: I’m a big fan of Moto. So this may be biased. But with stock OS and excellent finishing (to know the level of finishing just check the volume and power keys of the device!) Moto G4 rocks.
Best laptop of the year: Lenovo Yoga (i5): This super-slim laptop, with excellent performance and specs on an affordable budget, tops the list.
Best OS of the year: Apricity OS: Derived from Arch Linux, the OS offers great speed and an excellent Linux experience.
That's the end of the list. Any suggestions or opinions? Please comment below.
## A Tribute To A Magician
Martin Gardner is well known for his famous 'Mathematical Games' articles in 'Scientific American' magazine, a column he contributed for almost twenty years. It seems the intellectual youth of the period between the late 50s and early 80s enjoyed his puzzles greatly.
For me, it was the writer Sujatha who introduced Martin Gardner, in his famous 'Katrathum Petrathum' series in Vikatan. In one of the articles, if I remember correctly, while telling the 'Lady or the Tiger' story (I hope everyone knows that story and its mathematical significance!) he wrote about Gardner and his famous 'Mathematical Games'. Later, I found his 'My Best Mathematical and Logic Puzzles' in the central library of Karur. But the book was in the reference section, so I had to copy down some problems every day and try to solve them. Obviously, I did not even understand most of the puzzles and gave up within a week.
Then, during my undergraduate days, I went back to Gardner once again, when one of my maths professors talked about recreational mathematics during a lecture. This time I was capable of solving many of his problems and got addicted to such puzzles. This practice helped me fairly well while attending a bank exam (I cleared that exam, but luckily or unluckily I opted to do a Ph.D. rather than become a teller!).
Then puzzles and Gardner vanished into thin air for many years. A few days ago, I don't know how, but while searching for some random stuff I once again came across Gardner and found an interesting book, 'Martin Gardner in the 21st Century' by the MAA (Mathematical Association of America).
Martin Gardner : The Puzzle Man (1914–2010)
Even though it is a tribute edition, the book discusses the solutions of some of his famous problems rather than his contributions or his biography. Some interesting articles by Gardner are also included. A non-mathematician may find the text prosaic, yet it is a good book to give a try; at the very least, one gets to wonder at the usability and applicability of maths in different arenas.
To draw your attention towards the book, let me tell you two of the tricks discussed in it (demonstrated!). The first is a coin trick, named the three penny trick in the text. Consider three coins placed in a row. You are blindfolded and asked to leave the coins either all heads or all tails. The only condition you are told is that the row contains at least one head and one tail. The idea is to flip the left coin and ask the spectator whether all three now match; if not, flip the middle coin and ask again; if they still do not match, flip the left coin once more. Now the coins are aligned as requested, whatever the initial position was. It may seem simple, but it becomes interesting when you replace the coins with cups and pose the question with the condition of at least three flips to achieve all-up or all-down.
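As a quick sanity check of that flipping procedure, a few lines of Python (my own sketch, not something from the book) can brute-force every admissible starting row:

from itertools import product

def run_trick(coins):
    # Blindfolded procedure: flip left, ask; flip middle, ask; flip left again.
    coins = list(coins)
    for idx in (0, 1, 0):
        coins[idx] ^= 1                    # flip that coin (1 = head, 0 = tail)
        if len(set(coins)) == 1:           # spectator: "they all match now"
            break
    return coins

# every row of three coins with at least one head and one tail
starts = [s for s in product((0, 1), repeat=3) if 0 < sum(s) < 3]
assert all(len(set(run_trick(s))) == 1 for s in starts)
print("the trick works for all", len(starts), "starting rows")

All six mixed starting rows end up uniform within three flips.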
The second is a card trick. Take 15 cards and arrange them in piles, say 5 piles each holding a random number of cards (for example 4, 1, 1, 5, 4). Now remove one card from each pile and place the removed cards together as a new pile. If the piles are arranged in a row, the new pile can be placed anywhere in the row: front, middle or end. Repeat the process; after some number of iterations n you always end up with the arrangement 5, 4, 3, 2, 1. Depending on the initial arrangement n may vary, but you always end up with that order!
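This pile operation is the one usually known as Bulgarian solitaire, and a short simulation (again my own sketch, reusing the starting piles from the example above) shows the convergence to 5-4-3-2-1:

def step(piles):
    # Take one card from every pile, drop the piles that become empty,
    # and add the collected cards as a new pile (its position does not matter).
    new_pile = len(piles)
    piles = [p - 1 for p in piles if p > 1]
    piles.append(new_pile)
    return sorted(piles, reverse=True)

piles = sorted([4, 1, 1, 5, 4], reverse=True)   # any piles summing to 15 will do
steps = 0
while piles != [5, 4, 3, 2, 1]:
    piles = step(piles)
    steps += 1
print(f"reached 5-4-3-2-1 after {steps} steps")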
Some more interesting problems, such as the Courier problem, RATWYT and the Monty Hall problem, are discussed in the book. (There are three doors; opening one particular door leads you to a car, the other two lead to a goat. You are allowed to choose one door, say A. Then one of the other two doors is opened, but never the door with the car. After this you have the option to switch from your selection. Even though it looks like a 50-50 problem, it actually is not; there are many famous solutions to this problem.)
Gardner did not write only about maths and maths articles. He wrote two novels, many books on magic and many short stories. Two such short stories are also included in the book. The last one (both in the book and by Gardner himself), 'Superstrings and Thelma', is enough to showcase Gardner's skill at captivating the reader.
The only book review by Gardner is also included in this collection. The book's name is 'PopCo', and in it the grandfather of the lead character was influenced by Gardner. It is a modest and appreciative review with a summary of the novel (he does not disclose the climax). He complains about only one thing: the use of the 'f' word throughout the text!
With a lot of maths, the book may draw somebody towards the subject. Others, especially those interested in early bird numbers, flexagons, or how to find the cube root of 52367419803 (or any such number) in seconds, may also give it a try.
## Tamil Poetry
I have many unresolved doubts about the contemporary new poetry of Tamil, a language with a tradition of roughly two thousand years. Foremost among them: are they really poems at all?
A statistic says that in mother Tamil Nadu, roughly seven crore people write poems, counting holders of fake ration cards and settlers from Bihar. Going by the media in which they publish, they can be divided into three kinds: those who write in mass-market magazines such as Ananda Vikatan, Kumudam and the Sunday supplements of the dailies; those who write in literary magazines past, forthcoming, defunct or soon to be defunct, such as Thadam, Uyirmmai and Kaalachuvadu, which are read by a total of ten people; and, besides these, a secret movement, a crowd that writes for itself and, secretly, without even its mother or spouse knowing, reads and enjoys its own work. We need not fear the secret movement. Since even the editors of the literary magazines do not read the poems that appear in them, there is no trouble from that quarter either. But because of a few who serve literature by writing poems in the mass-market magazines, an epidemic is spreading fast. ('What he has written is exactly what occurred to me yesterday while I was in the toilet... next time I really must send mine in as it is!')
Sujatha once wrote that clever lines broken into pieces do not make a poem. But most of what is written as poetry today is mere wordplay. Here is a poem published in this week's issue of Vikatan (the 90th anniversary special): 'I went to a matrimonial information centre. Many women wanted someone like Caesar, like Napoleon. Yesterday I saw a Napoleon and a Caesar at the street corner. Divorced, they were all alone.' As soon as I finished reading it, I asked myself, 'A poem?? Is this a poem??', banged my head on the wall twice (what else to do?) and moved on to the next page. I do not know when the publishing of readers' poems began. In the era B.A. (Before Android), the moment a poem dawned(!), one would buy a 25-paise postcard, write it down and send it off to some magazine. If the poem appeared, glory to Tamil; if it did not, a historical blunder. The underlying sentiment was that the 25 paise was, at any rate, one's own little service to Tamil.
After email and WhatsApp arrived, the only thing that stopped was this service tax paid to Tamil. Word is that editors now decide within the app itself and send things straight to print. Essays, stories and novels do not seem to get written in anywhere near the numbers that poems do. The reason is simple: for the others you need at least a little substance, and to do them well you need experience, knowledge and patience. None of these is needed to write a poem, which is why poetry flourishes in lines like 'Moon over the terrace, Kala on the balcony:
here comes the festival of love.'
My view is that the finest poems in Tamil have already been written. So if your hands itch to write a poem, read the Kamba Ramayanam or Kapilar first. If the itch persists even after reading them, then by all means write. Recently, writing about women poets in Thadam (September '16), Jeyamohan remarked, 'They (women) do not read seriously. Hence there is no depth in their writing.' As an ardent admirer of women poets from Andal to today's Thamizhnathi and Parameswari, I do not agree with this view. Instead, the point can be made generally: to create good poetry, one needs good reading. It was not Andal's devotion alone that raised her, from within the rigid patriarchal framework of the 9th century, to stand as one of the twelve Alvars; it was also that, as Periyalvar's daughter, she had learnt grammar and excelled in it as he had.
P.S. Having pointed out the not-so-good poems, how can I stop without pointing out the good ones? Three, as samples.
Kapilar – Kurunthokai
சுனைப்பூ குற்று தொடலை தைஇவனக்கிளிகடியும் மாக்கண் பேதைதானறிந்தனளோ இலளெ பள்ளியானையின் உயிர்த்துஎன் உள்ளம் பின்னும் தான் உழையதுவே…
(Plucking flowers by the forest pool, stringing a garland and wearing it, chasing away parrots in the woods, this large-eyed beauty will never know that, sighing like a sleeping elephant, my heart still follows her memory...)
Thamizhnathi – Ananda Vikatan
Into the book room adjoining the terrace, dry leaves somehow find their way... I looked up from my reading; whimpering sweetly, chasing one another, they rustled about and hid under the cot; a few more of them I leave alone, not having the heart to sweep them out. In this cruel summer, when the sun rages to suck up every bit of moisture, let the tree stay a while inside this city house, in whatever form it can.
Devathachan –
In the open expanse a shepherd is grazing the distant clouds, the vehicles on the road, and a few goats.
## Free and Open Source Grammar Checker
Many typesetting packages offer spell check as a default option, and this is true even of open-source packages like LibreOffice. But anyone looking for a style and grammar checker is invariably redirected to numerous paid options.
So one Daniel Naber of Bielefeld University took this problem up for his thesis and developed an open-source style and grammar checker, which is currently available to everyone at http://www.languagetool.org. The site offers an online checker and browser tools such as Firefox and Chrome extensions.
In addition, for offline use, a LibreOffice and OpenOffice extension is also available. By installing the extension into your Writer, you get curly blue lines under text that contains errors. As with any spell checker, clicking the marked word or sentence gives you suggestions for that error.
Here I briefly describe the installation of the LibreOffice extension on Ubuntu. For standalone usage, consult the website.
LanguageTool requires Java to run. If you are using Ubuntu 16.04, you probably already have a Java 8 environment. Those using Ubuntu 14.04 or earlier are supposed to update (or install, in case you do not have it) the Java version. To check the Java version, type the following in a terminal:
sudo dpkg --list | grep -i jdk
The output should show something like openjdk-8-*. If you do not get anything, you have to install the environment on your machine. The commands are as follows:
sudo add-apt-repository ppa:openjdk-r/ppa
sudo apt-get update
sudo apt-get install openjdk-8-jdk
To verify the proper installation once again try the command
sudo dpkg --list | grep -i jdk
In case, if you have some older versions in the system, configure Java to make sure you’re using the latest.
sudo update-alternatives --config java
sudo update-alternatives --config javac
Sometimes, there is a possibility of errors due to the older versions. In such a case, it is better to remove the Java completely from the system prior to the installation of latest version. To remove,
sudo apt-get purge icedtea-* openjdk-*
sudo apt-get autoremove
Once again, to make sure there is no Java left in the system, issue the command sudo dpkg --list | grep -i jdk. Then follow the installation steps mentioned above.
Now go to the http://www.languagetool.org site and download the LibreOffice extension file (.oxt). The file can be installed either by double-clicking it or from the LibreOffice extension manager. For the second option, open LibreOffice Writer and go to Tools >> Extension Manager >> Add; by choosing the languagetools.oxt file, it gets installed. In case there is an error (for Ubuntu only), install the following:
sudo apt-get install libreoffice-java-common
You may still need to configure something if the extension does not work. Go to Tools >> Options >> Language Settings >> Writing Aids >> Edit and check LanguageTool. To verify the installation, type (as suggested by the developer!) Feel tree to call. If the extension is working, it will show a blue curly line under tree and suggest you change it to free!
PS: Anyone really interested in learning what is inside the black box can look into the developer's thesis: A Rule-Based Style and Grammar Checker, and those who wish to contribute to further development can look at the source in the GitHub repository: LanguageTool.
## INO — Myths and Truths
Once again the India-based Neutrino Observatory (INO) has begun to appear in newspaper headlines. As a physicist, I hope I have a slightly better understanding than the average layman; what follows is the result of that confidence.
So, long long ago, in the 60s, India pioneered neutrino research. The first research article on muon flux density was published from results collected in the Kolar gold mines. Those details are explained very well by our former president A. P. J. Abdul Kalam in one of his articles in The Hindu (dated June 17, 2015). Once Kolar was shut down, researchers had to leave the site and began looking to establish a research observatory somewhere else. This began in the 80s; the proposal was ready around the new millennium and was sanctioned during the last UPA term.
Initially, the Nilgiris was the targeted place, but the then Minister of Environment, Jairam Ramesh, rejected that proposal on the ground that the site falls under the Mudumalai Tiger Reserve. That is how the Pottipuram village of Theni came into the limelight. Some people wonder: why not the Himalayas? Why particularly Tamil Nadu? The answer is geologically simple. The Western Ghats are much older and much more stable than the Himalayas, and a mountain made of a single charnockite rock mass is preferable; that is why the choice is Pottipuram.
The first voices against the observatory were raised not from TN but from Kerala. The then chief minister Achuthanandan expressed his concerns about environmental pollution from the construction, and he feared that the explosives used to break the mountain might damage the Mullai-Periyar dam, situated about 100 km from Pottipuram. Even though his concerns about pollution should be addressed properly, the effect of the explosives on the dam would be highly negligible. Idukki, the district where the dam is situated, itself runs many hydroelectric projects with plenty of tunnels! Also, I do not think the government of TN would risk a project that could damage that particular dam; over the years TN has struggled to keep the dam, and I hope it will not take a risk on such a sensitive issue.
After Achuthanandan, Vaiko of the MDMK party and G. Sundarajan of 'Poovulagin Nanbargal' took the issue to the court and the National Green Tribunal (NGT). Even though both are highly reputed activists who have stood up for many issues of TN, this time, I think, some misconceptions have misled them.
The following points are the accusations raised by the INO opponents:
(1) It will induce radiation effects.
(2) It will pollute the area; since they are going to break the mountain, the entire village will be demolished.
(3) It will be used to store radioactive wastes.
(4) It will be used to monitor some rogue nations' nuclear weapons; since there is a collaboration with Fermilab, India is doing things in favour of the USA.
Selected INO region in Pottipatti, Theni
Let us look at these points one by one. Firstly, neutrinos are everywhere; in the time you take to read this, crores and crores of neutrinos pass through you. The sun, the stars and the galaxies produce enormous numbers of neutrinos, and they are present everywhere in the universe. Since they interact extremely weakly, we need a specially isolated place to capture them. Beyond that, there is no possibility of radiation from the observatory.
Secondly, for the observatory only a tunnel about 2 metres in diameter is going to be carved, and with modern state-of-the-art instruments it will not create much pollution in the area. Recently a TNEB project was established in that particular area; experts say pollution from the observatory would be much less than from that TNEB project. Locals also fear that thousands of gallons of water will be drawn from their sources, but TIFR assures that only a limited amount of water, which will not affect people's usual usage, will be utilized for the observatory.
Thirdly, there is no need to store radioactive material in such an observatory, built at such expense, in a place like a lone mountain range. When there were research activities at Kolar, people thought that the DAE was hiding nuclear waste there; it is now clear that no such activity ever happened. The same applies to the Pottipuram observatory.
Lastly, there are natural and artificial neutrinos. Artificial neutrinos are produced during nuclear fission in reactors. By capturing those neutrinos we could measure the amount of plutonium produced during the uranium fission reaction. (Uranium is used in nuclear reactors as fuel; by enriching the by-product plutonium from the reactor, atom bombs can be made.) It has been proposed (and some experimental evidence is available) that by setting up a compact, dynamic neutrino observatory we could monitor the amount of plutonium produced in an area of interest (another point is that what is actually measured for this purpose are anti-neutrinos).
But the point to be noted here is that for such a purpose you do not need to dig a cave in a remote village. With all these doubts and questions, what then is the importance of such an observatory? The answer: to know more about a fundamental particle called the neutrino!!!
Initially people thought neutrinos (there are three types of them) were massless, but recent studies show that they could have some mass. Measuring such things would greatly help us understand the properties of the particle, which in turn will enhance our understanding of the universe. So it is, basically, basic science. One reason for the slowness of this project is that not many people would benefit from it directly: a few locals may get jobs at the observatory, and there is only a feeble possibility of development in the area, so people wonder why this much spending is essential. The answer: if someone had asked what the use of general relativity and quantum mechanics was (as basic science) in the days of their development, and the research had been restricted, you probably could not use GPS on your smartphone now!
For more details I recommend the following links.
1. Why India’s Most Sophisticated Science Experiment Languishes Between a Rock and a Hard Place? By Nithyanand Rao and Virat Markandeya on 06/04/2016 in The Wire
2. Going all out for neutrino research By A. P. J. Abdul Kalam and Srijan Pal Singh on June 17, 2015 in The Hindu
3. Neutrino detectors could help detect nuclear weapons in Science Daily on August 12, 2014
For technical documents you could visit here.
1. Status of India based Neutrino Observatory by Naba K Mondal
2. INO — FAQ’s
3. INO — Project Report. Vol. I
4. Jiangmen Underground Neutrino Observatory (JUNO) Project, China
Even though LaTeX offers a wide range of bibliography options, supervisors, editors and publishers are never satisfied with the defaults; here is the way to meet their expectations.
If you are a LaTeX user, then whenever you are ready to submit a paper you will probably come across a request for a slight modification of your bibliography style. Something like:
Why don’t you emphasize your title? You may add a semicolon between author names! and so on…
It becomes really hectic when your journal has a specific reference style which does not fall into any category of the default bibliography styles. One way to solve the issue is to type the references directly as a bibliography list, like this:
\begin{thebibliography}{}
\bibitem{Manaa2009}Manaa, Hacene, Abdullah Al Mulla, Saad Makhseed, Moyyad Al-sawah, and Jacob Samuel, Fluorescence and nonlinear optical properties of non-aggregating hexadeca-substituted phthalocyanine, Optical Materials, 32, 108-114 (2009)
\end{thebibliography}
But this is always not so easy. Especially, when you’re supposed to resubmit the manuscript to some other journal, once again you have to modify every thing. So, the better way to play along is to generate your own style file (.bst) with specific contents. Let me to begin from the scratch. When you include a bibliography with bibtex, you typically have a structure like this:
% File main.tex
\documentclass{article}
\begin{document}
...
\bibliographystyle{plain}
\bibliography{list}
\end{document}
Here list indicates the file list.bib, which contains the bibliography in BibTeX format, and plain indicates the bibliography style to which the input is to be formatted. Most of the time you can choose your style from the defaults, which are listed here.
To create the custom bibliography style file, open a terminal and type:
latex makebst
This program will ask you questions and build a custom bibliography style. It's a lot of questions; if you're unsure, just press enter and it will select the default values. At the end it will ask whether you want to proceed to compile your bst file. Once you agree, your custom file will be created. For minor corrections you need not run the entire program: the (.dbj) file generated along with the (.bst) file can be edited instead, and running latex mystyle.dbj again regenerates the altered (.bst) file. The complete log file designed to fit the requirements of the journal "Applied Organometallic Chemistry" can be obtained here.
To apply your new style (let’s assume you assigned the filename mystyle.bst), issue the following commands to install the style file locally:
mkdir -p ~/texmf/bibtex/bst cp mystyle.bst ~/texmf/bibtex/bst/ texhash ~/texmf # Make TeX aware of what you just did
Alternatively, placing the file in your working directory will also work. Once you have finished all this, apply the bibliography style in the main.tex file.
\bibliographystyle{mystyle}
Submit, grab a cup of coffee, and relax.
Thanks to: Gabriele Lanaro
## Windows Makeup Story
This is how to make up with what you hate so that it performs like what you love! It's not a hate story but a make-up story!
First let me swear that I'm a diehard Debian user and will never have an affair with Windows. But sometimes, or most of the time (if your guide is not familiar with Linux), there is no option but to use a cracked Windows. So this short note is for those who (like me) expect some Linux flavor when dealing with Windows. Some of these features are now available by default in Windows 10; this one is just for Win7 users. The first and foremost thing I don't like in Windows is its welcome screen. In Linux (especially in Ubuntu), you can easily customize your login screen. I personally prefer my wallpaper as the welcome screen. So, if your Pictures folder or your picture has read+write+executable (chmod 777) permissions, your wallpaper can be your login background. Windows does not offer this feature by default, but by altering a system configuration file you can do it. This link provides those instructions. Alternatively, third-party tools such as Windows Log On Background Changer (I prefer this) can be used to simply change the background of the welcome screen.
The second and most expected Linux tweak for Windows is "open folders as tabs". Of course Windows does not offer this by default, and you're supposed to look for some third-party tools. Two famous tools are Clover and QTTabbar. QTTabbar is open source and in active development, but for me Clover suits well. (One thing, and the only thing, I don't like in Clover is its icon. It's like a green flower. Personally I don't like it!)
Tabs in Windows, just like in Ubuntu
One more tweak I really like in Linux is "open terminal here". Surprisingly, without much effort, by shift+right-clicking you get a context menu with "Open command window here".
Open Command Window here!
Then there comes our LaTeX. Just as I don't like Windows, I don't like MiKTeX either. I prefer typing in gedit and compiling with TeX Live in the terminal. Gedit has a Windows executable and it works fine, except that when you open a second file, it opens as a new window instead of a new tab. As in Linux, installing the complete TeX Live package is simple in Windows too. Download the image file (.iso) from here and mount it using any of the following: WinCDEmu, daemon-tools, or Magic ISO. By running the batch file install-tl-windows it can be installed on your system. And as in Linux, it can be run from the command window (terminal) itself.
LaTeX in windows command prompt
One more thing I wish Windows had is an "always on top" option. Like all the above, using a third-party tool called "Always on Top" we can achieve that goal. You have to run the script first, and then press ctrl+space on an active window to keep it always on top. Pressing ctrl+space again will revert it.
## Patna.. Patna..
Had an adventurous day and explored Patna in depths. Wish to motivate some others to explore the city with some facts.
When talking about Patna, it's better to start from its roads. The roads were constructed during the period of King Ashoka and are still in use. If you are lucky, some hundreds-of-years-old road stones may be thrown at you by the kids playing along the road. The drivers are extremely friendly, and whenever two vehicles from opposite directions meet, they have a chit-chat for a while. So, unbelievably, you can cover 38 km in just 3 hours.
You can start to explore Patna from its railway junction. From there many buses are available to reach a famous place called Gandhi Maidan. You are expected to spend your time fruitfully even while you travel, so they freely teach some "bowing" poses to make you aware of how important self-control is. Sadly, they don't consider people shorter than 4.5 feet as grown up, and exclude them from this practice.
Once you reach Gandhi Maidan you can start to explore the place in any direction. If you still have stamina even after the bus exercises, the better place to visit is Golghar, which was built by the British for storage purposes. Once you step up to the top of the building via its lengthy stairs, it's hard to control the desire to reach the ground by jumping from there. Sometimes, if more than 5 people are there, you may not get a space to execute your jumping experiment. At such times they have another set of special stairs to get down.
Patna Museum : Entrance
One of the important places you should not miss on a Patna visit is the Basantha Vihar restaurant. It's just near the Patna museum, but still you can reach it only if you scored at least 90+ in geography during high school. Those who scored below 90 can follow the guidelines below: take a left from the Patna police station, cross the road, you will see a big plaza, take a right and get into it. Go straight, then take a left, go straight, again left, then go straight; now if you have no problems with your eyes you can clearly see the Patna police station again (unable to see? I hear some good eye specialists are available here!). Don't lose heart; you have almost reached the destination. Take a left again and go straight. Ahh… you should be excited. Once you step in, water will be provided within 15 minutes of your entry and your order will be taken in another 15 minutes. Then pickle- and salt-like items will be served within the next 30 minutes, and towels and tissues within the next 15 minutes. With such heavy hospitality your hunger should have already vanished. If you are still hungry, you can check the prices in the menu card once more to get rid of it.
From Basantha Vihar you should go to Gandhi Maidan again to continue your exploration. It's very, very near our restaurant, and just by walking you can reach it within an hour. From there, to visit the Ganga Ghat, there is a shortcut running along the sides of the maidan. If you are tired of the walk, you will be refreshed by the sand breeze freely available in the maidan. Once you reach the other side, you have to pick an auto, preferably with an old driver. He will teach you the meaning of Buddhism and the effect of karma. Since the Ganga Ghat is a holy place, such a lesson is mandatory to enjoy the atmosphere there. Again, a 20-minute walk will take you to the holy river.
The return from the Ganga Ghat should be by bus, so that you can understand the basics of crystallography. They beautifully align people inside the bus, even with their water pots and rice bags. A bus with the capacity to carry 25 humans is heavily doped and converted to carry four times the actual number. If you are not happy with the 3D packing in the bus, you are supposed to walk again for 25 minutes to reach our old destination, Gandhi Maidan. Apart from the jokes (or reality!!), the Patna museum is one of the important places you shouldn't miss in Patna. Again, in a British-era building, some wonders dating back up to the 1st century AD are beautifully kept here. Many sections are dedicated to different eras with different themes. (There is even a section for Rahul Sankrityayan's collections!!) Some astonishing Bihar paintings and some Ajanta-Ellora paintings are on the first floor. Actually it's a day out. But a day is not enough to explore either a city like Patna or even its single museum.
|
|
Synopsis: Out of many atoms, one photon
A gas of excited-state atoms could perform as a single-photon detector.
Devices that count discrete quanta of light could be the building blocks of sophisticated quantum circuits. Most such counters based on single atoms register a photon only half the time, but in a paper appearing in Physical Review Letters, Jens Honer at the University of Stuttgart, Germany, and his colleagues propose a theoretical multi-atom system that could do the job with a nearly 100% success rate.
Honer et al.’s idea takes advantage of interactions between Rydberg atoms confined to a small trap. In a Rydberg atom, at least one valence electron is in a highly excited state, circling the nucleus with a large radius that mimics a classical orbit. These excited state atoms interact strongly with one another, such that one Rydberg atom in a trap can block other atoms from being excited—an effect called Rydberg blockade.
Honer et al. consider $N$ atoms in a trap, which behave as a sort of superatom. The superatom has $N$ excited states, with one Rydberg excitation collectively shared among the $N$ atoms. Only one of these excited states interacts with light like a two-level system, while $N-1$ states remain dark. By introducing a second light field that causes dephasing, Honer et al. show that with large fidelity the superatom ends up in one of these $N-1$ dark states, and consequently greatly enhances the chance of photon absorption. At the same time the Rydberg blockade prevents the absorption of multiple photons within one such atom trap.
A series of these atom trap devices could, according to Honer et al., be used to count the photons in a few-photon light stream. – Jessica Thomas
|
|
SPMSY {DLMtool} R Documentation
## Catch trend Surplus Production MSY MP
### Description
An MP that uses the Martell and Froese (2012) method for estimating MSY to determine the OFL. Since their approach estimates stock trajectories based on catches and a rule for the intrinsic rate of increase, it also returns depletion. Given that their surplus production model predicts K, r and depletion, it is straightforward to calculate the OFL based on the Schaefer productivity curve.
### Usage
SPMSY(x, Data, reps = 100, plot = FALSE)
### Arguments
x: A position in the data object
Data: A data object
reps: The number of stochastic samples of the MP recommendation(s)
plot: Logical. Show the plot?
### Details
The TAC is calculated as:
\textrm{TAC} = D K \frac{r}{2}
where D is depletion, K is unfished biomass, and r is the intrinsic rate of increase, all estimated internally by the method based on trends in the catch data and life-history information.
Requires the assumption that catch is proportional to abundance, and a catch time-series from the beginning of exploitation.
Occasionally the rule that limits r and K ranges does not allow r-K pairs to be found that lead to the depletion inferred by the catch trajectories. In this case this method widens the search.
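As a rough illustration of the Schaefer-curve relation described above, the following snippet plugs assumed values of depletion, unfished biomass and r into TAC = D K r/2; the numbers are placeholders for illustration, not output of SPMSY.
# Illustrative only: TAC from the Schaefer productivity curve, TAC = D * K * r / 2
D <- 0.4      # assumed depletion (current biomass / unfished biomass)
K <- 10000    # assumed unfished biomass
r <- 0.6      # assumed intrinsic rate of increase
TAC <- D * K * r / 2
TAC           # 1200 under these made-up inputs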
### Value
An object of class Rec-class with the TAC slot populated with a numeric vector of length reps
### Required Data
See Data-class for information on the Data object
SPMSY: Cat, L50, MaxAge, vbK, vbLinf, vbt0
### Rendered Equations
See Online Documentation for correctly rendered equations
### Author(s)
T. Carruthers
### References
Martell, S. and Froese, R. 2012. A simple method for estimating MSY from catch and resilience. Fish and Fisheries. DOI: 10.1111/j.1467-2979.2012.00485.x
### See Also
Other Surplus production MPs: Fadapt(), Rcontrol(), SPSRA(), SPmod(), SPslope()
### Examples
SPMSY(1, Data=MSEtool::SimulatedData, plot=TRUE)
|
|
[npl] / trunk / NationalProblemLibrary / Rochester / setSeries6CompTests / review.pg
# View of /trunk/NationalProblemLibrary/Rochester/setSeries6CompTests/review.pg
Sun Mar 26 04:58:01 2006 UTC (7 years, 2 months ago) by jj
File size: 3600 byte(s)
Initial import
#DESCRIPTION
#Review of Chapter 8 and Polar Coordinates
#ENDDESCRIPTION

#Keywords('Review')
DOCUMENT();
loadMacros(
"PG.pl",
"PGbasicmacros.pl",
"PGchoicemacros.pl",
"PGauxiliaryFunctions.pl"
);

#TEXT(beginproblem());

TEXT(EV2(<<EOT));

$BR$BR $BR
Here is a short review of numerical series which you may find helpful.
$BR
REVIEW OF NUMERICAL SERIES
$BR
SEQUENCES $BR
A sequence is a list of real numbers. It is called convergent if it
has a limit. An increasing sequence has a limit when it has an upper bound.

SERIES $BR
(Geometric series, rational numbers as decimals, harmonic series, divergence test)
$BR
Given numbers forming a sequence $$a_1, a_2, \ldots,$$
let us define the nth partial sum as the sum of the first n of them,
$$s_n = a_1 + \ldots + a_n.$$ $BR
The SERIES is convergent if the SEQUENCE $$s_1, s_2, s_3, \ldots$$ is.
In other words it converges if the partial sums of the series approach a limit. $BR
A necessary condition for the convergence of this SERIES is that the
a's have limit 0. If this fails, the series diverges. $BR
The harmonic series 1+(1/2)+(1/3)+...
diverges. $BR This illustrates that the terms $$a_n$$
having limit zero does not guarantee the convergence
of a series. $BR
A series with positive terms, i.e. $$a_n > 0$$ for all n, converges $BR exactly when its partial sums have an upper bound. $BR
The geometric series $$\displaystyle \sum_{n=1}^{\infty} r^n$$
converges exactly when $$-1 < r < 1.$$ $BR

$BR
INTEGRAL AND COMPARISON TESTS
$BR
(Integral test, p-series, comparison tests for convergence and divergence, limit comparison test) $BR$BR
Integral test: Suppose $$f(x)$$ is positive and DECREASING for
all large enough x.
Then the following are equivalent: $BR
I. $$\displaystyle \int_1^{\infty} f(x)\,dx$$ is finite, i.e. converges. $BR
S. $$\displaystyle \sum_{n=1}^{\infty} \, f(n)$$ is finite, i.e. converges. $BR
This gives the p-test:
$$\displaystyle \sum_{n=1}^{\infty} \frac{1}{n^p}$$ converges exactly when
$$p > 1.$$ $BR $BR
Comparison test: Suppose there is a fixed number K such that $BR for all sufficiently large n:
$$0 < a_n < K b_n.$$ $BR

Convergence. If $$\displaystyle \sum_{n=1}^\infty b_n$$ converges then so does
$$\displaystyle \sum_{n=1}^\infty a_n.$$ $BR

Divergence. If $$\displaystyle \sum_{n=1}^\infty a_n$$ diverges then so does
$$\displaystyle \sum_{n=1}^\infty b_n$$. $BR

(Positive series having smaller terms are more likely to converge.) $BR

Limit comparison test: SUPPOSE: $$a_n > 0$$, $$b_n > 0$$ and $BR
$$\displaystyle \lim_{n \to \infty} \frac{a_n}{b_n} = R$$ exists. Moreover, R is not zero. $BR
THEN
$$\displaystyle \sum_{n=1}^\infty a_n$$ and $$\displaystyle \sum_{n=1}^\infty b_n$$ $BR
both converge or both diverge. $BR

OTHER CONVERGENCE TESTS FOR SERIES $BR
(Alternating series test, absolute convergence, RATIO TEST)
$BR
Alternating series test: Suppose the sequence $$a_1, a_2, a_3, \ldots$$ is
decreasing and has limit zero.
Then $$\displaystyle \sum_{n=1}^\infty {(-1)}^n a_n$$ converges.
$BR This applies to (1)-(1/2)+(1/3)-(1/4)+...
$BR $BR
Absolute Convergence Test: IF
$$\displaystyle \sum_{n=1}^\infty \vert a_n \vert$$ converges, $BR
THEN $$\displaystyle \sum_{n=1}^\infty a_n$$ converges. $BR

Ratio test: $BR
SUPPOSE $$\vert \frac{a_{n+1}}{a_n} \vert$$ has limit equal to r. $BR
IF $$r < 1$$ then $$\displaystyle \sum_{n=1}^{\infty} a_n$$ CONVERGES. $BR
IF $$r > 1$$ then $$\displaystyle \sum_{n=1}^{\infty} a_n$$ DIVERGES. $BR

EOT

&ENDDOCUMENT;
|
|
## Questions.
1. The ‘T-Swift’ protein was quantified in the blood of the two groups. Women were found to have higher levels of the protein ( M = 1.062, SD = 0.339, N = 60) than men ( M = 0.528; SD = 0.382, N = 60). Calculate the raw difference effect size (D) and its variance + SE.
2. Calculate the Cohen’s d for the example above and its variance and SE. Calculate Hedges’ g and its SE.
3. Convert the Cohen's d you calculated in step 2 to a Pearson r and its SE. The formula for SE is not provided on the slides but you can get it from Borenstein et al. (2009) chapter 7 (see here)
4. In a study you see the following table for and association between hay fever and eczema in 11 year old children.
Hayfever
Yes No Total
Eczema Yes 141 420 561
No 928 13525 14453
Total 1069 13945 15522
What is the probability that a child with eczema will also have hay fever? (proportion and/or %). For children without eczema, what is the probability of having hay fever (proportion and/or %). What is the risk difference? Look up the formula for the SE of risk difference, here.
Calculate the Odds Ratio. The ln(OR) and its SE.
1. In a paper, you find a reported risk difference of 12% between men and women in ever having taken LSD. The authors did not report the SE or raw data but did provide the 95% confidence interval for their estimate (8.2% to 15.8%). Work out the SE for this estimate, so you can use it in your meta-analysis (tip: use the Cochrane Handbook, or alternatively use Google).
2. Bonus: You are reading a paper on the effects of an Angiotensin-Converting Enzyme Inhibitor, Ramipril, on Cardiovascular Events (simply put: heart attacks) in High-Risk Patients and come across some interesting stats. From the table below calculate the 'Number Needed to Treat'. Calculate the 95% CI of NNT.
Cardiovascular Event
Yes No Total
Ramipril Yes 651 3994 4645
No 826 3826 4652
## Proteins,… .
1. We use the code from the slides… . Subscript t= Women , subscript c = Men. The raw mean difference is .534.
y_t <- 1.062; sd_t <- 0.339; n_t <- 60
y_c <- 0.528; sd_c <- 0.382; n_c <- 60
y_t - y_c ## D
## [1] 0.534
Next we calculate the pooled SD. (0.361).
## If we assume that at population level sd_t = sd_c, then
numerator <- (((n_t - 1)*sd_t^2) + ((n_c - 1)*sd_c^2))
sd_pooled <- sqrt(numerator / (n_t + n_c - 2))
sd_pooled
## [1] 0.3611406
The variance of the raw difference (D) is .004, the standard error is .066. So, for our meta-analysis based on raw differences, we would store .534(+/-.066).
## Variance of D and se
(var_d <- ((n_t + n_c)/(n_t * n_c)) * sd_pooled^2)
## [1] 0.004347417
sqrt(var_d)
## [1] 0.06593494
1. Now we calculate Cohen’s d and Hedges’g.
This is a large effect size, sensu Cohen (1988), 1.48+/-.21.
## Cohen's d:
d <- (y_t - y_c)/sd_pooled
d
## [1] 1.478649
## Variance of Cohen's d and SE of Cohen's d
var_d <- ((n_t + n_c)/(n_t * n_c)) + ((d^2) / (2 * (n_t + n_c)))
var_d
## [1] 0.04244334
sqrt(var_d) # SE
## [1] 0.2060178
We calculate J the correction factor.
## Hedges' g is based on Cohen's d
## Calculate correction factor J
J <- (1 - (3/(4 * (n_t + n_c - 2) - 1)))
J
## [1] 0.9936306
As you can see, Hedges' g is lower, 1.47+/-0.20, but this remains a sizable effect based on effect size guidelines.
## So, Hedges' g is
g <- d * J
g
## [1] 1.469231
## Variance and SE
var_g <- var_d * (J *J)
sqrt(var_g) # SE
## [1] 0.2047056
1. Remember:
$r=\frac{d}{\sqrt{d^2+A}}$
and $A= \frac{(n_1+n_2)^2}{n_1n_2}$
whereby A is a correction factor for cases where ‘group sizes’ ( $$n_1$$ and $$n_2$$) are not equal. If group sizes are equal we can assume $$n_1=n_2$$ and then A=4. This is the case here.
r<-d/(sqrt((d*d)+4))
r
## [1] 0.5944919
You can find the formula at 7.9 (page 5 of here). Remember A=4 when sample sizes are equal.
This gives us Pearson r = .594+/-.054.
var_r<-(16*var_d)/(((d^2)+4)^3)
var_r
## [1] 0.002868238
# Se is
se_r<-sqrt(var_r)
se_r
## [1] 0.05355593
## Hayfever… .
1. The probability that a child with eczema will also have hay fever is 25.1%. The odds are estimated by 141/420. Similarly, for children without eczema the probability of having hay fever is 6.4% and the odds are 928/13525. The risk difference is 18.7%.
141/561
## [1] 0.2513369
928/14453
## [1] 0.06420812
# Risk difference
(141/561)-(928/14453)
## [1] 0.1871288
We can get the standard error (se) for the risk difference as follows:
$se = \sqrt{\frac{p_1*(1-p_1)}{n_1}+\frac{p_2*(1-p_2)}{n_2}}$
# note that this overrides our previous code for the hayfever example
prop_1<-(141/561)
one_minus_prop_1<-1-prop_1
prop_2<-(928/14453)
one_minus_prop_2<-1-prop_2
n_1<-561
n_2<-14453
# Following the formula
part1<-(prop_1*one_minus_prop_1)/n_1 #
part2<-(prop_2*one_minus_prop_2)/n_2
se<-sqrt(part1+part2)
se
## [1] 0.01842743
So, for our meta-analysis (based on absolute risk difference), we have an (absolute) risk difference of 18.7% +/- 1.8%.
Now let’s move on to Odds Ratios (OR):
Remember:
Treatment Control
Event $$n_{11}$$ $$n_{12}$$
Non-event $$n_{21}$$ $$n_{22}$$
$OR= \frac{n_{11}n_{22}}{n_{12}n_{21}}$
For simplicity:
a= $$n_{11}$$, b = $$n_{12}$$, c= $$n_{21}$$ , d= $$n_{22}$$
# odds ratio
OR<-(141*13525)/(420*928)
OR
## [1] 4.892819
inv_OR<-1/OR
inv_OR
## [1] 0.2043812
Remember we want to use the ln(OR) as it has better statistical properties… . Note how it behaves when you take the (natural) logarithm of the inverse of our odds ratio.
log(OR)
## [1] 1.587769
log(inv_OR)
## [1] -1.587769
The standard error will be the square root of the variance, given by this formula:
$Var(ln(OR)) = \frac{1}{n_{11}} + \frac{1}{n_{12}} + \frac{1}{n_{21}} + \frac{1}{n_{22}}$
variance_OR<-(1/141)+(1/420)+(1/928)+(1/13525)
variance_OR
## [1] 0.01062467
se_OR<-sqrt(variance_OR)
se_OR
## [1] 0.1030761
So the statistics we would need for our meta-analysis based on ln(OR), would be 1.588 +/- .103.
## LSD,… .
1. You can find the formula in chapter 7 of the Cochrane handbook.
$se = (\text{upper limit} - \text{lower limit}) / 3.92$
(15.8-8.2)/3.92
## [1] 1.938776
For our meta-analysis we thus have 12% (+/-1.93%) - Note that you might want to have a think about precision, so one might want to input 12%(+/-2%)… .
## Heart attacks,…
1. Let’s work through this ‘heart attacks’ example,… .
NNT is nothing more or less than 1/(risk difference). Note that we are talking about the absolute risk difference.
# Proportion of patients with CV events in the ramipril group
651/4645
## [1] 0.1401507
# Proportion of patients with CV events in the placebo group
826/4652
## [1] 0.177558
The (absolute) risk difference is the difference between those 2 values (3.74%).
abs((651/4645)-(826/4652))
## [1] 0.03740734
Incidentally the Relative Risk is .789. So patients treated with Ramipril had a lower risk of a Cardiovascular (CV) event. This is a 21% decrease in the risk of CV events (Risk reduction).
# Relative Risk
(651/4645)/(826/4652)
## [1] 0.7893233
# Risk reduction
1-((651/4645)/(826/4652))
## [1] 0.2106767
The NNT is 1/(Risk Difference). A large treatment effect, in the absolute scale, leads to a small number needed to treat (NNT). The best possible treatment would save every life, i.e. NNT=1. A treatment saving one life for every 10 patients treated outperforms a competing treatment that saves just one life for every 50 patients treated. Assuming this study ran for 5 years, this would mean one life saved for every 27 patients (rounded up). Suppose that the study only had 65 events (as opposed to 651) in the Ramipril group and 83 events (as opposed to 826) in the control group. In such a case the NNT would be 260 (!), but the relative risk remains roughly the same .7843 (!).
# NNT
1/abs((651/4645)-(826/4652))
## [1] 26.73272
# Now: fewer events...
# absolute risk.
abs((65/4645)-(83/4652))
## [1] 0.003848247
# NNT
1/abs((65/4645)-(83/4652))
## [1] 259.8586
# relative risk
(65/4645)/(83/4652)
## [1] 0.7843127
### 95% CI
The solution lies in calculating the reciprocals (i.e., 1/x) of the 95% CI of the (absolute) risk difference.
As described above, we can get the standard error (se) for the risk difference as follows:
$se = \sqrt{\frac{p_1*(1-p_1)}{n_1}+\frac{p_2*(1-p_2)}{n_2}}$
# note that this overrides our previous code for the hayfever example
prop_1<-(651/4645)
one_minus_prop_1<-1-prop_1
prop_2<-(826/4652)
one_minus_prop_2<-1-prop_2
n_1<-4645
n_2<-4652
# Following the formula
part1<-(prop_1*one_minus_prop_1)/n_1 #
part2<-(prop_2*one_minus_prop_2)/n_2
se<-sqrt(part1+part2)
If we assume normality, then +/-1.96*se gives us the 95% confidence interval.
# note rounding below.
abs((651/4645)-(826/4652))+1.96*se
## [1] 0.0522484
abs((651/4645)-(826/4652))-1.96*se
## [1] 0.02256628
So the 95% CI of our absolute risk difference, 3.74%, is [2.26% to 5.22%].
Now we convert those values to NNT.
# rounded.
1/(abs((651/4645)-(826/4652))+1.96*se)
## [1] 19.13934
1/(abs((651/4645)-(826/4652))-1.96*se)
## [1] 44.31391
The 95% CI of the NNT ranges from 19 to 44.
Now imagine the scenario described above: the study only had 65 events (as opposed to 651) in the ramipril group and 83 events (as opposed to 826) in the control group.
# note that this overrides our previous code
prop_1<-(65/4645)
one_minus_prop_1<-1-prop_1
prop_2<-(83/4652)
one_minus_prop_2<-1-prop_2
n_1<-4645
n_2<-4652
# Following the formula
part1<-(prop_1*one_minus_prop_1)/n_1 #
part2<-(prop_2*one_minus_prop_2)/n_2
se<-sqrt(part1+part2)
If we assume normality then +/-1.96*se gives us the 95% confidence interval. This includes 0.
abs((65/4645)-(83/4652)) # risk difference
## [1] 0.003848247
abs((65/4645)-(83/4652))+1.96*se
## [1] 0.008935688
abs((65/4645)-(83/4652))-1.96*se
## [1] -0.001239194
We get a negative number for our 95% CI for NNT, which is peculiar: find out more about it here. So we could conclude NNTB = 260 (NNTH 807 to $$\infty$$ to NNTB 111). Confused? Read the article by Altman.
1/(abs((65/4645)-(83/4652))+1.96*se)
## [1] 111.9108
1/(abs((65/4645)-(83/4652))-1.96*se)
## [1] -806.9761
The end.
|
|
## Easy Sagittal Ascii Guide
William Lynch
Posts: 45
Joined: Mon Sep 21, 2015 9:27 pm
### Easy Sagittal Ascii Guide
So this is how the ascii characters seem to be laid out.
first | or ! determine the direction of the stem. | is up ! is down.
once we know that, the symbols are written before or after the stem direction marker according to how they appear in the actual glyph.
For example, if we have a flag on both sides then obviously it will be written on both sides as "(|)", but if we have a flag only on the right then it will appear only on the right side as "|(".
Quick chart guide:
and = "(" See how the glyph curves in a concave manner? Well, so do our parenthesis!
and = "/" and "\" LOOK AT THE SYMBOL AND IT WILL BASICALLY TELL YOU HOW TO WRITE IT
and = ")" Convex = convex parenthesis
and = "/ /" and "\ \"
and = "/ x )" and "\ x )" the x represents the stem direction marking. As you can see if you look at the symbol, it's a combination of ")" and "\ or /"
and just have parenthesis on both sides.
and are just like and but the arrow heads are reversed so the parenthesis and slashes will also be. They are "( x \" and "( x /"
So once you get a grip on the pattern, writing these is really easy. You just have to remember what the ! and | stand for, and which sign represents which arrow head.
|
|
# Intermediate Geometry : How to find the volume of a prism
## Example Questions
### Example Question #3 : Find The Volume Of A Right Rectangular Prism With Fractional Edge Lengths: Ccss.Math.Content.6.G.A.2
A prism with a square base has a height of feet.
If the edge of the base is feet, what is the volume of the prism?
Explanation:
The volume of a prism is given as
where
B = Area of the base
and
h = height of the prism.
Because the base is a square, we have
So plugging in the value of B that we found and h that was given in the problem we get the volume to be the following.
### Example Question #4 : Find The Volume Of A Right Rectangular Prism With Fractional Edge Lengths: Ccss.Math.Content.6.G.A.2
A rectangular prism has the dimensions of , , and . What is the volume of the prism?
Explanation:
The volume of a rectangular prism is given by the following equation:
In this equation, is length, is width, and is height.
The given information does not explicitly state which side each dimension measurement correlates to on the prism. Volume simply requires the multiplication of the dimensions together.
Volume can be solved for in the following way:
### Example Question #5 : Find The Volume Of A Right Rectangular Prism With Fractional Edge Lengths: Ccss.Math.Content.6.G.A.2
Find the volume of a rectangular prism with a width of , height of and length of .
Explanation:
The volume of a rectangular prism is given by the following equation:
In this equation, is length, is width, and is height.
Because all the necessary information has been provided to solve for the volume, all that needs to be done is substituting in the values for the variables.
Therefore:
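As a quick numerical sketch of the same calculation, here is the volume formula applied to assumed fractional edge lengths; these dimensions are illustrative, not the values from the question.
# Volume of a right rectangular prism: V = l * w * h (illustrative dimensions)
l <- 3 + 1/2   # assumed length
w <- 2 + 1/4   # assumed width
h <- 1 + 1/3   # assumed height
V <- l * w * h
V              # 10.5 cubic units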
### Example Question #1 : Find The Volume Of A Right Rectangular Prism With Fractional Edge Lengths: Ccss.Math.Content.6.G.A.2
Find the surface area of the rectangular prism:
Explanation:
The surface area of a rectangular prism is
Substituting in the given information for this particular rectangular prism.
### Example Question #7 : Find The Volume Of A Right Rectangular Prism With Fractional Edge Lengths: Ccss.Math.Content.6.G.A.2
A small rectangular fish tank has sides that are wide, long, and high. Which formula would not work to find the correct volume of the fish tank?
Explanation:
In this question the formula that uses addition will not yield the correct volume of the fish tank:
This is the correct answer because in order to find the volume of any rectangular prism one needs to multiply the prism's length, width, and height together.
The volume of a rectangular prism is given by the following equation:
In this equation, is length, is width, and is height.
To restate, is the correct answer because it will NOT yield the correct volume, you would need to multiply by the product of and , not add.
### Example Question #1 : How To Find The Volume Of A Prism
Find the volume of the prism.
Explanation:
Recall how to find the volume of a prism:
Find the area of the base, which is a right triangle.
Now, find the volume of the prism.
### Example Question #2 : How To Find The Volume Of A Prism
Find the volume of the prism.
Explanation:
Recall how to find the volume of a prism:
Find the area of the base, which is a right triangle.
Now, find the volume of the prism.
### Example Question #3 : How To Find The Volume Of A Prism
Find the volume of the prism.
Explanation:
Recall how to find the volume of a prism:
Find the area of the base, which is a right triangle.
Now, find the volume of the prism.
### Example Question #4 : How To Find The Volume Of A Prism
Find the volume of the prism.
Explanation:
Recall how to find the volume of a prism:
Find the area of the base, which is a right triangle.
Now, find the volume of the prism.
### Example Question #5 : How To Find The Volume Of A Prism
Find the volume of a prism.
|
|
# How do you simplify (5 times 10^5)^-2 and write the answer in scientific notation?
${\left(5 \times 10^{5}\right)}^{-2} = 5^{-2} \times {\left(10^{5}\right)}^{-2} = 0.04 \times 10^{-10} = 4 \times 10^{-12}$
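A quick numerical check of the arithmetic (just a sanity check, not part of the original solution):
# Verify that (5 x 10^5)^-2 equals 4 x 10^-12
(5 * 10^5)^-2
## [1] 4e-12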
|
|
# 12.4: Radiocarbon Dating- Using Radioactivity to Measure the Age of Fossils and Other Artifacts
##### Learning Objectives
• Identify the age of materials that can be approximately determined using Radiocarbon dating.
When we speak of the element carbon, we most often refer to the most naturally abundant stable isotope 12C. Although 12C is definitely essential to life, its unstable sister isotope 14C has become of extreme importance to the science world. Radiocarbon dating is the process of determining the age of a sample by examining the amount of 14C remaining against its known half-life, 5,730 years. The reason this process works is that when organisms are alive, they are constantly replenishing their 14C supply through respiration, providing them with a constant amount of the isotope. However, when an organism ceases to exist, it no longer takes in carbon from its environment and the unstable 14C isotope begins to decay. From this science, we are able to approximate the date at which the organism lived on Earth. Radiocarbon dating is used in many fields to learn information about the past conditions of organisms and the environments present on Earth.
## The Carbon-14 Cycle
Radiocarbon dating (usually referred to simply as carbon-14 dating) is a radiometric dating method. It uses the naturally occurring radioisotope carbon-14 (14C) to estimate the age of carbon-bearing materials up to about 58,000 to 62,000 years old. Carbon has two stable, nonradioactive isotopes: carbon-12 (12C) and carbon-13 (13C). There are also trace amounts of the unstable radioisotope carbon-14 (14C) on Earth. Carbon-14 has a relatively short half-life of 5,730 years, meaning that the fraction of carbon-14 in a sample is halved over the course of 5,730 years due to radioactive decay to nitrogen-14. The carbon-14 isotope would vanish from Earth's atmosphere in less than a million years were it not for the constant influx of cosmic rays interacting with molecules of nitrogen (N2) and single nitrogen atoms (N) in the stratosphere. Both processes of formation and decay of carbon-14 are shown in Figure 1.
When plants fix atmospheric carbon dioxide (CO2) into organic compounds during photosynthesis, the resulting fraction of the isotope 14C in the plant tissue will match the fraction of the isotope in the atmosphere (and biosphere since they are coupled). After a plant dies, the incorporation of all carbon isotopes, including 14C, stops and the concentration of 14C declines due to the radioactive decay of 14C following.
$\ce{ ^{14}C -> ^{14}N + e^-} + \bar{\nu}_e \label{E2}$
This follows first-order kinetics:
$N_t= N_o e^{-kt} \label{E3}$
where
• $$N_0$$ is the number of atoms of the isotope in the original sample (at time t = 0, when the organism from which the sample is derived was de-coupled from the biosphere).
• $$N_t$$ is the number of atoms left after time $$t$$.
• $$k$$ is the rate constant for the radioactive decay.
The half-life of a radioactive isotope (usually denoted by $$t_{1/2}$$) is a more familiar concept than $$k$$ for radioactivity, so although Equation $$\ref{E3}$$ is expressed in terms of $$k$$, it is more usual to quote the value of $$t_{1/2}$$. The currently accepted value for the half-life of 14C is 5,730 years. This means that after 5,730 years, only half of the initial 14C will remain; a quarter will remain after 11,460 years; an eighth after 17,190 years; and so on.
The equation relating rate constant to half-life for first order kinetics is
$k = \dfrac{\ln 2}{ t_{1/2} } \label{E4}$
so the rate constant is then
$k = \dfrac{\ln 2}{5.73 \times 10^3} = 1.21 \times 10^{-4} \text{year}^{-1} \label{E5}$
and Equation $$\ref{E3}$$ can be rewritten as
$N_t= N_o e^{-\ln 2 \;t/t_{1/2}} \label{E6}$
or
$t = \left(\dfrac{\ln \dfrac{N_o}{N_t}}{\ln 2} \right) t_{1/2} = 8267 \ln \dfrac{N_o}{N_t} = 19035 \log_{10} \dfrac{N_o}{N_t} \;\;\; (\text{in years}) \label{E7}$
The sample is assumed to have originally had the same 14C/12C ratio as the ratio in the atmosphere, and since the size of the sample is known, the total number of atoms in the sample can be calculated, yielding $$N_0$$, the number of 14C atoms in the original sample. Measurement of $$N_t$$, the number of 14C atoms currently in the sample, allows the calculation of $$t$$, the age of the sample, using Equation $$\ref{E7}$$.
##### Note
Deriving Equation $$\ref{E7}$$ assumes that the level of 14C in the atmosphere has remained constant over time. However, the level of 14C in the atmosphere has varied significantly, so time estimated by Equation $$\ref{E7}$$ must be corrected by using data from other sources.
##### Example 1: Dead Sea Scrolls
In 1947, samples of the Dead Sea Scrolls were analyzed by carbon dating. It was found that the carbon-14 present had an activity (rate of decay) of 11 d/min.g (where d = disintegration). In contrast, living material exhibits an activity of 14 d/min.g. Thus, using Equation $$\ref{E3}$$,
$\ln \dfrac{14}{11} = (1.21 \times 10^{-4}) t \nonumber$
Thus,
$t= \dfrac{\ln 1.272}{1.21 \times 10^{-4}} = 2 \times 10^3 \text{years} \nonumber$
From the measurement performed in 1947, the Dead Sea Scrolls were determined to be 2000 years old, giving them a date of 53 BC, and confirming their authenticity. This discovery is in contrast to the carbon dating results for the Turin Shroud that was supposed to have wrapped Jesus’ body. Carbon dating has shown that the cloth was made between 1260 and 1390 AD. Thus, the Turin Shroud was made over a thousand years after the death of Jesus.
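A short numerical check of this calculation, using the 14/11 activity ratio and the rate constant quoted above:
# Age from the carbon-14 activity ratio: t = ln(N0/Nt) / k (first-order decay)
k <- log(2) / 5730        # rate constant per year (half-life 5,730 years)
t <- log(14 / 11) / k
t                         # about 2.0e3 years, as found above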
## Summary
The entire process of Radiocarbon dating depends on the decay of carbon-14. This process begins when an organism is no longer able to exchange Carbon with its environment. Carbon-14 is first formed when cosmic rays in the atmosphere allow for excess neutrons to be produced, which then react with Nitrogen to produce a constantly replenishing supply of carbon-14 to exchange with organisms.
• Carbon-14 dating can be used to estimate the age of carbon-bearing materials up to about 58,000 to 62,000 years old.
• The carbon-14 isotope would vanish from Earth's atmosphere in less than a million years were it not for the constant influx of cosmic rays interacting with atmospheric nitrogen.
• One of the most frequent uses of radiocarbon dating is to estimate the age of organic remains from archeological sites.
## References
1. Hua, Quan. "Radiocarbon: A Chronological Tool for the Recent Past." Quaternary Geochronology 4.5(2009):378-390. Science Direct. Web. 22 Nov. 2009.
2. Petrucci, Ralph H. General Chemistry: Principles and Modern Applications 9th Ed. New Jersey: Pearson Education Inc. 2007.
3. "Radio Carbon Dating." BBC- Homepage. 25 Oct. 2001. Web. 22 Nov. 2009. http://www.bbc.co.uk.
4. Willis, E.H., H. Tauber, and K. O. Munnich. "Variations in the Atmospheric Radiocarbon Concentration Over the Past 1300 Years." American Journal of Science Radiocarbon Supplement 2(1960) 1-4. Print.
## Problems
1. If, when a hippopotamus lived, there was a total of 25 grams of Carbon-14, how many grams will remain 5730 years after he is laid to rest? 12.5 grams, because one half-life has occurred.
2. How many grams of Carbon-14 will be present in the hippopotamus' remains after three half-lives have passed? 3.125 grams of Carbon-14 will remain after three half-lives.
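A one-line check of both answers using simple half-life decay:
# Mass remaining after n half-lives: m = m0 * (1/2)^n
25 * 0.5^1   # after one half-life (5,730 years): 12.5 grams
25 * 0.5^3   # after three half-lives: 3.125 grams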
12.4: Radiocarbon Dating- Using Radioactivity to Measure the Age of Fossils and Other Artifacts is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
|
|
# Outage Effective Capacity of Buffer-Aided Diamond Relay Systems Using HARQ with Incremental Redundancy
Deli Qiao This work has been supported in part by the National Natural Science Foundation of China (61671205) and the Shanghai Sailing Program (16YF1402600).Part of this work has been submitted to the 2017 IEEE International Conference on Communications (ICC) [24].D. Qiao is with the School of Information Science&Technology, East China Normal University, Shanghai, China 200241. Email: dlqiao@ce.ecnu.edu.cn.
###### Abstract
In this paper, transmission over buffer-aided diamond relay systems under statistical quality of service (QoS) constraints is studied. The statistical QoS constraints are imposed as limitations on delay violation probabilities. In the absence of channel state information (CSI) at the transmitter, truncated hybrid automatic repeat request-incremental redundancy (HARQ-IR) is incorporated to make better use of the wireless channel and the resources for each communication link. The packets that cannot be successfully received upon the maximum number of transmissions will be removed from buffer, i.e., outage occurs. The outage effective capacity of a communication link is defined as the maximum constant arrival rate to the source that can be supported by the goodput departure processes, i.e., the departure that can be successfully received by the receiver. Then, the outage effective capacity for the buffer-aided diamond relay system is obtained for HARQ-IR incorporated transmission strategy under the end-to-end delay constraints. In comparison with the DF protocol with perfect CSI at the transmitters, it is shown that HARQ-IR can achieve superior performance when the SNR levels at the relay are not so large or when the delay constraints are stringent.
## I Introduction
In wireless systems, the power of the received signal fluctuates randomly over time due to mobility, changing environment, and multipath fading caused by the constructive and destructive superposition of the multipath signal components [25]. These random changes in the received signal strength lead to variations in the instantaneous data rates that can be supported by the channel, which may result in transmission errors in deep fading. Hybrid automatic repeat request (HARQ) protocols have been proposed to enhance the performance of wireless systems. Generally, the receiver sends either an acknowledgement (ACK) or negative ACK (NACK) to the transmitter depending on whether the data packet is correctly received or not. The transmitter can decide either to send the next packet or retransmit the same packet upon reception of ACK or NACK, respectively [1]. The performance of ARQ protocols has been extensively studied in the literature (see, e.g., [2]-[5] and references therein).
Also, relay channels can be viewed as one of the basic building blocks of wireless systems. Information-theoretic analysis of relay channels has been at the research forefront for decades, and has shown performance improvements in terms of throughput and diversity (see, e.g., [6]-[13]). For instance, the authors have considered different relaying strategies in [7], and showed that considerable cooperative diversity can be achieved with the relaying schemes. The authors have derived the expressions for the outage probability and throughput for HARQ protocols in relay channels in [8]. Of particular interest is the diamond relay system in which the communication between a disconnected source and destination is achieved via the help of two or more intermediate relay nodes. The authors have analyzed the capacity bounds for full-duplex relays with additive white Gaussian channels in [9], while different transmission strategies and achievable rates in the half-duplex Gaussian diamond relay channel have been investigated in [10]. The authors have characterized the outage probability and throughput of HARQ protocols with relay selection for multirelay channels in [13]. More recently, buffer-aided relaying, in which the relays are equipped with buffers, has been shown to further improve the performance of relay systems [14], [15]. Design and analysis of buffer-aided relay systems have attracted much interest recently [15].
Generally, information-theoretic analyses do not take into account buffer/queue limitations. In present wireless systems, diverse quality of service (QoS) requirements are driven by the exponential growth of wireless multimedia traffic that is generated by smartphones, tablets, servers, social networking tools and video sharing sites. In multimedia applications involving, e.g., voice over IP (VoIP), streaming video, and interactive video, certain QoS limitations in terms of buffer/delay constraints are imposed so that target levels of performance and quality can be provided to the users. The concept of effective capacity [16] has been incorporated to characterize the maximum constant arrival rate under statistical delay constraints. In the case of point-to-point links, there have been several recent works investigating HARQ protocols over wireless channels under statistical QoS constraints [17]-[22]. For instance, in [17], we have analyzed the energy efficiency of fixed rate transmissions under statistical QoS constraints with a simple Type-I HARQ (HARQ-T1) protocol. In this work, we assumed that no outage occurs, i.e., retransmissions are triggered as long as the receiver does not receive the packet. In [19], the author has analyzed the performance of HARQ with incremental redundancy (HARQ-IR), and showed that with stringent QoS constraints, HARQ-IR can outperform the adaptive transmission system. In [20], the authors have investigated fixed rate transmissions with HARQ protocols, and obtained the closed-form expression for the effective capacity of HARQ-IR only for loose QoS constraints. In [21], the authors have characterized the effective capacity of different HARQ protocols with a limited number of transmissions, or a deadline for the packets. Outage occurs when the packet is dropped from the buffer while the receiver does not correctly receive the packet. However, the effective capacity obtained does not specify the average throughput that can be correctly received at the receiver. In [22], the authors have considered the goodput of various HARQ protocols, and proposed a general framework to express the effective capacity of HARQ protocols based on a random walk model and recurrence relation formulation. In this paper, we present a study on buffer-aided diamond relay systems with HARQ-IR under statistical QoS constraints, in the form of limitations on the delay violation probabilities.
In this work, we assume that the channel state information (CSI) is absent at the transmitters for the links. We first define the outage effective capacity as the maximum constant arrival rate that can be supported by the departure processes correctly received at the receiver while satisfying the statistical QoS constraints for a communication link. We show that there is an optimal fixed transmission rate with HARQ-IR scheme. We also demonstrate that the outage effective capacity approaches to the throughput of the link as the delay constraints vanish. We then consider full-duplex decode-and-forward (DF) relays, and assume that the source sends the common information to the relays, which cooperatively deliver the same message to the destination. The relays adopt the Alamouti scheme to enhance the information delivery to the destination. With the proposed HARQ-IR scheme, we derive the outage effective capacity of the buffer-aided diamond relay system and the associated outage probability. For comparison, we also consider the typical DF protocol [12] in case of perfect CSI at the transmitter and the receiver for all links, where the common information is sent by the source at the minimum rate of the source-relay links and distributed beamforming is performed at the relays. The contributions of this work can be summarized as follows:
1. We obtain the outage effective capacity of the goodput processes of a communication link for the HARQ protocols following the spectral radius method, and prove that the limiting behavior of the resulting expression coincides with several well-known results, such as the throughput of HARQ protocols without delay constraints and the effective capacity of HARQ-T1 protocol with unlimited number of transmissions;
2. We propose a HARQ-IR based transmission scheme for the buffer-aided diamond relay systems with perfect CSI at the receiver only for each link, and characterize the outage effective capacity of the proposed scheme under the statistical delay constraints;
3. Through numerical evaluations, we demonstrate the superiority of the proposed scheme with respect to the DF protocol with perfect CSI at the transmitter and receiver of each link when the SNR at the relay is relatively small or when the delay constraints are relatively stringent.
The rest of this paper is organized as follows. Section II introduces preliminaries on the diamond relay channel model, and reviews the HARQ-IR operations. In Section III, we briefly discuss the statistical delay constraints and define the outage effective capacity for one-hop links. Section IV discusses the effective capacity analysis method for two-hop links, and characterizes the outage effective capacity of the buffer-aided diamond relay systems. Numerical results are provided in Section V. Finally, Section VI concludes this paper.
## II Preliminaries
### II-A System Model
We consider a buffer-aided diamond relay communication link as depicted in Fig. 1. The source sends information to the destination via the help of two parallel relays. We assume that there is no direct link between the source and the destination. Also, there is no link between the relays. In this model, there are buffers of infinite size at both the source and relays. In this work, we assume full-duplex relay such that transmission and reception can be performed simultaneously.
The discrete-time input and output relationships in the $$i$$th symbol duration are given by
$Y_{r_j}[i] = g_{sr_j}[i]\, X_s[i] + n_{r_j}[i], \quad j=1,2, \qquad (1)$
$Y_d[i] = g_{r_1 d}[i]\, X_{r_1}[i] + g_{r_2 d}[i]\, X_{r_2}[i] + n_d[i], \qquad (2)$
where $$X_s[i]$$ and $$X_{r_j}[i]$$, $$j=1,2$$, denote the input signals from the source S and the relay $$R_j$$, respectively. The inputs are subject to individual average energy constraints at the source and at the relays; $$B$$ denotes the bandwidth. $$Y_{r_j}[i]$$ and $$Y_d[i]$$ represent the received signals at the relay $$R_j$$ and the destination D, respectively. We assume that the fading coefficients $$g_{sr_j}[i]$$ and $$g_{r_j d}[i]$$ are jointly stationary and ergodic discrete-time processes, and we denote the magnitude-squares of the fading coefficients by $$z_{sr_j}[i]=|g_{sr_j}[i]|^2$$ and $$z_{r_j d}[i]=|g_{r_j d}[i]|^2$$. Assuming that there are $$B$$ complex symbols per second, we can easily see that the symbol energy constraint implies a corresponding average power constraint on the channel input. Above, in the channel input-output relationships, the noise components $$n_{r_j}[i]$$ and $$n_d[i]$$ are zero-mean, circularly symmetric, complex Gaussian random variables with variance $$N_0$$. The additive Gaussian noise samples are assumed to form independent and identically distributed (i.i.d.) sequences. We denote the signal-to-noise ratio at the source as $$\text{SNR}_s$$ and at the relays as $$\text{SNR}_{r_j}$$, $$j=1,2$$.
### II-B HARQ-IR
Consider a link composed of one transmitter and one receiver under block fading in which the fading stays constant for a block of $$T$$ seconds and changes independently from one block to another. We assume that a packet of $$L$$ bits is intended to be transmitted over the wireless channel in each frame. Specifically, after each successful transmission, the transmitter attempts to send $$L$$ bits in the next frame, so the fixed transmission rate is $$L$$ bits/block. We assume that upon successful reception of the packet, the receiver sends an ACK to the transmitter, and the packet can be removed from the buffer. If a decoding failure occurs, the receiver sends a NACK to the transmitter and requests another round of retransmission for the packet if the maximum number of transmissions for the packet is not reached. On the other hand, when the maximum number $$M$$ of transmissions for the packet is reached, the packet will be removed from the buffer without the need of an ACK or NACK. Therefore, outage occurs at the maximum round of transmission, i.e., the $$M$$th transmission, if the packet is discarded from the buffer while the receiver does not correctly receive it.
We can model the buffer activity at the end of each frame as a discrete-time Markov process [21]. Fig. 2 depicts the state transition model. State 0 denotes that the packet is removed from the buffer, and State $$m$$, $$1 \le m \le M-1$$, represents the number of retransmissions for the packet, where no packet is removed from the buffer. Define $$p_m$$ as the decoding failure probability at the $$(m+1)$$th transmission, such that the system enters State $$m+1$$ with probability $$p_m$$, while the system enters State 0 with probability $$1-p_m$$. On the other hand, regardless of the decoding result at the end of the $$M$$th transmission, the system goes to State 0 with probability 1, since the maximum number of transmissions is reached and the packet is removed immediately from the buffer. Then, the state transition matrix is given by
$\mathbf{P} = \begin{bmatrix} 1-p_0 & 1-p_1 & \cdots & 1-p_{M-2} & 1 \\ p_0 & 0 & \cdots & 0 & 0 \\ 0 & p_1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & p_{M-2} & 0 \end{bmatrix} \qquad (3)$
where the entry in row $$i$$ and column $$j$$ denotes the probability of the state transition from State $$j-1$$ to State $$i-1$$.
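As a small sanity check of the transition structure in (3), the sketch below builds the matrix for an assumed set of decoding-failure probabilities and verifies that the transition probabilities out of each state sum to one; the numbers are placeholders, not quantities derived in this paper.
# Build the M x M HARQ state-transition matrix of Eq. (3) for illustrative p_m values
M <- 4
p <- c(0.5, 0.3, 0.2)                     # assumed p_0, ..., p_{M-2}
P <- matrix(0, nrow = M, ncol = M)
P[1, ] <- c(1 - p, 1)                     # transitions back to State 0 (packet leaves the buffer)
for (m in 1:(M - 1)) P[m + 1, m] <- p[m]  # decoding failure: move to the next retransmission state
colSums(P)                                # each column sums to 1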
In the HARQ-IR protocol, the transmitter encodes the packet according to a codebook of length $$MTB$$, and the codewords are divided into $$M$$ subblocks of the same length with $$TB$$ symbols each. During each frame, only one subblock is sent to the receiver, and the receiver decodes the message using the current subblock combined with the previously received subblocks of the packet. Then, we know that the receiver can successfully decode the packet after the $$m$$th ($$0 \le m \le M-1$$) retransmission only if the following condition is satisfied [2]
$L \le \sum_{i=0}^{m} TB \log_2\left(1+\text{SNR}\, z_i\right). \qquad (4)$
We can express the state transition probabilities as $$p_0 = \Pr\left\{L > TB\log_2(1+\text{SNR}\, z_0)\right\}$$, and for $$1 \le m \le M-2$$,
$p_m = \frac{\Pr\left\{L > \sum_{i=0}^{m} TB\log_2(1+\text{SNR}\, z_i)\right\}}{\Pr\left\{L > \sum_{i=0}^{m-1} TB\log_2(1+\text{SNR}\, z_i)\right\}}. \qquad (5)$
Define the outage probability after $$m$$ transmission rounds as
$P_{out,m} = \Pr\left\{\sum_{i=0}^{m-1} TB\log_2(1+\text{SNR}\, z_i) < L\right\}. \qquad (6)$
In the absence of delay constraints, the throughput of truncated HARQ, i.e., goodput, is known to be [2]
$R_{HARQ} = \frac{L}{TB}\,\frac{1-P_{out,M}}{\sum_{m=0}^{M-1} P_{out,m}}\;\;\text{bps/Hz}, \qquad (7)$
where $$P_{out,0} = 1$$.
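As a quick illustration of (6) and (7), the sketch below estimates the outage probabilities and the resulting goodput of HARQ-IR by Monte Carlo simulation over i.i.d. Rayleigh fading; the SNR, rate, and M values are arbitrary choices for illustration, not parameters used later in this paper.
# Monte Carlo estimate of the HARQ-IR outage probabilities in (6) and the goodput in (7)
set.seed(1)
snr  <- 10^(5 / 10)    # assumed average SNR of 5 dB
M    <- 4              # assumed maximum number of transmissions
rate <- 3              # assumed fixed rate L/(TB) in bits per channel use
n    <- 1e5            # number of Monte Carlo samples
z    <- matrix(rexp(n * M), nrow = n)            # squared fading magnitudes (Rayleigh fading)
acc  <- t(apply(log2(1 + snr * z), 1, cumsum))   # accumulated mutual information per round
P_out <- c(1, colMeans(acc < rate))              # P_out,0 = 1, then P_out,1 ... P_out,M
goodput <- rate * (1 - P_out[M + 1]) / sum(P_out[1:M])
goodput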
## III Effective Capacity Analysis in One-Hop Links
In this section, we first review the preliminaries on the statistical delay constraints, and then obtain the outage effective capacity for a communication link with the parameters discussed above.
### III-A Statistical Delay Constraints for One-Hop Links
Suppose that the queue is stable and that both the arrival process and the service process satisfy the Gärtner-Ellis limit, i.e., for all $$\theta \ge 0$$, there exists a differentiable logarithmic moment generating function (LMGF) $$\Lambda_A(\theta)$$ for the arrival process and a differentiable LMGF $$\Lambda_C(\theta)$$ for the service process. (Throughout the text, a logarithm expressed without a base, i.e., $$\log(\cdot)$$, refers to the natural logarithm $$\log_e(\cdot)$$.) If there exists a unique $$\theta^* > 0$$ such that
$\Lambda_A(\theta^*) + \Lambda_C(-\theta^*) = 0, \qquad (8)$
then [26]
$\lim_{Q_{\max}\to\infty} \frac{\log \Pr\{Q > Q_{\max}\}}{Q_{\max}} = -\theta^*. \qquad (9)$
where $$Q$$ is the stationary queue length.
For large $$Q_{\max}$$, we have the approximation for the buffer violation probability $$\Pr\{Q > Q_{\max}\} \approx e^{-\theta^* Q_{\max}}$$. Hence, while larger $$\theta$$ corresponds to stricter queueing constraints, smaller $$\theta$$ implies looser queueing constraints. Then, equivalently, we have the queueing delay violation probability as $$\Pr\{D > D_{\max}\} \approx e^{-J(\theta) D_{\max}}$$, where
$J(\theta) = -\Lambda_C(-\theta)$
is the statistical delay exponent associated with the queue, with $$\Lambda_C(\theta)$$ the LMGF of the service rate. Then, the maximum constant arrival rate to the queue for a given $$\theta$$ is expressed as
$R_E(\theta) = \frac{-\Lambda_C(-\theta)}{\theta T B}\;\;\text{bps/Hz}. \qquad (10)$
### III-B Outage Effective Capacity
While the authors in [21] considered the departure processes of the source queue, we focus on the goodput departure processes that can be correctly received at the receiver similar to [22]. According to (8), we define the outage effective capacity as the maximum constant arrival rate to the source that can be supported by the goodput processes. Then, we can obtain the following result.
###### Theorem 1
For the fixed rate transmissions with HARQ protocols, given QoS exponent $$\theta$$, SNR, and maximum number of transmissions $$M$$, the outage effective capacity is given by
$R_{out}(\text{SNR},\theta) = \frac{1}{TB}\max_{L\ge 0}\left\{-\frac{\Lambda(-\theta)}{\theta}\right\} \qquad (11)$
$= \max_{L\ge 0}\left\{-\frac{1}{\theta TB}\log\left(p_0\left(P_{out}+(1-P_{out})e^{-\theta L}\right)y^*\right)\right\} \qquad (12)$
$= -\frac{1}{\theta TB}\log\left(p_{0,opt}\left(P_{out,opt}+(1-P_{out,opt})e^{-\theta L_{opt}}\right)y^*_{opt}\right) \qquad (13)$
where $$L_{opt}$$ is the optimal finite fixed transmission rate that solves (12), $$y^*$$ is the unique real positive root of $$f(y)=0$$ with
$$f(y)=y^{M}-\frac{1-p_0}{p_0}y^{M-1}-\sum_{m=1}^{M-2}\frac{(1-p_m)p_{m-1}\cdots p_1}{p_0^{m}}\left(P_{\text{out}}+(1-P_{\text{out}})e^{-\theta L}\right)^{m}y^{M-1-m}-\frac{p_{M-2}\cdots p_1}{p_0^{M-1}}\left(P_{\text{out}}+(1-P_{\text{out}})e^{-\theta L}\right)^{M-1} \qquad (14)$$
for given $L$, and $y^*_{\text{opt}}$ is the unique real positive root of $f(y)=0$ with $L=L_{\text{opt}}$. The associated outage probability can be expressed as
$$P_{\text{out,opt}}=\prod_{m=0}^{M-1}p_{m,\text{opt}}=\Pr\left\{\sum_{m=0}^{M-1}TB\log_2(1+\textsf{SNR}\,z_m)<L_{\text{opt}}\right\},$$
where $p_{m,\text{opt}}$ denotes the state transition probability obtained with $L=L_{\text{opt}}$.
Proof: See Appendix A.
###### Remark 1
Above, we did not specify how $L_{\text{opt}}$ is obtained. Since there is no closed-form expression for $y^*$, which depends on $L$ nonlinearly, we can only solve (12) numerically. For instance, in the following numerical results, we employ the branch-and-bound method to find $L_{\text{opt}}$. In general, $L_{\text{opt}}$ depends on $\theta$, $\textsf{SNR}$, and $M$. Note that the rate expression in (12) is applicable for all $\theta$, in stark contrast to the results in [20], where a closed-form expression of the effective capacity is obtained only for small $\theta$. Also, we characterized the outage probability, which was not treated in [20]. Note also that the rate expression in (13) is different from the results in [21], where packet drop is not considered, and the results in [22], which are in matrix form based on a random walk model and a recurrence relation formulation.
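A simple, if crude, alternative to branch-and-bound for evaluating (12) is a one-dimensional grid search over $L$: for each candidate $L$, estimate the $p_m$ and $P_{\text{out}}$ by Monte Carlo, assemble the matrix in (3) and the diagonal matrix of goodput MGFs, and take the spectral radius. The sketch below makes the same Rayleigh-fading assumption as before and uses illustrative parameters; it is only a numerical illustration, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_effective_capacity(theta, TB, snr, M, L_grid, n_trials=100_000):
    """Grid-search evaluation of (12): max over L of -log sp{P phi(-theta)} / (theta*TB)."""
    z = rng.exponential(1.0, size=(M, n_trials))             # Rayleigh power gains per round
    acc = np.cumsum(TB * np.log2(1.0 + snr * z), axis=0)     # accumulated mutual information
    best = 0.0
    for L in L_grid:
        fail = np.mean(acc < L, axis=1)                      # Pr{sum_{i=0}^{m} ... < L}, m = 0..M-1
        if np.any(fail[:M - 2] == 0.0):                      # guard against 0/0 in (5)
            continue
        P_out = fail[M - 1]                                  # outage after M transmissions
        p = np.empty(M - 1)                                  # failure probabilities p_0..p_{M-2}
        p[0] = fail[0]
        p[1:] = fail[1:M - 1] / fail[:M - 2]
        P = np.zeros((M, M))                                 # transition matrix of (3)
        P[0, :M - 1] = 1.0 - p
        P[0, M - 1] = 1.0
        P[np.arange(1, M), np.arange(M - 1)] = p
        phi = np.ones(M)                                     # goodput MGFs evaluated at -theta
        phi[0] = P_out + (1.0 - P_out) * np.exp(-theta * L)  # State 0: L bits w.p. 1 - P_out
        sp = np.max(np.abs(np.linalg.eigvals(P * phi)))      # spectral radius of P diag(phi)
        best = max(best, -np.log(sp) / (theta * TB))         # candidate rate in bps/Hz
    return best

L_grid = np.linspace(50.0, 800.0, 60)
print(outage_effective_capacity(theta=0.01, TB=100.0, snr=10.0, M=4, L_grid=L_grid))
```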
In the absence of statistical QoS constraints, we have the following result.
###### Proposition 1
As $\theta\to 0$, the outage effective capacity with HARQ protocols is given by
$$\lim_{\theta\to 0}R_{\text{out}}(\textsf{SNR},\theta)=\frac{L_{\text{opt}}}{TB}\,\frac{1-P_{\text{out,opt},M}}{\sum_{m=0}^{M-1}P_{\text{out,opt},m}}\ \text{bps/Hz}. \qquad (15)$$
Proof: See Appendix B.
###### Remark 2
Note that the outage effective capacity approaches the maximum goodput of the HARQ protocols, i.e., $\max_{L\ge 0}R_{\text{HARQ}}$, as the statistical QoS constraints vanish.
###### Remark 3
The result in Theorem 1 is generic, and can be applied to other HARQ protocols as well, e.g., HARQ-T1, where the transmitter sends the same packet in each frame during retransmissions and the receiver decodes the packet successfully if the instantaneous channel rate is greater than the transmission rate, and HARQ-chase combining (HARQ-CC), where the receiver can make use of the received signals in the previous frames through maximum ratio combining. Note that it has been verified that HARQ-IR performs better than the other schemes under statistical delay constraints [20]. As $M\to\infty$, the outage probability vanishes and the outage effective capacity is exactly the constant arrival rate supported by the departure processes. Here, we give an example of HARQ-T1 when $M\to\infty$.
###### Proposition 2
For the fixed rate transmissions with the HARQ-T1 protocol, the outage effective capacity for a given QoS exponent $\theta$ and $\textsf{SNR}$ approaches the following value as $M\to\infty$:
$$\lim_{M\to\infty}R_{\text{out}}(\textsf{SNR},\theta)=\max_{L\ge 0}\left\{-\frac{1}{\theta TB}\log\left(1-\Pr\left\{z>\frac{2^{L/TB}-1}{\textsf{SNR}}\right\}\left(1-e^{-\theta L}\right)\right)\right\}. \qquad (16)$$
Proof: See Appendix C.
Obviously, (16) coincides with the result in [17, (11)].
## IV Outage Effective Capacity in Buffer-Aided Diamond Relay Systems
In this section, we first briefly discuss the statistical delay constraints for two-hop links, and then define and characterize the outage effective capacity for the buffer-aided diamond relay systems under consideration.
### IV-A Statistical Delay Constraints for Two-Hop Links
In this work, we seek to identify the maximum constant arrival rate to the source that can be supported by the goodput processes successfully received at the destination of the diamond relay system using HARQ-IR while satisfying the statistical delay constraints. Therefore, we need to guarantee that the data transmission of all information flows satisfies the statistical delay constraints. Since there is no link between the relays, we have at most two concatenated queues for each information flow. Consider two concatenated queues with statistical queueing constraints specified by $\theta_1$ and $\theta_2$, for queue 1 and queue 2, respectively. Given the queueing constraints specified by $\theta_1$ and $\theta_2$ with (9) satisfied for each queue, we define
$$J_1(\theta_1)=-\Lambda_{C,1}(-\theta_1),\quad\text{and}\quad J_2(\theta_2)=-\Lambda_{C,2}(-\theta_2), \qquad (17)$$
where $\Lambda_{C,1}$ and $\Lambda_{C,2}$ are the LMGFs of the service rates of queue 1 and queue 2, respectively. For data going through both queues, the end-to-end queueing delay violation probability can be characterized as
$$\Pr\{D_1+D_2>D_{\max}\}\doteq 1-\int_0^{D_{\max}}\int_0^{D_{\max}-D_1}p_D(D_1)\,p_D(D_2)\,dD_2\,dD_1=\begin{cases}\dfrac{J_1(\theta_1)e^{-J_2(\theta_2)D_{\max}}-J_2(\theta_2)e^{-J_1(\theta_1)D_{\max}}}{J_1(\theta_1)-J_2(\theta_2)}, & J_1(\theta_1)\ne J_2(\theta_2)\\[2mm]\left(1+J_1(\theta_1)D_{\max}\right)e^{-J_1(\theta_1)D_{\max}}, & J_1(\theta_1)=J_2(\theta_2).\end{cases} \qquad (18)$$
Thereby, we need to guarantee that
$$\Pr\{D_1+D_2>D_{\max}\}\le\varepsilon. \qquad (19)$$
In this way, we can guarantee that the data transmissions through the relays, i.e., the information flows over the two queues at the source and the relays, satisfy the statistical delay constraints. Then, the delay constraints of the whole system can be satisfied. Note that $(\varepsilon,D_{\max})$ characterizes the statistical delay constraints with maximum delay violation probability $\varepsilon$ and maximum delay $D_{\max}$. To facilitate the following analysis, we need the following tradeoff between the delay exponents of any two concatenated queues, i.e., $J_1(\theta_1)$ and $J_2(\theta_2)$.
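The two-queue bound in (18) is straightforward to evaluate; the helper below (a sketch, with the equal-exponent case handled separately) returns the end-to-end delay violation probability for given delay exponents and deadline.

```python
import math

def e2e_delay_violation(J1, J2, D_max):
    """Evaluate the right-hand side of (18) for delay exponents J1, J2 > 0."""
    if math.isclose(J1, J2, rel_tol=1e-9):
        return (1.0 + J1 * D_max) * math.exp(-J1 * D_max)
    return (J1 * math.exp(-J2 * D_max) - J2 * math.exp(-J1 * D_max)) / (J1 - J2)

# Example: check whether a pair of exponents meets a target epsilon as in (19)
print(e2e_delay_violation(J1=0.5, J2=0.8, D_max=20.0) <= 0.01)
```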
###### Lemma 1 ([23])
Consider the following function
$$\vartheta\left(J_1(\theta_1),J_2(\theta_2)\right)=\frac{J_2(\theta_2)e^{-J_1(\theta_1)D_{\max}}-J_1(\theta_1)e^{-J_2(\theta_2)D_{\max}}}{J_2(\theta_2)-J_1(\theta_1)}=e^{-J_0 D_{\max}}=\varepsilon,\quad\text{for }0\le\varepsilon\le 1, \qquad (20)$$
where $J_0$ is defined as the statistical delay exponent associated with $\varepsilon$. Denoting $J_2(\theta_2)=\Phi(J_1(\theta_1))$ as a function of $J_1(\theta_1)$, we have
1. $\Phi(J_1(\theta_1))$ is continuous. For $J_1(\theta_1)=J_{th}(\varepsilon)$, we have
Φ(J1(θ1))=Jth(ε), (21)
where
$$J_{th}(\varepsilon)=-\frac{1}{D_{\max}}\left(1+W_{-1}\left(-\frac{\varepsilon}{e}\right)\right), \qquad (22)$$
where $W_{-1}(\cdot)$ is the lower branch of the Lambert W function, which is the inverse function of $xe^{x}$ in the range $x\le -1$.
2. $\Phi(J_1(\theta_1))$ is strictly decreasing in $J_1(\theta_1)$.
3. $\Phi(J_1(\theta_1))$ is convex in $J_1(\theta_1)$.
4. $\Phi(J_1(\theta_1))\to\infty$ as $J_1(\theta_1)\to-\frac{\log\varepsilon}{D_{\max}}$, and $\Phi(J_1(\theta_1))\to-\frac{\log\varepsilon}{D_{\max}}$ as $J_1(\theta_1)\to\infty$.
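The threshold exponent in (22) can be computed with the lower branch of the Lambert W function. The sketch below uses scipy with illustrative values of $\varepsilon$ and $D_{\max}$, and also checks that substituting $J_1=J_2=J_{th}(\varepsilon)$ into (18) indeed returns $\varepsilon$.

```python
import numpy as np
from scipy.special import lambertw

def j_threshold(eps, D_max):
    """J_th(eps) = -(1 + W_{-1}(-eps/e)) / D_max, as in (22)."""
    return -(1.0 + lambertw(-eps / np.e, k=-1).real) / D_max

eps, D_max = 0.05, 20.0                       # illustrative values only
J_th = j_threshold(eps, D_max)
# With equal exponents J1 = J2 = J_th, the expression in (18) evaluates back to eps
check = (1.0 + J_th * D_max) * np.exp(-J_th * D_max)
print(J_th, check)                            # check is numerically equal to eps
```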
### IV-B Effective Capacity Analysis of Diamond-Relay Systems
Let us define $\theta_1$, $\theta_{r_1}$ and $\theta_{r_2}$ as the statistical queueing constraints at the source and the relays, respectively. For the different information flows over the relays, we will have different two-hop channels with queueing constraints $(\theta_1,\theta_{r_1})$ and $(\theta_1,\theta_{r_2})$, respectively. Assume that the equivalent constant arrival rate at the source is $R$. Consider any realization of any two concatenated queues. Denote $\Omega$ as the set of pairs $(\theta_1,\theta_2)$ such that (19) can be satisfied. To satisfy the queueing constraint at queue 1, i.e., the queue at the source, we should have $\theta_1\le\tilde{\theta}$, where $\tilde{\theta}$ is the solution to
$$R=\frac{-\Lambda_{C,1}(-\tilde{\theta})}{\tilde{\theta}}, \qquad (23)$$
and $\Lambda_{C,1}$ is the LMGF of the goodput service process of queue 1, i.e., the service processes successfully received at queue 2 (any relay).
Also, in order to satisfy the queueing constraint of queue 2, we must have that $\theta_2$ does not exceed the solution $\theta$ to
ΛA,2(θ)+ΛC,2(−θ)=0. (24)
where $\Lambda_{A,2}$ is the LMGF of the goodput arrivals at queue 2, and $\Lambda_{C,2}$ is the LMGF of the goodput service of queue 2, i.e., the service processes successfully received at the destination. Note that we need to consider the queues at the relays together with the queue at the source.
Denote $\Omega$ as the set of pairs $(\theta_1,\theta_2)$ of the two concatenated buffers such that (19) can be satisfied. Now, the outage effective capacity of the buffer-aided diamond relay system under statistical delay constraints can be formulated as follows.
###### Definition 1
The outage effective capacity of the buffer-aided diamond relay system with statistical delay constraints specified by $(\varepsilon,D_{\max})$ is given by
$$R(\varepsilon,D_{\max})=\sup_{(\theta_1,\theta_{r_1})\in\Omega,\ (\theta_1,\theta_{r_2})\in\Omega}R. \qquad (25)$$
Hence, outage effective capacity is now the maximum constant arrival rate that can be supported by the goodput processes successfully received at the destination of the diamond relay system under statistical delay constraints.
### IV-C Outage Effective Capacity of Diamond-Relay Links with HARQ-IR
In this part, we study the performance of HARQ-IR in the buffer-aided diamond-relay channels. We assume that common messages are sent to the relays and the relays cooperate in the information delivery to the destination such that the queue dynamics at the relays are the same. We consider the end-to-end delay constraints, and identify the maximum constant arrival goodput to the source and the end-to-end outage probability while satisfying the statistical delay constraints.
#### IV-C1 Decode-and-Forward (DF)
As a comparison, we consider the decode-and-forward (DF) scheme [12], in which case the CSI is also available at the transmitter for each link and each relay must successfully decode the common message transmitted by the source node, and later the relays can cooperatively beamform their transmissions to the destination. We assume that the transmission power levels at the source and relays are fixed and hence no power control is employed (i.e., nodes are subject to short-term power constraints). We further assume that the channel capacity for each link can be achieved, i.e., the service processes are equal to the instantaneous Shannon capacities of the links such that there is no decoding error. Then, the service rate leaving the queue at the source is given by
$$C_s=TB\log_2\left(1+\textsf{SNR}_s\min\{z_{sr_1},z_{sr_2}\}\right). \qquad (26)$$
Also, the rates leaving the queues at the relays are the same, and are given by
$$C_{r_1}=C_{r_2}=TB\log_2\left(1+\left(\sqrt{\textsf{SNR}_{r_1}z_{r_1d}}+\sqrt{\textsf{SNR}_{r_2}z_{r_2d}}\right)^2\right). \qquad (27)$$
Above, the rates are given in terms of bits/block. Note that the arrival rates and departure rates of the queues at the relays are always the same, and hence the queueing activities have the same pattern. Therefore, the system simplifies to the two-hop channel. Then, we can obtain the effective capacity similar to the discussions in [23]. In this scheme, the end-to-end outage probability is zero, i.e., all departure processes can be successfully received at the destination.
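As a small illustration of (26)–(27), the sketch below draws one set of Rayleigh-faded gains and computes the per-frame service rates of the two DF queues; all SNR and frame-length values are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
TB = 100.0                                 # symbols per frame (illustrative)
snr_s, snr_r1, snr_r2 = 10.0, 5.0, 5.0     # illustrative SNRs

z_sr1, z_sr2, z_r1d, z_r2d = rng.exponential(1.0, size=4)

# Eq. (26): the broadcast to both relays is limited by the weaker source-relay link
C_s = TB * np.log2(1.0 + snr_s * min(z_sr1, z_sr2))
# Eq. (27): coherent beamforming of the two relays toward the destination
C_r = TB * np.log2(1.0 + (np.sqrt(snr_r1 * z_r1d) + np.sqrt(snr_r2 * z_r2d)) ** 2)
print(C_s, C_r)
```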
#### IV-C2 HARQ-IR
We assume perfect CSI is available only at the receiver for each link, in which case HARQ-IR is incorporated for the transmissions. Similar to the discussion in Section II-B, we first assume that a packet of $L$ bits is intended to be transmitted in each frame for each hop and obtain the outage effective capacity associated with $L$. Then, we optimize over $L$ to find the optimal $L_{\text{opt}}$ that leads to the maximum outage effective capacity.
The operations of HARQ-IR can be described as follows:
1. In the first hop, the source tries to send the same information to the relays. Note that only after reception of ACKs from all relays can the packet be removed from the buffer, and the source then attempts to send $L$ new bits in the next frame. Again, we model the source buffer activity at the end of each frame as a discrete-time Markov process. Define $p_{s,m}$ as the decoding failure probability at the $m$-th retransmission such that the system enters State $m+1$ with probability $p_{s,m}$, while the system enters State 0 with probability $1-p_{s,m}$. On the other hand, regardless of the decoding result at the end of the $(M-1)$-th retransmission, the system goes to State 0 with probability 1 since the maximum number of transmissions $M$ is reached and the packet is removed immediately from the source buffer. The state transition matrix can be expressed similar to (3) with the $p_{s,m}$ values instead. For each relay, we know that the relay can successfully decode the packet after the $m$-th ($0\le m\le M-1$) retransmission only if the following condition is satisfied
$$L\le\sum_{i=0}^{m}TB\log_2(1+\textsf{SNR}\,z_{sr_j,i}),\quad j=1,2. \qquad (28)$$
Therefore, we can express the state transition probabilities as follows (a Monte Carlo sketch of these probabilities is given after this list):
$$p_{s,0}=\Pr\left\{\left\{z_{sr_1}<\tfrac{2^{L/TB}-1}{\textsf{SNR}}\right\}\bigcup\left\{z_{sr_2}<\tfrac{2^{L/TB}-1}{\textsf{SNR}}\right\}\right\}=1-\Pr\left\{z_{sr_1}\ge\tfrac{2^{L/TB}-1}{\textsf{SNR}}\right\}\Pr\left\{z_{sr_2}\ge\tfrac{2^{L/TB}-1}{\textsf{SNR}}\right\} \qquad (29)$$
and for $1\le m\le M-2$, we have
$$p_{s,m}=\frac{\Pr\left\{\left\{L>\sum_{i=0}^{m}TB\log_2(1+\textsf{SNR}\,z_{sr_1,i})\right\}\bigcup\left\{L>\sum_{i=0}^{m}TB\log_2(1+\textsf{SNR}\,z_{sr_2,i})\right\}\right\}}{\Pr\left\{\left\{L>\sum_{i=0}^{m-1}TB\log_2(1+\textsf{SNR}\,z_{sr_1,i})\right\}\bigcup\left\{L>\sum_{i=0}^{m-1}TB\log_2(1+\textsf{SNR}\,z_{sr_2,i})\right\}\right\}} \qquad (30)$$
$$=\frac{1-\Pr\left\{L\le\sum_{i=0}^{m}TB\log_2(1+\textsf{SNR}\,z_{sr_1,i})\right\}\Pr\left\{L\le\sum_{i=0}^{m}TB\log_2(1+\textsf{SNR}\,z_{sr_2,i})\right\}}{1-\Pr\left\{L\le\sum_{i=0}^{m-1}TB\log_2(1+\textsf{SNR}\,z_{sr_1,i})\right\}\Pr\left\{L\le\sum_{i=0}^{m-1}TB\log_2(1+\textsf{SNR}\,z_{sr_2,i})\right\}}. \qquad (31)$$
2. In the second hop, the relays attempt to send the same message to the destination. Following the idea of treating the relays as distributed antennas, we can adopt the Alamouti scheme to improve the achievable rate. Specifically, we divide the frame into two slots of equal length $T/2$. In one slot, relay 1 sends message $x_1$, and relay 2 sends message $x_2$. In the other slot, relay 1 sends $-x_2^*$, and relay 2 sends $x_1^*$. Then, the achievable rate for the second hop in each frame can be expressed as
$$R=TB\log_2\left(1+\textsf{SNR}_{r_1}z_{r_1d}+\textsf{SNR}_{r_2}z_{r_2d}\right)\ \text{bits/block}. \qquad (32)$$
Note that the arrival rates and departure rates of the queues at the relays are always the same, and hence the queueing activities have the same pattern. Now, for the Markov process associated with the buffer activities at the relays, we have the state transition matrix with state transition probabilities $p_{r,0}=\Pr\{L>TB\log_2(1+\textsf{SNR}_{r_1}z_{r_1d,0}+\textsf{SNR}_{r_2}z_{r_2d,0})\}$, and for $1\le m\le M-2$,
$$p_{r,m}=\frac{\Pr\left\{L>\sum_{i=0}^{m}TB\log_2\left(1+\textsf{SNR}_{r_1}z_{r_1d,i}+\textsf{SNR}_{r_2}z_{r_2d,i}\right)\right\}}{\Pr\left\{L>\sum_{i=0}^{m-1}TB\log_2\left(1+\textsf{SNR}_{r_1}z_{r_1d,i}+\textsf{SNR}_{r_2}z_{r_2d,i}\right)\right\}}.$$
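As referenced in item 1 of the list above, the first-hop transition probabilities in (29)–(31) involve the union of the two relays' decoding-failure events, which makes them convenient to estimate by simulation. The following sketch assumes i.i.d. unit-mean Rayleigh fading on both source-relay links and illustrative parameters; the function name is mine, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

def first_hop_failure_probs(L, TB, snr, M, n_trials=200_000):
    """Estimate p_{s,0}, ..., p_{s,M-2} of (29)-(31) for the broadcast phase."""
    z1 = rng.exponential(1.0, size=(M, n_trials))     # source -> relay 1 power gains
    z2 = rng.exponential(1.0, size=(M, n_trials))     # source -> relay 2 power gains
    acc1 = np.cumsum(TB * np.log2(1.0 + snr * z1), axis=0)
    acc2 = np.cumsum(TB * np.log2(1.0 + snr * z2), axis=0)
    # Failure after m+1 transmission rounds: at least one relay has not decoded yet
    fail = np.mean((acc1 < L) | (acc2 < L), axis=1)
    p = np.empty(M - 1)
    p[0] = fail[0]                                    # eq. (29)
    p[1:] = fail[1:M - 1] / fail[:M - 2]              # eqs. (30)-(31)
    return p

print(first_hop_failure_probs(L=400.0, TB=100.0, snr=10.0, M=4))
```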
We can obtain the statistical delay exponent for each hop as
$$J_1(\theta_1)=-\Lambda_{C,1}(-\theta_1)=-\log\mathrm{sp}\{\mathbf{P}_s\phi_s(-\theta_1)\} \qquad (33)$$
$$J_2(\theta_2)=-\Lambda_{C,2}(-\theta_2)=-\log\mathrm{sp}\{\mathbf{P}_r\phi_r(-\theta_2)\} \qquad (34)$$
where $\phi_s(\theta)$ and $\phi_r(\theta)$ are diagonal matrices with each component given by the moment generating function of the goodput process in the corresponding state of the Markov processes $\mathbf{P}_s$ and $\mathbf{P}_r$, and $P_{\text{out},s}$ and $P_{\text{out},r}$ denote the outage probabilities of the first and second hop, respectively.
Given $L$, we denote the maximum delay exponents of the first and second hop as the values of $J_1(\theta_1)$ and $J_2(\theta_2)$ obtained as the statistical queueing constraints approach infinity. We can show the following results.
###### Proposition 3
With the HARQ-IR protocol, the maximum delay exponents of both hops are finite if $p_{s,0}>0$ and $p_{r,0}>0$.
Proof: First, it can be easily verified that $J(\theta)=\theta L$ if $p_0=0$, since $P_{\text{out}}=0$ in that case. As $\theta\to\infty$, we can see from (14) that $y^*$ will be the solution to the following equation
$$\lim_{\theta\to\infty}f(y)=y^{M}-\frac{1-p_0}{p_0}y^{M-1}-\sum_{m=1}^{M-2}\frac{(1-p_m)p_{m-1}\cdots p_1}{p_0^{m}}P_{\text{out}}^{m}\,y^{M-1-m}-\frac{p_{M-2}\cdots p_1}{p_0^{M-1}}P_{\text{out}}^{M-1}=0. \qquad (35)$$
Obviously, $y^*$ approaches some finite value. Hence, $\lim_{\theta\to\infty}-\log\mathrm{sp}\{\mathbf{P}\phi(-\theta)\}$ is finite, which implies that the maximum delay exponents of the first and second hop are finite.
###### Remark 4
Note that $p_0>0$ means that the possibility of failure to decode the packet in the first transmission is not zero. For fading distributions such as Rayleigh and Nakagami-m, we can see that $p_0$ is greater than zero for all finite $\textsf{SNR}$ and $L>0$.
Define
$$\Omega_\varepsilon=\left\{(\theta_1,\theta_2):\ J_1(\theta_1)\ \text{and}\ J_2(\theta_2)\ \text{are solutions to (19) with equality}\right\}.$$
With the above characterizations, we can obtain the following results.
###### Theorem 2
Given $L$, the outage effective capacity of the buffer-aided diamond relay systems with the HARQ-IR strategy subject to statistical delay constraints specified by $(\varepsilon,D_{\max})$ is given by the following:
Case I: If the end-to-end delay constraint (19) cannot be satisfied by any pair $(\theta_1,\theta_2)$,
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=0, \qquad (36)$$
Case II: Otherwise,
Case II.a: If ,
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=\frac{J_1(\mathring{\theta}_1)}{\mathring{\theta}_1}, \qquad (37)$$
where $\mathring{\theta}_1$ is the smallest value of $\theta_1$ with $\theta_2$ satisfying
$$J_1(\theta_1)=J_2(\theta_2)+J_1(\theta_1-\theta_2). \qquad (38)$$
Case II.b: If ,
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=\frac{J_2(\breve{\theta}_2)}{\breve{\theta}_2}, \qquad (39)$$
where $(\breve{\theta}_1,\breve{\theta}_2)$ is the unique solution to
$$\frac{J_1(\theta_1)}{\theta_1}=\frac{J_2(\theta_2)}{\theta_2}, \qquad (40)$$
with $(\theta_1,\theta_2)\in\Omega_\varepsilon$.
Case II.c: If and ,
1. If ,
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=\frac{J_{th}(\varepsilon)}{\theta_{1,th}}, \qquad (41)$$
where $(\theta_{1,th},\theta_{2,th})$ is the unique solution pair to $J_1(\theta_{1,th})=J_{th}(\varepsilon)$ and $J_2(\theta_{2,th})=J_{th}(\varepsilon)$.
2. If ,
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=\frac{J_1(\mathring{\theta}_1)}{\mathring{\theta}_1}, \qquad (42)$$
where $\mathring{\theta}_1$ is the smallest value of $\theta_1$ with $\theta_2$ satisfying
$$J_1(\theta_1)=J_2(\theta_2)+J_1(\theta_1-\theta_2). \qquad (43)$$
3. If ,
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=\frac{J_2(\breve{\theta}_2)}{\breve{\theta}_2}, \qquad (44)$$
where $(\breve{\theta}_1,\breve{\theta}_2)$ is the unique solution to
$$\frac{J_1(\theta_1)}{\theta_1}=\frac{J_2(\theta_2)}{\theta_2}, \qquad (45)$$
with $(\theta_1,\theta_2)\in\Omega_\varepsilon$.
The associated end-to-end outage probability is given by
$$P_{\text{out}}=1-(1-P_{\text{out},s})(1-P_{\text{out},r}). \qquad (46)$$
Proof: See Appendix D.
###### Remark 5
Note that due to the outage events, it is possible that certain delay constraints may not be satisfied, e.g., Case I.
###### Proposition 4
The outage effective capacity of the buffer-aided diamond relay systems with the HARQ-IR strategy subject to statistical delay constraints specified by $(\varepsilon,D_{\max})$ can be expressed as
$$R_{\text{HARQ-IR}}(\varepsilon,D_{\max})=\max_{L\ge 0}R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L)=R_{\text{HARQ-IR}}(\varepsilon,D_{\max},L_{\text{opt}}). \qquad (47)$$
The associated optimal end-to-end outage probability for $L_{\text{opt}}$ is given by
$$P_{\text{out,opt}}=1-(1-P_{\text{out,opt},s})(1-P_{\text{out,opt},r}). \qquad (48)$$
###### Remark 6
Following reasoning similar to that in Appendix A, we can show that the outage effective capacity approaches 0 when $L\to 0$ or $L\to\infty$. So $L_{\text{opt}}$ is finite, and the approach in Remark 1 can be used here to derive $L_{\text{opt}}$.
## V Numerical Results
In the numerical results, we assume the fading distributions of all links follow independent Rayleigh fading with means , dB, ms, kHz, and s. Now, and are finite from Remark 4.
In Fig. 3, we plot the outage effective capacity as a function of the transmission rate $L$. In this figure, we assume dB, , and . We can see that the outage effective capacity is maximized at a finite value $L_{\text{opt}}$. Also, we can find that when $L$ is larger than a certain value, the outage effective capacity vanishes immediately. This is due to the fact that when $L$ is large enough, the outage probability of each hop can be so large that the end-to-end delay constraints cannot be satisfied, i.e., Case I of Theorem 2.
In Fig. 4, we plot the outage effective capacity as a function of the SNR at the relays. From the figure, it is interesting that the HARQ-IR based transmission scheme can achieve larger effective capacity compared with the DF protocol at relatively small SNR levels at the relays, albeit at the expense of outage. This is generally due to the fact that at smaller SNR values, the effective capacity is maximized at larger , or larger equivalently. In this case, the system enjoys the benefit of averaging over different channel realizations provided by HARQ-IR, which can lead to larger effective capacity. On the other hand, when the SNR at the relays becomes large, we can see from (27) that the service rate of the second hop of the DF protocol increases significantly compared with the one achieved with the HARQ-IR protocol in (32), which results in much looser delay constraints at the source, i.e., smaller , and hence the effective capacity of the DF protocol is larger.
In Fig. 5, we plot the outage effective capacity as $\varepsilon$ varies. We assume dB. We can find that the HARQ-IR based scheme achieves superior performance over the DF protocol when $\varepsilon$ is relatively small, i.e., under stringent end-to-end delay constraints. The reasoning behind this is similar to the previous finding: under relatively stringent constraints, the benefit provided by averaging over different channel realizations with HARQ-IR is more prominent. In Fig. 6, we plot the associated outage probability as $\varepsilon$ varies. We can find that as the delay constraints become more stringent, i.e., $\varepsilon$ decreases, the outage probability decreases. This makes sense, since a smaller outage probability implies fewer retransmissions and hence avoids build-up in the buffers. It is interesting that the optimal outage probability appears to be linear in the delay violation probability.
In Fig. 7, we plot the outage effective capacity as a function of $M$. We assume dB and . From the figure, we can see that the outage effective capacity of the buffer-aided diamond relay systems using HARQ-IR is increasing in $M$, similar to the findings in [20] for one-hop links.
## VI Conclusion
In this paper, we have investigated buffer-aided diamond relay systems with the truncated HARQ-IR protocol under delay constraints. We have assumed that only perfect CSI at the receiver side is available for each link, and that the transmitters send the information at a fixed rate. We have introduced the notion of outage effective capacity, which identifies the maximum constant arrival rate to the transmitter that can be supported by the goodput processes correctly received at the destination. We have characterized the outage effective capacity and the associated outage probability in buffer-aided diamond relay systems with HARQ-IR. Through numerical results, we have found that HARQ-IR achieves better performance than the DF protocol (which has perfect CSI at the transmitters as well) when the SNR at the relays is relatively small or when the delay constraints are stringent. It is interesting that the optimal end-to-end outage probability appears to be linear in the delay violation probability.
### A Proof of Theorem 1
Since we consider the goodput of the departures that can be successfully received at the receiver, for the state transition model in Fig. 2, outage occurs when the packet removed from the buffer (the departure into State 0) cannot be correctly received at the receiver. Then, such departure processes contribute nothing to the goodput. Note that outage occurs only at State $M-1$, i.e., upon decoding failure after the $M$-th transmission of the packet. We know that with fixed transmission rate $L$, the outage probability is given by
$$
\begin{aligned}
P_{\text{out}} &= \Pr\{\text{decoding failure after the }M\text{th transmission of the packet}\}\\
&= \Pr\{\text{decoding failure after the 1st transmission of the packet}\}\\
&\quad\times\Pr\{\text{decoding failure after the 2nd transmission of the packet}\mid\text{decoding failure after the 1st transmission of the packet}\}\\
&\quad\times\cdots\times\Pr\{\text{decoding failure after the }M\text{th transmission of the packet}\mid\text{decoding failure after the }(M-1)\text{th transmission of the packet}\}\\
&= p_0\times p_1\times\cdots\times p_{M-1}=\prod_{m=0}^{M-1}p_m \qquad (49)\\
&= \Pr\left\{\sum_{i=0}^{M-1}TB\log_2(1+\textsf{SNR}\,z_i)<L\right\}. \qquad (50)
\end{aligned}
$$
Obviously, $P_{\text{out}}$ varies with $L$. For the Markov model considered in (3), $L$ bits of goodput are removed from the buffer in State 0 with probability $1-P_{\text{out}}$, while 0 bits of goodput are removed with probability $P_{\text{out}}$.
In the following, we first obtain the associated achievable outage effective capacity with given $L$. Regarding the Markov modulated processes, we know that [26, Chapter 7, Example 7.2.7]
$$\frac{\Lambda(\theta)}{\theta}=\frac{1}{\theta}\log\mathrm{sp}\{\mathbf{P}\phi(\theta)\} \qquad (51)$$
where $\mathrm{sp}\{\cdot\}$ is the spectral radius of the matrix, $\mathbf{P}$ is the state transition probability matrix in (3), and $\phi(\theta)$ is a diagonal matrix with each component given by the moment generating function of the goodput process in the corresponding state. With the above characterization of the goodput process in State 0, we have
$$\phi_0(\theta)=P_{\text{out}}+(1-P_{\text{out}})e^{\theta L}. \qquad (52)$$
Note that 0 bits are removed in all other states. Then, we have $\phi_m(\theta)=1$ for $1\le m\le M-1$. We are interested in $\mathrm{sp}\{\mathbf{P}\phi(-\theta)\}$. Then, similar to [21, Appendix A], we can show that
$$\mathrm{sp}\{\mathbf{P}\phi(-\theta)\}=p_0\left(P_{\text{out}}+(1-P_{\text{out}})e^{-\theta L}\right)y^* \qquad (53)$$
where $y^*$ satisfies
$$y^{M}=\frac{1-p_0}{p_0}y^{M-1}+\sum_{m=1}^{M-2}\frac{(1-p_m)p_{m-1}\cdots p_1}{p_0^{m}}\left(P_{\text{out}}+(1-P_{\text{out}})e^{-\theta L}\right)^{m}y^{M-1-m}+\frac{p_{M-2}\cdots p_1}{p_0^{M-1}}\left(P_{\text{out}}+(1-P_{\text{out}})e^{-\theta L}\right)^{M-1}. \qquad (54)$$
In addition, we can show that there is only one unique real positive root of $f(y)$ defined in (14).
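Numerically, the root $y^*$ of (54) can be obtained from the polynomial form of $f(y)$ in (14): build the coefficient vector and keep the unique positive real root. The sketch below uses illustrative inputs; in practice the $p_m$ and $P_{\text{out}}$ would come from the channel model as in the earlier sketches.

```python
import numpy as np

def y_star(p, P_out, theta, L):
    """Unique positive real root of f(y) in (14), i.e., the fixed point of (54).

    p : [p_0, ..., p_{M-2}] failure probabilities; P_out : outage probability.
    """
    M = len(p) + 1
    g = P_out + (1.0 - P_out) * np.exp(-theta * L)    # MGF of the State-0 goodput at -theta
    coeffs = np.zeros(M + 1)                          # degree-M polynomial in y
    coeffs[0] = 1.0                                   # y^M
    coeffs[1] = -(1.0 - p[0]) / p[0]                  # y^{M-1}
    for m in range(1, M - 1):                         # terms y^{M-1-m}, m = 1..M-2
        prod = np.prod(p[1:m])                        # p_{m-1} * ... * p_1 (empty product = 1)
        coeffs[m + 1] = -(1.0 - p[m]) * prod / p[0] ** m * g ** m
    coeffs[M] = -np.prod(p[1:M - 1]) / p[0] ** (M - 1) * g ** (M - 1)   # constant term
    roots = np.roots(coeffs)
    real_pos = roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real
    return real_pos.max()

print(y_star(p=[0.6, 0.3, 0.1], P_out=0.01, theta=0.01, L=400.0))
```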
Here is a brief run-down of the formulas that apply to RLC tuned circuits. The basic configurations are: series RLC; parallel RLC; L and R in series with C in parallel; and R and C in parallel with L in series. For the first two configurations the resonant frequency can be calculated directly from L and C, but for the last two you also need to know the loss, which is due to R.

A series or parallel RLC circuit driven at its resonant frequency is known as a tuned circuit. Resonance occurs when the inductive reactance X_L equals the capacitive reactance X_C, giving the resonant frequency f_r = 1/(2π√(LC)). At series resonance the impedance is purely resistive and the current is maximum, I_max = V_s/R.

The bandwidth (BW) of a resonant circuit is the range of frequencies between the lower and upper −3 dB (half-power) points, i.e., the total band below and above resonance over which the current is at least 70.7 % of its resonant value. It is related to the quality factor by BW = f_r/Q, so the higher the Q, the narrower the bandwidth and the more selective the circuit. Selectivity is defined as the ratio of the resonant frequency f_r to the half-power bandwidth.

The Q (quality) factor measures how well the circuit stores energy: it is the ratio of energy stored to energy dissipated per cycle, and equivalently Q = f_r/BW. The concept was introduced by the engineer K. S. Johnson of the Western Electric Company while he was evaluating the performance and quality of different coils. For a series RLC circuit Q = (1/R)√(L/C); for a parallel RLC circuit the Q factor is the inverse of the series case, Q = R√(C/L), and the lower the parallel resistance, the more it damps the circuit and the lower the Q.

Two worked examples: (1) a parallel resonant circuit has Q = 20 and is resonant at ω₀ = 10,000 rad/s; if Z_in = 5 kΩ at ω = ω₀, find the width of the frequency band about resonance for which |Z_in| ≥ 3 kΩ. (2) An RLC series circuit has a 40.0 Ω resistor, a 3.00 mH inductor, and a 5.00 μF capacitor; (a) find the circuit's impedance at 60.0 Hz and at 10.0 kHz, and (b) if the source voltage is V_rms = 120 V, find I_rms at each frequency.

Connecting the capacitor and inductor in series forms a band-stop filter: at very high and very low frequencies the circuit acts like an open circuit, whereas at mid frequencies it acts as a short circuit, so it attenuates only the mid-band and passes all other frequencies. A band-pass filter with a quality factor greater than ten is called a narrow band-pass filter. For an RC low-pass filter, the rise time and bandwidth are related by BW ≈ 0.35/t_r.
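These relationships are easy to check numerically. The short sketch below (component values are illustrative, not taken from the examples above) computes the resonant frequency, series quality factor and bandwidth:

```python
import math

R, L, C = 50.0, 1e-3, 100e-9          # ohms, henries, farads (illustrative values)

f_r = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant frequency, Hz
Q = (1.0 / R) * math.sqrt(L / C)                 # quality factor of the series circuit
BW = f_r / Q                                     # half-power bandwidth, Hz

print(f"f_r = {f_r:.1f} Hz, Q = {Q:.2f}, BW = {BW:.1f} Hz")
```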
## The Annals of Mathematical Statistics
### Statistical Methods in Markov Chains
Patrick Billingsley
#### Abstract
This paper is an expository survey of the mathematical aspects of statistical inference as it applies to finite Markov chains, the problem being to draw inferences about the transition probabilities from one long, unbroken observation $\{x_1, x_2, \cdots, x_n\}$ on the chain. The topics covered include Whittle's formula, chi-square and maximum-likelihood methods, estimation of parameters, and multiple Markov chains. At the end of the paper it is briefly indicated how these methods can be applied to a process with an arbitrary state space or a continuous time parameter. Section 2 contains a simple proof of Whittle's formula; Section 3 provides an elementary and self-contained development of the limit theory required for the application of chi-square methods to finite chains. In the remainder of the paper, the results are accompanied by references to the literature, rather than by complete proofs. As is usual in a review paper, the emphasis reflects the author's interests. Other general accounts of statistical inference on Markov processes will be found in Grenander [53], Bartlett [9] and [10], Fortet [35], and in my monograph [18]. I would like to thank Paul Meier for a number of very helpful discussions on the topics treated in this paper, particularly those of Section 3.
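As a minimal illustration of the estimation problem described in the abstract (a sketch only, not material from the paper), the maximum-likelihood estimate of the transition probabilities from one long observation $\{x_1, x_2, \cdots, x_n\}$ is simply the matrix of normalized transition counts:

```python
import numpy as np

def mle_transition_matrix(chain, n_states):
    """MLE of the transition probabilities from a single long observation:
    p_hat_ij = n_ij / n_i, where n_ij counts transitions i -> j."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(chain[:-1], chain[1:]):
        counts[i, j] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# Example with a synthetic two-state chain
rng = np.random.default_rng(0)
P_true = np.array([[0.9, 0.1], [0.4, 0.6]])
x = [0]
for _ in range(10_000):
    x.append(rng.choice(2, p=P_true[x[-1]]))
print(mle_transition_matrix(x, 2))
```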
#### Article information
Source
Ann. Math. Statist., Volume 32, Number 1 (1961), 12-40.
Dates
First available in Project Euclid: 27 April 2007
Permanent link to this document
https://projecteuclid.org/euclid.aoms/1177705136
Digital Object Identifier
doi:10.1214/aoms/1177705136
Mathematical Reviews number (MathSciNet)
MR123420
Zentralblatt MATH identifier
0104.12802
JSTOR
# What is the intersection of the normalizers of all Sylow $p$-subgroups?

The intersection of all Sylow $p$-subgroups is generally denoted by $O_p(G)$, and it is one of the well-studied objects in group theory, as there are many theorems related to it.

Let $R$ be the intersection of the normalizers of all Sylow $p$-subgroups in $G$. It is easy to observe that $R$ is a characteristic subgroup of $G$ containing $O_p(G)$. I wonder about the properties of $R$ and its relation with $O_p(G)$.
If anyone can find or observe something about $R$, I would be thankful.
• Nice argument, $[R,P]\leq O_p(G)$. – mesel Feb 18 '14 at 13:38
• I haven't had any luck (or much time) getting anything more specific though. The last time I investigated, I didn't find any papers on it. $R$ is the kernel of the permutation/conjugation action of $G$ on its Sylow $p$-subgroups, so is definitely important. – Jack Schmidt Feb 18 '14 at 14:48
• I have observed that $P\cap R=O_p(G)$ for any Sylow $p$-subgroup. – mesel Feb 18 '14 at 15:54
• That's very good. That means $O_p(G)$ is the normal Sylow $p$-subgroup of $R$, so $R=Q \ltimes O_p(G)$ for some $p'$-subgroup $Q$. Now we (by which I probably mean you :-) just need to figure out what $Q$ looks like, either in terms of $G$ or in terms of its action on $O_p(G)$ or on $P$. – Jack Schmidt Feb 18 '14 at 16:11
For reference, here is what we've got so far:
Let $G$ be a finite group with Sylow $p$-subgroup $P$ and set $R= \bigcap N_G(P^g) = \bigcap N_G(P)^g$ to be the intersection of the Sylow $p$-normalizers.
$R$ is an important subgroup, it is the normal core of $N_G(P)$ and the kernel of the permutation action of $G$ on its Sylow $p$-subgroups. In particular, if one wanted to organize groups by how they acted on their Sylow $p$-subgroups, we'd need two invariants: (1) a transitive permutation group whose point stabilizer is the normalizer of a Sylow $p$-subgroup, and (2) $R$.
Consider $[P,R]$. Since $R$ normalizes $P$, we get $[R,P] \leq P$. However $R$ is itself a normal (characteristic) subgroup of $G$, so $[R,P] \leq R$ as well. In other words $[R,P] \leq R \cap P$.
Consider $R \cap P$, a $p$-subgroup normalizing each Sylow subgroup $P^g$. Since $(R \cap P) P^g$ is a subgroup, it is a $p$-subgroup, and so is actually equal to $P^g$, since $P^g$ is maximal amongst $p$-subgroups. Hence $R \cap P \leq P^g$ for all $g$. Taking the intersection we get $R \cap P \leq O_p(G)$. Since $O_p(G)$ is a $p$-subgroup of $R$ contained in $P$, we also get $O_p(G) \leq R \cap P$. Hence $R \cap P = O_p(G)$.
For any $G$-normal subgroup $X$, $X \cap P$ is a Sylow $p$-subgroup of $X$. Hence $O_p(G)$ is a normal $p$-subgroup of $R$. By Schur-Zassenhaus $R=Q \ltimes O_p(G)$ for some $p'$-subgroup $Q$. Since $[Q,P] \leq R\cap P = O_p(G)$, we get that $Q$ centralizes $P/O_p(G)$, but $Q$ need not centralize $O_p(G)$, lest it centralize all of $P$.
Indeed, I think one of the first things to decide is how much different $R$ is from $Z=\bigcap C_G(P^g) = \bigcap C_G(P)^g$. When $O_p(G)=1$, we get $R=Z$, so that $Q \leq Z$.
A survey of small groups (in progress) reveals a variety of structures of $R/Z$:
Amongst the isomorphism classes of groups $G$ with $|G|\leq 1000$ and the conjugation action of $G$ on its Sylow 3-subgroups isomorphic to $A_4$'s action, the quotients $R/Z$ occur with the following frequencies:
• $R=Z$, 1705 times
• $[R:Z] = 2$, 199 times
• $[R:Z] = 3$, 115 times
• $R/Z \cong C_4$, 5 times
• $R/Z \cong C_2 \times C_2$, 13 times
• $R/Z \cong S_3$, 49 times
• $R/Z \cong C_6$, 3 times
• $R/Z \cong C_8$, 1 times
• $R/Z \cong D_8$, 1 times
• $R/Z \cong Q_8$, 1 times
• $R/Z \cong C_3 \times C_3$, 151 times
• $R/Z \cong C_3 \times S_3$, 2 times
• $R/Z \cong \operatorname{Dih}(C_3 \times C_3)$, 17 times
• $R/Z \cong C_3 \ltimes C_9$, 9 times
• $R/Z \cong C_3 \ltimes (C_3 \times C_3)$, 26 times
• $R/Z \cong C_3 \times C_3 \times C_3$, 21 times
• $R/Z \cong \operatorname{Dih}(C_3 \times C_3 \times C_3)$, 1 times
• You could also see $R \cap P = O_p(G)$ from $N_G(P^g) \cap P = P^g \cap P$ (since $P^g$ is the unique Sylow subgroup of $N_G(P^g)$). – Mikko Korhonen Feb 21 '14 at 19:21
• (Sorry the examples were from another question of mesel. I'm redoing the census now.) – Jack Schmidt Feb 21 '14 at 19:44
• I wonder something: even though $O_p(G)$ is the intersection of all Sylow $p$-subgroups, it is known that when $G$ is solvable, the intersection of three suitable Sylow $p$-subgroups is already $O_p(G)$. Can we make a similar statement for normalizers of Sylow $p$-subgroups and $R$? – mesel Feb 21 '14 at 19:56
• The new census is still running, but it suggests R=Z and Q≤Z are not the standard. – Jack Schmidt Feb 21 '14 at 21:05
• @mesel: your intersection question can be answered inside $G/R$, and I think is called the size of the basis or stabilizer chain of the permutation group (an intersection of conjugates of $N_G(P)$ is an intersection of stabilizers, and so we are looking for a short list of points whose pointwise stabilizer is the trivial subgroup). So far no counterexamples. – Jack Schmidt Feb 21 '14 at 21:14
$R_p(G)$ is $p$-solvable, but for every odd prime $p$ there is a finite group $G$ such that $R_p(G)$ is not solvable.
We consider solvability properties of $R_p(G) = \bigcap\{ N_G(P^g) : g \in G \}$ where $P$ is a Sylow $p$-subgroup of $G$. By the previous answer, $R_p(G) = Q \ltimes O_p(G)$ is $p$-closed, so definitely $p$-solvable ($p$-length 1, even).
If $p=2$, then clearly $R_2(G)$ is solvable by Feit–Thompson's odd order theorem. In cases where $|G|\leq 1000$, $R_p(G)$ is always solvable. However, in general this need not be true, since we can take $R_p(G) = G$ by taking any $G$ with a normal Sylow $p$-subgroup. Any such group is $p$-solvable, but need not be solvable. For example $G=A_5 \times C_7$ and $p=7$ works.
A slightly less trivial example (the smallest perfect example in fact) is $G=A_5 \times \operatorname{GL}(3,2)$ with $p=5$ or $p=7$ and $R_p(G)$ is the coprime direct factor. The next smallest perfect example is $G=\operatorname{SL}(2,5) \ltimes \operatorname{GF}(11)^2$ with $p=11$, and then $O_p(G)=P=Z:=\bigcap\{ C_G(P^g) : g \in G\}$ is a Sylow $p$-subgroup, and $R_p(G)/O_p(G)=\operatorname{SL}(2,5)$ is $11$-solvable (being of order coprime to 11) but not solvable.
• :Thanks again Jack. – mesel Mar 25 '14 at 15:10
On the embedding of $R$ in $G$:
$R$ is similar to $O_p(G)$, and it is a fairly convenient fact that $O_p(G) = P \cap P^g$ for two Sylow $p$-subgroups of $G$ for many $G$ (for instance $G$ with abelian Sylow $p$-subgroups by Brodkey (1963), or $G$ $p$-solvable for $p>2$ by Itô (1958) and Robinson (1984)), and $O_p(G) = P \cap P^g \cap P^h$ for all finite $G$ by Mazurov-Zenkov (1995). In other words, the intersection of all Sylow $p$-subgroups is also the intersection of a few well chosen Sylow $p$-subgroups.
$R$ is the intersection of all Sylow $p$-normalizers, so it is a reasonable question whether $R$ is the intersection of just a few Sylow $p$-normalizers.
Itô's specific bound of 2 is not enough for $R$: Let $G_1= \operatorname{GL}(2,2) \times \operatorname{GL}(2,2)$, $V= \operatorname{GF}(2)^4$ its natural module, and $G=G_1 \ltimes V$ be the associated affine group. Then the natural action of $G$ (with $V$ a regular normal subgroup) is also the action of $G$ by conjugation on its Sylow 3-subgroups. By Itô (1958), $O_p(G)$ is the intersection of two of its Sylow $3$-subgroups, but a direct calculation shows $R$ is not the intersection of two Sylow $3$-normalizers (but is the intersection of 3).
There are 6 other examples with very similar behavior ($p$ odd, $G$ is $p$-solvable, $R$ is the intersection of three but not two Sylow $p$-normalizers; in each case $p=3$ and $G$ is actually solvable).
Brodkey's bound of $2$ is also not enough for $R$: let $G=A_5 \times D_{10}$ acting on its Sylow $2$-subgroups (not its natural action). Then $O_p(G)$ is equal to the intersection of (any) two Sylow $2$-subgroups, but again $R$ requires three Sylow $2$-normalizers. There are four other examples of $G$ with fewer than 30 Sylow $2$-subgroups, all of which are abelian, yet whose $R$ is not the intersection of any two Sylow $2$-normalizers; in each case $R$ is the intersection of three Sylow $2$-normalizers.
• This leaves open the possibility of a slightly larger bound. It would be nice to know if 4 is enough or if there is an unbounded sequence. – Jack Schmidt Feb 22 '14 at 6:26
• As far as I know, to say that three Sylow $p$-subgroups are enough to find $O_p(G)$ you must require $G$ to be solvable (there is no general bound proved). And Brodkey proved that if a Sylow $p$-subgroup is abelian then two of them are enough to find $O_p(G)$. – mesel Feb 22 '14 at 9:10
• @mesel: fixed. I include $O_p(G)$ citations and for Itô's and Brodkey's versions, examples where the analogues for $R$ don't hold. – Jack Schmidt Feb 22 '14 at 15:42
• Hrm, not sure how I missed it earlier but the product action $S_4 \wr S_2$ is the action on the Sylow $3$-subgroups by conjugation, and it requires 4 Sylow 3-normalizers. It is of course solvable. Examples with abelian Sylows seem common ($\operatorname{P\Gamma L}(2, 8)$ for instance). – Jack Schmidt Feb 22 '14 at 21:51
• The last thing I wonder: $F(G)$ is the product of $O_p(G)$ over all primes $p$ dividing $|G|$, and it is known that $F(G)$ is the largest nilpotent normal subgroup of $G$. Let us write $R_p$ instead of $R$ for a fixed prime $p$, and set $R(G)$ to be the product of all the $R_p$. If $R(G)\neq F(G)$ then $R(G)$ cannot be nilpotent. Can you observe anything about it? Should it be solvable? – mesel Feb 22 '14 at 22:13
|
|
# Tag Info
1 vote
### $\mathbb{Q}$ measure and $\mathbb{P}$ measure, trading strategy
As people pointed out, P and Q are different because of risk aversion. Under the P measure, payoffs in bad states are worth more, whereas the Q measure is risk neutral. This is similar to why people ...
|
|
Registered Extension Number
4
Revision
10
## Extension and Version Dependencies
• Requires Vulkan 1.0
• Requires VK_KHR_swapchain
• Requires VK_KHR_display
## Contact
Last Modified Date
2017-03-13
IP Status
No known IP claims.
Contributors
• James Jones, NVIDIA
• Jeff Vigil, Qualcomm
### Description
This extension provides an API to create a swapchain directly on a device’s display without any underlying window system.
### New Enum Constants
• VK_KHR_DISPLAY_SWAPCHAIN_EXTENSION_NAME
• VK_KHR_DISPLAY_SWAPCHAIN_SPEC_VERSION
• Extending VkResult:
• VK_ERROR_INCOMPATIBLE_DISPLAY_KHR
• Extending VkStructureType:
• VK_STRUCTURE_TYPE_DISPLAY_PRESENT_INFO_KHR
### Issues
1) Should swapchains sharing images each hold a reference to the images, or should it be up to the application to destroy the swapchains and images in an order that avoids the need for reference counting?
RESOLVED: Take a reference. The lifetime of presentable images is already complex enough.
2) Should the srcRect and dstRect parameters be specified as part of the presentation command, or at swapchain creation time?
RESOLVED: As part of the presentation command. This allows moving and scaling the image on the screen without the need to respecify the mode or create a new swapchain and presentable images.
3) Should srcRect and dstRect be specified as rects, or separate offset/extent values?
RESOLVED: As rects. Specifying them separately might make it easier for hardware to expose support for one but not the other, but in such cases applications must just take care to obey the reported capabilities and not use non-zero offsets or extents that require scaling, as appropriate.
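As a hedged illustration of this resolution (the snippet below is not part of the extension text; the swapchain, image index, queue and rectangle values are placeholder assumptions), the per-present rectangles are supplied by chaining a VkDisplayPresentInfoKHR into VkPresentInfoKHR::pNext:
VkDisplayPresentInfoKHR displayPresentInfo = {
    .sType      = VK_STRUCTURE_TYPE_DISPLAY_PRESENT_INFO_KHR,
    .srcRect    = { { 0, 0 }, { 1920, 1080 } },   /* placeholder source region  */
    .dstRect    = { { 0, 0 }, { 1920, 1080 } },   /* placeholder display region */
    .persistent = VK_FALSE,
};
VkPresentInfoKHR presentInfo = {
    .sType          = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR,
    .pNext          = &displayPresentInfo,        /* per-present move/scale     */
    .swapchainCount = 1,
    .pSwapchains    = &swapchain,                 /* assumed to exist           */
    .pImageIndices  = &imageIndex,                /* assumed to exist           */
};
vkQueuePresentKHR(presentQueue, &presentInfo);    /* presentQueue assumed to exist */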
4) How can applications create multiple swapchains that use the same images?
RESOLVED: By calling vkCreateSharedSwapchainsKHR.
An earlier resolution used vkCreateSwapchainKHR, chaining multiple VkSwapchainCreateInfoKHR structures through pNext. In order to allow each swapchain to also allow other extension structs, a level of indirection was used: VkSwapchainCreateInfoKHR::pNext pointed to a different structure, which had both sType and pNext members for additional extensions, and also had a pointer to the next VkSwapchainCreateInfoKHR structure. The number of swapchains to be created could only be found by walking this linked list of alternating structures, and the pSwapchains out parameter was reinterpreted to be an array of VkSwapchainKHR handles.
Another option considered was a method to specify a “shared” swapchain when creating a new swapchain, such that groups of swapchains using the same images could be built up one at a time. This was deemed unusable because drivers need to know all of the displays an image will be used on when determining which internal formats and layouts to use for that image.
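For orientation, a minimal sketch of the resulting call (this is not the removed specification sample; the device, display surfaces and image parameters are placeholder assumptions):
#define SWAPCHAIN_COUNT 2u                          /* one swapchain per display */
VkSwapchainCreateInfoKHR createInfos[SWAPCHAIN_COUNT];
for (uint32_t i = 0; i < SWAPCHAIN_COUNT; ++i) {
    createInfos[i] = (VkSwapchainCreateInfoKHR){
        .sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
        .surface          = displaySurfaces[i],     /* assumed display surfaces  */
        .minImageCount    = 3,
        .imageFormat      = VK_FORMAT_B8G8R8A8_UNORM,            /* placeholder  */
        .imageColorSpace  = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR,   /* placeholder  */
        .imageExtent      = { 1920, 1080 },                      /* placeholder  */
        .imageArrayLayers = 1,
        .imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT,
        .imageSharingMode = VK_SHARING_MODE_EXCLUSIVE,
        .preTransform     = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR,
        .compositeAlpha   = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR,
        .presentMode      = VK_PRESENT_MODE_FIFO_KHR,
        .clipped          = VK_TRUE,
    };
}
VkSwapchainKHR swapchains[SWAPCHAIN_COUNT];
/* All swapchains are created in a single call so the driver sees every display
   the shared presentable images will be used on. */
VkResult result = vkCreateSharedSwapchainsKHR(device, SWAPCHAIN_COUNT,
                                              createInfos, NULL, swapchains);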
### Examples
Note The example code for the VK_KHR_display and VK_KHR_display_swapchain extensions was removed from the appendix after revision 1.0.43. The display swapchain creation example code was ported to the cube demo that is shipped with the official Khronos SDK, and is being kept up-to-date in that location (see: https://github.com/KhronosGroup/Vulkan-Tools/blob/master/cube/cube.c).
### Version History
• Revision 1, 2015-07-29 (James Jones)
• Initial draft
• Revision 2, 2015-08-21 (Ian Elliott)
• Renamed this extension and all of its enumerations, types, functions, etc. This makes it compliant with the proposed standard for Vulkan extensions.
• Switched from “revision” to “version”, including use of the VK_MAKE_VERSION macro in the header file.
• Revision 3, 2015-09-01 (James Jones)
• Restore single-field revision number.
• Revision 4, 2015-09-08 (James Jones)
• Allow creating multiple swap chains that share the same images using a single call to vkCreateSwapChainKHR().
• Revision 5, 2015-09-10 (Alon Or-bach)
• Removed underscores from SWAP_CHAIN in two enums.
• Revision 6, 2015-10-02 (James Jones)
• Added support for smart panels/buffered displays.
• Revision 7, 2015-10-26 (Ian Elliott)
• Renamed from VK_EXT_KHR_display_swapchain to VK_KHR_display_swapchain.
• Revision 8, 2015-11-03 (Daniel Rakos)
• Updated sample code based on the changes to VK_KHR_swapchain.
• Revision 9, 2015-11-10 (Jesse Hall)
• Replaced VkDisplaySwapchainCreateInfoKHR with vkCreateSharedSwapchainsKHR, changing resolution of issue #4.
• Revision 10, 2017-03-13 (James Jones)
• Closed all remaining issues. The specification and implementations have been shipping with the proposed resolutions for some time now.
• Removed the sample code and noted it has been integrated into the official Vulkan SDK cube demo.
|
|
# Class 12 Geography Notes Chapter 2 The World Population
CBSE Class 12 Geography Notes Chapter 2 The World Population is part of Class 12 Geography Notes for Quick Revision. Here we have given NCERT Geography Class 12 Notes Chapter 2 The World Population.
## Geography Class 12 Notes Chapter 2 The World Population
Patterns of Population Distribution
• Population distribution means the way in which people are spread or arranged over the earth’s surface. Population is not evenly distributed: about 90 percent of the world’s population lives in about 10 percent of its land area.
• The 10 most populous countries of the world contribute about 60 per cent of the world’s population. Out of these 10 countries, 6 are located in Asia.
Density of Population
• This means the ratio between the number of people and the size of the land. It is usually measured in persons per sq km (density of population = population/area). Some areas are densely populated, like North-Eastern USA, North-Western Europe, and South, South-West and East Asia.
• Some areas are sparsely populated like near the polar areas and high rainfall zones near the equator while some areas have medium density like Western China, Southern India, Norway, Sweden, etc.
Factors Influencing Population Distribution
The population distribution is influenced by three factors i.e., geographical factors, economic factors and social and cultural factors.
Geographical Factors
Environmental or natural factors such as landforms, fertile soil, suitable climate for cultivation and availability of adequate source of fresh water are the geographical factors that affect the population distribution. Some geographical factors are:
Landforms: Flat plains and gentle slopes are preferred by people because these are favorable for the production of crops and for building roads and industries.
Climate: Areas with less seasonal variation attract more people.
Soil: Areas which have fertile loamy soil have more people living on them, as these can support intensive agriculture.
Water: People prefer to live in areas where fresh water is easily available, because it is the most important factor for life.
Economic Factors
Places having employment opportunities like mineral-rich areas, industrial units and urban centres have a high concentration of population. Some economic factors are:
Industrialisation: Industries provide job opportunities and attract large numbers of people.
Minerals: Mineral deposits attract industries; mining and industrial activities generate employment.
Urbanisation: Good civic amenities and the attraction of city life draw people to the cities.
Social and Cultural Factors
Places having religious importance and cultural significance are also very densely populated areas.
Population Growth
This refers to the change in number of inhabitants of a territory during a specific period of time. When change in population is expressed in percentage, then it is called Growth Rate of Population.
The increase in population obtained by taking the difference between births and deaths is called the Natural Growth of Population. There is also Positive Growth of Population, which happens when the birth rate is more than the death rate, and Negative Growth of Population, when the birth rate is lower than the death rate.
Components of Population Change
There are three components of population change i.e., births, deaths and migration.
Crude Birth Rate [CBR]
Number of births in a year per thousand of population is expressed as Crude Birth Rate (CBR). It is calculated as:
$$CBR=\frac { Bi }{ P } \times 1000$$
Here, Bi = live births during the year; P = mid-year population of the area.
Crude Death Rate (CDR)
Number of deaths in a year per thousand of population is expressed as Crude Death Rate (CDR). It is calculated as:
$$CDR=\frac { D }{ P } \times 1000$$
Here, D= Number of Deaths; P= Estimated mid-year population of that year.
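For instance (the figures below are invented purely for illustration): a town with a mid-year population of 50,000 that records 900 live births and 400 deaths in a year has
$$CBR=\frac { 900 }{ 50\,000 } \times 1000 = 18 \text{ per thousand}, \quad CDR=\frac { 400 }{ 50\,000 } \times 1000 = 8 \text{ per thousand}$$
so its natural growth for that year is 18 - 8 = 10 per thousand, i.e. 1%.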
Migration
It is the movement of people across regions on a permanent, temporary or seasonal basis. The place they move from is called the place of origin and the place they move to is called the place of destination.
Push and Pull Factors of Migration
The Push factors make the place of origin seem less attractive for reasons like unemployment, poor living conditions, political turmoil, unpleasant climate, natural disasters, epidemics and socio-economic backwardness.
The Pull factors make the place of destination seem more attractive than the place of origin for reasons like better job opportunities and living conditions, peace and stability, security of life and property and a pleasant climate.
Trends in Population Growth
• Trends show that initially the growth of population was very slow, but after improvements in science and technology there has been tremendous growth in population, which is called population explosion.
• About 8,000 to 12,000 years ago the world population was 8 million; now it has reached 7 billion.
• Every 12 years, 1 billion people are added. Increased agricultural and industrial production, inoculation against epidemics and improvements in medical facilities have reduced death rates.
Doubling Time of World Population
• Developed countries are taking more time to double their population as compared to developing countries.
• Oman, Saudi Arabia, Somalia, Liberia, Yemen have high population growth rates while Latvia, Estonia, Russia, Germany, etc have low growth rates.
Spatial Pattern of Population Change
• The world population growth rate is 1.4%. It is highest in Africa, at 2.6%, and lowest in Europe, at 0.0%, meaning the population there neither grows nor declines.
• Even a small annual rate applied to a very large population leads to a large population change. There is a negative correlation between economic development and population growth.
Impact of Population Change
High increase in population leads to problems like depletion of natural resources, unemployment and scarcity. Decline in population indicates that resources are insufficient to maintain a population.
Demographic Transition Theory
This theory studies the changes in the population of a region as it moves from high births and high deaths to low births and low deaths. This happens when a society progresses from rural agrarian and illiterate to urban, industrial and literate.
There is a three-stage model of the Demographic Transition Theory. The stages are:
First Stage
• This stage is marked by high fertility and high mortality rates, because people reproduce more to compensate for the deaths due to epidemics and variable food supply.
• People are poor, illiterate and mostly engaged in agriculture. Life expectancy is low and population growth is slow.
Second Stage
• The level of technology increases and other facilities such as medical care, health and sanitation improve, due to which the death rate falls.
• But the fertility rate and birth rate remain high, leading to a huge rise in population. Population expands rapidly as there is a wide gap between the birth and death rates.
Third Stage
• The birth and death rates both fall and the population moves towards stability.
• People become literate, urbanised and control the size of the family. Technology is also used judiciously.
Population Control Measures
• Family planning means spacing or preventing the births of children. Thomas Malthus's theory (1793) states that the number of people would grow faster than the food supply, thus leading to famine, diseases and war.
• Therefore, it is essential to control the population. This is undertaken through measures like creating awareness of family planning, free availability of contraceptives, tax disincentives and active propaganda.
We hope the given CBSE Class 12 Geography Notes Chapter 2 The World Population will help you. If you have any query regarding NCERT Geography Class 12 Notes Chapter 2 The World Population, drop a comment below and we will get back to you at the earliest.
|
|
# User:Amj
## My background
This is the first wiki I've ever edited. I hope that my contributions will be valuable and that nobody notices my inexperience.
I think the best thing I can do for Arch Linux at this moment is to translate pages into my mother tongue: Spanish.
And if you feel bored, try proving that this is wrong:
$-1 = i^2 = i * i = \sqrt{-1} * \sqrt{-1} = \sqrt{(-1)*(-1)} = \sqrt{1} = 1$
|
|
# For Samsung Galaxy A10 Case Silicone Phone Cover TPU Cases For Funda Samsung A 10 A10 2019 Back Case Soft Matte Bumper Coque
€0,01
Color:
Description
For Samsung Galaxy A10 Case Silicone Phone Cover TPU Cases For Funda Samsung A 10 A10 2019 Back Case Soft Matte Bumper Coque
Product Description
Return & Warranty
• 100% Secure payment with SSL Encryption.
• If you're not 100% satisfied, let us know and we'll make it right.
Shipping Policies
• Orders ship within 5 to 10 business days.
• Tip: Buying 2 products or more at the same time will save you quite a lot on shipping fees.
|
|
# How do you simplify (x + 2)(x - 2)?
May 7, 2018
$\left(x + 2\right) \left(x - 2\right) = {x}^{2} - 4$
#### Explanation:
Use the distributive property to multiply both terms in the first set of parenthesis by the terms in the second set:
First the $x$:
$x \cdot x = {x}^{2}$.
$x \cdot \left(- 2\right) = - 2 x$.
Now the $2$:
$2 \cdot x = 2 x$.
$2 \cdot \left(- 2\right) = - 4$.
We came up with four terms: ${x}^{2} - 2 x + 2 x - 4$.
Simplify the expression by adding the middle terms, and we get just ${x}^{2} - 4$.
This is a special kind of product of binomials (a binomial is an expression with two terms) called a difference of squares. Since $\left(x + 2\right)$ and $\left(x - 2\right)$ are the same except for the sign in between, the answer is just the first term squared (${x}^{2}$) minus the second term squared ($-\left({2}^{2}\right) = -4$).
Here's more about difference of squares: https://www.mathsisfun.com/definitions/difference-of-squares.html
|
|
# Difference between revisions of "Gmaas"
- gmaas is the Doctor Who lord; he sports Dalek-painted cars and eats human finger cheese and custard, plus black holes.
- themoocow is gmaas's cousin.
- Gmass is $\bold{not}$ Colonel Meow.
- Gmaas is 5space's favorite animal. (Source)
- He lives with sseraj.
- He is often overfed (with probability $\frac{3972}{7891}$), or malnourished (with probability $\frac{3919}{7891}$) by sseraj.
- He has $$\sum_{k=0}^{267794} (k+1)(k+2)+GMAAS$$ supercars, excluding the Purrari.
- He is an employee of AoPS.
- He is a gmaas with yellow fur and white hypnotizing eyes.
- He was born with a tail that is a completely different color from the rest of his fur.
- His stare is very hypnotizing and effective at getting table scraps.
- He sometimes appears several minutes before certain classes start as an admin.
- He died from too many Rubik's cubes in an Introduction to Algebra A class, but got revived by the Dark Lord at 00:13:37 AM the next day.
- It is uncertain whether or not he is a cat, or is merely some sort of beast that has chosen to take the form of a cat (specifically a Persian Smoke.)
- Actually, he is a cat. He said so. And science also says so. - He is distant relative of Mathcat1234
- He is very famous now, and mods always talk about him before class starts.
- His favorite food is AoPS textbooks, because they help him digest problems.
- Gmaas tends to reside in sseraj's fridge.
- Gmaas once ate all sseraj's fridge food, so sseraj had to put him in the freezer.
- The fur of Gmaas can protect him from the harsh conditions of a freezer.
- Gmaas sightings are not very common. There have only been 8 confirmed sightings of Gmaas in the wild.
- Gmaas is a sage omniscient cat.
- He is looking for suitable places other than sseraj's fridge to live in.
- Places where gmaas sightings have happened:
~The Royal Scoop ice cream store in Bonita Beach Florida
~MouseFeastForCats/CAT 8 Mouse Apartment 1083
~Alligator Swamp A 1072
~Alligator Swamp B 1073
~Introduction to Algebra A (1170)
~Welcome to Panda Town Gate 1076
~Welcome to Gmaas Town Gate 1221
~Welcome to Gmaas Town Gate 1125
~33°01'17.4"N 117°05'40.1"W (Rancho Bernardo Road, San Diego, CA)
~The other side of the ice in Antarctica
~Feisty Alligator Swamp 1115
~Introduction to Geometry 1221 (Taught by sseraj)
~Introduction to Counting and Probability 1142
~Feisty-ish Alligator Swamp 1115 (AGAIN)
~Intermediate Counting and Probability 1137
~Intermediate Counting and Probability 1207
~Posting student surveys
~USF Castle Walls 1203
~Dark Lord's Hut 1210
~AMC 10 Problem Series 1200
~Intermediate Number Theory 1138
~Introduction To Number Theory 1204. Date:7/27/16.
~Algebra B 1112
~33°81'199.4"N 167°05'45.1"W (Unknown location, please try again later)
~Ocelot Rainforest 1111
~Intermediate Counting and Probability 1137
~Intermediate Number Theory 1138 (AGAIN)
~AMC 10 Problem Series 1200 (AGAIN)
~Cat Gathering #339 1222
~Cat Gathering #312 1535
~Introduction to Geometry (1190)
~Hogwarts Introduction to Transfiguration (...16168)
~Cat Meow Meow Purr 1110
~Bilbo's Hobbit Hole 1234
~Pre-Algebra 1 with sseraj 1164. (Year: 2015) (gmaas code: 6666)
~Intermediate Counting and Probability 1137
~Intermediate Number Theory 1229 (AGAIN)
~Calculus 1226
~Gallifrey Meeting #123456, 1888.
~On his account of course.
~On FEFF once
~On Mt. Everest at 13:37 in the year 1337 on 1/3 37
~Everywhere in outer space
- These have all been designated as the most glorious sections of Aopsland now (especially the USF castle walls), but deforestation is so far from threatening the wild areas (i.e. Alligator Swamps A&B).
- Gmaas has also been sighted in Olympiad Geometry 1148.
- Gmaas has randomly been known to have sent his minions into Prealgebra 2 1163. However, the danger is passed, that class is over.
- Gmaas also has randomly appeared on top of the USF's Tribal Bases(he seems to prefer the Void Tribe). However, the next day there is normally a puddle in the shape of a cat's underbelly wherever he was sighted. Nobody knows what this does.
EDIT: Nobody has yet seen him atop a tribal base yet.
- Gmaas are often under the disguise of a penguin or cat. Look out for them.
EDIT: The above disguises are rare for a Gmaas, except the cat.
- He lives in the shadows. Is he a dream? Truth? Fiction? Condemnation? Salvation? AoPS site admin? He is all these things and none of them. He is... Gmaas.
EDIT: He IS an AoPS site admin.
- If you make yourself more than just a cat... if you devote yourself to an ideal... and if they can't stop you... then you become something else entirely. A LEGEND. Gmaas now belongs to the ages.
- Is this the real life? Is this just fantasy? No. This is gmaas, the legend.
- Gmaas might have been viewing (with a $\frac{99.999}{100}$ chance) the Ultimate Survival Forum. He (or is he a she?) is suspected to be transforming the characters into real life. Be prepared to meet your epic swordsman self someday. If you do a sci-fi version of USF, then prepare to meet your Overpowered soldier with amazing weapons one day.
- The name of Gmaas is so powerful, it radiates Deja Vu.
- Gmaas is on the list of "Elusive Creatures." If you have questions, or want the full list, contact moab33.
- Gmaas can be summoned using the $\tan(90)$ ritual. Draw a pentagram and write the numerical value of $\tan(90)$ in the middle, and he will be summoned.
- Gmaas's left eye contains the singularity of a black hole. (Only when everyone in the world blinks at the same time within a nano-nano second.)
- Lord Grindelwald once tried to make Gmaas into a Horcrux, but Gmaas's fur is Elder Wand protected and secure, as Kendra sprinkled holly into his fur.
- The original owner of Gmaas is gmaas.
- Gmaas was not the fourth Peverell brother, but he ascended into higher being and now he resides in the body of a cat, as he was before. Is it a cat? We will know. (And the answer is YES.)
- It is suspected that Gmaas may be ordering his cyberhairballs to take the forums, along with microbots.
- The name of Gmaas is so powerful, it radiates Deja Mu.
- Gmaas rarely frequents the headquarters of the Illuminati. He is their symbol. Or he was, for one yoctosecond.
- It has been wondered if gmaas is the spirit of Obi-Wan Kenobi or Anakin Skywalker in a higher form, due to his strange capabilities and powers.
- Gmaas has a habit of sneaking into computers, joining The Network, and exiting out of some other computer.
- It has been confirmed that gmaas uses gmaal as his email service
- Gmaas enjoys wearing gmean shorts
- Gmaas has a bright orange tail with hot pink spirals. Or he had for 15 minutes.
- Gmaas is well known behind his stage name, Michal Stevens (also known as Vsaace XD), or his page name, Purrshanks.
- Gmass rekt sseraj at 12:54 june 4, 2016 UTC time zone. And then the Doctor chased him.
- Gmaas watchers know that the codes above are NOT years. They are secret codes for the place. But if you've edited that section of the page, you know that.
- Gmaas is a good friend of the TARDIS and the Millenium Falcon.
- In the Dark Lord's hut, gmaas was seen watching Doctor Who. Anyone who has seem the Dark Lord's hut knows that both gmaas and the DL (USF code name of the Dark Lord) love BBC. How gmaas gave him a TV may be lost to history. And it has been lost.
- The TV has been noticed to be invincible. Many USF weapons, even volcano rings, have tried (and failed) to destroy it. The last time it was seen was on a Kobold display case outside of a mine. The display case was crushed, and a report showed a spy running off with a non-crushed TV.
- Gmaas is a Super Duper Uper Cat Time Lord. He has $57843504$ regenerations and has used $3$. $$9\cdot12\cdot2\cdot267794=57843504$$.
- Gmaas loves to eat turnips. At $\frac{13}{32}$ of the sites he was spotted at, he was seen with a turnip.
- Gmaas has three tails, one for everyday life, one for special occasions, and one that's invisible.
-Gmaas is a dangerous creature. If you ever meet him, immediately join his army or you will be killed.
-Gmaas is in alliance in the Cult of Skaro. How did he get an alliance with ruthless creatures that want to kill everything on sight? Nobody knows. (Except him.)
-Gmaas lives on Gallifrey.
-The native location of Gmaas is the twilight zone.
### gmaas in Popular Culture
- Currently, a book is being written (by JpusheenS) about the adventures of gmaas. It is aptly titled, "The Adventures of gmaas".
- BREAKING NEWS: tigershark22 has found a possible cousin to gmaas in Raymond Feist's book Silverthorn. They are mountain dwellers, gwali. Not much are known about them either, and when someone asked,"What are gwali?" the customary answer "This is gwali" is returned. Scientist 5space is now looking into it.
- Sullymath is also writing a book about Gmaas
- Potential sighting of gmaas [1]
- Gmaas has been spotted in some Doctor Who and Phineas and Ferb episodes, such as Aliens of London, Phineas and Ferb Save Summer, Dalek, Rollercoaster, Rose, Boom Town, The Day of The Doctor, Candace Gets Busted, and many more.
- Gmaas can be found in many places in Plants vs. Zombies Garden Warfare 2 and Bloons TD Battles
|
|
## Calculus Problem
For the vector field H = -yi + xj, find the line integral along the curve C from the origin along the x-axis to the point (14,0) and then counterclockwise around the circumference of the circle x^2 + y^2 = 196 to the point (14/ $$\sqrt{2}$$ , 14/ $$\sqrt{2}$$ ). Thanks
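A sketch of one way to work this (offered as an illustration, not as part of the original post): write the integral as $$\int_C -y\,dx + x\,dy$$ and split it over the two pieces of C. Along the x-axis from the origin to (14, 0) we have y = 0 and dy = 0, so that piece contributes 0. On the circular arc, parametrize x = 14cos(t), y = 14sin(t) with t running from 0 to $$\pi/4$$; then $$-y\,dx + x\,dy = 196\,dt$$, so that piece contributes $$196 \cdot \pi/4 = 49\pi$$. The whole line integral is therefore $$49\pi \approx 153.9$$.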
|
|
# Numerical solution by NDSolve has discontinuity
Posted 5 months ago
528 Views
|
2 Replies
|
3 Total Likes
|
For the given ODE $uu_{ttt}-u_{t}u_{tt} + u^3 u_{t}=0,$ I am trying to do a comparison between numerical and analytical results. Analytically, I got a sech solution, but when I solve numerically with NDSolve, it gives a discontinuity in the solution. Why?
2 Replies
Sort By:
Posted 8 days ago
I don't see any discontinuities when I run the code (V13.2). You cut off the plot range, which creates a break in the graph, but that is not a discontinuity in the solution. Extend the plot range to ±2.25 and you should see the entire solution.
Posted 8 days ago
Hi A, don't be surprised. The identity that produces Sech as the solution to the KdV equation involves the inversion of the elliptic integral F[ArcSinh[.]]; Sech arises as d/dx JacobiAmplitude[x, 1]. But the general JacobiAmplitude is probably ill defined in Mathematica. It has jumps by 4 Pi at the periodicity points +-4 K on the real line. All this mess stems from the fact that the JacobiAmplitude is a complex line integral over JacobiDN, which has a lattice of simple poles in the complex domain. The choice of the path of integration around all the logarithmic branch points is the decision of the IT specialist, who in general has as little knowledge about elliptic functions as most mathematicians. This ambiguity is at the heart of the impossibility of verifying solutions to our beloved solvable nonlinear PDE models, KdV and sine-Gordon.
In[28]:= DSolve[u'''[t] u[t] - u'[t]*u''[t] + u[t]^3 u'[t] == 0, u[t], t]
Out[28]= {{u[t] -> InverseFunction[-((2 I EllipticF[I ArcSinh[(Sqrt[1/(-C[2] + Sqrt[-C[1] + C[2]^2])] #1)/Sqrt[2]], (C[2] - Sqrt[-C[1] + C[2]^2])/(C[2] + Sqrt[-C[1] + C[2]^2])] Sqrt[1 + #1^2/(2 (-C[2] + Sqrt[-C[1] + C[2]^2]))] Sqrt[1 - #1^2/(2 (C[2] + Sqrt[-C[1] + C[2]^2]))])/(Sqrt[1/(-C[2] + Sqrt[-C[1] + C[2]^2])] Sqrt[-2 C[1] + 2 C[2] #1^2 - #1^4/2])) &][t + C[3]]},
 {u[t] -> InverseFunction[(2 I EllipticF[I ArcSinh[(Sqrt[1/(-C[2] + Sqrt[-C[1] + C[2]^2])] #1)/Sqrt[2]], (C[2] - Sqrt[-C[1] + C[2]^2])/(C[2] + Sqrt[-C[1] + C[2]^2])] Sqrt[1 + #1^2/(2 (-C[2] + Sqrt[-C[1] + C[2]^2]))] Sqrt[1 - #1^2/(2 (C[2] + Sqrt[-C[1] + C[2]^2]))])/(Sqrt[1/(-C[2] + Sqrt[-C[1] + C[2]^2])] Sqrt[-2 C[1] + 2 C[2] #1^2 - #1^4/2]) &][t + C[3]]}}
In[25]:= MapAll[FullSimplify[PowerExpand[Simplify[# /. {(C[2] - Sqrt[-C[1] + C[2]^2])/(C[2] + Sqrt[-C[1] + C[2]^2]) -> 4/\[Omega]^2, ((-C[2] + Sqrt[-C[1] + C[2]^2])^(-(1/2)) #1)/Sqrt[2] :> A #1}]]] &, DSolve[u'''[t] u[t] - u'[t]*u''[t] + u[t]^3 u'[t] == 0, u[t], t]]
Out[25]= $Aborted
In[9]:= u'''[t] u[t] - u'[t]*u''[t] + u[t]^3 u'[t] /. {u -> (Sech[#/2] &)} // FullSimplify
Out[9]= 0
In[27]:= D[JacobiAmplitude[x, 1], x]
Out[27]= Sech[x]
Regards Roland
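For reference, one minimal way to put a numerical solution and the analytical Sech branch side by side (a sketch only; the initial conditions below are chosen to select the Sech[t/2] branch and are not taken from the original post):
sol = NDSolve[{u[t] u'''[t] - u'[t] u''[t] + u[t]^3 u'[t] == 0,
    u[0] == 1, u'[0] == 0, u''[0] == -1/4}, u, {t, -15, 15}];
(* the dashed curve is Sech[t/2]; the two should coincide, with no jump *)
Plot[{Evaluate[u[t] /. First[sol]], Sech[t/2]}, {t, -15, 15},
  PlotRange -> All, PlotStyle -> {Automatic, Dashed}]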
|
|
# FixedWidthSplitter¶
class astropy.io.ascii.FixedWidthSplitter[source]
Split line based on fixed start and end positions for each col in self.cols.
This class requires that the Header class will have defined col.start and col.end for each column. The reference to the header.cols gets put in the splitter object by the base Reader.read() function just in time for splitting data lines by a data object.
Note that the start and end positions are defined in the pythonic style so line[start:end] is the desired substring for a column. This splitter class does not have a hook for process_lines since that is generally not useful for fixed-width input.
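FixedWidthSplitter is not normally instantiated by hand; it is driven by the fixed-width readers. A small usage sketch (the table content below is made up):
from astropy.io import ascii

# A '|'-delimited fixed-width table. The fixed_width reader determines each
# column's start/end positions from the header line and then uses
# FixedWidthSplitter to slice every data line at those positions.
lines = ['| Name | Score |',
         '| ab   |  1.20 |',
         '| cdef |  3.45 |']
table = ascii.read(lines, format='fixed_width', delimiter='|')
print(table)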
Attributes Summary
bookend
delimiter
delimiter_pad
Methods Summary
__call__(self, lines): Call self as a function.
join(self, vals, widths)
Attributes Documentation
bookend = False
delimiter = '|'
delimiter_pad = ''
Methods Documentation
__call__(self, lines)[source]
Call self as a function.
join(self, vals, widths)[source]
|
|
# Babel code in React component to display tasks
I'm trying to think of ways to clean up this component in my app so it looks more readable to other devs. This is the best I came up with so far. It's not that bad, but I think it could be a lot better. What do you guys think? How would you clean this code up to make it more readable?
const DisplayTasks = ({ tasksArray, removeTask, crossOutTask }) => {
return (
<div id="orderedList">
<ol>
{tasksArray.map((task, index) => (
<li onClick={ () => crossOutTask(index) } key={index} >
<button id="removeButton" onClick={ event => removeTask(event, index) } >
Remove
</button>
</li>
))}
</ol>
</div>
);
};
• You could remove the wrapping div with the id orderedList... The ol tag serves the same purpose, plus it's not a very specific name. I think it probably makes more sense to add a class to the ol with a value like `tasks`.
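Pulling those suggestions together, one possible shape is sketched below (an illustration only; it assumes each entry of tasksArray is directly renderable, which the question does not show):
const DisplayTasks = ({ tasksArray, removeTask, crossOutTask }) => (
  <ol className="tasks">
    {tasksArray.map((task, index) => (
      <li key={index} onClick={() => crossOutTask(index)}>
        {task}
        <button className="remove-button" onClick={event => removeTask(event, index)}>
          Remove
        </button>
      </li>
    ))}
  </ol>
);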
|
|
# HOW TO CHOOSE AN SHSAT TUTOR 2020
## Part 1: What should you pay for a 3-month SHSAT test prep program?
The history of SHSAT test prep is quirky and fragmented, with course prices ranging from free to nearly $5,000 for a summer camp program. These extremes can leave NYC parents less informed about what prices they should be paying for SHSAT prep. In part 1 of our series, How to Choose an SHSAT Tutor 2020, we will compare SHSAT prep to analogous college admissions prep where the market is larger, less fragmented, and priced more competitively to assess what NYC parents should be paying for their SHSAT prep. Let’s start with an analogous SAT prep program: one that mirrors the largest SHSAT prep programs. An obvious choice for comparison is Princeton Review’s SAT 1400+ program. Princeton Review is arguably the largest private SAT prep firm with over 20 years experience. Much like the larger, private SHSAT programs, the Princeton Review program prequalifies higher scoring candidates, provides a 3-month training program, and promises high scores—arguably similar to those exceeding the specialized high school cut-off levels. The advertised SAT prep cost:$1,374. Compare this to one large SHSAT prep provider also with 20 years experience that prequalifies candidates, provides a summer camp program, and promises a high success rate. The advertised SHSAT prep cost: $3,900-4,600. The contrast in price between similar programs provided by experienced firms is stark. It suggests NYC parents frequently overpay by about$3,000 for many well-known seasonal SHSAT test prep programs. Let’s investigate the matter more closely.
Why is a high school admissions test prep program three times the cost of a similar college admission program? Is specialized high school admission fundamentally more important than college admission? Certainly not. Even a specialized high school, in most parents’ mindset, is a stepping stone to a better, more prestigious university. Harvard is a greater prize than Stuyvesant! Is SHSAT prep fundamentally harder or more time consuming than college admissions? No. Both tests cover math and verbal topics for a three-hour exam. The material is more advanced for college admissions, and the SAT includes an optional essay exam not included in the SHSAT. However, adjusted for age, the two exams arguably provide similar challenges. Maybe extra time and effort to get a near perfect score is more important for the SHSAT? Perhaps that justifies extra SHSAT hours and overall expenditures. In fact, the opposite is true. A perfect score is important for the SAT, but not at all for the SHSAT. SHSAT admission only requires a score above a cut-off. Anything higher does not benefit a student. On the other hand, all the extra hours an SAT student devotes to raise his/her score from 85% to 90% to 95% to perfection will make a significant difference in college admission.
In summary, SHSAT test prep is no more demanding than SAT prep. A high exam score is not a greater prize for the SHSAT, and there are reasons to train harder and longer—spending more dollars all the while—for the benefit of incremental SAT score improvements. Based on the nature of the test, it appears that many parents in the smaller, more fragmented SHSAT test prep market are overpaying by as much as $3,000 for seasonal courses that commit their students to double the hours required by similar college admission courses. In part 2, we will look more closely at where all those extra billable SHSAT training hours are going. #### Part 2 # HOW TO CHOOSE AN SHSAT TUTOR 2020 ## Part 2: Does that high priced seasonal SHSAT camp provide an A+ SHSAT curriculum? In part 1 of this series, we compared large, experienced SHSAT test prep programs to comparable, competitive SAT test programs and concluded that the advertised price of these large SHSAT test prep programs is arguably more than three times the “fair” price. We also noted that the SHSAT seasonal programs commit young students to considerably more curriculum hours than similar SAT prep. In this second part of How to Choose an SHSAT Tutor 2020, we will focus on the quantity and quality of curriculum hours. Is it possible that SHSAT prep requires more time than SAT prep, and the higher overall cost of SHSAT programs reflects a higher total required time commitment at a similar hourly rate…over three times the total hours? As discussed in part 1, the answer appears to be, “No.” The body of students in each program is screened to include only high performing candidates who have proven to have a relatively strong starting knowledge of the subject material. The exam requirements for the SHSAT are no broader or more difficult than the SAT, after which the new SHSAT is modeled. Both standardized exams test similar subjects in similar ways, age adjusted, for about three hours, a similar testing time. If anything, part 1 indicated there is an incentive for SAT students to study more because they do not have to merely meet a cut-off. SAT students benefit from additional hard earned gains as SAT scores approach perfection. The vast difference in advertised course hours can be better understood by digging a little deeper, beyond the cover details. First, the comparison of the SAT 1400+ program’s 36 hours over 3 months is not an apples to apples comparison to the 104-hour SHSAT summer camp program. The Princeton Review SAT course properly does not list the hours for practice exams and similar periods where the students are not actively learning from instructors. Why should you pay nearly$40 per hour for the time your student takes a practice test or is otherwise taking a break from instruction? The SHSAT program does not deduct for practice test hours and expected breaks within 4 hour class periods. Those non-training hours amount to 20 hours for the SAT program. If included, the SAT program would be 56 hours, which suggests 20/56 or 36% of the SHSAT course curriculum includes non-training hours. A similar deduction from the billable SHSAT curriculum hours would put the SHSAT course at 67 hours–closer but still a significant gap from 36 hours.
However, much of this gap disappears when accounting for the absurd amount of hours devoted to one topic in the SHSAT summer camp curriculum. The large, experienced SHSAT summer camp program includes six 4-hour classes totalling 24 hours of curriculum devoted to grammar. The standalone edit-revise section of the SHSAT, which tests grammar, only amounts to 3 or 4 questions on the SHSAT. There are 114 questions in all. That means nearly one-quarter of the pricey SHSAT course is devoted to less than 3% of the test. Why would any respectable educator, let alone a program that outwardly markets its experience, include these hours in the curriculum? It is impossible for this time investment in the edit-revise standalone exam section to make any significant difference to student results. Unfortunately, most parents think they have signed up for an A+ curriculum and have no idea they paid for wasteful curriculum hours. A further reduction of nearly 20 non-essential hours from the SHSAT summer camp curriculum moves the justified SHSAT course total to almost 47 hours, a number in the same ballpark as the SAT 1400+ program. Who knows what other extra hours have been included in the large SHSAT summer program?
In conclusion, it appears both the large experienced SHSAT test prep provider and the SAT prep program are charging nearly $40 per hour for group classes, but the SHSAT test prep provider is substantially inflating the curriculum hours. The indicated curriculum hours for the SHSAT course does not deduct for time taking practice exams and other periods where students are not actively learning. Moreover, the SHSAT program includes nearly one-quarter of curriculum hours to a part of the SHSAT that represents only 3% of the exam, a clear indication of a poorly designed curriculum or at least one advertised to maximize sales to parents who may not fully understand what they are paying for. While we are not suggesting students will enter these programs and fail to be trained for the SHSAT, the second part of this series confirms that parents are overpaying for large seasonal SHSAT test prep programs in part because the hours of curriculum are inflated. An apples to apples comparison of curriculum hours suggests parents of these large SHSAT programs are effectively paying$100 per hour for group classes, a number that exceeds some one on one tutoring alternatives. In part 3 of this series, we will discuss what options parents have for SHSAT test prep in 2020.
|
|
# Riemann Zeta Function at Even Integers/Examples/2
## Example of Riemann Zeta Function at Even Integers
The Riemann zeta function of $2$ is given by:
$\ds \map \zeta 2$ $=$ $\ds \dfrac 1 {1^2} + \dfrac 1 {2^2} + \dfrac 1 {3^2} + \dfrac 1 {4^2} + \cdots$ $\ds$ $=$ $\ds \dfrac {\pi^2} 6$ $\ds$ $\approx$ $\ds 1 \cdotp 64493 \, 4066 \ldots$
## Proof
$\ds \zeta \left({2}\right)$ $=$ $\ds \left({-1}\right)^2 \dfrac {B_2 2^1 \pi^2} {2!}$ Riemann Zeta Function at Even Integers $\ds$ $=$ $\ds \left({-1}\right)^2 \left({\dfrac 1 6}\right) \dfrac {2^1 \pi^2} {2!}$ Definition of Sequence of Bernoulli Numbers $\ds$ $=$ $\ds \left({\dfrac 1 6}\right) \left({\dfrac 2 2}\right) \pi^2$ Definition of Factorial $\ds$ $=$ $\ds \dfrac {\pi^2} 6$ simplifying
$\blacksquare$
The decimal expansion can be found by an application of arithmetic.
|
|
# A sample of 0.020 mole of oxygen gas is confined at 227°C and 0.50 atmospheres. What would be the pressure of this sample at 27°C and the same volume?
###### Question:
A sample of 0.020 mole of oxygen gas is confined at 227°C and 0.50 atmospheres. What would be the pressure of this sample at 27°C and the same volume?
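The listing gives no worked answer; a sketch of one standard approach (assuming ideal-gas behaviour with the volume and the amount of gas held constant) is:
$$\frac{P_1}{T_1}=\frac{P_2}{T_2}\quad\Rightarrow\quad P_2 = 0.50\ \text{atm}\times\frac{(27+273)\ \text{K}}{(227+273)\ \text{K}} = 0.50\times\frac{300}{500} = 0.30\ \text{atm}.$$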
|
|
# Compute the payback statistic for Project A
Question
Compute the payback statistic for Project A if the appropriate cost of capital is 7 percent and the maximum allowable payback period is four years. (Round your answer to 2 decimal places.)
Project A
Time:       0         1      2      3      4      5
Cash flow:  −$1,400   $510   $600   $600   $380   $180
Payback years: _______.__
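No solution is attached to the question; a sketch of the usual (undiscounted) payback computation is: cumulative cash flow reaches $510 after year 1, $1,110 after year 2 and $1,710 after year 3, so the initial $1,400 is recovered during year 3. Payback ≈ 2 + (1,400 − 1,110) / 600 = 2.48 years, which is within the four-year limit; the 7 percent cost of capital would only enter a discounted payback statistic.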
|
|
# Custom chapter format with tikz
I am creating a booklet for a conference, using the documentclass scrbook. The document is composed of different chapters: "About", "Timetable", "List of participants", etc. Each of these "chapters" starts on a new page (that can either odd or even).
I want my chapter to render something like this (this is what I've managed so far based on this: http://texample.net/tikz/examples/fancy-chapter-headings/):
This is produced with the following MWE
\documentclass[openany]{scrbook}
\usepackage[utf8]{inputenc}
\usepackage{tikz}
\usepackage[explicit]{titlesec}
\usepackage{blindtext}
%--------------------------------
\titleformat
{\chapter} % command
{\bfseries\Huge} % format
{%
\thechapter
} % label
{0pt} % sep
{
\ifodd\value{page}{%
\begin{tikzpicture}[remember picture,overlay]
\node[yshift=-3cm] at (current page.north west)
{\begin{tikzpicture}[remember picture, overlay]
\fill[orange] (0,0) rectangle (0.6\textwidth,1em);
\node[above, yshift=-0.2em, xshift=\textwidth] {#1};
\end{tikzpicture}
};
\end{tikzpicture}
}\else{%\
\begin{tikzpicture}[remember picture,overlay]
\node[yshift=-3cm] at (current page.north east)
{\begin{tikzpicture}[remember picture, overlay]
\fill[orange] (0,0) rectangle (-0.5\paperwidth,1em);
\node[above, yshift=-0.2em, xshift=-\textwidth] (0,0) {#1};
\end{tikzpicture}
};
\end{tikzpicture}
}\fi%
} % before-code
[
\vspace{-3cm}
] % after-code
%------------------------------------------------
\begin{document}
\blindtext[3]
\chapter*{Timetable}
\blindtext[3]
\end{document}
However, some points to improve:
• the "Timetable" chapter is flushed left, which is what I want, but the "About" chapter is not completely flushed right. How can I do this?
• for now the width of the orange bar is fixed to 0.5\textwidth. How can I make it adaptive, so that it goes from the border of the page up to the chapter name (the width depends on the chapter name)?
• Maybe a hack like \node[yshift=-0.2em, xshift=0.6\textwidth,text width=0.4\textwidth,anchor=south west,draw] {#1\hfill{}};? For the second question you first put the node and then draw the orange bar if there is space left – percusse Nov 7 '17 at 15:58
A solution without titlesec and tikz but with xcolor. Note that KOMA-script comes with the macro \ifthispageodd.
\documentclass[openany]{scrbook}
\usepackage[utf8]{inputenc}
\usepackage{xcolor}
\usepackage{blindtext}
\newlength{\mybarpadding}% length used for the gap between the rule and the chapter title
\mybarpadding=1em\relax% change this to alter the space between the rule and the chapter title
\RedeclareSectionCommand[%
,afterskip=4em plus 1pt minus 1pt%
,beforeskip=-1pt%1.2em plus 1pt minus 1pt%
,level=0%
,toclevel=0%
]{chapter}%
\setkomafont{chapter}{\normalfont\normalsize\bfseries\Huge}
\renewcommand{\chapterlinesformat}[3]{%
\ifthispageodd{%
\hfill%
\raisebox{-0.2em}{%
\makebox[0pt][r]{\textcolor{orange}{\rule{\paperwidth}{1em}}}%
}%
\kern\mybarpadding% gap between the rule and the title
#2#3%
}{%
\hbox{%
#2#3%
\kern\mybarpadding% gap between the title and the rule
\raisebox{-0.2em}{%
\makebox[0pt][l]{\textcolor{orange}{\rule{\paperwidth}{1em}}}%
}%
}%
}%
}
\begin{document}
\blindtext[3]
\chapter*{Timetable}
\blindtext[3]
\end{document}
|
|
Problem: solve
${{6x}^{2}}$ – 25x + 12 + ${\frac{25}{x}}$ + ${\frac{6}{x^2}}$ = 0.
Solution: ${{6x}^{2}}$ – 25x + 12 + ${\frac{25}{x}}$ + ${\frac{6}{x^2}}$ = 0
or 6(${x^2}$ + ${\frac{1}{x^2}}$) – 25(x – ${\frac{1}{x}}$) + 12 = 0
or 6(x – ${\frac{1}{x}})^2$ – 25(x – ${\frac{1}{x}}$) + 24 = 0, using ${x^2}$ + ${\frac{1}{x^2}}$ = (x – ${\frac{1}{x}})^2$ + 2
so (x – ${\frac{1}{x}}$) = ${\frac{25\pm {\sqrt{49}}}{12}}$ = ${\frac{3}{2}}$ or ${\frac{8}{3}}$
If (x – ${\frac{1}{x}}$) = ${\frac{3}{2}}$,
then ${{2x}^2}$ – 3x – 2 = 0,
so x = 2 or – ${\frac{1}{2}}$
If (x – ${\frac{1}{x}}$) = ${\frac{8}{3}}$,
then ${x^2}$ – 1 = ${\frac{8x}{3}}$,
i.e. ${3x^2}$ – 8x – 3 = 0,
so x = 3 or – ${\frac{1}{3}}$
Hence x = 2, – ${\frac{1}{2}}$, 3, – ${\frac{1}{3}}$
|
|
# “A Frog with Wings” – Freeman Dyson
Science is specialized, indeed too specialized for most scientists to have the time for thoughtful excursions into other vistas of the broader intellectual landscape. The eminent theoretical physicist, Freeman Dyson (b. 1923) is a stunning exception to this narrowness of the mind. Here is a man who is equally at ease with tensor calculus as with discourses on politics, climate change, literature and religion. What motivates Dyson to explore such wide-ranging subjects? How does the man compare himself with other scientists? Here’s a glimpse into the mind of this ‘Renaissance man’ by long-time friend, Elliott H. Lieb from Princeton University’s Department of Physics and Mathematics. The title of Lieb’s talk – part of the scientific community’s celebration of Dyson’s 90th birthday in 2013, is simply titled, ‘Freeman Dyson’. I’ve taken the liberty to make minor edits of his speech.
The title of my talk as it appears in the program, ‘Freeman Dyson’ is enigmatic because it was hard to decide which of the many facets of Freeman to address … It is fair to say that he is a great literary stylist, whether writing about quantum fields or religion, and he is arguably the most elegant writer in today’s mathematics and physics communities. His articles and book reviews in the New York Review of Books are carefully read by thousands and are often topic of lunchtime and dinnertime conversations, at least at my house.
On the science side of the last decade we see that Freeman continues to be productive. A recent paper with Bill Press on the ‘prisoner’s dilemma’ in the Proceedings of the National Academy of Sciences astounded many for its discovery that there are strategies for repeated playing of this two-person game that can, for example, permit one player to control the score of the other player. It is amazing what linear algebra can accomplish! In this paper, one can clearly discern the mark of Freeman’s straight-to-the-jugular thinking coupled with his economy of presentation.
But this paper is not the whole story about the last decade. With Larry Glasser and Norm Frankel his love of classical mathematics resurfaced with a paper on a power series of Lehmer and its relation to $\pi$.
Freeman has an amazing ability to acquire and organize facts. Another paper within the last decade is one with Thibault Damour on the stability of the Fine Structure constant – a subject that engaged his attention since at least 1972. How many among us here would know enough nuclear and other physics to be able to analyze critically the data relevant to the observation that one can discern the $CO_2$ production rate of swamp plants? This kind of information came from participation in an Oak Ridge study group, but few people other than Freeman would be able to use this knowledge to suggest a solution to the $CO_2$ problem.
Further, how many of us would be able to use the solar energy production, the mass of the earth, and other parameters, regarded as typical for planetary systems, to calculate the possible radioactive output of advanced civilizations, and therefrom, our chances of receiving greetings from outer space? In short, arguing with Freeman is like arguing with your smartphone or with the Oxford English dictionary. You can’t win and you can only accept graceful defeat. But even if you don’t agree with him on global warming research, or the role of theology in public life, the main thing is that Freeman, like Nelson Mandela, has an exemplary ability to enter sensitive areas without igniting conflagrations around him.
There is more to be said but I should stop now. In my remarks ten years ago, I compared Freeman to one of the giant trees in the rain forest on which the lives of many flora and fauna depend, but since then he has changed species. He famously wrote:
“Some mathematicians are birds, others are frogs. Birds fly high in the air and survey broad vistas of mathematics out to the far horizon. They delight in concepts that unify our thinking and bring together diverse problems from different parts of the landscape. Frogs live in the mud below and see only the flowers that grow nearby. They delight in the details of the particular objects, and they solve problems one at a time. I happened to be a frog.”
If this is so, he is a frog with wings.
Listen to the man himself
In this video interview, Dyson talks about the wonders and limits of science. Most memorable sentence: “We understand the physical world extremely well. We understand the ‘mental world’ hardly at all.” Here, Dyson uses the phrase ‘mental world’ to include subjects like the nature of consciousness and the existence of God.
Notes
[1] Excerpts from The Atlantic (December 2010 issue): “Freeman Dyson is one of those force-of-nature intellects whose brilliance can be fully grasped by only a tiny subset of humanity, that handful of thinkers capable of following his equations. His principal contribution has been to the theory of quantum electrodynamics, but he has done stellar work, too, in pure mathematics, particle physics, statistical mechanics, and matter in the solid state. He writes with a grace and clarity that is rare, even freakish, in a scientist, and his books, including Disturbing the Universe, Weapons and Hope, Infinite in All Directions, and The Sun, the Genome, and the Internet, have made a mark. Dyson has won the Lorentz Medal (the Netherlands) and the Max Planck Medal (Germany) for his work in theoretical physics. In 1996, he was awarded the Lewis Thomas Prize, which honors the scientist as poet. In 2000, he scored the Templeton Prize for exceptional contribution to the affirmation of life’s spiritual dimension—worth more, in a monetary way, than the Nobel.”
[2] Here’s an informal peep of a genius at work. Maria Popova’s engaging article: “A Flash of Illumination on the Greyhound Bus: Physicist Freeman Dyson on Creative Breakthrough and the Unconscious Mind.”, from Brain Pickings. Link: https://www.brainpickings.org/2018/06/28/freeman-dyson-maker-of-patterns-creativity/
|
|
## Why is it so hard to think “correctly” about confidence intervals?
I came across the following section in the (wonderful) textbook ModernDive:
Let’s return our attention to 95% confidence intervals. … A common but incorrect interpretation is: “There is a 95% probability that the confidence interval contains p.” Looking at Figure 8.27, each of the confidence intervals either does or doesn’t contain p. In other words, the probability is either a 1 or a 0.
(Although I’m going to pick on this quote a little bit, I want to stress that I love this textbook. This view of CIs is extremely common and I might well have taken a similar quote from any number of other sources. This book just happened to be in front of me today.)
I understand what the authors are saying. Given the data we observed and CI we computed, there is no remaining randomness — either the parameter is in the interval, or it isn’t. The parameter is not random, the data is. But I think there is room to admit that this point, while technically clear, is a little uncomfortable, even for those of us who are very familiar with these concepts. After all, there is a 95% chance that a randomly chosen interval contains the parameter. I chose an interval. Why can I no longer say that there is a 95% chance that the parameter is in that interval? To a beginning student of statistics who is encountering this idea for the first time, this commonplace qualification must seem pedantic at best and confusing at worst.
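One concrete way to see the procedure-level statement being made here is a quick simulation; the following is a minimal sketch (Python, with a made-up true proportion and sample size):
import numpy as np

rng = np.random.default_rng(0)
p_true, n, reps = 0.4, 200, 10_000            # assumed values, purely for illustration
covered = 0
for _ in range(reps):
    x = rng.binomial(n, p_true)               # draw one sample
    p_hat = x / n
    se = np.sqrt(p_hat * (1 - p_hat) / n)     # normal-approximation standard error
    lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
    covered += (lo <= p_true <= hi)           # did this interval capture p_true?
print(covered / reps)                         # close to 0.95: a statement about the procedure
About 95% of the randomly generated intervals contain the true value, which is exactly the aleatoric claim; the question in the rest of this post is what we may say after conditioning on the one interval we actually observed.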
## Fiducial inference
Chapter 9 of Ian Hacking’s Logic of Statistical Inference contains a beautiful account of precisely why we are so inclined towards the “incorrect” interpretation, as well as the shortcomings of our intuition. The logic is precisely that of Fisher’s famous (infamous?) fiducial inference. Understanding this connection not only helps us to better understand CIs (and their modes of failure), but also to be more sympathetic to the inherent reasonableness of students who are disinclined to let go of the “incorrect” interpretation.
As presented by Hacking, there are two relatively uncontroversial building blocks of fiducial inference, and one problematic one. Recall the idea that aleatoric probabilities (the stuff of gambling devices) and epistemic probabilities (degrees of epistemic belief) are fundamentally different quantities. (Hacking treats this question better than I could, but I also have a short post on this topic here). Following Hacking, I will denote the aleatoric probabilities by $$P$$ and the degrees of belief by $$p$$.
### Assumption one: The “frequency principle.”
The first necessary assumption of fiducial inference is this:
If you know nothing about an aleatoric event $$E$$ other than its probability, then $$p(E) = P(E)$$.
This amounts to saying that, for pure gambling devices, absent other information, your subjective belief about whether an outcome occurs should be the same as the frequency with which that outcome occurs under randomization. If you know a coin comes up heads 50% of the time ($$P(heads) = 0.5$$), then your degree of certainty that it will come up heads on the next flip should be the same ($$p(heads) = 0.5$$). Hacking calls this assumption the “frequency principle.”
### Assumption two: The “logic of support.”
The second fundamental assumption is that the logic of epistemic probabilities should be the same as the logic of aleatoric probabilities. Specifically:
Degrees of belief should obey Kolmogorov’s axioms.
For example, if events $$H$$ and $$I$$ are logically mutually exclusive, then $$p(H \textrm{ or } I) = p(H) + p(I)$$. Conditional probabilities such as $$p(H \vert E)$$ are a measure of how much the event $$E$$ supports a subjective belief that $$H$$ occurs.
Neither the frequency principle nor the logic of support are particularly controversial, even for avowed frequentists. Note that assumption one states only how you come to subjective beliefs about systems you know to be aleatoric, and assumption two describes only how subjective beliefs combine coherently. So there is nothing really Bayesian here.
## Hypothesis testing and fiducial inference
Applying the frequency principle and the logic of support to confidence intervals, together with an additional (more controversial) logical step, will in fact lead us directly to the “incorrect” interpretation of a confidence interval. Let’s see how the logic works.
Suppose we have some data $$X$$, and we want to know the value of some parameter $$\theta$$. Suppose we have constructed a valid confidence set $$S(X)$$ such that $$P(\theta \in S(X)) = 0.95$$. Following Hacking, let $$D$$ denote the event that our setup is correct — specifically, that we are correct about the randomness of $$X$$ and that $$S(X)$$ is a valid CI with the desired coverage. That is, given $$D$$, we assume that $$X$$ is really random, and we know the randomness, so $$P$$ is a true aleatoric probability — no subjective belief here.
Of course, the construction of a confidence interval guarantees only the aleatoric probability — thus we have used $$P$$, not $$p$$. However, by the frequency principle, we are justified in writing
$$p(\theta \in S(X) \vert D) = P(\theta \in S(X)\vert D) = 0.95$$,
so long as we know nothing other than the accuracy of our setup $$D$$. (Note that $$\theta \in S(X)$$ is a pivot. In general, pivots play a central role in fiducial inference.)
Note that $$p(\theta \in S(X) \vert D)$$ is very near to our “incorrect” interpretation of confidence intervals! However, in reality, we know more than $$S(X)$$: we actually observe $$X$$ itself. Now, $$P(\theta \in S(X) \vert D, X)$$ is either $$0$$ or $$1$$. Conditional on $$X$$, there is no remaining aleatoric uncertainty to which we can apply the frequency principle. And most authors — including the authors of the quote that opened this post — stop here.
There is an additional assumption, however, that allows us to formally compute $$p(\theta \in S(X) \vert X, D)$$, and it is this (controversial) assumption that lies at the core of fiducial inference:
### Assumption three: Irrelevance
The full data $$X$$ tells us nothing more about $$\theta$$ (in an epistemic sense), than the confidence interval $$S(X)$$.
In the case of confidence intervals, the assumption of irrelevance requires at least two things. First, it requires that our subjective belief that $$\theta \in S(X)$$ does not depend on the particular interval that we compute from the data. In other words, our degree of belief that the CI contains the parameter is the same no matter where its endpoints lie. Second, it requires that there is nothing more to be learned about the parameter from the data other than the information contained in the CI.
These are strong assumptions! However, when they hold, they justify the “incorrect” interpretation of confidence intervals — namely that there is a 95% subjective probability that $$\theta \in S(X)$$, given the data we observed. For, under the assumption of irrelevance, by the logic of support (and then the frequency principle as above) we can write
$$p(\theta \in S(X) \vert X, D) = p(\theta \in S(X) \vert D) = P(\theta \in S(X) \vert D) = 0.95$$.
## How does this go wrong, and what does it mean for teaching?
Assumption three is often hard to justify, or outright fallacious. But one of its strengths is that it points to how the logic of fiducial inference fails, when it does fail. In particular, it is not hard to construct valid confidence intervals that contain only impossible values of $$\theta$$ for some values of $$X$$. (As long as a confidence interval takes on crazy values sufficiently rarely, there is nothing in the definition preventing it from doing so.) In fact, as Hacking points out, confidence intervals are tools for before you see the data, designed so that you do not make mistakes too often on average, and can suggest strange conclusions once you have seen a particular dataset.
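A standard toy construction makes the point starkly (my own illustration, not a quote from Hacking or the textbook; `theta` and the 95/5 randomization are assumptions of the example): an "interval" that is the whole real line 95% of the time and empty 5% of the time has exactly 95% coverage, yet once you see which case you are in, the pre-data 95% figure is obviously irrelevant to your post-data belief — irrelevance fails completely.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.5            # any true value; the construction ignores the data entirely
reps = 100_000

def silly_ci():
    """With prob 0.95 report (-inf, inf); with prob 0.05 report the empty set."""
    if rng.random() < 0.95:
        return (-np.inf, np.inf)
    return None        # empty interval

covered = 0
for _ in range(reps):
    interval = silly_ci()
    covered += interval is not None and interval[0] <= theta <= interval[1]
print(f"coverage: {covered / reps:.3f}")   # ~0.95 by construction
# Yet conditional on the realized interval, the sensible degree of belief is 1 or 0,
# never 0.95 -- assumption three (irrelevance) fails badly here.
```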
However, it’s not crazy for someone, especially a beginning student, to subscribe to assumption three, even if they are not aware of it. After all, we typically present a confidence interval as the way to summarize what your data tells you about your parameter. And if that’s the case, then the “incorrect” interpretation of CIs follows from the extremely plausible frequency principle and logic of support. At the least I think we should acknowledge the reasonableness of this logical chain, and teach when it goes wrong rather than simply reject it by fiat.
|
|
# 12.4.1 Locally-Projected SCF Methods with Single Roothaan-Step Correction
Locally-projected SCF cannot quantitatively reproduce the full SCF intermolecular interaction energies for systems with significant charge-transfer between the fragments (e.g., hydrogen bonding energies in water clusters). Good accuracy in the intermolecular binding energies can be achieved if the locally-projected SCF MI iteration scheme is combined with a charge-transfer perturbative correction.473 To account for charge-transfer, one diagonalization of the full Fock matrix is performed after the locally-projected SCF equations are converged and the final energy is calculated as infinite-order perturbative correction to the locally-projected SCF energy. This procedure is known as single Roothaan-step (RS) correction.473, 581, 582 It is performed if FRGM_LPCORR is set to RS. To speed up evaluation of the charge-transfer correction, second-order perturbative correction to the energy can be evaluated by solving the linearized single-excitation amplitude equations. This algorithm is called the approximate Roothaan-step correction and can be requested by setting FRGM_LPCORR to ARS.
Both ARS and RS corrected energies are very close to the full SCF energy for systems of weakly interacting fragments but are less computationally expensive than the full SCF calculations. To test the accuracy of the ARS and RS methods, the full SCF calculation can be done in the same job with the perturbative correction by setting FRGM_LPCORR to RS_EXACT_SCF or to ARS_EXACT_SCF. It is also possible to evaluate only the full SCF correction by setting FRGM_LPCORR to EXACT_SCF.
The iterative solution of the linear single-excitation amplitude equations in the ARS method is controlled by a set of NVO keywords described below.
Restrictions. Only single-point HF and DFT energies can be evaluated with the locally-projected methods. Geometry optimization can be performed using numerical gradients. Wave function correlation methods (MP2, CC, etc.) are not implemented for the absolutely-localized molecular orbitals. SCF_ALGORITHM cannot be set to anything but DIIS; however, all SCF convergence algorithms can be used on isolated fragments (set SCF_ALGORITHM in the $rem_frgm section).
Example 12.6 Comparison between the RS corrected energies and the conventional SCF energies can be made by calculating both energies in a single run.
$molecule
0 1
--
0 1
O -1.56875 0.11876 0.00000
H -1.90909 -0.78106 0.00000
H -0.60363 0.02937 0.00000
--
0 1
O 1.33393 -0.05433 0.00000
H 1.77383 0.32710 -0.76814
H 1.77383 0.32710 0.76814
$end
$rem
METHOD HF
BASIS AUG-CC-PVTZ
FRGM_METHOD GIA
FRGM_LPCORR RS_EXACT_SCF
$end
$rem_frgm
SCF_CONVERGENCE 2
THRESH 5
\$end
|
|
# Mackenzie has $\$57$ in her bank account. She begins receiving a weekly allowance of $\$15$, of which she
###### Question:
Mackenzie has $\$57$ in her bank account. She begins receiving a weekly allowance of $\$15$, of which she deposits $20\%$ in her bank account. Write an equation that represents how much money is in Mackenzie's account after $x$ weeks. (lesson 2-4)
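A quick worked form of the requested equation (added for reference, not part of the original page; the balance variable $y$ is my own choice): each week Mackenzie deposits $20\%$ of $\$15$, i.e. $\$3$, so after $x$ weeks

$$y = 57 + (0.20)(15)x = 57 + 3x.$$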
|
|
# Prove: For any sequence of linearly independent elements $y_j \in X$ and $a_j \in \mathbb R$ there exists an element $f \in X^*$ s.t. $f(y_j)=a_j$
I’m trying to solve the following problem but I have no clue how to do it.
Let $(X,||.||)$ be a normed $\mathbb C$-vector space. Prove: For any sequence of linearly independent elements $y_j, 1 \leq j \leq N$, in $X$ and any sequence $(a_j)_{1 \leq j \leq N}$ in $\mathbb R$ there exists an element $f \in X^*$ s.t. $f(y_j)=a_j$ for any $1 \leq j \leq N$.
The only thing I know is that I need at some point the Hahn-Banach Theorem. I would be grateful if someone could help me to prove this statement.
Thanks!
#### Solutions Collecting From Web of "Prove: For any sequence of linearly independent elements $y_j \in X$ and $a_j \in \mathbb R$ there exists an element $f \in X^*$ s.t. $f(y_j)=a_j$"
I’m assuming you learned about finite-dimensional spaces along the way. If you have a linearly independent set of vectors $\{ y_1,y_2,\cdots,y_N \}$, then the linear space $Y_N$ spanned by these is a finite-dimensional space. So there are linear functionals $f_j$ on $Y_N$ such that $f_j(y_k)=\delta_{j,k}$. Using these you can find a functional $f$ on $Y_N$ such that $f(y_j)=a_j$ (in fact $f=\sum_{j=1}^{N}a_jf_j$ is such a functional.)
The space $Y_N$ is a finite-dimensional normed space, inheriting its norm from $X$. All norms on a finite-dimensional space are equivalent. So the following norm is equivalent to the induced one:
$$\|x\|_{1}=\sum_{j=1}^{N}|f_j(x)|$$
Therefore, there is a constant $C$ such that $\|x\|_1 \le C\|x\|$ for all $x\in Y_N$, which forces the $f_j$ to be continuous with
$$|f_j(x)| \le \|x\|_{1} \le C\|x\|.$$
So $f$ is continuous on $Y_N$ with $|f(x)| \le (C\sum_{j=1}^{N}|a_j|)\|x\|$ for all $x\in Y_N$. By the Hahn-Banach theorem you can extend $f$ to all of $X$ in such a way that $\|f\|_{X^*} \le C\sum_{j=1}^{N}|a_j|$. Any such extension $\tilde{f}\in X^*$ satisfies $\tilde{f}(y_j)=a_j$ for $1 \le j \le N$.
|
|
# Problems in Direct3D setting
## Recommended Posts
Hi there. I thought I knew how to set up Direct3D correctly, but I noticed that I don't understand this process very well... The problem is the following: I ran my application in two configurations: 1) K6-II 400MHz with a TNT2, 2) AthlonXP 2400MHz with a GeForce4 MX. The problem is that the application runs at 250 fps in config 1 and at only 62 fps in config 2. My Direct3D setup:
bool DX::IniciaD3D( LPDIRECT3DDEVICE9 *pDevice, HWND hWnd,
int Largura, int Altura, bool vSync,
bool ModoFullScreen )
{
// Create the Direct3D object
LPDIRECT3D9 g_pD3D = NULL;
if( NULL == ( g_pD3D = Direct3DCreate9( D3D_SDK_VERSION ) ) )
{
MessageBox(NULL, "Não foi possível criar o objeto Direct3D",
"Falha no método Direct3DCreate9", NULL);
SAFE_RELEASE(g_pD3D);
return false;
}
// Direct3D presentation parameters
D3DPRESENT_PARAMETERS d3dpp;
ZeroMemory( &d3dpp, sizeof(d3dpp) );
D3DDISPLAYMODE DisplayAtual;
g_pD3D->GetAdapterDisplayMode( D3DADAPTER_DEFAULT, &DisplayAtual );
// WINDOWED MODE
if ( ModoFullScreen == false )
{
d3dpp.Windowed = true;
d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;
}
// FULLSCREEN MODE
else
{
d3dpp.Windowed = false;
d3dpp.BackBufferFormat = DisplayAtual.Format;
d3dpp.BackBufferWidth = Largura;
d3dpp.BackBufferHeight = Altura;
d3dpp.FullScreen_RefreshRateInHz = D3DPRESENT_RATE_DEFAULT;
}
d3dpp.MultiSampleType = D3DMULTISAMPLE_NONE;
d3dpp.MultiSampleQuality = 0;
d3dpp.BackBufferCount = 1;
d3dpp.hDeviceWindow = hWnd;
d3dpp.AutoDepthStencilFormat = D3DFMT_D16;
d3dpp.EnableAutoDepthStencil = true;
d3dpp.Flags = 0;
if ( vSync == true )
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;
else
d3dpp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;
// Create the device
D3DCAPS9 caps;
g_pD3D->GetDeviceCaps( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps );
int VP = 0;
if( caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT )
VP = D3DCREATE_HARDWARE_VERTEXPROCESSING;
else
VP = D3DCREATE_SOFTWARE_VERTEXPROCESSING;
if( FAILED( g_pD3D->CreateDevice( D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL,
hWnd,
VP,
&d3dpp,
pDevice ) ) )
{
// Device creation failed (pDevice)
MessageBox(NULL, "O dispositivo gráfico não pôde ser criado.",
"Falha no método CreateDevice", NULL );
SAFE_RELEASE(g_pD3D);
return false;
}
SAFE_RELEASE(g_pD3D);
return true;
}
I ran the application in full screen mode with vSync disabled (D3DPRESENT_INTERVAL_IMMEDIATE). Please, would anybody know what is causing this problem? Could anybody show me the best way to set up Direct3D for different graphics cards? Thank you very much in advance.
##### Share on other sites
This might be a driver forced issue. The drivers of the second config might be forcing a vsync. You should check the driver settings of both configs to make sure that you don't have problems with forced driver configs.
I hope this helps.
Take care.
##### Share on other sites
Quote:
Original post by Armadon: This might be a driver forced issue. The drivers of the second config might be forcing a vsync. You should check the driver settings of both configs to make sure that you don't have problems with forced driver configs.
I 2nd this suggestion [smile]
The presentation parameters tend to be taken as hints by the drivers - they can and will override them with different settings. My ATI drivers allow a "application default", "always on" and "always off" setting - it's only when set to the first of the three that it'll honour what my code tells it to do...
hth
Jack
##### Share on other sites
Unfortunately, it doesn't seem to be a driver forced issue. The driver settings of the GeForce4 MX allow three options for the vSync: "Application-controlled", "OFF" and "ON". I tried all of them and the result was the same (62 fps).
To check whether it was really a forced vSync problem, I ran the application in different resolutions, and to my surprise, the fps didn't change. It should run around 85 fps in 640x480... but it ran at 62 fps. I'm sure the refresh rate is 85 Hz in this resolution (my display can show it). So, it doesn't seem to be a vSync problem.
Can the order in which I set the Direct3D parameters influence the performance?
Do you see something wrong in the code that I posted?
Any help would be very appreciated.
##### Share on other sites
The only ways I can see this effect appearing are an obvious driver override such as forced vsync, or your rendering loop limiting the frame rate... by actually only rendering 60 frames per second. Other than that I cannot see this really happening. It does not matter how you order the creation of your device with regard to this problem (it does matter in the sense that you cannot use the device unless you create it first).
If you could paste the app somewhere, we might be able to test it and debug it for you.
I hope this helps.
Take care.
##### Share on other sites
Friends, could you run my application and report the fps? Just click on the download button.
Thank you very much.
##### Share on other sites
Stable 1000fps on P4 3.2/GF6800 Go(79.31 driver ver)
Just a note. GF4MX is a very strange card. I wouldn't be surprised if it would work fine except with that. Don't ask why.
##### Share on other sites
Thanks darkelf2k5! I am a bit less worried now.
Quote:
GF4MX is a very strange card. I wouldn't be surprised if it would work fine except with that. Don't ask why.
Could I ask what kind of problem did you get with a GeForce4 MX? [smile]
Armadon, thanks for replying. The frame rate is not limited by code (darkelf2k5 got 1000fps) and the vSync seems not to be forced by the card's driver (it should run around 85fps in 640x480)... Wouldn't it be a driver problem? I will update it.
If anybody else can run the application and report the fps, I would be very thankful.
Edit:
A note: I ran the application in a third config with a GeForce4MX, and the result was the same (around 62 fps).
[Edited by - Mari_p on January 22, 2006 9:30:15 AM]
##### Share on other sites
The application says 62 fps, but Fraps tells me 999 fps.
Athlonxp 2600
1 gb Ram
geforce 6600gt
The problem is the fps counter, I think.
##### Share on other sites
I never had a GeForce4 MX; I just heard rumors that this doesn't work, that isn't supported, etc. Still, it's either this or the AthlonXP.
|
|
# 1 Objectives
Following completion of this lab you should be able to:
• Design robust tests for hardware components.
• Write verilog test benches.
• Read and interpret Xilinx waveforms.
# 3 Setting up a project
1. The module
The diagram below describes the component you will be testing in this lab.
This component counts up or down based on the direction input. Every clock cycle the register data should update on the rising edge of the clock and the output should change at that time.
Your test bench should be concerned with these signals:
• direction - a 1 bit input that makes the component count up on 0 and down on 1
• CLK - the single bit clock input that updates the register on the rising edge
• reset - a single bit input that resets the counter's register to 0 on a rising edge
• dout - the 6-bit signed output from the component; this is the number stored in the register
Before you move on, you should consider the behavior of this component and what sorts of errors might exist in the implementation of this component. Write down a list of possible errors on the answer sheet.
1. Setting up the Xilinx Project
1. Start Xilinx ISE 14.7
2. Select File > New Project... to open the "New Project Wizard."
1. In the "Create New Project" dialog:
1. enter "counter" for the "Name",
2. choose the "Location" (Importantly, Xilinx hates spaces in paths, do not put projects in directories with spaces in the names; your OneDrive folders include spaces, do not put them there),
3. CRITICAL: change the "Working directory" to "the location selected above/work",
4. select "HDL" as the "Top-Level Source Type", and
5. click "Next".
2. In the "Project Settings" dialog, enter the following properties:
• Family: Spartan3E
• Device: XC3S500E
• Package: FG320
• Speed: -4
• Synthesis Tool: XST (VHDL/Verilog)
• Simulator: ISim (VHDL/Verilog)
• Preferred Language: Verilog
and click "Next".
3. Click "Finish" in the "Project Summary" dialog.
Note: You can change the project properties by right clicking on the device (i.e. xc3s500e-4fg320) in the "Hierarchy" section of the "Design" tab and selecting "Design Properties..."
1. Add the components to the project. There are a number of ways that you can do this:
• Right click on the device in the "Hierarchy" section of the "Design" tab and select "Add Copy of Source..."
• Select Project > Add Copy of Source...
Navigate to and select all the .v files from your git repository, then click OK (you can also do this one at a time, or shift+click to select all the files at once). Take a look at Counter_A.v; this is a working version of the counter component, you should use this component to test your test bench.
2. Add a new test bench to the project.
• Select Project > New Source...
• Select "Verilog Test Fixture", name the test "counter_test.v"
• Press Next
• Select the "Counter_A.v" component in the "Associate Source" dialogue.
• Press Next
• Finish the test bench creation.
This will create a new verilog test bench of the counter component. The inputs and outputs will be set up in the header. You will write tests in the lower initial begin ... end block under the "Add stimulus here" comment.
# 4 Writing Tests
1. Editing the test bench
Before we write tests we will set up the CLK input.
1. We will specify the clock settings by adding a parameter at the top of the file. Below the Input and Output initialization add the following code:
parameter HALF_PERIOD = 50;
initial begin
CLK = 0;
forever begin
#(HALF_PERIOD);
CLK = ~CLK;
end
end
Note that this code should be in a new initial block, not in the autogenerated one.
1. This creates a parameter HALF_PERIOD that sets half the length of the clock cycle (the clock toggles every HALF_PERIOD time units). This number is arbitrary and is only used for simulation; it does not reflect the actual speed of the component in meat-space. Your tests should work even if this number is changed, so all waiting your tests do should use this parameter; do not hardcode any wait times in your tests!
This block initially sets the clock signal to 0, then starts an infinite loop that pauses for the length of HALF_PERIOD then inverts the clock.
1. Based on the above warning we need to fix some of the autogenerated code. The testing block has a #100 that we need to change. Modify this to read #(100*HALF_PERIOD); This will change the test to wait 100 half clock cycles in the initialization step (aka 50 cycles).
2. To confirm that your clock is set up correctly go to the simulation tab, select the test bench in the browser on the left, and click the Simulate Behavioral Model Option in the ISim Simulator group below it. After Xilinx elaborates for a while you will see the waveform appear, you should see the clock ticking up and down at regular 50ns intervals.
3. You might notice the output signal is XXXXX; this means the signal was never initialized and needs to be set up properly. This is because our component needs to receive the reset signal to initialize it.
4. Go back to your test bench and in the autogenerated code change reset = 0; to reset = 1;, then after the wait you modified previously add another line to set reset back to 0. Open up the waveform window again and press the "Re-launch" button (or close it and re-simulate as before).
5. You should now see reset starts off at 1 and after 50 cycles goes to 0. dout should have a value of 0 for the first 50 cycles and then should start ticking up.
2. Writing the first test.
This section will guide you through writing a single test for the component as an example of the kind of considerations you should make. A few things you should keep in mind:
• Tests should be reproducible.
• Failures should be loud.
• Which test caused a failure should be obvious.
• Tests should isolate errors.
To these ends our tests will be well documented and they should not depend on a human noticing a failure in the waveform. Therefore, "we looked at the waveform for confirmation" is not a valid test. Additionally, we want our tests to be as specific as possible, we should not rely on one test that fails for a variety of reasons.
1. We'll probably want a few helper variables for our tests, lets set those up. Below where you set up the HALF_PERIOD parameter create three variables:
integer cycle_counter = 0;
integer counter = 0;
integer failures = 0;
We'll use these in each of our tests to help track down errors. Note: integers and parameters in Xilinx should only be used in test fixtures, do not include them in any components you write in the future.
1. Add a comment that marks an area for the first test and describes the test, as well as a print to the console so we can see the test ran in the simulation:
//-----TEST 1-----
//Testing counting up
$display("Testing counting up."); 1. The first thing we want to do in every test is reset the inputs to what this test requires. We should do this even if a variable has not changed from a previous test (we might reorder tests in the future!). So lets set the inputs up for the test right after the display statement: reset = 1; counter = 0; cycle_counter = 0; #(2*HALF_PERIOD); reset = 0; direction = 0; //we are testing counting up We set reset 1 and wait a single clock cycle and then set direction and reset to 0. This is so we know the state of the machine at the beginning of the test: It should have 0 in the register and should start counting up. 1. Add a line that will wait for 50 clock cycles after the input initialization so we can investigate the waveform. 2. Next lets set up an output that will tell us our tests have finished and how many failures occurred. At the end of the testing block add the following line: $display("TESTS COMPLETE. \n Failures = %d", failures)
1. Save your test and run the simulation. Investigate the output and make sure the behavior makes sense to you. Ensure that you have run the test long enough for the "TESTS COMPLETE" message to display in the console, if not you can run the simulation a little longer.
2. Now lets actually write a test that ensures the counter goes up by one every single clock cycle. We need to consider the device we are testing: what is the largest and smallest number our counter can contain? What happens if it tries to keep counting once it goes past the maximum? It's important to remember that the adder in our component uses signed numbers (it can take in a -1, after all), so the output from the adder will be signed as well. We need to indicate in our testbench that the number in our register is signed, therefore edit the output line to read: wire signed [5:0] dout;
3. Lets make a loop that will count up to that maximum and a few extra cycles so we can test the overflow behavior, put this right after the direction = 0 line we added:
repeat (40) begin
#(2*HALF_PERIOD);
counter = counter + 1;
cycle_counter = cycle_counter + 1;
if (dout != counter) begin
failures = failures + 1;
$display("%t (COUNT UP) Error at cycle %d, output = %d, expecting = %d",$time, cycle_counter, dout, counter);
end
end
1. Look at this code and explain briefly what it does in your own words.
2. It is a good idea to make sure messages are labeled with information about which test they are associated with. Here our code outputs (COUNT UP) in all messages about this test. It also uses the $time variable to output the time during the simulation that the message was printed, this will help you track down where in a waveform an error occurred. Use this output to figure out what time in the simulation cycle 23 of this test occurs, you may need to modify the code or use the waveform.
3. Your output should include some failed tests, explain why this is happening.
4. These failures are not a problem with our component, but an issue with our test not accurately testing the true behavior of the module. Lets modify the test a bit by keeping track of when we have overflowed the register:
repeat (40) begin
#(2*HALF_PERIOD);
counter = counter + 1;
if (cycle_counter == 31) counter = -32;
cycle_counter = cycle_counter + 1;
if (dout != counter) begin
failures = failures + 1;
$display("%t (COUNT UP) Error at cycle %d, output = %d, expecting = %d", $time, cycle_counter, dout, counter);
end
end
Run this new test, explain what it is doing.
# 5 Working with waveforms
1. The default waveform shows you the inputs and outputs from the unit under test (aka UUT, or the module you are testing) as well as any parameters you set up in the test. We're going to make the waveform a little easier to read and useful for us.
2. You can rearrange any signals in the waveform, find the CLK label in the "Name" column of the waveform window, click and drag it to the top so it is the very top signal.
3. When you do this the line might disappear, that is fine. Press the "Re-launch" button on the top right. A pop-up will ask you to save your settings, press "Yes", lets name this configuration something useful: "counter_tests_config", then press "Save". You may want to consider making a new folder in your project directory to hold wave configs, you'll end up with many later on.
4. Lets also re-color the CLK signal so that it is easy to tell that it is different from the others at a glance. Right click "CLK" and go to "Signal Color" select a new color for the signal.
5. Now lets move on to the other signals, dout is in binary, but really our tests are treating it like a signed integer, so lets make it display as one. Right click "dout[5:0]", go to Radix > Signed Decimal.
6. You can do the same with the counter and cycle_count parameters. You can select multiple signals by selecting the first one then shift+clicking on others, then you can change the settings for all of them at once. Set these to "Unsigned Decimal" signals.
7. The failures and HALF_PERIOD signals are not super helpful to us, so lets remove them from the waveform. Simply select them and press the delete key.
8. Lets save this configuration File > Save, then close the simulation and re-run it. You'll notice that all the work you put in is suddenly gone! Don't worry, we can load the config: Go to File > Open and select the config you just saved (it will be a .wcfg file). This should load the config, sometimes after loading a config you may need to "Re-launch" to get all the signals to show up in the waveform.
9. Lets say we are tracking down a bug and want some signals added to the waveform that aren't there. We can just navigate to them using the "Instances and Processes" and "Object" panes on the left. Lets add the failures signal back into the waveform. With "counter_test" selected in "Instances and Processes" click and drag "failures[31:0]" from the Objects pane onto the waveform. It should be listed in the Name column and the waveform may immediately appear, or need a re-launch.
10. Next we'll see how to investigate one of the signals inside the component we are testing. Expand the "counter_test" in the Instance pane, expand "uut", you should see the three sub components of the module: the adder, mux, and register. Select the mux, then drag the selector bit (s) onto the waveform. You'll need to re-launch, do not save the waveform configuration for now. You should now see the selector bit in the waveform. Using these tools can help you track down which components are not behaving as expected during tests.
11. Close the simulation without saving the config.
12. Re-loading the config every time we simulate will get annoying fast, so lets make it open by default. In the Xilinx window with your test selected in the hierarchy right click "Simulate Behavioral Model" in the lower pane then select "Process properties".
13. Select "Use Custom Waveform Configuration File" then in the "Custom Waveform Configuration File" section use the "..." button to navigate to and select your .wcfg file.
14. Press OK. Now if you run the simulation again it will automatically load your configuration settings each time.
# 6 Testing other components
You have been provided with several different implementations of the counter component described above. All but one of them are implemented in a way that does not meet the specifications. You should not edit any of these module components. The only work you will do in this lab is in the test bench you created above.
We now have a test and configuration file that will confirm that a component correctly counts up by one each clock cycle. Lets test some of the other versions of the components in the project.
1. Open up "counter_test.v" locate the line that initializes the UUT. This is like a constructor in any other language, it gives the type that is being constructed, "counter_A" currently, and then a name to this instance of that component, "uut" by default. You an rename the component and your test will still work perfectly, try changing "uut" to "unit_under_test", for example. You'll see that it's name will change in the Instance pane when you simulate.
2. Lets test a different component, all of our components have the same structure to their constructor, so we just need to call a different module (the equivalent to just changing the Class of the object you are constructing in Java). Change "counter_A" to "counter_B", save the test and run the simulation.
3. Investigate the waveform and the output of your test. Does it pass your test?
4. You can run each of the other counter implementations through your test if you would like.
# 7 Write more tests
• Your job for this lab is to expand your test bench such that it can find the errors in all of the implementations of the counters you have been provided.
• You should end up with at least one test per error you find in the counters. All the counters have a unique error, so you should have at least 4 unique tests.
• Any one counter might fail multiple tests, but the combination of failed tests should help us pinpoint the exact error in the component.
• After you have written all of your tests write a brief explanation of how each component is broken and which of your tests detects the error.
• Remember to reset all of your inputs and parameters between each test.
• Make sure that you run the simulation long enough for all the tests to run for each module.
• Some of these modules have very subtle errors that you will need to carefully design tests to detect.
• You should only create a single test fixture that can run any one of the modules, do not make different files for each one.
# 8 Turning It In
Submit the answers to the questions contained in the lab guide using the question sheet via gradescope, only 1 per team (make sure all team member's names are included). In gradescope you are able add your team members names to the submission, make sure you do so. When you are prompted to indicate where the "code question" is on the answer sheet, just select the first page of the assignment, you DO NOT upload code to gradescope.
Submit your test bench to ALL team member's git repositories in the lab04 directory. Do NOT commit the Xilinx project, only add the new test fixture to the repo.
|
|
## File: d3Sankey.Rd
r-cran-d3network 0.5.2.1-3
% Generated by roxygen2 (4.0.1): do not edit by hand
\name{d3Sankey}
\alias{d3Sankey}
\title{Create a D3 JavaScript Sankey diagram}
\source{
D3.js was created by Michael Bostock. See \url{http://d3js.org/} and, more specifically for Sankey diagrams, \url{http://bost.ocks.org/mike/sankey/}.
}
\usage{
d3Sankey(Links, Nodes, Source, Target, Value = NULL, NodeID, height = 600,
  width = 900, fontsize = 7, nodeWidth = 15, nodePadding = 10,
  parentElement = "body", standAlone = TRUE, file = NULL, iframe = FALSE,
  d3Script = "http://d3js.org/d3.v3.min.js")
}
\arguments{
\item{Links}{a data frame object with the links between the nodes. It should include the \code{Source} and \code{Target} for each link. An optional \code{Value} variable can be included to specify how close the nodes are to one another.}
\item{Nodes}{a data frame containing the node id and properties of the nodes. If no ID is specified then the nodes must be in the same order as the \code{Source} variable column in the \code{Links} data frame. Currently only a grouping variable is allowed.}
\item{Source}{character string naming the network source variable in the \code{Links} data frame.}
\item{Target}{character string naming the network target variable in the \code{Links} data frame.}
\item{Value}{character string naming the variable in the \code{Links} data frame for how far away the nodes are from one another.}
\item{NodeID}{character string specifying the node IDs in the \code{Nodes} data frame.}
\item{height}{numeric height for the network graph's frame area in pixels.}
\item{width}{numeric width for the network graph's frame area in pixels.}
\item{fontsize}{numeric font size in pixels for the node text labels.}
\item{nodeWidth}{numeric width of each node.}
\item{nodePadding}{numeric essentially influences the width height.}
\item{parentElement}{character string specifying the parent element for the resulting svg network graph. This effectively allows the user to specify where on the html page the graph will be placed. By default the parent element is \code{body}.}
\item{standAlone}{logical, whether or not to return a complete HTML document (with head and foot) or just the script.}
\item{file}{a character string of the file name to save the resulting graph. If a file name is given a standalone webpage is created, i.e. with a header and footer. If \code{file = NULL} then result is returned to the console.}
\item{iframe}{logical. If \code{iframe = TRUE} then the graph is saved to an external file in the working directory and an HTML \code{iframe} linking to the file is printed to the console. This is useful if you are using Slidify and many other HTML slideshow frameworks and want to include the graph in the resulting page. If you set the knitr code chunk \code{results='asis'} then the graph will be rendered in the output. Usually, you can use \code{iframe = FALSE} if you are creating simple knitr Markdown or HTML pages. Note: you do not need to specify the file name if \code{iframe = TRUE}, however if you do, do not include the file path.}
\item{d3Script}{a character string that allows you to specify the location of the d3.js script you would like to use. The default is \url{http://d3js.org/d3.v3.min.js}.}
}
\description{
Create a D3 JavaScript Sankey diagram
}
\examples{
\dontrun{
# Recreate Bostock Sankey diagram: http://bost.ocks.org/mike/sankey/
# Load energy projection data
library(RCurl)
URL <- "https://raw.githubusercontent.com/christophergandrud/d3Network/sankey/JSONdata/energy.json"
Energy <- getURL(URL, ssl.verifypeer = FALSE)
# Convert to data frame
EngLinks <- JSONtoDF(jsonStr = Energy, array = "links")
EngNodes <- JSONtoDF(jsonStr = Energy, array = "nodes")
# Plot
d3Sankey(Links = EngLinks, Nodes = EngNodes, Source = "source",
         Target = "target", Value = "value", NodeID = "name",
         fontsize = 12, nodeWidth = 30, file = "~/Desktop/TestSankey.html")
}
}
|
|
## The support of top graded local cohomology modules
### Abstract
Let $R_0$ be any domain, let $R=R_0[U_1, ..., U_s]/I$, where $U_1, ..., U_s$ are indeterminates of some positive degrees, and $I\subset R_0[U_1, ..., U_s]$ is a homogeneous ideal. The main theorem in this paper states that all the associated primes of $H:=H^s_{R_+}(R)$ contain a certain non-zero ideal $c(I)$ of $R_0$ called the ``content'' of $I$. It follows that the support of $H$ is simply $V(c(I)R + R_+)$ (Corollary 1.8) and, in particular, $H$ vanishes if and only if $c(I)$ is the unit ideal. These results raise the question of whether local cohomology modules have finitely many minimal associated primes -- this paper provides further evidence in favour of such a result. Finally, we give a very short proof of a weak version of the monomial conjecture based on these results.
Year: 2003
OAI identifier: oai:eprints.whiterose.ac.uk:10161
|
|
Pages last updated:
28 June 2017
Equations for plant volumes 2
The gas storage volume (V) of an underground biogas plant is a segment of a dome, or a cap cut from the top of a dome of radius (R). The horizontal radius (r) of the base of the cap is related to the height (h) of the cap by:
$R^2 = r^2 + \left( {R - h} \right)^2$
Which gives:
$h = R \pm \sqrt {R^2 - r^2 }$
Now:
$V = {\pi \over 3}h^2 \left( {3R - h} \right)$
Substituting $R = {{r^2 + h^2} \over {2h}}$ (obtained from the first relation) and simplifying gives:
$V = {\pi \over 6}h\left( {3r^2 + h^2 } \right)$
Using the figures from the drawing, the spherical segment that forms the floor has a height of: 0.455 m, which gives a volume of: 1.21 m3.
The main tank has a height of: 1.275 m, which is the same as the radius of the floor, i.e. the top is a full hemisphere. The volume is: 4.96 m3. The total volume is: 6.16 m3, which is a bit less than the KVIC plant. However, part of that volume is used for gas storage, when the dome is full of gas.
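As a quick numerical check of the segment formula, here is a minimal sketch (the 0.455 m cap height and 1.275 m base radius are the drawing figures quoted above; the function name is my own):

```python
from math import pi

def cap_volume(r, h):
    """Volume of a spherical cap with base radius r and height h: V = pi/6 * h * (3*r**2 + h**2)."""
    return pi / 6.0 * h * (3.0 * r**2 + h**2)

# Floor segment from the drawing: base radius 1.275 m, height 0.455 m
print(round(cap_volume(1.275, 0.455), 2))   # ~1.21 m^3, matching the figure above
```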
The internal volumes of the KVIC drum and GGC 2047 designs can be calculated using other equations.
|
|
#### Methods
Let $n$ be a positive integer, prove that in the sequence $$\Big\lfloor{\frac{n}{1}}\Big\rfloor, \Big\lfloor{\frac{n}{2}}\Big\rfloor, \Big\lfloor{\frac{n}{3}}\Big\rfloor, \cdots, \Big\lfloor{\frac{n}{n}}\Big\rfloor$$ there are fewer than $2\sqrt{n}$ distinct integers.
Solve in integers the equation $$x^2+xy+y^2 = \left(\frac{x+y}{3}+1\right)^3.$$
Quadrilateral $APBQ$ is inscribed in circle $\omega$ with $\angle P = \angle Q = 90^{\circ}$ and $AP = AQ < BP$. Let $X$ be a variable point on segment $\overline{PQ}$. Line $AX$ meets $\omega$ again at $S$ (other than $A$). Point $T$ lies on arc $AQB$ of $\omega$ such that $\overline{XT}$ is perpendicular to $\overline{AX}$. Let $M$ denote the midpoint of chord $\overline{ST}$. As $X$ varies on segment $\overline{PQ}$, show that $M$ moves along a circle.
For each integer $n \ge 2$, let $A(n)$ be the area of the region in the coordinate plane defined by the inequalities $1\le x \le n$ and $0\le y \le x \left\lfloor \sqrt x \right\rfloor$, where $\left\lfloor \sqrt x \right\rfloor$ is the greatest integer not exceeding $\sqrt x$. Find the number of values of $n$ with $2\le n \le 1000$ for which $A(n)$ is an integer.
A block of wood has the shape of a right circular cylinder with radius $6$ and height $8$, and its entire surface has been painted blue. Points $A$ and $B$ are chosen on the edge of one of the circular faces of the cylinder so that $\overset\frown{AB}$ on that face measures $120^\text{o}$. The block is then sliced in half along the plane that passes through point $A$, point $B$, and the center of the cylinder, revealing a flat, unpainted face on each half. The area of one of these unpainted faces is $a\cdot\pi + b\sqrt{c}$, where $a$, $b$, and $c$ are integers and $c$ is not divisible by the square of any prime. Find $a+b+c$.
Let $m$ be the least positive integer divisible by $17$ whose digits sum is $17$. Find $m$.
How many ordered pairs of positive integers $(x, y)$ can satisfy the equation $x^2 + y^2 = x^3$?
Solve in positive integers $x^2 - 4xy + 5y^2 = 169$.
Let $b$ and $c$ be two positive integers, and $a$ be a prime number. If $a^2 + b^2 = c^2$, prove $a < b$ and $b+1=c$.
How many ordered triples of positive integers $(a, b, c)$ can satisfy $$\left\{\begin{array}{ll}ab + bc &= 44\\ ac + bc &=23\end{array}\right.$$
Solve in integers the equation $2(x+y)=xy+7$.
Solve in integers the question $x+y=x^2 -xy + y^2$.
Find the ordered pair of positive integers $(x, y)$ with the largest possible $y$ such that $\frac{1}{x} - \frac{1}{y}=\frac{1}{12}$ holds.
How many ordered pairs of integers $(x, y)$ satisfy $0 < x < y$ and $\sqrt{1984} = \sqrt{x} + \sqrt{y}$?
Find the number of positive integers solutions to $x^2 - y^2 = 105$.
Find any positive integer solution to $x^2 - 51y^2 = 1$.
Solve in positive integers $\frac{1}{x} + \frac{1}{y} + \frac{1}{z} = \frac{4}{5}$
Find all ordered pairs of integers $(x, y)$ that satisfy the equation $$\sqrt{y-\frac{1}{5}} + \sqrt{x-\frac{1}{5}} = \sqrt{5}$$
Solve in integer: $36((xy+1)z+x)=475(yz+1)$
The number $2^{29}$ is a nine-digit number whose digits are all distinct. Which digit of $0$ to $9$ does not appear?
Integers $x$ and $y$ with $x>y>0$ satisfy $x+y+xy=80$. What is $x$?
This graph shows the water temperature T degrees at time t minutes for a pot of water placed on a stove and heated to 100 degrees. On average, how many degrees did the temperature increase each minute during the first $8$ minutes?
A circle of radius 2 is centered at $A$. An equilateral triangle with side 4 has a vertex at $A$. What is the difference between the area of the region that lies inside the circle but outside the triangle and the area of the region that lies inside the triangle but outside the circle?
Three congruent isosceles triangles are constructed with their bases on the sides of an equilateral triangle of side length $1$. The sum of the areas of the three isosceles triangles is the same as the area of the equilateral triangle. What is the length of one of the two congruent sides of one of the isosceles triangles?
A rectangle of perimeter 22 cm is inscribed in a circle of area $16\pi$ $cm^2$. What is the area of the rectangle? Express your answer as a decimal to the nearest tenth.
|