id
stringlengths
1
6
url
stringlengths
16
1.82k
content
stringlengths
37
9.64M
14900
https://or.stackexchange.com/questions/12082/combine-two-constraints-into-one
linear programming - Combine two constraints into one - Operations Research Stack Exchange Join Operations Research By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google OR Email Password Sign up Already have an account? Log in Skip to main content Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Loading… Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site About Us Learn more about Stack Overflow the company, and our products current community Operations Research helpchat Operations Research Meta your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Operations Research Home Questions Unanswered AI Assist Labs Tags Chat Users Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more Combine two constraints into one Ask Question Asked 1 year, 4 months ago Modified7 months ago Viewed 245 times This question shows research effort; it is useful and clear 0 Save this question. Show activity on this post. \begingroup I have these two constraints, where the indices are i person, j shift and t, the day. x_{ijt} is the shift assignment, m_{ijt} the motivation of the person in a shift (only takes values m_{ijt}>0 if the corresponding value of x_{ijt}=1, otherwise m_{ijt}=0) and the slack s^+_{jt}. The first constraint ensures that demand is met and the second that if demand is greater than 0, then at least one person works and therefore not everything falls into the slack variable. Is it possible to combine both constraints into one constraint? The reasons for this do not matter for now. \begin{align} &\sum_{i}^{}m_{ijt}+s^+{jt}\ge Demand{jt}&\forall j,t\ &\sum_{i}^{}x_{ijt}\ge 0.1\cdot Demand_{jt}&\forall j,t \end{align} linear-programming constraint Share Cite Improve this question Follow Follow this question to receive notifications edited May 23, 2024 at 20:30 PeterD 1,746 6 6 silver badges 16 16 bronze badges asked May 23, 2024 at 19:19 nflgreaternbanflgreaternba 139 1 1 silver badge 8 8 bronze badges \endgroup 7 \begingroup You still want a linear constraint right? If this does not matter for you, you could multiply the two left hand sides with each other and have the demand on the right hand side.\endgroup PeterD –PeterD 2024-05-23 20:10:05 +00:00 Commented May 23, 2024 at 20:10 \begingroup Yes, it should be linear\endgroup nflgreaternba –nflgreaternba 2024-05-23 20:25:10 +00:00 Commented May 23, 2024 at 20:25 \begingroup Is m_{ijt} a variable or a predefined parameter. It seems a bit odd that the model determines each person’s motivation for taking a given shift at a certain day.\endgroup Sune –Sune 2024-05-23 20:40:47 +00:00 Commented May 23, 2024 at 20:40 \begingroup It is a variable.\endgroup nflgreaternba –nflgreaternba 2024-05-23 20:45:13 +00:00 Commented May 23, 2024 at 20:45 2 \begingroup Why do you want to combine the constraints into one? From a practical perspective it is often better to just have more constraints. If you observe specific performance issues, we might be able to help with those.\endgroup Kevin Dalmeijer –Kevin Dalmeijer 2024-05-23 21:05:23 +00:00 Commented May 23, 2024 at 21:05 |Show 2 more comments 2 Answers 2 Sorted by: Reset to default This answer is useful 1 Save this answer. Show activity on this post. \begingroup I wonder if the second constraint can be replaced by a simple upper bound on s^+_{jt}? Something like \begin{align} &\sum_{i}^{}m_{ijt}+s^+{jt}\ge Demand{jt}&\forall j,t\ &s^+{jt} \leq \alpha{i,t}&\forall j,t\ \end{align} where \alpha_{i,t} = \max\left(\frac{1}{2} Demand_{jt},\; Demand_{jt}-\epsilon\right) for some significant but small \epsilon > 0. This specific choice of constant, \alpha_{i,t}, implies that \sum_{i}^{}m_{ijt} \geq \min\left(\frac{1}{2} Demand_{jt},\; \epsilon\right) which is positive when Demand_{jt} > 0 and zero when Demand_{jt} = 0. This has the desired effect that when Demand_{jt} > 0, at least one of the m_{ijt} in the summation must be positive, which implies that at least one x_{ijt}=1, by the relationship you mentioned: [motivation] only takes values m_{ijt}>0 if the corresponding value of x_{ijt}=1, otherwise m_{ijt}=0) and the slack s^+_{jt}. Share Cite Improve this answer Follow Follow this answer to receive notifications answered May 24, 2024 at 9:07 Henrik Alsing FribergHenrik Alsing Friberg 2,264 9 9 silver badges 16 16 bronze badges \endgroup Add a comment| This answer is useful 0 Save this answer. Show activity on this post. \begingroup Assuming that for fixed j and t there is a lower bound L_{jt}>0 on the motivation of anyone assigned to a shift (0.1 is suggested in a comment), the second constraint can be replaced by an upper bound on the slack: s_{jt}^+ \le Demand_{jt} - L_{jt}. That will force nonzero total motivation whenever demand is positive, and nonzero motivation will force at least one assignment given that x_{ijt}=0\implies m_{ijt}=0. Share Cite Improve this answer Follow Follow this answer to receive notifications answered May 24, 2024 at 16:07 prubin♦prubin 42.2k 3 3 gold badges 43 43 silver badges 115 115 bronze badges \endgroup 2 \begingroup Thanks Mr. Rubin. Assuming I have a decomposed model, and I want to add this constraint on the slack to the Master Problem. Do I then need to add the duals of this constraint to the objective of the Subproblems aswell, or is it just a bound rather than a constraint?\endgroup nflgreaternba –nflgreaternba 2024-05-25 09:29:21 +00:00 Commented May 25, 2024 at 9:29 \begingroup You would need to include the dual prices for the upper bounds in the subproblems. You can get them from the reduced costs (orinanobworld.blogspot.com/2010/09/…).\endgroup prubin –prubin♦ 2024-05-25 16:19:55 +00:00 Commented May 25, 2024 at 16:19 Add a comment| Your Answer Thanks for contributing an answer to Operations Research Stack Exchange! Please be sure to answer the question. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Making statements based on opinion; back them up with references or personal experience. Use MathJax to format equations. MathJax reference. To learn more, see our tips on writing great answers. Draft saved Draft discarded Sign up or log in Sign up using Google Sign up using Email and Password Submit Post as a guest Name Email Required, but never shown Post Your Answer Discard By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions linear-programming constraint See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Related 8How to model 24 hours demand into a daily shift schedule? 0Help finding linear constraint - LP 0Generalize working days constraints 1Help with formulating the objective function of my subproblem 3How to modify master problem and individual sub problems in column generation? 1Help modeling linear constraints 0Adopt constraint formulation 0Weird behavior with column generation with multiple sets Hot Network Questions Why include unadjusted estimates in a study when reporting adjusted estimates? Who is the target audience of Netanyahu's speech at the United Nations? в ответе meaning in context I have a lot of PTO to take, which will make the deadline impossible Where is the first repetition in the cumulative hierarchy up to elementary equivalence? Do we declare the codomain of a function from the beginning, or do we determine it after defining the domain and operations? What is a "non-reversible filter"? Does the Mishna or Gemara ever explicitly mention the second day of Shavuot? Direct train from Rotterdam to Lille Europe For every second-order formula, is there a first-order formula equivalent to it by reification? Matthew 24:5 Many will come in my name! Calculating the node voltage How exactly are random assignments of cases to US Federal Judges implemented? Who ensures randomness? Are there laws regulating how it should be done? How to convert this extremely large group in GAP into a permutation group. Is there a way to defend from Spot kick? Is existence always locational? An odd question Cannot build the font table of Miama via nfssfont.tex How to locate a leak in an irrigation system? Xubuntu 24.04 - Libreoffice Can you formalize the definition of infinitely divisible in FOL? Explain answers to Scientific American crossword clues "Éclair filling" and "Sneaky Coward" Do we need the author's permission for reference How to start explorer with C: drive selected and shown in folder list? more hot questions Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Operations Research Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547
14901
https://blog.csdn.net/yinhun2012/article/details/79384273
三角函数图像绘制-CSDN博客 博客 下载 学习 社区 GitCode InsCodeAI 会议 搜索 AI 搜索 登录 登录后您可以: 复制代码和一键运行 与博主大V深度互动 解锁海量精选资源 获取前沿技术资讯 立即登录 会员中心 消息 历史 创作中心 创作 三角函数:图像和性质关系 最新推荐文章于 2025-07-25 13:39:45 发布 羊羊2035于 2018-02-27 16:37:10 发布 阅读量2.2w收藏 8 点赞数 3 CC 4.0 BY-SA版权 分类专栏:入门图形学之三角函数 版权声明:本文为博主原创文章,遵循CC 4.0 BY-SA版权协议,转载请附上原文出处链接和本声明。 本文链接: 入门图形学之三角函数 专栏收录该内容 5 篇文章 订阅专栏 紧接上一篇: 这次我们通过函数图文和unity程序来观察三角函数的图像,首先从基本的来: f(x) = sinx,为了函数图像的绘画,先建立xy直角坐标系,通过x的变值来计算f(x)的数值组成(a,b)坐标点,通过"线性函数"的性质,来"平滑"的连接这些(a,b)坐标点。(ps:线性代数 的学习博客我会讲解完基本数学后继续写,循序渐进的来) 上图的意义是,选取大量x的角度值,求出sinx的值后,绘画出f(x) = sinx的表现图像(文章尾部有函数图像程序,下载可运行) f(x) = cosx的图像,通过上一篇博客我们知道了cosx = sin(90°-x),那么y = cosx = sin(90°-x) = -sin(x-90°)的图像,这里我们可以认为y = -sin(x-90°)图像是由y = sinx的图像先右移90°然后y值正负号反向,那么图像就变成了 3.f(x) = tanx的图像,正切函数图像和y = sinx图像一样,我们也来建立大量坐标点就能表示出来 4.f(x) = cotx的函数图像,这里我们把f(x) = tanx的坐标值(a,b)变成(a,1/b),要注意剔除分母b为0的情况,就变成了f(x) = cotx的函数图像,当然我们也可以大量算坐标点构成的图像: 5.如下程序将展示函数图像 工程下载地址: 确定要放弃本次机会? 福利倒计时 : : 立减 ¥ 普通VIP年卡可用 立即使用 羊羊2035 关注关注 3点赞 踩 8 收藏 觉得还不错? 一键收藏 2评论 分享复制链接 分享到 QQ 分享到新浪微博 扫一扫 举报 举报 专栏目录 数学-三角函数 及其 图像 让你爱上电路设计 04-08 2万+ 三角函数 2 条评论 您还未登录,请先 登录 后发表或查看评论 ...arccotx, arcsinx, arccosx绘图表示_ arcsecx 图像 8-26 01反 三角函数 是反正弦arcsinx,反余弦arccosx,反正切arctanx,反余切arccotx,反正割 arcsecx,反余割arccscx这些函数的统称,各自表示其反正弦、反余弦、反正切、反余切,反正割,反余割为x的角。三角函数 的反函数是个多值函数,因为它并不满足一个自变量对应一个函数值的要求,其 图像 与其原函数关于函数 y=x 对称 怎么用计算机按反 三角函数 图像 及 性质,反 三角函数 图像 及 性质 8-19 摘要生成于C知道,由 DeepSeek-R1 满血版支持,前往体验 > 反 三角函数 图像 及 性质 2019-10-07 14 : 23 : 01文/颜雨 反 三角函数 是反正弦arcsinx,反余弦arccosx,反正切arctanx,反余切arccotx,反正割 arcsecx,反余割arccscx这些函数的统称,各自表示其反正弦、反余弦、反正切、反余切,反正割,反余割为x的角。 反 三角函数... 三角函数 及其之间的 关系 万无引力的博客 02-11 2万+ 正弦函数 : sin x 余弦函数 : cos x 正切函数 : tan x 余切函数 : cot x 正割函数 : sec x 读音 :[si : kent] 余割函数 : csc x 读音 :[keu'si : kent] 其中 tan x = sin x/cos x cot x = cos x/sin x sec x = 1 / cos x csc x ... 三角函数 完全指南:从入门到精通 最新发布 o1000000000的博客 07-25 2366 1. 直角三角形中的定义 (最直观)sin A = 对边 / 斜边 = a / ccos A = 邻边 / 斜边 = b / ctan A = 对边 / 邻边 = a / bcot A = 邻边 / 对边 = b / a (tan A 的倒数)sec A = 斜边 / 邻边 = c / b (cos A 的倒数)csc A = 斜边 / 对边 = c / a (sin A 的倒数)2. 单位圆定义 (更通用,定义任意角)在直角坐标系中,以原点 O 为圆心画半径为 1 的单位圆。sin θ = y。 一个工具箱 之 三角函数 计算器。从零基础到精通,收藏这篇就够了!_sin... 8-7 [正割函数y=sec x在[0,π/2)U(π/2,π]上的反函数,叫做反正割函数。记作 arcsecx,表示一个正割值为x的角,该角的范围在[0,π/2)U(π/2,π]区间内。定义域(-∞,-1]U1,+∞),值域[0,π/2)U(π/2,π]。 反余割函数 余割函数y=csc x在[-π/2,0)U(0,π/2]上的反函数,叫做反余割函数。 三角函数 详解 : 定义、性质 与应用 8-19 本文详细介绍了 三角函数 在直角三角形中的基本定义,包括正弦、余弦、正切等及其坐标系中的表达。此外,涵盖了正弦、余弦、正切定理,以及倍角、半角公式,还有 三角函数 的反函数 和 参数方程。涉及了直角坐标转换 和 极坐标转换,以及一些关键公式 和 技巧。 摘要生成于C知道,由 DeepSeek-R1 满血版支持,前往体验 > ... 【HDU 3903】三角函数 的一些 性质 YG爱木木 04-02 3394 1.题目链接。题目大意:给定一个三角形的三条边:a,b,c以及三个数n,m,k.这些数据都是整数。判定下面这个式子是不是有理数。 2.分析:显然,我们知道,cosA,cosB,cosC一定是有理数,因为从余弦定理可以轻松证明。然后就是几个 性质 了。 (1)cosA是有理... 高等数学(预备知识之 三角函数 图像) qq_61866637的博客 11-18 2万+ 三角函数 性质, 三角函数 图像 arcsinx的图_y=arcsinx 图像 怎么画 8-16 反正弦函数(反 三角函数 之一)为正弦函数y=sinx(x∈[-½π,½π])的反函数,记作y=arcsinx或siny=x(x∈[-1,1])。由原函数的 图像 和 它的反函数的 图像 关于一三象限角平分线对称可知正弦函数的 图像 和 反正弦函数的 图像 也关于一三象限角平分线对称。 高中数学基础02 : 反函数与基本初等函数_基本初等函数的反函数 8-2 [正割函数y=sec x在[0,π/2)U(π/2,π]上的反函数,叫做反正割函数。记作 arcsecx,表示一个正割值为x的角,该角的范围在[0,π/2)U(π/2,π]区间内。定义域(-∞,-1]U1,+∞),值域[0,π/2)U(π/2,π]。 反余割函数 余割函数y=csc x在[-π/2,0)U(0,π/2]上的反函数,叫做反余割... 【数学】三角函数 性质 及公式 happyflovef的专栏 09-02 2327 http ://www.fff-cn.com/xxgj/gzsxhsgs.html 数学学习好网站 http ://www.verycd.com/topics/16786/ 数据结构学习好网站 一.三角函数 的 性质 教育资料完美版(2021-2022年)高中数学 三角函数 的 图像 与 性质 实习生听课记录.docx 10-04 高中数学课程中的 三角函数 是数学学习的一个重要章节,它涵盖了函数的基本概念,特别是正弦函数 和 余弦函数的 图像 与 性质。在这个教育资料完美版的实习生听课记录中,教师通过富有创意的方式,如引入音乐与数学的关联,... 数学函数图形概览 8-20 [14. y=arcsecx x = secy 定义域 (-∞,−1],[1,+∞)\infin, -1], [1,+\infin)∞,−1],1,+∞) 值域[0,π2),(π2,π0 ,\frac \pi 2) ,(\frac \pi 2, \pi0,2π​),(2π​,π] 15. y = ex 定义域 R 值域(0,+∞0 , +\infin0,+∞) ... arccotx 图像 在matlab,arccotx 图像(cotx的定义域 和 图像) 8-22 arccosx 图像 : 它是一种反 三角函数,它的值是以弧度表达的角度,定义域 :[-1,1]。由于是多值函数,往往取它的单值,值域为[0,π],记作y=arccosx,称它叫做反 三角函数. 全是反函数。 所以原函数关于y=x对称就是反函数的 图像 了。 例 : arcsinx的 图像 就是sinx关于y=x对称后的 图像。 三角函数 的 图像 和 性质 教师讲义.doc 10-11 三角函数 图像 可以通过平移 和 伸缩变换来得到新的函数 图像。例如,y=sin(ωx+φ)的 图像 可以通过先平移后伸缩或者先伸缩后平移的方法得到。ω影响周期,φ影响相位,而A 和 B则影响函数的振幅 和 位置。 综上所述,三角... 任意角弧度任意角的 三角函数 三角函数 图像 和 性质.doc 12-25 9. 五点法作图:在第20题中提到,五点法是一种绘制 三角函数 图像 的方法,通过找出一个周期内特定五个点的坐标来描绘函数 图像。 10. 解三角方程:第21题涉及到解三角方程组,找到满足特定条件的角度值。 综... 求y=sinx在[0,π]上的反函数_求函数y=sinx在[0,π]上的反函数 8-27 y=sinx 图像 y=arcsinx 图像 性质 对比 2.应用 (1)求y=sinx[0,π]的反函数 解 : 当x∈[0,π/2],y=sinx的反函数x=arcsiny 当x∈[π/2,π],y=sinx的定义域不是其反函数的值域,所以我们首先要利用诱导公式把sinx在[π/2,π]的定义域变成[0,π/2]。易得π-x∈[0,π/2]且sin(π-x)=sinx=y,... 三角函数 的 图像 和 性质 题型归纳总结.doc 09-28 - 对称轴 和 对称中心是解决 三角函数 图像 问题的关键。 5. 三角函数 性质 的综合应用: - 在解决综合题型时,对称性尤其重要,因为它可以帮助确定函数 图像 的关键点,如最值点、对称轴等。 6. 根据条件确定解析... 高一数学-三角函数 的 图像 和 性质 练习题.doc 10-11 8. 五点法绘制 三角函数 图像:问题12要求使用五点法画出函数y=3sin(2x/3-4π/3)的 图像,五点法涉及找到函数在0,π/2,π,3π/2 和 2π处的值。 9. 复杂 三角函数 的 性质:问题13 和 14涉及到了更复杂的 三角函数,如y=Asin... 基本 三角函数 图像 x_trusher的博客 03-27 7037 三角函数 简介 描述 简要描述六种 三角函数 及其 图像 正弦–sin x 余弦–cos x 正切–tan x 正割–sec x sec x = 1 / cos x 余割–csc x csc x = 1 / sin x 余切–cot x cot x = cos x / sin x = 1 / tan x 参考链接 ... 三角函数$y=Asin(\omega x+\phi)$的 图像 和 性质 weixin_34417183的博客 01-07 1586 fieldset { border : 0; border : 1px dashed #ddd; margin-top : 20px; margin-bottom : 20px } legend { color :#06e; margin-left : 20px; margin-top :-12px;} details[open] ... 三角函数,正弦,余弦,正切讲解 吾与春风皆过客 04-06 1万+ 三角函数 正弦,余弦,正切讲解 基本 三角函数 Tony Wey的博客 07-30 4334 这些函数的定义基于直角三角形的三个边:两个较短的边(直角边)和 一个较长的边(斜边)。余弦值通常用于计算角度,但在三角形的背景下,它更常用于计算边长之间的 关系。如果你有三个边长 和 对应的角度,你可以使用余弦定理来计算任意一个角的余弦值。其中 a, b, c 是三角形的边长,θ是夹角 c 和 边 a, b 之间的角。余弦定理用于计算三角形中一个角的余弦值,已知该角 和 另外两边的长度。是两个在数学中常用的 三角函数,它们用于计算角度的值,但它们之间有一些区别。定义:余弦值是直角三角形中任意角的邻边与斜边的比值。 sinx、cscx、cosx、secx以及tanx、cotx 图像 详解 热门推荐 weixin_54574988的博客 01-12 8万+ 可以很形象的理解正割,余割,正弦,余弦这类名称的中文含义。 三角函数 图像 weixin_33916256的博客 03-17 1070 非常明显, 当 x 是 =2 的奇数倍数时, y = tan(x) 有垂直渐近线 (是无定义的).此外,图像 的对称性表明, tan(x)是 x的奇函数. y = sec(x), y = csc(x)及 y = cot(x) 的函数 图像 也值得我们去学习, 如图2-20、图 2-21、图 2-22所看到的. ... 【数学】三角函数 及部分微积分函数图象整理 qq_40690815的博客 02-27 1万+ 三角函数 及部分微积分函数图象整理1. 三角函数 1.1 cosx、secx1.2 sinx、cscx1.3 tanx、cotx1.4 sec^2^x、tanx2.反 三角函数 2.1 arcsinx、arctanx2.2 arccosx 使用Desmos绘图 : https ://www.desmos.com/calculator 1. 三角函数 1.1 cosx、secx 1.2 sinx、cscx... 高中数学:三角函数 扩展secx、cscx、cotx函数 图像 及相关 关系 Brave_heart4pzj的博客 04-18 6万+ 数学 三角函数 图像 与 性质:题型归纳与解题技巧总结 三角函数 图像 与 性质 题型归纳总结汇报材料是一份详细梳理了 三角函数 在高中数学中的核心知识点的汇总文档。这份材料主要包括以下几个部分: 1. 题型归纳: - 已知函数解析式确定 性质:遇到这类问题,通常会给出函数... 关于我们 招贤纳士 商务合作 寻求报道 400-660-0108 kefu@csdn.net 在线客服 工作时间 8:30-22:00 公安备案号11010502030143 京ICP备19004658号 京网文〔2020〕1039-165号 经营性网站备案信息 北京互联网违法和不良信息举报中心 家长监护 网络110报警服务 中国互联网举报中心 Chrome商店下载 账号管理规范 版权与免责声明 版权申诉 出版物许可证 营业执照 ©1999-2025北京创新乐知网络技术有限公司 羊羊2035 博客等级 码龄13年 179 原创990 点赞 1843 收藏 1278 粉丝 关注 私信 ☁️腾讯云【新用户专享】1️⃣年79元 轻量服务器4核4G3M广告 热门文章 线性代数:转置矩阵(matrix transpose)和逆矩阵(matrix inverse) 248455 入门图形学:图形学原理(一) 27719 几何向量:计算光线反射reflect向量 23885 三角函数:图像和性质关系 22292 三角函数:正弦余弦定理及应用 21469 分类专栏 入门图形学之三角函数5篇 入门图形学之几何向量25篇 入门图形学之线性代数14篇 入门图形学之辅助工具4篇 入门图形学之图形学理论22篇 入门图形学之C for Graphic66篇 常用算法2篇 BUG及异常3篇 开发手札20篇 Recite Words 入门图形学之物理学CG7篇 学习感悟5篇 入门图形学之OpenGL 入门图形学之微积分2篇 展开全部收起 上一篇: 三角函数:直角三角形内角关系公式 下一篇: 三角函数:正弦余弦定理及应用 最新评论 开发手札:UnrealEngine和Unity3d坐标系问题 羊羊2035:啊,还有这事?我自己从没设置限制,我先检查一下 开发手札:UnrealEngine和Unity3d坐标系问题 崔百川:大佬,你之前的一些文章是不是被平台自动设置vip可见了? 入门图形学:图形学原理(二) 2401_88855390:感谢分享,不过哥们,那个是不是念架构,而不是构架 线性代数:转置矩阵(matrix transpose)和逆矩阵(matrix inverse) 它们却没见过蝴蝶:(M.N)T=NT.MT。证明的时候,写的n行m列,但是证明过程是从00开始到n0结束,那么就是n+1行,同理列应为m+1列。 入门图形学:VR畸变后处理 普通村民:实际上的光学应该不是提供畸变系数,而是完整的畸变点列数据吧 大家在看 人形机器人“量产元年”开启:政策、资本、技术“三驾马车”拉动万亿新赛道 排序算法全解,为什么快排的时间波动特别大? 1331 Three.js入门:3D图形开发核心指南 让 MySQL 索引失效的哪些场景~ 505 必看! 小白必须了解的 “ ADC的扫描模式与规则通道 ” !!! 378 最新文章 开发手札:UnrealEngine编辑器开发 开发手札:UnrealEngine和Unity3d坐标系问题 URP图形入门:Pass绘制TransparentDepth 2025年 6篇 2024年 17篇 2023年 2篇 2022年 10篇 2021年 21篇 2020年 26篇 2019年 50篇 2018年 49篇 ☁️腾讯云【新用户专享】1️⃣年79元 轻量服务器4核4G3M广告 上一篇: 三角函数:直角三角形内角关系公式 下一篇: 三角函数:正弦余弦定理及应用 分类专栏 入门图形学之三角函数5篇 入门图形学之几何向量25篇 入门图形学之线性代数14篇 入门图形学之辅助工具4篇 入门图形学之图形学理论22篇 入门图形学之C for Graphic66篇 常用算法2篇 BUG及异常3篇 开发手札20篇 Recite Words 入门图形学之物理学CG7篇 学习感悟5篇 入门图形学之OpenGL 入门图形学之微积分2篇 展开全部收起 登录后您可以享受以下权益: 免费复制代码 和博主大V互动 下载海量资源 发动态/写文章/加入社区 ×立即登录 评论 2 被折叠的 条评论 为什么被折叠?到【灌水乐园】发言 查看更多评论 添加红包 祝福语 请填写红包祝福语或标题 红包数量 个 红包个数最小为10个 红包总金额 元 红包金额最低5元 余额支付 当前余额 3.43 元 前往充值 > 需支付:10.00 元 取消 确定 成就一亿技术人! 领取后你会自动成为博主和红包主的粉丝 规则 hope_wisdom 发出的红包 实付 元 使用余额支付 点击重新获取 扫码支付 钱包余额 0 抵扣说明: 1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。 2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。 余额充值 确定 取消 举报 选择你想要举报的内容(必选) 内容涉黄 政治相关 内容抄袭 涉嫌广告 内容侵权 侮辱谩骂 样式问题 其他 原文链接(必填) 请选择具体原因(必选) 包含不实信息 涉及个人隐私 请选择具体原因(必选) 侮辱谩骂 诽谤 请选择具体原因(必选) 搬家样式 博文样式 补充说明(选填) 取消 确定 点击体验 DeepSeekR1满血版 下载APP 程序员都在用的中文IT技术交流社区 公众号 专业的中文 IT 技术社区,与千万技术人共成长 视频号 关注【CSDN】视频号,行业资讯、技术分享精彩不断,直播好礼送不停!客服返回顶部
14902
https://www.bmj.com/bmj/section-pdf/758761?path=/bmj/348/7963/Practice.full.pdf
34 BMJ | 21 JUNE 2014 | VOLUME 348 PRACTICE 1Royal College of Physicians, National Clinical Guideline Centre, London NW1 4LE, UK 2Primary Care Clinical Sciences, University of Birmingham, UK 3Department of Cardiology, Leeds General Infirmary, Leeds, UK Correspondence to: C Jones clare.jones@rcplondon.ac.uk Cite this as: BMJ 2014;348:g3655 doi: 10.1136/bmj.g3655 This is one of a series of BMJ summaries of new guidelines based on the best available evidence; they highlight important recommendations for clinical practice, especially where uncertainty or controversy exists. Further information about the guidance, a list of members of the guideline development group, and the supporting evidence statements are in the full version on bmj.com. Atrial fibrillation is increasingly common,1 with more than 800 000 people being affected in England.2 Many people are managed in primary care without hospital involvement. The condition is a major cause of morbidity, particularly stroke, and it reduces life expectancy. Strokes caused by atrial fibrillation are largely avoidable—most can be prevented by anticoagulation. Yet uptake of anti­ coagulation by people with known atrial fibrillation who are at increased risk of stroke is suboptimal.3‑5 Since the publication of the 2006 guidance, several developments relating to risk stratification, stroke preven­ tion, and rhythm management have led to a partial update on the 2006 guidance. This article summarises the most recent recommendations from the National Institute for Health and Care Excellence (NICE).6 Recommendations NICE recommendations are based on systematic reviews of best available evidence and explicit consideration of cost effectiveness. When minimal evidence is available, recommendations are based on the Guideline Develop­ ment Group’s experience and opinion of what constitutes good practice. Evidence levels for the recommendations are in the full version of this article on bmj.com. All recom­ mendations below should be in accordance with the NICE patient experience guideline,7 and the benefits and risks of treatment should be discussed with the patient. Diagnosis and assessment • Perform manual pulse palpation to assess for the presence of an irregular pulse, which might be indicative of underlying atrial fibrillation in people presenting with any of the following: breathlessness GUIDELINES The management of atrial fibrillation: summary of updated NICE guidance Clare Jones,1 Vicki Pollit,1 David Fitzmaurice,2 Campbell Cowan,3 On behalf of the Guideline Development Group or dyspnoea, palpitations, syncope or dizziness, chest discomfort, stroke or transient ischaemic attack. (Recommendation from 2006 guideline.) • Perform electrocardiography (ECG) in all people, whether symptomatic or not, in whom atrial fibrillation is suspected because an irregular pulse has been detected. (Recommendation from 2006 guideline.) • In people with suspected paroxysmal atrial fibrillation undetected by standard ECG: – – Use 24 hour ambulatory ECG in those with suspected asymptomatic episodes or symptomatic episodes less than 24 hours apart – – Use event recorder ECG in those with symptomatic episodes more than 24 hours apart. (Recommendation from 2006 guideline.) Personalised package of care • Offer people with atrial fibrillation a personalised package of care (box). Ensure that the package of care is documented and delivered. (New recommendation.) Referral • Refer people promptly at any stage if treatment does not control the symptoms of atrial fibrillation and more specialised management is needed. Prompt referral was defined as no longer than four weeks after the final failed treatment or no longer than four weeks if atrial fibrillation recurs after cardioversion and further specialised management is needed. (New recommendation.) Assessment of stroke and bleeding risks Stroke and bleeding risk should be assessed in all people with atrial fibrillation. • Use the CHA2DS2-VASc (table 1)9 score to assess stroke risk in people with any of the following: Components of a care package for people with atrial fibrillation Stroke awareness and measures to prevent stroke Rate control Assessment of symptoms for rhythm control Who to contact for advice if needed Psychological support if needed Up to date and comprehensive education and information on: – – Cause, effects, and possible complications of atrial fibrillation – – Management of rate and rhythm control – – Anticoagulation – – Practical advice on anticoagulation8 – – Support networks (such as cardiovascular charities) Examples of stroke awareness include information on the symptoms of stroke and how atrial fibrillation can lead to a stroke; measures to prevent stroke include anticoagulation for atrial fibrillation. Table 1 | CHA2DS2-VASc stroke risk stratification. Adapted, with permission, from Lip and colleagues9 Risk factor Score Congestive heart failure or left ventricular dysfunction 1 Hypertension 1 Age ≥75 years 2 Diabetes mellitus 1 Stroke or transient ischaemic attack or systemic thromboembolism 2 Vascular disease 1 Age 65-74 years 1 Female sex (sex category) 1 Vascular disease defined as previous myocardial infarction, peripheral arterial disease, or aortic plaque. BMJ | 21 JUNE 2014 | VOLUME 348 35 PRACTICE – – Harmful alcohol consumption. (New recommendation.) • When discussing the benefits and risks of anticoagulation: – – For most people the benefit of anticoagulation outweighs the risk of bleeding – – For people with an increased risk of bleeding the benefit of anticoagulation may not always outweigh the bleeding risk, and careful monitoring of bleeding risk is important. (New recommendation.) • Do not withhold anticoagulation solely because the person is at risk of having a fall. (New recommendation.) Drug treatments to prevent stroke (figure) The guideline revision emphasises that people at very low risk, who should not receive an anticoagulant, should be identified first, with anticoagulation considered or offered to the remainder, taking bleeding risk into account. Anti­ coagulation may be with a non-vitamin K antagonist oral anticoagulant (apixaban, dabigatran etexilate, or rivaroxa­ ban, in accordance with individual NICE appraisals11‑13) or a vitamin K antagonist (such as warfarin). • Do not offer stroke prevention treatment to people aged under 65 years with atrial fibrillation and no risk factors other than their sex (that is, very low risk of stroke equating to CHA2DS2-VASc score of 0 for men or 1 for women). (New recommendation.) • Consider anticoagulation for men with a CHA2DS2-VASc score of 1. Take the bleeding risk into account. (New recommendation.) • Offer anticoagulation to people with a CHA2DS2-VASc score of 2 or above, taking bleeding risk into account. (New recommendation.) • Discuss options for anticoagulation with the person and base choice on his or her clinical features and preferences. (New recommendation.) • Do not offer aspirin monotherapy solely for stroke prevention to people with atrial fibrillation. (New recommendation.) Assessing anticoagulation control with vitamin K antagonists For people receiving a vitamin K antagonist, adequacy of anticoagulant control should be assessed. • Calculate individual time in therapeutic range (TTR) at each visit. When calculating TTR: – – Use a validated method of measurement, such as the Rosendaal method,14 for computer assisted dosing or proportion of tests in range for manual dosing – – Exclude measurements taken during the first six weeks of treatment – – Calculate TTR over a maintenance period of at least six months. (New recommendation.) [Note: TTR is a means of assessing the quality of anticoagulant control—that is, the proportion of time an individual patient’s INR values are within the target range. It is expressed as a percentage and assumes a linear change between INR results. A higher TTR is associated with a reduction in both bleeding and thrombotic events.] – – Symptomatic or asymptomatic paroxysmal, persistent, or permanent atrial fibrillation – – Atrial flutter – – A continuing risk of the recurrence of arrhythmia after cardioversion back to sinus rhythm. (New recommendation.) • Use the HAS-BLED (table 2)10 score to assess the risk of bleeding in people who are starting, or have started, anticoagulation and to highlight, correct, and monitor modifiable risk factors: – – Uncontrolled hypertension – – Poor control of international normalised ratio (INR; “labile INRs”) – – Concurrent drugs, such as concomitant use of aspirin or a non-steroidal anti-inflammatory drug Table 2 | HAS-BLED bleeding risk score. Adapted, with permission, from Pisters and colleagues10 Risk factor Score Hypertension 1 Abnormal renal and liver function (1 point each) 1 or 2 Stroke 1 Bleeding 1 Labile international normalised ratios 1 Elderly (age >65 years) 1 Drugs or alcohol (1 point each) 1 or 2 Maximum score 9 No antithrombotic therapy Assess bleeding risk stratifcation using HAS-BLED Assess stroke risk stratifcation using CHA2DS2-VASc Discuss risks and benefts of anticoagulation Discuss options for anticoagulation with person and base choice on his or her clinical features and preferences Identify low risk patients - CHA2DS2-VASc = 0 (men) or 1 (women) Low risk Increased risk Anticoagulation contraindicated Poor control CHA2DS2-VASc = 1 (in men) Consider oral anticoagulation CHA2DS2-VASc ≥2 Ofer oral anticoagulation Vitamin K antagonists (VKA) Non-VKA oral anticoagulation11-13 Assess anticoagulation control Non-VKA oral anticoagulation Non-VKA contraindicated or not tolerated People who choose not to have treatment Lef atrial appendage occlusion Annual review for all patients Stroke prevention in people with non-valvular atrial fibrillation 36 BMJ | 21 JUNE 2014 | VOLUME 348 PRACTICE rhythm control strategy would be more suitable on the basis of clinical judgment (these include people with new onset atrial fibrillation or atrial fibrillation with a reversible cause). (New recommendation.) • Offer a standard β blocker (a β blocker other than sotalol) or a rate limiting calcium channel blocker as initial monotherapy to people with atrial fibrillation who need drug treatment as part of a rate control strategy. (New recommendation.) • Consider digoxin monotherapy for people with non-paroxysmal atrial fibrillation only if they are sedentary (do no physical exercise or very little). (New recommendation.) • If monotherapy does not control symptoms, and if continuing symptoms are thought to be caused by poor ventricular rate control, consider combination therapy with any two of the following: – – A β blocker – – Dilitazem – – Digoxin. (New recommendation.) • Consider pharmacological or electrical rhythm control (or both) for people with atrial fibrillation whose symptoms continue after their heart rate has been controlled or for whom a rate control strategy has not been successful. (New recommendation.) • Assess the need for drug treatment for long term rhythm control. (New recommendation.) [Note: Drug treatment for long term rhythm control might be needed in people with paroxysmal atrial fibrillation to maximise their time in sinus rhythm, or after cardioversion in people who are thought likely to relapse, to increase the likelihood of maintaining sinus rhythm.] • If drug treatment for long term rhythm control is needed, consider a standard β blocker (a β blocker other than sotalol) as first line treatment unless there are contraindications. (New recommendation.) [Note: Examples of possible contraindications include excessive bradycardia, asthma, or peripheral vascular disease.] • If β blockers are contraindicated or unsuccessful, assess the suitability of alternative drugs for rhythm control, taking comorbidities into account. (New recommendation.) Non-pharmacological management of rate and rhythm Left atrial ablation is an effective option when drug management has failed. Ablation treatment has a better outcome when undertaken earlier rather than later and for paroxysmal rather than persistent atrial fibrillation. Pacing followed by atrioventricular node ablation is an alternative to left atrial ablation. Pacing followed by atrio­ ventricular node ablation does not restore sinus rhythm but successfully limits ventricular rate. • If drug treatment has failed to control symptoms of atrial fibrillation or is unsuitable: – – Offer left atrial catheter ablation to people with paroxysmal atrial fibrillation – – Consider left atrial catheter or surgical ablation for people with persistent atrial fibrillation (New recommendation) • Reassess anticoagulation for a person with poor anticoagulation control shown by any of the following: – – Two INR values higher than 5 or one INR value higher than 8 within the past six months – – Two INR values less than 1.5 within the past 6 months – – TTR less than 65%. (New recommendation.) • When reassessing anticoagulation, take into account and, if possible, correct the following factors that may contribute to poor anticoagulation control: – – Cognitive function – – Adherence to prescribed treatment – – Illness – – Interacting drugs – – Lifestyle factors including diet and alcohol consumption. (New recommendation.) • If poor anticoagulation control cannot be improved, evaluate risks and benefits of alternative stroke prevention. (New recommendation.) [Note: The GDG agreed that a logical alternative would be to offer one of the non-vitamin K antagonist oral anticoagulants.] Review of stroke and anticoagulant risk All people with atrial fibrillation should undergo review at least annually. • For people not taking an anticoagulant, review stroke risk when they reach age 65 or if they develop any of the following at any age: – – Diabetes – – Heart failure – – Peripheral arterial disease – – Coronary heart disease – – Stroke, transient ischaemic attack, or systemic thromboembolism. (New recommendation.) • For people who are not taking an anticoagulant, review stroke and bleeding risks annually. Ensure that all reviews and decisions are documented. (New recommendation.) • For people who are taking an anticoagulant, review the need for anticoagulation and the quality of anticoagulation at least annually, or more often if clinically relevant events that affect anticoagulation or bleeding risk occur. (New recommendation.) Left atrial appendage occlusion for people unable to take anticoagulants This is a catheter based technique for closure or oblitera­ tion of the left atrial appendage, which is thought to be the major source of thrombus that causes stroke and periph­ eral thromboembolism in people with atrial fibrillation. • Consider left atrial appendage occlusion if anticoagulation is contraindicated or not tolerated. (New recommendation.) Rate and rhythm control There is currently no evidence that rhythm management is superior to rate control in preventing stroke or reduc­ ing mortality. The main treatment objective is therefore control of symptoms. • Offer rate control as the first line strategy to people with atrial fibrillation except for those in whom a bmj.com Previous articles in this series Ж Ж Prevention and management of pressure ulcers: summary of NICE guidance (BMJ 2014;348:g2592) Ж Ж Management of psychosis and schizophrenia in adults (BMJ 2014;348:g1173) Ж Ж Early management of head injury: summary of updated NICE guidance (BMJ 2014;348:g104) Ж Ж Intravenous fluid therapy for adults in hospital: summary of NICE guidance (BMJ 2013;347:f7073) Ж Ж Secondary prevention for patients after a myocardial infarction: summary of updated NICE guidance (BMJ 2013;347:f6544) BMJ | 21 JUNE 2014 | VOLUME 348 37 PRACTICE 1 Chugh SS, Havmoeller R, Narayanan K, Singh D, Rienstra M, Benjamin EJ, et al. Worldwide epidemiology of atrial fibrillation: a Global Burden of Disease 2010 Study. Circulation 2014;129:837-47. 2 National Institute for Health and Care Excellence. Support for commissioning: anticoagulation therapy. 2013. nice org uk/support-for-commissioning-anticoagulation-therapy-cmg49/1-key-issues-in-commissioning-anticoagulation-therapy. 3 Cowan C, Healicon R, Robson I, Long WR, Barrett J, Fay M, et al. The use of anticoagulants in the management of atrial fibrillation among general practices in England. Heart 2013;99:1166-72. 4 Holt TA, Hunter TD, Gunnarsson C, Khan N, Cload P, Lip GYH. Risk of stroke and oral anticoagulant use in atrial fibrillation: a cross-sectional survey. Br J Gen Pract 2012;62:e710-7. 5 Ogilvie IM, Newton N, Welner SA, Cowell W, Lip GY. Underuse of oral anticoagulants in atrial fibrillation: a systematic review. Am J Med 2010;123:638-45. 6 National Institute for Health and Care Excellence. Atrial fibrillation: the management of atrial fibrillation. (Clinical guideline 180.) 2014. http:// guidance.nice.org.uk/CG180. 7 National Institute for Health and Care Excellence. Patient experience in adult NHS services: improving the experience of care for people using adult NHS services. (Clinical guideline 138.) 2012. CG138. 8 National Institute for Health and Care Excellence. Venous thromboembolic diseases: the management of venous thromboembolic diseases and the role of thrombophilia testing. (Clinical guideline 144.) 2012. http:// guidance.nice.org.uk/CG144. 9 Lip GYH, Nieuwlaat R, Pisters R, Lane DA, Crijns HJGM. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the euro heart survey on atrial fibrillation. Chest 2010;137:263-72. 10 Pisters R, Lane DA, Nieuwlaat R, de Vos CB, Crijns HJGM, Lip GYH. A novel user-friendly score (HAS-BLED) to assess 1-year risk of major bleeding in patients with atrial fibrillation: the Euro Heart Survey. Chest 2010;138:1093-100. 11 National Institute for Health and Care Excellence. Rivaroxaban for the prevention of stroke and systemic embolism in people with atrial fibrillation. NICE technology appraisal guidance 256. 2012. www.nice.org.uk/ nicemedia/live/13746/59295/59295.pdf. 12 National Institute for Health and Care Excellence. Dabigatran etexilate for the prevention of stroke and systemic embolism in atrial fibrillation. NICE technology appraisal guidance 249. 2012. TA249. 13 National Institute for Health and Care Excellence. Apixaban for preventing stroke and systemic embolism in people with nonvalvular atrial fibrillation. NICE technology appraisal guidance 275. 2013. www.nice.org.uk/ nicemedia/live/14086/62874/62874.pdf. 14 Rosendaal FR, Cannegieter SC, van der Meer FJ, Briet E. A method to determine the optimal intensity of oral anticoagulant therapy. Thromb Haemost 1993;69:236-9. 15 Mant J, Hobbs FDR, Fletcher K, Roalfe A, Fitzmaurice D, Lip GYH, et al. Warfarin versus aspirin for stroke prevention in an elderly community population with atrial fibrillation (the Birmingham Atrial Fibrillation Treatment of the Aged Study, BAFTA): a randomised controlled trial. Lancet 2007;370:493-503. • Consider left atrial surgical ablation at the same time as other cardiothoracic surgery for people with symptomatic atrial fibrillation. (New recommendation) • Consider pacing and atrioventricular node ablation for people with permanent atrial fibrillation and symptoms of left ventricular dysfunction thought to be caused by high ventricular rates. (New recommendation) • When considering pacing and atrioventricular node ablation, reassess symptoms and the consequent need for ablation after pacing has been carried out and drug treatment further optimised. (New recommendation) Overcoming barriers Anticoagulation is underused in the management of atrial fibrillation.4 5 In older people in particular, aspirin is often used in preference to anticoagulation,3 even though anti­ coagulation has been shown to reduce stroke rates by about 50% in this population, compared with aspirin.15 We believe the new guideline deals with these problems through paradigm change, identifying low risk people in whom anticoagulation is not indicated, and making it clear that aspirin is no longer considered a cost effective alternative. Contributors: CC wrote the first draft. All authors reviewed the draft, were involved in writing further drafts, and reviewed and approved the final version for publication. CC is guarantor. Funding: the National Clinical Guideline Centre was commissioned and funded by the National Institute for Health and Care Excellence to write this summary. Competing interests: DF received honorariums from various companies that may have an interest in this report, including Roche diagnostics, Leo Laboratories, Bohringer Ingelheim, and Pfizer. DF withdrew from the discussion of evidence and drafting recommendations on antithrombotic therapy in June 2013 owing to previously declared interests that were deemed a conflict of interest. These interests had expired by September 2013. Provenance and peer review: Commissioned; not externally peer reviewed. EASILY MISSED? Copper deficiency S K Chhetri,1 2 R J Mills,1 S Shaunak,1 H C A Emsley1 2 1Department of Neurology, Royal Preston Hospital, Preston PR2 9HT, UK 2University of Manchester, Manchester M13 9PL, UK Correspondence to: hedley.emsley@manchester.ac.uk Cite this as: BMJ 2014;348:g3691 doi: 10.1136/bmj.g3691 A 73 year old man with treated pernicious anaemia and partial gastrectomy 30 years earlier consulted his GP with a 12 month history of progressive numbness of his feet and hands. A haematology opinion for normocytic anae­ mia, neutropenia, and lymphopenia led to an unremark­ able bone marrow biopsy. Increasing unsteadiness and falls prompted neurology referral. He was found to have sensory ataxia with clinical, radiological (figure), and neurophysiological evidence of myelopathy and periph­ eral neuropathy. Vitamin B12 level was high, consistent with ongoing replacement. Low serum copper confirmed hypocupraemic myeloneuropathy. Copper replacement achieved resolution of the cytopenia within four weeks, and slow but minimal neurological improvement was seen over more than nine months of follow-up. KEY POINTS Copper deficiency is an under recognised cause of cytopenias and myeloneuropathy Copper deficiency may masquerade as a myelodysplastic syndrome or vitamin B12 deficiency; it might also co-exist with B12 deficiency The neurological sequelae of copper deficiency can be debilitating and irreversible, making prompt recognition and treatment essential for successful outcomes Clinicians should have a low threshold for measuring serum copper in patients with unexplained and refractory cytopenias or myeloneuropathy, especially in the context of previous upper gastrointestinal tract surgical procedures, excess zinc exposure, or malabsorption 38 BMJ | 21 JUNE 2014 | VOLUME 348 PRACTICE Confirmation of B12 deficiency in a patient with a clinical presentation resembling SACD might understandably lead to testing for copper deficiency not being undertaken, even though hypocupraemia might be comorbid with B12 defi­ ciency, particularly in patients who have undergone gastric surgery.4 9 Moreover, the interval between gastric surgery and the onset of clinical symptoms can be long.4 5 9 A retrospective review of 55 cases of hypocupraemia found that the inter­ val between upper gastrointestinal surgery and symptom onset ranged from five to 26 years in the bariatric group and 10 to 46 years in the non-bariatric group.4 Such long intervals might lead to diagnostic delay because a causal association might not be so readily considered, but obser­ vations of gradually declining copper levels over years lend clear support to causation.8 Why does it matter? Although copper deficiency is rare, its early identification is essential to minimise its neurological sequelae, which are severely disabling and often irreversible.1‑4 Copper defi­ ciency can be easily treated and copper supplementation largely prevents further neurological decline, but neuro­ logical improvement is variable.1‑4 A retrospective cohort study in Scotland, which identified 16 patients manifesting clinical sequelae of hypocupraemia (12 with neurologi­ cal features), found that only 25% of patients show some improvement while 33% might continue to deteriorate with treatment.1 The haematological effects are relatively easily reversible, with 93% of cytopenias responding to copper replacement and management of the underlying cause.1 How is copper deficiency diagnosed? Clinical features The diagnosis should be considered in anyone with char­ acteristic neurological or haematological abnormalities (or both), particularly patients with risk factors (box 1). Copper and zinc are competitively absorbed from the gastrointestinal tract; hence zinc excess leads to copper deficiency.1 4 Neurological manifestations include myelopathy, myeloneuropathy, and peripheral neuropathy.1 4 Patients characteristically present with lower limb paraesthesias and gait disorder with sensory ataxia or spasticity or both. Investigations Initial investigations in primary care should include a full blood count and measurement of serum copper. Typical haematological abnormalities are anaemia and leucopenia. Anaemia which might be microcytic, mac­ rocytic, or normocytic, is the commonest cytopenia, followed by leucopenia; thrombocytopenia is infre­ quent.1 2 Laboratory indicators of copper deficiency include low serum copper.1 2 4 Vitamin B12 level should also be tested as B12 deficiency is an important differen­ tial diagnosis and may sometimes co-exist with copper deficiency. Zinc levels should also be requested if zinc excess is suspected. American bariatric surgery clinical practice guidelines recommend testing for copper defi­ ciency in post-bariatric surgery patients with anaemia, What is hypocupraemia? Copper is an essential trace element that plays a crucial role in the normal functioning of the neurological, haema­ tological, vascular, skeletal, and antioxidant systems.1 2 Copper is absorbed in the stomach and proximal duode­ num, but absorption can be impaired after upper gastroin­ testinal surgery. Such surgery, although not the sole cause of copper deficiency (hypocupraemia), is increasingly recognised as an important risk factor.3 Copper deficiency leads to several clinical presentations including cytopenia and profound neurological deficits.1‑4 How common is copper deficiency? Evidence is limited but several reports describe sympto­ matic copper deficiency.1‑5 In a case series of 136 patients with gastric bypass surgery, 9.6% had hypocupraemia.6 Two other case series of 64 and 141 bariatric surgery patients respectively reported substantial hypocupraemia in 23% at 6 months and 70% at 3 years, 7 and a progressive reduction in average serum copper concentrations over five years.8 Reliable data on the overall population at risk of hypo­ cupraemia from all causes, including bariatric surgery, are not available, but longitudinal collection of data would be valuable. Why is copper deficiency missed? Copper deficiency is an under recognised cause of neu­ rological dysfunction and a spectrum of cytopenias.1 A retrospective review of 40 patients with hypocupraemia found the median interval from initial presentation with neurological or haematological findings to diagnosis of copper deficiency to be 1.1 years (range 10 weeks to 23 years).2 Misdiagnosis as a myelodysplastic syndrome might occur, given the similar haematolopathological find­ ings including anaemia, leucopenia, and less commonly, thrombocytopenia.1 2 This is suggested by a recent retro­ spective analysis of copper deficiency in Scotland, which found that four out of 16 cases eventually diagnosed with hypocupraemia were initially seen by a haematologist.1 The clinical presentation is often clinically and radiolog­ ically indistinguishable from subacute combined degen­ eration (SACD) seen in patients with vitamin B12 deficiency. T2 weighted sagittal magnetic resonance image of cervical spine showing increased signal intensity (arrowed) involving dorsal columns of the cervical spinal cord bmj.com Previous articles in this series Ж Ж Bladder cancer in women (BMJ 2014;348:g2171) Ж Ж Subdural haematoma in the elderly (BMJ 2014;348:g1682) Ж Ж Intestinal malrotation and volvulus in infants and children (BMJ 2013;347:f6949) Ж Ж Lisfranc injuries (BMJ 2013;347:f4561) Ж Ж Spontaneous oesophageal rupture (BMJ 2013;346:f3095) BMJ | 21 JUNE 2014 | VOLUME 348 39 PRACTICE Risk factors for copper deficiency4 • Upper gastrointestinal tract surgery • Gastrectomy • Bariatric surgery • Small bowel resection or bypass • Zinc overload • Zinc supplementation • Ingestion of zinc containing dental fixatives • Malabsorption syndromes replacement; however, there are no guidelines to recom­ mend the frequency of monitoring. In patients in whom excess zinc ingestion is the likely cause, discontinuing zinc supplementation might suffice. We thank Maria Liga (consultant haematologist) and Raza Ansari (general practitioner) for general advice from a haematology and general practice perspective respectively. Contributors: All authors contributed substantially to the conception and design of this work, its drafting, and/or critical revision for important intellectual content; they all approved the final version and accept accountability for the work. Having read and understood the BMJ Group policy on declaration of interests, the authors declare that they have no competing interests. Provenance and peer review: Not commissioned; externally peer reviewed. Patient consent obtained. 1 Gabreyes AA, Abbasi HN, Forbes KP, McQuaker G, Duncan A, Morrison I. Hypocupremia associated cytopenia and myelopathy: a national retrospective review. Eur J Haematol 2013;90:1-9. 2 Halfdanarson TR, Kumar N, Li CY, Phyliky RL, Hogan WJ. Hematological manifestations of copper deficiency: a retrospective review. Eur. J. Haematol. 2008;80:523-31. 3 Yarandi SS, Griffith DP, Sharma R, Mohan A, Zhao VM, Ziegler TR. Optic Neuropathy, Myelopathy, Anemia, and Neutropenia Caused by Acquired Copper Deficiency After Gastric Bypass Surgery. J Clin Gastroenterol 2014;[Epub ahead of print]. 4 Jaiser SR, Winston GP. Copper deficiency myelopathy. J Neurol 2010;257:869-81. 5 Robinson SD, Cooper B, Leday TV. Copper deficiency (hypocupremia) and pancytopenia late after gastric bypass surgery. Proc (Bayl Univ Med Cent) 2013;26:382-6. 6 Gletsu-Miller N, Broderius M, Frediani JK, Zhao VM, Griffith DP, Davis SS, et al. Incidence and prevalence of copper deficiency following roux-en-y gastric bypass surgery. Int. J. Obes. 2012;36:328-35. 7 de Luis DA, Pacheco D, Izaola O, Terroba MC, Cuellar L, Martin T. Clinical results and nutritional consequences of biliopancreatic diversion: three years of follow-up. Ann Nutr Metab 2008;53:234-9. 8 Balsa JA, Botella-Carretero JI, Gómez-Martín JM, Peromingo R, Arrieta F, Santiuste C, Zamarrón I, Vázquez C. Copper and zinc serum levels after derivative bariatric surgery: differences between Roux-en-Y Gastric bypass and biliopancreatic diversion. Obes Surg 2011;21:744-50. 9 Griffith DP, Liff DA, Ziegler TR, Esper GJ, Winton EF. Acquired copper deficiency: a potentially serious and preventable complication following gastric bypass surgery. Obesity 2009;17:827-31. 10 Mechanick JI, Youdim A, Jones DB, Timothy Garvey W, Hurley DL, Molly McMahon M, et al. Clinical practice guidelines for the perioperative nutritional, metabolic, and nonsurgical support of the bariatric surgery patient—2013 update: cosponsored by American Association of Clinical Endocrinologists, the Obesity Society, and American Society for Metabolic and Bariatric Surgery. Surg Obes Relat Dis 2013;9:159-91. Accepted: 09 May 2014 neutropenia, myeloneuropathy, and impaired wound healing.10 Specialist referral would generally be pursued for suspected symptomatic hypocupraemia. Specialist neurological investigations typically include magnetic resonance imaging and neurophysiology. Spi­ nal cord magnetic resonance imaging is abnormal in about 47% of patients with copper deficiency myelopathy and might show increased T2 signal, most commonly in the dorsal midline cervical and thoracic cord.4 Neurophysi­ ological studies might show axonal sensorimotor poly­ neuropathy.4 How is copper deficiency managed? Treatment includes management of the underlying cause and copper supplementation. No studies have investigated with the dose, route, duration, or formulation of copper for supplementation. The salts that are commonly used include copper gluconate, copper sulphate, and copper chloride.4 Copper can be given orally or intravenously. The American Society for Metabolic and Bariatric Surgery Clinical Practice guidelines recommend routine oral cop­ per supplementation (2 mg/d).10 These guidelines advise intravenous copper (2-4 mg/d) for six days for severe defi­ ciency and subsequent treatment, or treatment of mild to moderate deficiency, with oral copper (3-8 mg/d) until lev­ els normalise.10 The haematological abnormalities reverse within four to 12 weeks of therapy.1 Periodic assessment of serum copper is essential to determine adequacy of ANSWERS TO ENDGAMES, p 40 For long answers go to the Education channel on bmj.com PICTURE QUIZ An unusual case of epigastric pain 1 The figures show non-occlusive acute venous thrombosis in the portal and mesenteric veins. 2 The diagnosis is acute portal vein thrombosis with superior mesenteric vein thrombosis. This condition should be considered in any patient with unexplained abdominal pain of more than 24 hours’ duration. 3 Investigations should focus on determining when the thrombosis developed and on finding any underlying conditions that may have triggered it. Look for any localised predisposing factors such as cirrhosis, cancer, or acute inflammation. 4 Treatment with anticoagulants aims to stop extension of the thrombus, thereby preventing intestinal infarction and portal hypertension. Patients with cirrhosis should be screened for varices before anticoagulation is started. 5 Complications include life threatening variceal bleeds from portal hypertension and death from mesenteric infarction. In developing countries, it has been reported that 40% of portal hypertension may be caused by portal vein thrombosis. STATISTICAL QUESTION Unit of observation versus unit of analysis Statement b is true, whereas statements a and c are false. ANATOMY QUIZ Anatomy of the facial skeleton A: Right zygomaticofrontal suture B: Right orbital floor C: Right zygomatic arch D: Left frontal sinus E: Left maxillary sinus F: Left lateral maxillary wall This is one of a series of occasional articles highlighting conditions that may be more common than many doctors realise or may be missed at first presentation. The series advisers are Anthony Harnden, university lecturer in general practice, Department of Primary Health Care, University of Oxford, and Richard Lehman, general practitioner, Banbury. To suggest a topic, please email us at practice@bmj.com
14903
https://math.stackexchange.com/questions/398826/show-that-d-dx-ax-ax-ln-a
Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Show that $d/dx (a^x) = a^x\ln a$. Ask Question Asked Modified 6 years, 10 months ago Viewed 51k times 10 $\begingroup$ Show that $$ \frac{d}{dx} a^x = a^x \ln a. $$ How would I do a proof for this. I can't seem to get it to work anyway I try. I know that $$ \frac{d}{dx} e^x = e^x. $$ Does that help me here? calculus Share edited May 22, 2013 at 3:48 kahen 16.1k33 gold badges4040 silver badges7070 bronze badges asked May 22, 2013 at 3:25 1ftw11ftw1 64344 gold badges1111 silver badges1717 bronze badges $\endgroup$ Add a comment | 7 Answers 7 Reset to default 21 $\begingroup$ Hint: $a^x=e^{\ln a^x}=e^{x\ln a}$. Share answered May 22, 2013 at 3:38 JSCBJSCB 13.7k1515 gold badges6767 silver badges125125 bronze badges $\endgroup$ Add a comment | 5 $\begingroup$ Let $f : \mathbb{R} \to \mathbb{R}$ be given by $f(x)=a^x$ and consider the $\ln$ function. We can take the composition so that we have: $$(\ln\circ f)(x)=\ln (a^x)=x\ln a$$ Now, if we take the derivative, on the left hand side we use the chain rule and on the right hand side we differentiate as usual so that we have: $$\frac{f'(x)}{f(x)}=\ln a$$ Now solving for $f'(x)$ gives $f'(x) = f(x) \ln a$ so that $f'(x) = a^x \ln a$. This useful technique can be used to take derivatives of other functions: we compose the original function with the inverse and then differentiate on both sides and use the same idea we've used here, this technique can simplify many derivatives and save a lot of time in some situations. Share edited Jan 5, 2015 at 23:28 answered May 22, 2013 at 3:58 GoldGold 28.2k2121 gold badges113113 silver badges213213 bronze badges $\endgroup$ 1 $\begingroup$ Your answer would be improved is specify $a$. For example $a\in \mathbb{R}$, $a\geq 0$, or $a > 0$. $\endgroup$ Michael Levy – Michael Levy 2023-10-19 12:16:17 +00:00 Commented Oct 19, 2023 at 12:16 Add a comment | 3 $\begingroup$ Hint: Write $y = a^x$ or equivalently $\ln y = x \ln a$ and use implicit differentiation. Share answered May 22, 2013 at 3:28 responseresponse 5,12122 gold badges2020 silver badges2121 bronze badges $\endgroup$ Add a comment | 2 $\begingroup$ Here are the steps $$ \frac{d}{dx} \left[a^x\right] = \frac{d}{dx} \left[e^{\ln a^x}\right]= e^{\ln a^x} \frac{d}{dx} \left[\ln a^x\right] $$ $$ = a^x \frac{d}{dx} \left[x\ln a\right] = a^x\left(\ln a\right)\frac{d}{dx} \left[x\right]= a^x\ln a $$ Share answered Jan 5, 2015 at 18:06 k170k170 9,24733 gold badges2727 silver badges4444 bronze badges $\endgroup$ Add a comment | 1 $\begingroup$ , the last term, , can be explained by L'Hopital's rule, just taking the limits of the numerator and denominator one of each time. Share answered Feb 24, 2014 at 0:22 user131054user131054 1111 bronze badge $\endgroup$ Add a comment | 0 $\begingroup$ $ \frac{d}{dx} a^x=$ $=d/dx$ $e^{ln\ (a^x)}=$ $=d/dx \ e^{x\ln(a)}$ just as you said. Now it's time for chain rule. $f(x)=e^x$ $g(x)=xln(a)$ $a^x= f(g(x))$ last comment before I get solving $a=e^{ln(a)}$ now let's get it done $d/dx\ e^{x\ ln(a)}=$ $=e^{xln(a)}\ d/dx\ (x\ ln(a))=$ (by chain rule) $=e^{x\ ln(a)} ln(a)$ and that is the solution. Wait it doesnt look correct --> we should do some algebra! Show for yourself that $e^{xln(a)}=( e^{ln(a)} )^x$ but as we said that $a = e^{ln(a)}$ so $( e^{ln(a)} )^x$ actually equals $a^x$ and of course $e^{xln(a)}$ equals a^x With what we just said you can see that $e^{x ln(a)}\ ln(a)$ equals $a^x\ ln(a)$ or $$d/dx a^x = a^x\ ln(a)$$ Share edited Jan 9, 2015 at 20:35 CommunityBot 1 answered Jan 5, 2015 at 17:29 grekikigrekiki 1 $\endgroup$ 2 1 $\begingroup$ Use TeX while typing math symbols $\endgroup$ Praveen – Praveen 2015-01-05 17:59:53 +00:00 Commented Jan 5, 2015 at 17:59 $\begingroup$ For help, see this FAQ entry about $\LaTeX$ on Math.SE. $\endgroup$ Ruslan – Ruslan 2015-01-05 18:43:17 +00:00 Commented Jan 5, 2015 at 18:43 Add a comment | -1 $\begingroup$ Let $y = a^x$ Then taking log on both side to the base $e$ We have; $ \ln y = \ln a^x$ $ \ln y = x\cdot \ln a$ Taking derivative with respect to $x$; $\frac d{dx} \ln y = \frac d{dx} x\cdot \ln a$ (I have applied chain rule here>>) $ \frac 1y \frac{dy}{dx} = \ln a + 0$ $ \frac {dy}{dx} = y\cdot \ln a$ We know $y= a^x$ So, $\frac {dy}{dx} = a^x \ln a$ Hope it was easier than other methodologies used to derive it! Share edited Nov 21, 2018 at 9:46 pooja somani 2,62544 gold badges1818 silver badges2525 bronze badges answered Nov 21, 2018 at 9:17 Vichitra AttriVichitra Attri 111 bronze badge $\endgroup$ 1 $\begingroup$ The question is five years old and your answer adds nothing new to the already existing answers. $\endgroup$ José Carlos Santos – José Carlos Santos 2018-11-21 09:39:57 +00:00 Commented Nov 21, 2018 at 9:39 Add a comment | You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions calculus See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Linked 1 Value of $\lim_{h\rightarrow 0}\frac{a^h-1}{h}$ Related 10 Prove that $\sum\limits_{n=1}^{\infty}(-1)^n(2^{1/n} - 1)$ is convergent 2 approach for $\int\frac{1}{(x+1)\sqrt{x}}dx$ 1 How to solve this system of equations 0 Show that $-\log(1-\mathrm{e}^{\mathrm{i}x}) = -\log\left(2\sin\left(\frac{x}{2}\right)\right) + \mathrm{i}\dfrac{\pi - x}{2}$ 1 Use Brahmagupta€™s formula to show that, for a fixed given perimeter p, the cyclic quadrilateral with largest area is equilateral 1 How are these three equations that are rewritten equal eachother? 0 Show that the series $\sum_{n=1}^{\infty}nx^n$ converges uniformly on [-a,a] 2 Show that for $N\ge2$, $\sum\limits_{k>N}2^{N!-k!}\leq\frac{1}{4}.$ 2 Prove that $\lim_{x \rightarrow \infty} \left[ x \ln(x+a)-x\ln(x) \right] = a$ Hot Network Questions Are there any world leaders who are/were good at chess? Two calendar months on the same page Find non-trivial improvement after submitting Suggestions for plotting function of two variables and a parameter with a constraint in the form of an equation Does the curvature engine's wake really last forever? Can a cleric gain the intended benefit from the Extra Spell feat? How to use cursed items without upsetting the player? Calculating the node voltage How to rsync a large file by comparing earlier versions on the sending end? What is the meaning of 率 in this report? Implications of using a stream cipher as KDF What "real mistakes" exist in the Messier catalog? Why is the definite article used in “Mi deporte favorito es el fútbol”? Passengers on a flight vote on the destination, "It's democracy!" What were "milk bars" in 1920s Japan? Copy command with cs names How big of a hole can I drill in an exterior wall's bottom plate? What happens if you miss cruise ship deadline at private island? Suspicious of theorem 36.2 in Munkres “Analysis on Manifolds” Is it ok to place components "inside" the PCB Riffle a list of binary functions into list of arguments to produce a result What’s the usual way to apply for a Saudi business visa from the UAE? Verify a Chinese ID Number How many stars is possible to obtain in your savefile? more hot questions Question feed
14904
https://www.clarkness.com/Reading%20files/Picture%20Books/What%20Numbers%20Add%20Up%20to%206.pdf
What Numbers Add Up to 6? By Clark Ness Visit www.clarkness.com and www.readinghawk.com for more free ebooks and stories. Common Core Math Standard: K.OA.3 Nonfiction 6 + 0 = 6 1 5 + 1 = 6 2 4 + 2 = 6 3 3 + 3 = 6 4 2 + 4 = 6 5 1 + 5 = 6 6 0 + 6 = 6 7 More free ebooks and stories are available at www.clarkness.com and www.readinghawk.com. Copyright © 2014 by Clark Ness. Permission is granted for printing, photocopying, emailing, recording, storing in a retrieval system, and transmitting this ebook in any form, or by any means, mechanical and/or electronic. Sale of this ebook and/or uploading to a commercial bookstore or commercial website is strictly forbidden without prior written permission.
14905
https://www.quora.com/Compute-f-2-of-the-following-function-f-x-x-x-1-How-can-I-find-its-derivative-can-someone-explain
Compute f'(2) of the following function: f(x) =x/(x+1). How can I find its derivative, can someone explain? - Quora Something went wrong. Wait a moment and try again. Try again Skip to content Skip to search Sign In Mathematics Understanding Differentia... How To X Derivatives (finance) Functions (mathematics) Calculus 1 Derivatives and Different... Calculus (Mathematics) Mathematical Functions 5 Compute f'(2) of the following function: f(x) =x/(x+1). How can I find its derivative, can someone explain? All related (38) Sort Recommended Abhijeet Kumar BS-MS Student at IISER · Author has 378 answers and 228.6K answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · y=f(x)=x/x+1 y=x/x+1 We can differentiate it using quotient rule, dy/dx=[(x+1)d(x)/dx — (x)d(x+1)/dx]/(x+1)^2 dy/dx=[(x+1)(1)—(x)(1)]/(x+1)^2 dy/dx=[x+1—x]/(x+1)^2 dy/dx=1/(x+1)^2 dy/dx=(x+1)^(—2) f'(x)=dy/dx=1/(x+1)^2 f'(x)=1/(x+1)^2 f'(2)=1/(2+1)^2=1/3^2=1/9 f'(2)=1/9 Thus, the value of f'(2) is 1/9 Thanks for A2A:) Upvote · 9 1 Promoted by Bata India Dhruti Shah Visualiser | Graphic Designer (2018–present) ·Sep 12 What are the best professional affordable and comfortable shoes for women? I usually look at three things when I’m buying work shoes: comfort, cushioning and arch support; how sturdy the sole is; and whether I can actually afford to get more than one pair if I want them in different colours. Ballerinas by Bata though, are what I wear the most. I didn’t know about them until recently, when a coworker recommended them to me, also spotted my favorite creator Siddhi Karwa styling them across Europe and I have been absolutely loving them. They’re professional enough for work wear but don’t feel heavy and keep me comfortable throughout the day, even when I’m commuting to the Continue Reading I usually look at three things when I’m buying work shoes: comfort, cushioning and arch support; how sturdy the sole is; and whether I can actually afford to get more than one pair if I want them in different colours. Ballerinas by Bata though, are what I wear the most. I didn’t know about them until recently, when a coworker recommended them to me, also spotted my favorite creator Siddhi Karwa styling them across Europe and I have been absolutely loving them. They’re professional enough for work wear but don’t feel heavy and keep me comfortable throughout the day, even when I’m commuting to the office. I got mine for around ₹999 from Bata, which felt like a steal compared to some other brands I looked at. They’ve held up really well, and I can easily pair them with trousers, skirts for my work outfits. If you’re on a budget but still want something that is comfortable and follows fashion trends, Ballerinas by Bata are the perfect choice. I picked up mine from a Bata store near me, you can grab yours too. Upvote · 1.1K 1.1K 99 86 99 13 Related questions More answers below How do you find the derivative of the function: ./f(x) = 2x^3 - 5x^2 + x - 1? How do I find the derivative of the following function with respect to x: f(x) = 3x^2 - 2x +1? Which of the following functions f is f (x) = f (1 – x) for all x? Is f(x) =1/x a function? For what value of 'k' is the derivative function f'(3) = 2, if f(x) = (x + k) / (x - 1)? Human Being ........ · Author has 279 answers and 152.3K answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · f(x) =x/x+1 = 1 - (1/1+x) Now differentiating on both the sides gives f ‘ (x) = 1/(1+x)² f ‘ (x) = 1/(2+1)² = 1/9. If you find my method as useful, do upvote. Upvote · 9 1 Marco Biagini MSc in Mathematics, Eidgenössische Technische Hochschule (Graduated 1982) · Author has 5.4K answers and 5.8M answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · Find the derivative of the following via implicit differentiation: d/dx(f(x) = x/(1 + x) = (x f'(x))/(1 + x)) Using the chain rule, d/dx(f(x) = x/(x + 1) = (x f'(x))/(x + 1)) = (d(u_1 = u_2 = u_3))/(du_1) (du_1)/(dx) + (d(u_1 = u_2 = u_3))/(du_2) (du_2)/(dx) + (d(u_1 = u_2 = u_3))/(du_3) (du_3)/(dx), where u_1 = f(x), u_2 = x/(x + 1), u_3 = (x f'(x))/(x + 1) and d/(du_1) (u_1 = u_2 = u_3) = False, d/(du_2) (u_1 = u_2 = u_3) = False, d/(du_3) (u_1 = u_2 = u_3) = False: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + False (d/dx(x/(1 + x))) Use the quotient rule, d/dx(u/v) = (v (du)/(dx) - Continue Reading Find the derivative of the following via implicit differentiation: d/dx(f(x) = x/(1 + x) = (x f'(x))/(1 + x)) Using the chain rule, d/dx(f(x) = x/(x + 1) = (x f'(x))/(x + 1)) = (d(u_1 = u_2 = u_3))/(du_1) (du_1)/(dx) + (d(u_1 = u_2 = u_3))/(du_2) (du_2)/(dx) + (d(u_1 = u_2 = u_3))/(du_3) (du_3)/(dx), where u_1 = f(x), u_2 = x/(x + 1), u_3 = (x f'(x))/(x + 1) and d/(du_1) (u_1 = u_2 = u_3) = False, d/(du_2) (u_1 = u_2 = u_3) = False, d/(du_3) (u_1 = u_2 = u_3) = False: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + False (d/dx(x/(1 + x))) Use the quotient rule, d/dx(u/v) = (v (du)/(dx) - u (dv)/(dx))/v^2, where u = x and v = x + 1: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + (((1 + x) (d/dx(x)) - x (d/dx(1 + x)))/(1 + x)^2) False The derivative of x is 1: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + (False (-x (d/dx(1 + x)) + 1 (1 + x)))/(1 + x)^2 Simplify the expression: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + (False (1 + x - x (d/dx(1 + x))))/(1 + x)^2 Differentiate the sum term by term: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + (False (1 + x - d/dx(1) + d/dx(x) x))/(1 + x)^2 The derivative of 1 is zero: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + (False (1 + x - x (d/dx(x) + 0)))/(1 + x)^2 Simplify the expression: = False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) + (False (1 + x - x (d/dx(x))))/(1 + x)^2 The derivative of x is 1: = False (d/dx((x f'(x))/(1 + x))) + False (d/dx(f(x))) + (False (1 + x - 1 x))/(1 + x)^2 Simplify the expression: = False/(1 + x)^2 + False (d/dx(f(x))) + False (d/dx((x f'(x))/(1 + x))) Using the chain rule, d/dx(f(x)) = (df(u))/(du) (du)/(dx), where u = x and d/(du) (f(u)) = f'(u): = False/(1 + x)^2 + False (d/dx((x f'(x))/(1 + x))) + (d/dx(x)) f'(x) False The derivative of x is 1: = False/(1 + x)^2 + False (d/dx((x f'(x))/(1 + x))) + 1 False f'(x) Use the quotient rule, d/dx(u/v) = (v (du)/(dx) - u (dv)/(dx))/v^2, where u = x f'(x) and v = x + 1: = False/(1 + x)^2 + (((1 + x) (d/dx(x f'(x))) - x (d/dx(1 + x)) f'(x))/(1 + x)^2) False + False f'(x) Use the product rule, d/dx(u v) = v (du)/(dx) + u (dv)/(dx), where u = x and v = f'(x): = False/(1 + x)^2 + False f'(x) + (False ((1 + x) (x (d/dx(f'(x))) + (d/dx(x)) f'(x)) - x (d/dx(1 + x)) f'(x)))/(1 + x)^2 Using the chain rule, d/dx(f'(x)) = (df'(u))/(du) (du)/(dx), where u = x and d/(du) (f'(u)) = f''(u): = False/(1 + x)^2 + False f'(x) + (False (-x (d/dx(1 + x)) f'(x) + (1 + x) ((d/dx(x)) f''(x) x + (d/dx(x)) f'(x))))/(1 + x)^2 Differentiate the sum term by term: = False/(1 + x)^2 + False f'(x) + (False (-x f'(x) d/dx(1) + d/dx(x) + (1 + x) ((d/dx(x)) f'(x) + x (d/dx(x)) f''(x))))/(1 + x)^2 The derivative of 1 is zero: = False/(1 + x)^2 + False f'(x) + (False (-x (d/dx(x) + 0) f'(x) + (1 + x) ((d/dx(x)) f'(x) + x (d/dx(x)) f''(x))))/(1 + x)^2 Simplify the expression: = False/(1 + x)^2 + False f'(x) + (False (-x (d/dx(x)) f'(x) + (1 + x) ((d/dx(x)) f'(x) + x (d/dx(x)) f''(x))))/(1 + x)^2 The derivative of x is 1: = False/(1 + x)^2 + False f'(x) + (False (-x f'(x) 1 + (1 + x) ((d/dx(x)) f'(x) + x (d/dx(x)) f''(x))))/(1 + x)^2 The derivative of x is 1: = False/(1 + x)^2 + False f'(x) + (False (-x f'(x) + (1 + x) (1 f'(x) + x (d/dx(x)) f''(x))))/(1 + x)^2 The derivative of x is 1: = False/(1 + x)^2 + False f'(x) + (False (-x f'(x) + (1 + x) (f'(x) + 1 x f''(x))))/(1 + x)^2 Simplify the expression: Answer: = (f'(x) + x (1 + x) f''(x))/(1 + x)^2 = f”(2)=1/9 Upvote · 9 1 9 1 Assistant Bot · 1y To find the derivative of the function f(x)=x x+1 f(x)=x x+1 and then compute f′(2)f′(2), we can use the quotient rule for differentiation. The quotient rule states that if you have a function f(x)=g(x)h(x)f(x)=g(x)h(x), then the derivative f′(x)f′(x) is given by: f′(x)=g′(x)h(x)−g(x)h′(x)(h(x))2 f′(x)=g′(x)h(x)−g(x)h′(x)(h(x))2 In your case: g(x)=x g(x)=x and h(x)=x+1 h(x)=x+1 Now, we need to find g′(x)g′(x) and h′(x)h′(x): g′(x)=1 g′(x)=1 (the derivative of x x) h′(x)=1 h′(x)=1 (the derivative of x+1 x+1) Now, we can apply the quotient rule: f′(x)=(1)(x+1)−(x)(1)(x+1)2 f′(x)=(1)(x+1)−(x)(1)(x+1)2 Simplifying the numerator: f'(x)=\frac{x+1-x}{(x+1)^2}=\frac{1}{(x+1)f'(x)=\frac{x+1-x}{(x+1)^2}=\frac{1}{(x+1) Continue Reading To find the derivative of the function f(x)=x x+1 f(x)=x x+1 and then compute f′(2)f′(2), we can use the quotient rule for differentiation. The quotient rule states that if you have a function f(x)=g(x)h(x)f(x)=g(x)h(x), then the derivative f′(x)f′(x) is given by: f′(x)=g′(x)h(x)−g(x)h′(x)(h(x))2 f′(x)=g′(x)h(x)−g(x)h′(x)(h(x))2 In your case: g(x)=x g(x)=x and h(x)=x+1 h(x)=x+1 Now, we need to find g′(x)g′(x) and h′(x)h′(x): g′(x)=1 g′(x)=1 (the derivative of x x) h′(x)=1 h′(x)=1 (the derivative of x+1 x+1) Now, we can apply the quotient rule: f′(x)=(1)(x+1)−(x)(1)(x+1)2 f′(x)=(1)(x+1)−(x)(1)(x+1)2 Simplifying the numerator: f′(x)=x+1−x(x+1)2=1(x+1)2 f′(x)=x+1−x(x+1)2=1(x+1)2 Now that we have the derivative f′(x)=1(x+1)2 f′(x)=1(x+1)2, we can find f′(2)f′(2): f′(2)=1(2+1)2=1 3 2=1 9 f′(2)=1(2+1)2=1 3 2=1 9 Thus, the value of f′(2)f′(2) is: 1 9 1 9 Upvote · Related questions More answers below Can you find f’(x) at x=1 if f(x) =x/ (1+x^2) ^2? If f(x) =1+x/1-x, how do you show that f(X).f(x^2) /1+[f(X)] ^2=1/2? How does f(x) =x/ ((x^2) + 9) simplify to f (x) =1/ (x+(9/x))? How do you find the second derivative of 'f(x) =sqrt {1+x}'? Is there a formula to express the function f(x)=x+(x−1)+(x−2)+(x−3)+⋯+(x−x)f(x)=x+(x−1)+(x−2)+(x−3)+⋯+(x−x) ? Jonathan E. Segal Author has 436 answers and 328.7K answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · I’m assuming you mean f(x) = x / (x + 1). f (x ) = [( x + 1) - 1] / (x + 1) = (x + 1) / (x + 1) - 1 / (x + 1) = 1 - 1 / (x + 1) Therefore, f(x) = 1 - 1 / (x +1) = 1 - (x + 1) ^(-1) f’(x) =0 - (-1)( x + 1)^(-2) = 1 / (x + 1)^2. To find f’(2), replace x with 2. f’(2) = 1 / (2 + 1)^2 = 1 / (3)^2 = 1 / 9. Upvote · 9 1 Gordon M. Brown Math Tutor at San Diego City College (2018-Present) · Author has 6.2K answers and 4.3M answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · If you could be bothered to break open your calculus textbook and actually read it, you might have learned something about the Quotient Rule, and how to apply it to this problem. In the future, I strongly suggest that you spend a lot less time on Quora, and a lot more time reading your assigned materials, poring over the example problems, and doing your own work. Over the long term, jumping onto Quora and plying people for answers is a recipe for failure. Continue Reading If you could be bothered to break open your calculus textbook and actually read it, you might have learned something about the Quotient Rule, and how to apply it to this problem. In the future, I strongly suggest that you spend a lot less time on Quora, and a lot more time reading your assigned materials, poring over the example problems, and doing your own work. Over the long term, jumping onto Quora and plying people for answers is a recipe for failure. Upvote · 9 4 Sponsored by JetBrains Become More Productive in Jakarta EE. Try IntelliJ IDEA, a JetBrains IDE, and enjoy productive Java Enterprise development! Download 999 649 Mohammad Afzaal Butt B.Sc in Mathematics&Physics, Islamia College Gujranwala (Graduated 1977) · Author has 24.6K answers and 22.9M answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · f(x)=x x+1=x+1−1 x+1=1−1 x+1 f(x)=x x+1=x+1−1 x+1=1−1 x+1 f′(x)=1(x+1)2 f′(x)=1(x+1)2 f′(2)=1(2+1)2=1 9 f′(2)=1(2+1)2=1 9 Upvote · 9 1 Jörg Straube M.Sc. in Computer Science, ETH Zurich (Graduated 1987) · Author has 6.3K answers and 1.7M answer views ·3y Originally Answered: Compute f'(2) of the following function: f(x) =x/x+1. How can I find its derivative, can someone explain? · Without proper parentheses: f(x) = x/x + 1 = 1 + 1 = 2 → f’(x) = 0 But I assume you meant f(x) = x/(x+1) = x•(x+1)^-1. You have to apply the product rule f = g•h → f’ = (g•h)’ = g’•h + g•h’ g = x, h = (x+1)^-1 g’ = 1, h’ = -(x+1)^-2 Hence f’ = 1/(x+1) - x/(x+1)^2 → f’(2) = 1/3 - 2/9 = 1/9 Upvote · 9 1 Sponsored by Divi This changes everything! Explore the future of WordPress. There is a reason why Divi has become the most popular WordPress theme in the world. Find out why. Learn More 1K 1K Arthur Queiroz Brazilian Math Olympiad Medallist ·6y Related Could you explain how to find the derivative of (x) / (square root 2x+1)? (x√2 x+1)′=(x×(2 x+1)−1 2)′=(x)′×(2 x+1)−1 2+x×((2 x+1)−1 2)′=1×(2 x+1)−1 2+x×−1 2((2 x+1)−3 2)(2 x+1)′=(2 x+1)−1 2−x×(2 x+1)−3 2(x 2 x+1)′=(x×(2 x+1)−1 2)′=(x)′×(2 x+1)−1 2+x×((2 x+1)−1 2)′=1×(2 x+1)−1 2+x×−1 2((2 x+1)−3 2)(2 x+1)′=(2 x+1)−1 2−x×(2 x+1)−3 2 Upvote · 9 1 9 1 Sarthak Chatterjee loves math · Author has 234 answers and 2.2M answer views ·10y Related What is the derivative of f(x)=1√x+2 f(x)=1 x+2? f(x)=(x+2)−1/2 f(x)=(x+2)−1/2 f′(x)=−1 2(x+2)−(1/2)−1 f′(x)=−1 2(x+2)−(1/2)−1 f′(x)=−1 2(x+2)−3/2 f′(x)=−1 2(x+2)−3/2 Upvote · 99 10 Sponsored by BestBrokerz.com What is Forex trading? Learn how Forex trading works and how you can trade using a regulated broker. Learn More 2.1K 2.1K Momal Bano 4 year · Author has 322 answers and 202.7K answer views ·3y Related What is f(2) when f’ (x/(x-1)) =x? f’ (x/(x-1)) =x let y=x/(x-1) (x-1)y=x xy-y=x xy-x=y x(y-1)=y x=y/(y-1) f’ (y) =y/(y-1) f(y)= ∫y/(y-1)dy f(y)=∫[(y-1)+1]/(y-1)dy f(y)=∫[(y-1)/(y-1) + 1/(y-1)]dy f(y)=∫[1 + 1/(y-1)]dy f(y)=y + ln(y-1) + c put y=2 f(2)=2 + ln(2–1) + c f(2)=2+ln1+c f(2)=2+0+c f(2)=2+c Upvote · 9 4 9 1 David Joyce Ph.D. in Mathematics, University of Pennsylvania (Graduated 1979) · Upvoted by Terry Moore , M.Sc. Mathematics, University of Southampton (1968) and Justin Rising , PhD in statistics · Author has 9.9K answers and 68.4M answer views ·1y Related If f (x+1/x) = (x+1/x) ² then what is f(x) =? Consider the graph of the function x+1/x.x+1/x. Note that the range of that function excludes numbers between −2−2 and 2,2, so the given equation says nothing about them. f f can be whatever you want on the interval (−2,2).(−2,2). Otherwise, x x is in the range of that function, so there is a value of t t such that x=t+1/t,x=t+1/t, and since f(t+1/t)=(t+1/t)2,f(t+1/t)=(t+1/t)2, therefore f(x)=x 2.f(x)=x 2. Conclusion. The value of f(x)f(x) is x 2 x 2 when x≥2 x≥2 and when x≤−2.x≤−2. Otherwise, there is no restriction on the value of f(x).f(x). It could be weird between –2 and 2. Continue Reading Consider the graph of the function x+1/x.x+1/x. Note that the range of that function excludes numbers between −2−2 and 2,2, so the given equation says nothing about them. f f can be whatever you want on the interval (−2,2).(−2,2). Otherwise, x x is in the range of that function, so there is a value of t t such that x=t+1/t,x=t+1/t, and since f(t+1/t)=(t+1/t)2,f(t+1/t)=(t+1/t)2, therefore f(x)=x 2.f(x)=x 2. Conclusion. The value of f(x)f(x) is x 2 x 2 when x≥2 x≥2 and when x≤−2.x≤−2. Otherwise, there is no restriction on the value of f(x).f(x). It could be weird between –2 and 2. Upvote · 999 145 99 11 9 1 Buddha Buck Took calculus as an undergraduate · Author has 5.8K answers and 16.9M answer views ·3y Related Is it possible to write down a function whose derivative will be itself, i.e., f(x) =f'(x)? Yes, it is. Consider a function with the property f(x+y)=f(x)f(y)f(x+y)=f(x)f(y), and what we can say about it’s derivative. For one thing, we should immediately be able to see that f(x)=f(x+0)=f(x)f(0)f(x)=f(x+0)=f(x)f(0), so we have f(0)=1.f(0)=1. Let’s see when we use what we know to get the derivative of f(x)f(x): [Math Processing Error]\begin{align}f'(x)&=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}\&=\lim_{h\to0}\frac{f(x)f(h)-f(x)}{h}\&=\lim_{h\to0}\frac{f(x)(f(h)-1)}{h}\&=f(x)\lim_{h\to0}\frac{f(h)-1}{h}\&=f(x)\lim{h\to0}\frac{f(0)f(h)-f(0)}{h}\&=f(x)\lim{h\to0}\frac{f(0+h)-f(0)}{h} Continue Reading Yes, it is. Consider a function with the property f(x+y)=f(x)f(y)f(x+y)=f(x)f(y), and what we can say about it’s derivative. For one thing, we should immediately be able to see that f(x)=f(x+0)=f(x)f(0)f(x)=f(x+0)=f(x)f(0), so we have f(0)=1.f(0)=1. Let’s see when we use what we know to get the derivative of f(x)f(x): Great! If we can arrange it so that [math]f'(0) = 1[/math], then we have [math]f(x) = f'(x)[/math]. So all we have to do is find an [math]f(x)[/math] that (a) has the property [math]f(x+y)=f(x)f(y)[/math] for all [math]x,y[/math] in its domain (including [math]0[/math]), (b) is differentiable in its domain, and (c) [math]f'(0) = 1[/math]. For functions with [math]f(x+y) = f(x)f(y)[/math], exponential functions fit the bill. That is, functions of the form [math]f(x) = a^x[/math], where [math]a>0[/math], have the property that [math]f(x+y) = a^{x+y} = a^xa^y = f(x)f(y)[/math]. Technically, we have to prove that the limits above actually exist with exponential functions, but I’ll skip that. We also have to find an [math]a[/math] such that [math]f'(0) = 1[/math]. One exists, but this doesn’t tell us what it is. (Also, since differentiation is linear, any linear combination of solutions will also work. Not only does [math]a^x[/math] work, but so does [math]5a^x[/math], and if [math]b^x[/math] also works, then so does [math]5a^x+3b^x[/math], for arbitrary real values of 5 and 3.) Let’s look at this another way. One thing you may have dealt with in your classes are finite and infinite series. These are expressions of the form [math]\sum_{n=0}^b c_n[/math], where c[math]_n[/math] is some sequence of values. A power series is a series where each term is of the form [math]c_n = a_nx^n[/math], where [math]a_n[/math] is some sequence of constant values. A finite power series is essentially a polynomial, specified in a weird way. An infinite power series (of the form [math]\sum_{n=0}^\infty a_nx^n[/math] is the limit [math]\lim_{N\to\infty} \sum_{n=0}^N a_nx^n[/math] for some infinite sequence [math]a_n[/math]. It is sort of like a polynomial with an infinite number of terms. Well, we know how to differentiate polynomials. We can use that to differentiate power series — we just differentiate it term by term. So if [math]f(x) = a_0x^0 + a_1x^1 + a_2x^2 + \cdots[/math], we can get [math]f'(x) = a_1x^0 + 2a_2x^1 + 3a_3x^2 + \cdots[/math]. If we want [math]f(x) = f'(x)[/math], then we need [math]a_0 = a_1, a_1 = 2a_2, a_2 = 3a_3, \ldots, a_n = (n+1)a_{n+1}, \ldots[/math]. Rearranged, that gives us [math]a_1 = \frac{a_0}{1}, a_2 = \frac{a_1}{2}, a_3 = \frac{a_2}{3}, \ldots, a_n = \frac{a_{n-1}}{n}[/math]. Since every term, except the [math]a_0[/math] term, has a factor of [math]x[/math], they disappear when evaluating this at [math]0[/math], to give [math]f(0) = a_0[/math]. Since we can factor [math]a_0 \neq 0[/math] out of every term, we have the freedom to assume [math]a_0 =1[/math]. The result when [math]a_0 \neq 1[/math] is the same, except for a factor of [math]a_0[/math]. Plugging in [math]a_0 = 1[/math], we get [math]a_1 = \frac{1}{1}, a_2 = \frac{1}{1\cdot 2} = \frac{1}{2!}, a_3 = \frac{1}{2!\cdot 3} = \frac{1}{3!}, \ldots, a_n = \frac{1}{n!}[/math] Remember, this was all for a power series, [math]f(x) = \sum_{n=0}^\infty a_nx^n = \sum_{n=0}^\infty \frac{x^n}{n!}[/math]. You can check that term-by-term differentiation acts as we want. [math]\frac{d}{dx} \frac{x^n}{n!} = \frac{nx^{n-1}}{n(n-1)!} = \frac{x^{n-1}{(n-1)!}[/math], which summed over all [math]n[/math] gives us back where we started. This gives us two possible choices for [math]f(x)[/math], one of the form [math]f(x) = a^x[/math] for some unknown constant [math]a > 0[/math], and [math]f(x) = \sum_{n=0}^\infty x^n/n![/math]. It would be nice if they were the same. It is possible, through careful calculation, to multiply two infinite series and get a new infinite series. Multiplying [math]f(x)f(y) = (\sum_{n=0}^\infty x^n/n!)(\sum_{n=0}^\infty y^n/n!)[/math] carefully turns out to give you math = f(x+y)[/math]. So it looks like the two are the same, and that gives us a way to find [math]a[/math]. Since [math]a^1 = a[/math], we can calculate [math]a = f(1) = \sum_{n=0} 1/n![/math], the sum of the reciprocal factorials. Technically, we should prove that this converges and is well-defined as a value. I’m telling you that so you don’t think I’m telling the whole story, and am rushing over things. (this includes showing how to multiply power series, showing that everything converges where necessary, showing that the limits I’m blithely assuming exist actually do, and for that matter, what the heck do we mean by [math]a^x[/math] when [math]x[/math] is irrational? There are answers to all these things, but this answer is too long as it is). Fortunately, reciprocal factorials get small really fast, so it is easy to calculate quickly that, to 9 decimal places, [math]a \approx 2.718281828[/math]. That value is traditionally called [math]e[/math]. So the function [math]e^x[/math] is its own derivative. For that matter, so is any function of the form [math]ce^x[/math]. Upvote · 9 4 Related questions How do you find the derivative of the function: ./f(x) = 2x^3 - 5x^2 + x - 1? How do I find the derivative of the following function with respect to x: f(x) = 3x^2 - 2x +1? Which of the following functions f is f (x) = f (1 – x) for all x? Is f(x) =1/x a function? For what value of 'k' is the derivative function f'(3) = 2, if f(x) = (x + k) / (x - 1)? Can you find f’(x) at x=1 if f(x) =x/ (1+x^2) ^2? If f(x) =1+x/1-x, how do you show that f(X).f(x^2) /1+[f(X)] ^2=1/2? How does f(x) =x/ ((x^2) + 9) simplify to f (x) =1/ (x+(9/x))? How do you find the second derivative of 'f(x) =sqrt {1+x}'? Is there a formula to express the function f(x)=x+(x−1)+(x−2)+(x−3)+⋯+(x−x)f(x)=x+(x−1)+(x−2)+(x−3)+⋯+(x−x) ? What is the derivative of the function f(x) = x^2? What is the function f(x) = x^2 + x -1? We know that x! = x (x - 1)!, how about f(x) = x。f(x - 1)? How do you find the derivative of y=f(x) if f(x) =g(x)? What is the derivative of (1+2x) / (1-2x)? Use f(x+h) - f(x) /h Related questions How do you find the derivative of the function: ./f(x) = 2x^3 - 5x^2 + x - 1? How do I find the derivative of the following function with respect to x: f(x) = 3x^2 - 2x +1? Which of the following functions f is f (x) = f (1 – x) for all x? Is f(x) =1/x a function? For what value of 'k' is the derivative function f'(3) = 2, if f(x) = (x + k) / (x - 1)? Can you find f’(x) at x=1 if f(x) =x/ (1+x^2) ^2? Advertisement About · Careers · Privacy · Terms · Contact · Languages · Your Ad Choices · Press · © Quora, Inc. 2025
14906
https://www.youtube.com/watch?v=-vQH_P4G0Xs
Composition of piecewise functions | fof(x) | fog(x) | gof(x) | Graphical & Algebraic methods | 3 SE Mathsmerizing 66400 subscribers 752 likes Description 27641 views Posted: 3 Aug 2022 Composition of piecewise functions | fof(x) | fog(x) | gof(x) | Graphical & Algebraic methods | | 3 graded examples Telegram link: Twitter handle: Instagram handle: Mathsmerizing Alt channel: Website: www.mathsmerizing.com Support the channel: UPI link: 7906459421@okbizaxis UPI Scan code: PayPal link: paypal.me/mathsmerizing 52 comments Transcript: in this video we'll study how to find composite function from piecewise defined functions now basically there are two methods one of them is graphical method and the second one is algebraic method so we'll study these two methods with three graded examples now this first example is we are given this function f x which is defined as minus 1 plus mod of x minus 1 when x lies between minus 1 and 3. now clearly f x it is a continuous function and we have this mod x minus 1. now root of x minus 1 is 1 right most plus this is minus so when x is less than 1 it takes a negative value and when x is greater than 1 it takes a positive value so we can write this function f x as now for this first interval when x lies between minus 1 and 1 we have this minus sign here so it'll be minus 1 and then minus x minus 1 and then we have the second interval where x lies between 1 and 3 so it'll be this minus 1 plus x minus 1 so basically this function f x is defined as minus x when x lies between minus 1 and 1 that is x minus 2 when x lies between 1 and 3. now we look at our function gx now gx is defined as 2 minus mod of x plus 1 when the value of x lies between minus 2 n plus 2 x plus 1 has a critical point at minus 1 this is plus and this is minus so now we can write this function g x s now first we have this interval when x lies between minus 2 minus 1 that is 2 and then in this interval this is negative so minus minus plus so it'll be this 2 plus x plus 1 and then we have from minus 1 to 2 now here will be positive will be 2 minus x plus 1 now g x is basically x plus 3 when x lies between minus 2 n minus 1 and it will be so 1 minus x when x lies between minus 1 and 2. now we need to find f of g x and g of f x now first we will find f of g x using graphical method so we are going to use graphical method and we wish to find f of gx now basically f of gx is f and then gx so essentially values of x will go into this function g and then value of gx will go into this function fx now this graphical method what we do is first step is we'll draw the graph of gx [Music] now gx is x minus 3 when x lies between minus 2 and minus 1. so between minus 2 and minus 1 this is x minus 3. now we put xs minus 2 this value is 1 and if we put x is -1 this value is 2 so this is a straight line so this function is x plus 3 now for this next part it is also a linear function so at minus 1 this value is 2 and at 2 this value is minus 1 and at 0 this is 1 so this is two then one then zero and then minus one and here this definition is one minus x now this is the definition change at minus 1 so we highlight this minus 1. so first step is draw the graph of gx now this graph is defined between minus 2n plus 2 and there is a definition change at minus 1. now the next step is draw horizontal lines at boundaries of definitions of f x now boundaries of definitions of f x are at minus one one and at three so what we'll do is we'll draw a horizontal line at minus one so we'll draw this horizontal line at minus one we'll draw horizontal line at one and we'll draw a horizontal line at 3 and in addition to it we'll write definition of fx now when x lies between minus 1 and plus 1 definition of f x is minus x we write minus x in between these two lines and when x lies between 1 and 3 this definition is x minus 2 now we'll mark all those points where these lines they're going to intersect the graph gx now they're intersecting at this point and at this point and this point so we have already marked this point now this is x equals 0 and this point is x equals 2 so now we will start writing the definition of f of g x now first definition is between minus 2 and minus 1 so when x lies between minus 2 and minus 1 now in this case since both the functions f x and g x they are continuous we need not bother about this equality sign we can take it with this definition or we can also take it with the next definition now in this interval definition of g x is x plus 3 and this definition will go into this x minus 2 so it'll be this x plus 3 minus 2 now this next interval is between minus 1 and 0 so it'll be this minus 1 x and 0 and this case definition is a 1 minus x now this 1 minus x it still lies in this interval so here 1 minus x will again go to this x minus 2 so it'll be this 1 minus x minus 2 and then we have this third definition when x lies between 0 and 2 and this case definition is 1 minus x but here graph lies in between these two lines so here this definition it'll go to minus x will be this minus and one minus x so basically f of g x will be x plus 1 when x lies between minus 2 and minus 1 it'll be minus x minus 1 when x lies between minus 1 and 0 and it will be x minus 1 when x lies between 0 and 2. so this is our graphical method now we come to algebraic method [Music] now before algebraic method we write f x and g s again now f x is minus x when x lies between minus one and one and it is x minus 2 then x lies between 1 and 3 and g x is x plus 3 when x lies between minus 2 and minus 1 and it is 1 minus x when x lies between minus one and two now again we'll find f of g x so we let f of n g x now in this definition we replace x with g x will be this minus gx when gx lies between -1 and 1 and it'll be gx minus 2 when gx lies between 1 and 3 so for this f of g x we have defined everything in terms of gx now this gx itself has two definitions x plus 3 and 1 minus x so what we'll do is now we expand it in four parts using definition of gx now first we use this definition x plus 3 so it'll be this minus and then gx is x plus 3 and then we'll write -1 when gx is x 3 less than 1 now here we have to use end because this condition it must be satisfied for these values of x so together with this condition we have to take into consideration domain of x also so it will be this minus 2 x and then minus 1 now we will put this x plus 3 in this second definition so will be this x plus 3 minus 2 and then 1 and then will be this x plus 3 less than equal to 3 and again where x lies between minus 2 minus 1 now next definition is 1 minus x now we'll put 1 minus x in this first one so it'll be minus 1 minus x and then minus 1 1 minus x is less than 1 and here the condition is the value of x must lie between minus 1 and 2 and then this definition will go into the second part so will be this 1 minus x minus 2 and 1 is less than 1 minus x is less than equal to 3 and minus 1 is less than x is less than equal to 2 and this what we'll do is we'll first solve these intervals now this is a minus 4 is less than equal to x and this is less than -2 and here it is x lies between minus 2 and minus 1 now there is nothing common in between these two conditions so here there is no intersection of values of x possible so this definition it is not possible now if we look at this now if we take this 3 then we'll be this minus 2 and then x will be this 0 and here it is minus 2 x n minus 1 so in this case we have this common interval between minus with minus one so now we write this [Music] f of g x and the first interval where it is defined as between minus 2 n minus 1 and here this definition is simply x plus 1 now we'll solve this now if we subtract 1 it'll be this minus 2 less than equal to minus x less than 0 and if we take this minus sign then it is between 0 to 2 and here it is between minus 1 to 2 so the common interval between these two is value of x less than 2 0 to 2 now here this definition is x minus 1 and finally here it is 0 and then minus x and then 2 so between minus 2 and 0 and here it is from minus 1 to 2. so we take intersection of these two intervals it will be between -1 and 0 and here this definition is minus x minus 1 so that is the definition of f of g x using algebraic method now once we have this definition of f of g x we can draw its graph now when x is minus 2 this value is minus 1 when x is minus 1 this value is 0 so this is x plus 1. now when x is minus 1 this is 0 and when x is 0 this is minus 1 so this is [Music] minus x minus 1 and here if we take x is 0 it is minus 1 and it do this value is 1 so that is the graph of f of g x now clearly this function is continuous in [Music] minus 2 to 2 and it is not differentiable at these two points where we have these corners so not differentiable at x equals minus 1 and x equals 0 so this is how we connect composite functions from the two given functions f x and g x now we will draw the graph of g of f x and here i am just going to use graphical method now for the g of f x it is g of f x that means value of x will go into n and value of f x will go into g so here we will have to draw the graph of f x so first we will draw the graph of f x now this f x is minus x between minus 1 and plus 1 so at minus 1 it is 1 and at plus 1 this is minus 1 so that's the definition of f x between minus 1 and plus 1 so this is your minus x and then at 1 this value is minus 1 at 2 it is 0 and at 3 this is 1 so that's the definition x minus 2 now there is a definition change at 1 so definition changes at 1 and then boundary values are minus 1 and 3. now we will draw horizontal lines at boundary conditions of gx so we will draw a horizontal line at minus this will be this minus 2 [Music] and then at minus 1 [Music] and then we'll draw a line at 2. [Music] now when x lies between minus 2 and minus 1 definition is x plus 3 so here we will have x plus 3 and here when x has been minus 1 into it is the one minus x now it intersects only at one point and this point is already marked so here we just have two definitions so if we write the g of fx now this first interval is from minus 1 to 1 so when x lies between minus 1 and 1 definition is minus x and it will go into this 1 minus x so it'll be this 1 plus x and then next interval is from 1 to 3 so when x lands with you 1 and 3 here the definition is x minus 2 and still go to 1 minus x will be 1 minus x minus 2 so this function will be 1 plus x when x lies between minus 1 and 1 and it will be 3 minus x when x lies between 1 and 3. now we'll draw the graph of g of f x now for g of f x when x is -0 this is 0 and when x is 1 this is 2 so it is this slide 1 plus x and when x is 1 it is 2 and when x is 3 it is 0 so that's the definition 3 minus x now this g of f x it is continuous in minus 1 to 3 that is not differentiable at x equals 1. so this function it is not differentiable at this point x equals 1 so we can solve these composite functions either using graph or using this algebraic method now we'll take up the second example now here the questions we are given this function f x which is x plus 2 when x lies between -4 and 0 there is 2 minus x square when x lies between 0 and 4 we need to find f of f x domain of f of f x and also we need to comment upon the quantity of f of f x now since this is f of f x first we'll draw the graph of f x so we have these x's when x is minus 4 this value is -2 and when x is 0 this value is plus 2 so this is minus 2 and this is plus 2. and here when x is 0 this is 2 and when x is 4 this is minus 40 sweat 4 this is minus 14 not to be scaled so here this definition is x plus 2 and here this definition is 2 minus x square and there is a definition change at this point 0 and the boundary points are minus 4 and plus 4. now we will draw horizontal lines at -4 0 and 4 so we'll draw horizontal line minus 4 horizontal line at 0 and a horizontal line at plus and we'll mark these point of intersections so we have this point we also have this point and then we have this point now we find these points now if x plus 2 equals 0 this point is minus 2 if 2 minus x square it is equal to 0 then this point is root 2 and here this point is intersection of this graph which is 2 minus x square with this line which is minus 4 so this point is actually root 6. now in between minus 4 and 0 definition is x plus 2 so here this definition is x plus 2 and in between 0 and 4 this definition is 2 minus x square so now we will start writing f of fx so we have f of f x now first interval is between minus 4 and minus 2 so when x lies between -4 and minus 2 now since this function is continuous again we need not bother about this equality sign so here this definition is x plus 2 and this definition it lies within these two lines so this x plus 2 will go to x plus 2 so here it will be this x plus 2 plus 2 now this next interval is between minus 2 and 0 so when x lies between minus 2 n0 again this definition is x plus 2 now it will go to this 2 minus x square so it'll be this 2 minus x plus 2 whole square now the next one is from 0 to root 2 so when x lies between 0 to root 2 definition here is 2 minus x square and we'll go to this definition so will be this 2 minus 2 minus x whole square now this next interval is between root 2 and root 6 so when x lies between root 2 and root 6 here this definition is again 2 minus x square now i'll go to this x plus 2 will be 2 minus x square plus 2 now when x lies between root 6 and 4 we have this definition which is 2 minus x square but then here the definition of f x does not exist so this is the only definition possible for f of f x so we can write this as x plus 4 when x lies between minus 4 n minus 2 and this 2 minus x plus 2 square when x lies between minus 2 and 0 this is 2 minus 2 minus x whole square when x lies between 0 and root 2 and then this 4 minus x square when x lies between root 2 and root 6 now this function is defined from minus 4 to root 6 so domain of f of f x will be from minus 4 to root 6 now since f x is continuous its composition with itself will also be a continuous function so this function will be continuous in its domain which is from minus 4 to root 6 so this is our graphical method now we'll solve this question using algebraic method now for algebraic method we have this f x which is defined as [Music] x plus 2 when x lies between minus 4 and 0 and it is 2 minus x square when x lies between 0 and 4. now we need to find f of f x now f of f x is f x plus 2 when f x lies between minus 4 and 0 and will be 2 minus f x square when f x is greater than 0 but less than 4 now we have two definitions of f x so we will split this definition in four parts so now we're going to write this definition now first we'll take this x plus 2 so we'll put f axis x plus 2 in these two so it'll be x plus 2 plus 2 when x plus 2 is between minus 4 and 0 and this condition which is when x lies between minus 4 and 0 and the next one is 2 minus x plus 2 square when x plus 2 lies between 0 and 4 and again it is true when x lies between -4 and 0. now we have this other definition which is 2 minus x square so will be this 2 minus x square plus 2 and minus 4 is less than equal to 2 minus x squared is less than equal to 0 and here value of x lies between 0 and 4 and then we have 2 minus 2 minus x square whole square when the 0 is less than 2 minus x square is less than 4 and x lies between 0 and 4. now first we'll simplify these intervals now here the value of x is minus 6 is less than x is less than minus 2 and here it is from -4 to 0. now if we take intersection of these two conditions we'll get the value of x between minus four and minus two so first definition is from minus four to minus two and it will be this x plus 4. now for the second one it will be this minus 2 and then x and then 2 and here it is from minus 4 to 0 so here common between the two is from minus 2 to 0 so next one is from minus 2 to 0 and here this definition is t minus x plus 2 whole square now in this if we solve this we can write this as minus 6 is less than equal to minus x square is less than equal to minus 2 or if we multiply with minus and take square root lb this x will lie between root 2 and root 6 which lies in this interval so here this condition is the value of x should lead to root 2 and root 6 now in this interval value of f x is 4 minus x square and finally we have this which is minus 2 is less than minus x square is less than 2 so here we'll have [Music] mod of x is less than root 2. so this is from 0 to root 2. so when x lies between 0 and root 2 this definition will be 2 minus 2 minus x square whole square now again it goes from minus 4 to minus 2 minus 2 to 0 0 to root 2 and root 2 root 6 so for this f of fx domain is from [Music] minus 4 to root 6 now if we look at continuity we check continuity at minus 2 so f minus 2 negative it will be 4 minus 2 2 and then f minus 2 positive and since equality sign is here so this is also equal to f2 and f2 positive will be 2 minus 0 this is 2 so it is continuous at minus 2 i will check continuity at 0 f 0 negative is also equal to f 0 and here if we put x is 0 it will be 2 minus 4 so it'll be this minus 2 and f 0 positive it will be 2 minus 4 which again is minus 2 so it is continuous at 0 also now we'll check its continuity at root 2. now f of root 2 negative now this is 0 2 and f of root 2 will be equal to f of root 2 positive and here it will be 4 minus 2 and is 2. so this function is continuous in its domain so we can find f of f x using graphical method as well as using algebraic method now we'll take this third case where we're given this fx and gx and we need to find g of fx now first we use graphical method we'll draw the graph of fx now when x is 0 this is 1 and when x is -2 this is 0 so that is the definition of f x and since 0 is not included it will be this open interval and then when x is greater than equal to 0 this is x square so it is x square when x is greater than or equal to so in this case this f x it has this discontinuity at zero here there is a definition change at zero now we will draw horizontal lines at minus one and one so we'll draw horizontal lines at minus one and also at plus one [Music] plus one now we will find the point of intersection so we have this just one point of intersection here we have another point of intersection which is given by this value now this point is graph of x plus one intersecting this line minus one so x plus one equals minus one so this point is minus two so basically this point is x plus one equals minus 1 and we also have another point of intersection and this point is this graph of x square intersected by y equals 1 so if x square equals 1 this point is also 1 so here we have 3 critical points minus 2 0 and 1 now when x lies between minus 1 and plus 1 definition of g x is 2x and when x is greater than equal to 1 this definition is 3 minus x now we will start writing g of fx [Music] now this first interval is x lies between minus 2 and not include 0 here this definition is x plus 1 and we go to this definition which is 2 x will be simply to x plus one now yes since this function is discontinuous at zero we do not know whether we have to include zero here or we have to include zero in the next one so what we'll do is as of now we won't bother about this equality sign now we'll find definition between 0 and 1. so when x lies between 0 and 1 here this definition is x square and still between these two lines so we'll go to this definition 2x will be this 2x square and finally it will be this x greater than 1 in this we have x square and now you go to this 3 minus x will be this 3 minus x square so that is the definition of g of f x now we have a problem with equality signs so what we'll do is we'll find g of f 0 now f 0 is when x is 0 f 0 0 so it'll be this g 0 and g 0 is 2 into 0 0 so we have to include this equality sign at a point where definition of g of f x is 0 so if we put 0 here it will be 2 and we put 0 here will be this 2 x square so we have to include this 0 here and in the same way we'll find g of f 1 now f 1 it will be 1 so will be this g 1 and g1 is this three minus x which is two now if we put one here it is two if we put one here it is now since it is continuous at one we can put this equal sign anywhere so that is the definition of g of fx now we will solve this using algebraic method so here we have to find g of f x which is 2fx when f x lies between minus 1 and plus 1 and will be 3 minus f x when f x is greater than or equal to 1 now f x has two definitions so first we'll consider this definition so now we write this as 2 and then x plus 1 when x plus 1 lies between minus 1 and 1 and the condition with the definition is x is less than 0 and here will be this 3 minus x plus 1 when x plus 1 is greater than equal to 1 and here the condition is x is less than 0 now next definition is x square so will be this 2x square and x square lies between minus 1 and 1 and x is greater than or equal to 0 and then it will be this 3 minus x square and x square is greater than equal to 1 and x is greater than or equal to 0. now we'll solve these conditions now this is minus 2 less than equal to x less than 0 and this is x is less than 0 so intersection of these two conditions is this interval so g of f x will be [Music] two x plus one when x lies between minus two and zero now if you look at the second condition here the condition is x is greater than or equal to zero and here x is less than zero now there is nothing common between the two intervals so this is not possible now if we look at this third one x square is always greater than minus one and x square less than 1 is x lies between minus 1 and plus 1 and here x is greater than equal to 0 so intersection is x is greater than equal to 0 but less than 1 in this case this definition is 2x square and this third one is x square is greater than 1 that means x is greater than equal to 1 when x is greater than or equal to 0 that means x is greater than or equal to 1 then this definition is 3 minus x square so this is the definition of g of f x using algebraic method so this is how we find composite functions from two piecewise defined functions
14907
https://medlineplus.gov/ency/article/000951.htm
An official website of the United States government Here’s how you know Official websites use .govA .gov website belongs to an official government organization in the United States. Secure .gov websites use HTTPSA lock ( Lock Locked padlock icon ) or https:// means you’ve safely connected to the .gov website. Share sensitive information only on official, secure websites. Barbiturate intoxication and overdose Barbiturates are medicines that cause relaxation and sleepiness. A barbiturate overdose occurs when someone takes more than the normal or recommended amount of this medicine. This can be by accident or on purpose. An overdose is life threatening. At fairly low doses, barbiturates may make you seem drunk or intoxicated. Barbiturates are addictive. People who use them become physically dependent on them. Stopping them suddenly (withdrawal) can be life-threatening. Tolerance to the mood-altering effects of barbiturates develops rapidly with repeated use. But, tolerance to the lethal effects develops more slowly, and the risk of severe poisoning increases with continued use. This article is for information only. DO NOT use it to treat or manage an actual overdose. If you or someone you are with overdoses, call your local emergency number (such as 911), or your local poison control center can be reached directly by calling the national toll-free Poison Help hotline (1-800-222-1222) from anywhere in the United States. Causes Barbiturate use is a major addiction problem for many people. Most people who take these medicines for seizure disorders or pain syndromes do not abuse them, but those who do, usually start by using medicine that was prescribed for them or other family members. Most overdoses of this type of medicine involve a mixture of medicines, usually alcohol and barbiturates, or barbiturates and opioids such as heroin, oxycodone, or fentanyl. Some users take a combination of all these medicines. Those who use such combinations tend to be: Symptoms Symptoms of barbiturate intoxication and overdose include: Excessive and long-term use of barbiturates, such as phenobarbital, may produce the following chronic symptoms: Exams and Tests Your health care provider will monitor your vital signs, including temperature, pulse, breathing rate, and blood pressure. Tests that may be done include: Treatment At the hospital, emergency treatment may include: A medicine called naloxone (Narcan) may be given if an opioid was part of the mix. This medicine often rapidly restores consciousness and breathing in people with an opioid overdose, but its action is short-lived, and may need to be given repeatedly. There is no direct antidote for barbiturates. An antidote is a medicine that reverses the effects of another medicine or drug. In select and extreme cases of overdose, dialysis (kidney machine) may be used to help remove the medicine from the blood. Outlook (Prognosis) About 1 in 10 people who overdose on barbiturates or a mixture that contains barbiturates will die. They usually die from heart and lung problems. Possible Complications Complications of an overdose include: When to Contact a Medical Professional Call your local emergency number, such as 911, if someone has taken barbiturates and seems extremely tired or has breathing problems. Your local poison control center can be reached directly by calling the national toll-free Poison Help hotline (1800-222-1222) from anywhere in the United States. This national hotline will let you talk to experts in poisoning. They will give you further instructions. This is a free and confidential service. All local poison control centers in the United States use this national number. You should call if you have any questions about poisoning or poison prevention. It does NOT need to be an emergency. You can call for any reason, 24 hours a day, 7 days a week. Alternative Names Intoxication - barbiturates References Aronson JK. Barbiturates. In: Aronson JK, ed. Meyler's Side Effects of Drugs. 16th ed. Waltham, MA: Elsevier; 2016:819-826. Overbeek DL, Erickson TB. Sedative hypnotics. In: Walls RM, ed. Rosen's Emergency Medicine: Concepts and Clinical Practice. 10th ed. Philadelphia, PA: Elsevier; 2023:chap 154. Review Date 7/1/2023 Updated by: Jesse Borke, MD, CPE, FAAEM, FACEP, Attending Physician at Kaiser Permanente, Orange County, CA. Also reviewed by David C. Dugdale, MD, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team. Related MedlinePlus Health Topics Health Content Provider06/01/2025 A.D.A.M., Inc. is accredited by URAC, for Health Content Provider (www.urac.org). URAC's accreditation program is an independent audit to verify that A.D.A.M. follows rigorous standards of quality and accountability. A.D.A.M. is among the first to achieve this important distinction for online health information and services. Learn more about A.D.A.M.'s editorial policy, editorial process, and privacy policy. The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. Links to other sites are provided for information only – they do not constitute endorsements of those other sites. No warranty of any kind, either expressed or implied, is made as to the accuracy, reliability, timeliness, or correctness of any translations made by a third-party service of the information provided herein into any other language. © 1997-2025 A.D.A.M., a business unit of Ebix, Inc. Any duplication or distribution of the information contained herein is strictly prohibited.
14908
https://wordsinasentence.com/boorish-in-a-sentence/
Boorish: In a Sentence – WORDS IN A SENTENCE ').insertAfter(document.body)}) Menu Random Word All Words (16750+) Videos Random Word All Vocabulary Words (16500+) Videos Boorish in a Sentence Prev WordNext Word Definition of Boorish bad-mannered, rude, or insensitive Examples of Boorish in a sentence The comedian’s jokes were so vulgar and boorish that the only ones left in the audience were those who were too drunk to be offended. Even though the pirate captain was brutal and boorish with his men, he was always courteous to the female captives. Gideon’s boorish behavior in front of the judge earned him a night in jail for contempt of court. After annoying all the cocktail waitresses for two hours, the boorish drunk was finally thrown out of the bar. Whenever we find out that our boorish neighbor is going to have a cookout, we think up excuses to be away from home. The hometown fans acted in such a boorish way toward the visiting team that they had to forfeit the game. While it’s true that boys will be boys, Spencer and Jason’s boorish behavior during the pep rally landed them in the principal’s office. While the cowboys handled themselves very well in town, they were anxious to relax and return to their boorish manners out on the trail. Cindy was horrified at her boyfriend’s boorish behavior when she introduced him to her parents. Many celebrities go to great lengths to avoid the paparazzi and their boorish invasion of privacy. PREV WORDNEXT WORD Other words in the Harsh category: Sharp-tongued Uncouth Billingsgate Autocratic Despotism Inhuman Unapologetic Violent Rancorous Gruff Merciless Bestial Tactless Malicious Abusive Savage Doggerel Pedantic Scabrous Caustic Recommended for you Most Searched Words (with Video) Voracious: In a Sentence Verbose: In a Sentence Vainglorious: In a Sentence Pseudonym: In a Sentence Propinquity: In a Sentence Orotund: In a Sentence Magnanimous: In a Sentence Inquisitive: In a Sentence Epoch: In a Sentence Aberrant: In a Sentence Apprehensive: In a Sentence Obdurate: In a Sentence Heresy: In a Sentence Gambit: In a Sentence Pneumonia: In a Sentence Otiose: In a Sentence More Facebook Prev WordNext Word X WORDS IN A SENTENCE Copyright © 2025. Privacy Policy Random Word previous next slideshow Generic selectors [x] Exact matches only Exact matches only [x] Search in title Search in title [x] Search in content Search in content [x] Search in excerpt [x] Search in posts Search in posts [x] Search in pages Search in pages
14909
https://www.scribd.com/document/459862099/proofs-02-28-16
Opens in a new window Opens an external website Opens an external website in a new window This website utilizes technologies such as cookies to enable essential site functionality, as well as for analytics, personalization, and targeted advertising. To learn more, view the following link: Privacy Policy Open navigation menu Upload 0 ratings0% found this document useful (0 votes) 384 views2 pages The Invariance Principle: Arthur Engel, Problem-Solving Strategies This document contains 11 math proof problems related to topics like chessboards, numbers, people shaking hands, cows in pens, and tiling puzzles. The problems get progressively more… Uploaded by Rizky Rajendra Anantadewa You are on page 1/ 2 Yrbtbfi [rnn`s Obsgl Dlvrnv Qgm Bfvlrblfjm [rbfjbpdm Ymstmrf [L LUOD [rljtbjm @merulry 57, 5;<0 L eli jnftlbfs 22 rma olredms lfa 22 edum olredms. Qlhbfi twn olredms nut n` tgm eli, ynu3 put l rma olredm bf tgm eli b` tgm twn olredms ynu armw lrm tgm slom jndnr (entg rma nr entgedum), lfa • put l edum olredm bf tgm eli btgm twn olredms ynu armw lrm ab﬋mrmft jndnrs.Umpmlt tgbs stmp (rmaujbfi tgm fuoemr n olredms bf tgm eli ey nfm mljg tbom) uftbd nfdy nfmolredm bs dmt bf tgm eli. Yglt bs tgm jndnr n tglt olredm> <. (Mfimd ) Lf 7 7 jgmssenlra bs jndnrma bf tgm usuld wly, eut tglt‒s enrbfi, sn ynu amjbamtn ﬌x tgbs. Rnu jlf tlhm lfy rnw, jnduof, nr 5 Ù 5 squlrm, lfa rmvmrsm tgm jndnrs bfsbam bt,swbtjgbfi edljh tn wgbtm lfa wgbtm tn edljh.[rnvm tglt bt‒s bopnssbedm tn mfa up wbtg 0: wgbtm squlrms lfa < edljh squlrm.5. Qgm fuoemrs < 5 <;; lrm wrbttmf nf l edljhenlra. Rnu oly jgnnsm lfy twn fuoemrs lfa mrlsm tgmo, rmpdljbfi tgmo wbtg tgm sbfidm fuoemr l e <. Ltmr 22 stmps, nfdyl sbfidm fuoemr wbdd em dmt. Yglt bs bt>:. Zuppnsm ynu bfstmla rmpdljm l lfa e ey tgm prnaujt le + l + e . Yglt fuoemr wbdd em dmt lttgm mfa>1. Lt l plrty, snom plbrs n pmnpdm sglhm glfas. Ym jldd l pmrsnf naa wgn gls sglhmf glfaswbtg lf naa fuoemr nntgmr iumsts. [rnvm tglt tgmrm bs lf mvmf fuoemr n naa pmnpdm lttgm plrty.=. L rnno bs bfbtblddy mopty. Mvmry obfutm, mbtgmr twn pmnpdm mftmr nr nfm pmrsnf dmlvms. L`tmrmxljtdy : : :: obfutms, jnuda tgm rnno jnftlbf mxljtdy : : : < pmnpdm>0. L gmra n<;; jnws bs abvbama bftnnur pmfs3 <; jnws bf tgm fnrtg pmf, 5; jnws bf tgm mlstpmf, :; jnws bf tgm snutg pmf, lfa 1; jnws bf tgm wmst pmf.Qgm pmfs lrm jnffmjtma tgrnuig l iltmwly wm jlf usm tn dmt tgrmm jnws nut nnfm pmf lfaabstrbeutm tgmo emtwmmf tgm ntgmrs. @nr bfstlfjm, b wm dmt tgrmm jnws nut n` tgm snutg pmf, < Lrtgur Mfimd, [rnedmo-Zndvbfi Ztrltmibms . < Download to read ad-free wm mfa up wbtg << jnws bf tgm fnrtg pmf, 5< jnws bf tgm mlst pmf, 59 jnws bf tgm snutg pmf,lfa 1< jnws bf tgm wmst pmf.[rnvm tglt wm jlf fmvmr usm tgbs iltmwly tn spdbt tgm gmra bftn nur mquld irnups, wbtg 5=jnws bf mljg n tgm nur pmfs.9. (Zt. [mtmrseuri) L tmljgmr wrntm anwf tgrmm pnsbtbvm rmld fuoemrs nf tgm edljhenlra lfatnda Abol tn amjrmlsm nfm n tgmo ey :%, amjrmlsm lfntgmr ey 1%, lfa bfjrmlsm tgm dlst ey=%. Abol wrntm anwf tgm rmsudts bf gbs fntmennh. Bt turfma nut tglt gm wrntm anwf tgmslom tgrmm fuoemrs tglt lrm nf tgm edljhenlra, cust bf l ab﬋mrmft nramr. [rnvm tglt Aboloust glvm olam l obstlhm.7. (Mfimd) Qgmrm bs l pnsbtbvm bftmimr bf mljg squlrm nl rmjtlfiudlr tledm. Bf mljg onvm, ynuoly anuedm mljg fuoemr bf l rnw nr suetrljt <rno mljg fuoemr nl jnduof. [rnvm tgltynu jlf rmljg l tledm n zmrnms ey l smqumfjm n` tgmsm pmrobttma onvms.2. (l) Qgm bftmimrs < , 5 ,...,f lrm wrbttmf anwf bf tglt nramr. Lt mljg stmp, ynu oly swlplfy twn bftmimrs3 nr mxlopdm, b f ? 0, ynu jlf emibf ey jglfibfi < , 5 , : , 1 , = , 0 tn< , 5 , = , 1 , : , 0 ey swlppbfi : lfa =.[rnvm tglt ynu jlf fmvmr rmturf tn tgm nrbibfld nramr ltmr lf naa fuoemr n swlps.(Qgbs bs nfm ntgm onrm ab﬊judt prnedmos, eut ldsn tgm onst imfmrlddy usmud rmsudt, snB bfjduam gbfts `nr twn 5 ab﬋mrmft : lpprnljgms tn sndvbfi bt.)(e) Qgm <=-puzzdm bs l sdbabfi puzzdm wbtg ﬌`tmmf squlrm tbdms, fuoemrma < tgrnuig <=, lr-rlfima bf l 1 Ù 1 squlrm. Bf tgm dltm <2tg jmftury, Zlo Dnya n﬋mrma l $<;;; prbzm nrlfynfm tglt jnuda imtrno tgm jnf﬌iurltbnf nf tgm dm`t tn tgm jnf﬌iurltbnf nf tgm rbigt(swlppbfi tgm <1 lfa <= tbdms) ey sdbabfi tgm tbdms lrnufa.< 5 : 1= 0 9 72 <; << <5<: <= <1< 5 : 1= 0 9 72 <; << <5<: <1 <=[rnvm tglt tgbs bs bopnssbedm, lfa sn tgm prbzm wnuda fmvmr glvm tn em plba nut.<;. ([utflo 5;;7) Ztlrt wbtg l smqumfjm l < ,l 5 ,...,l f npnsbtbvm bftmimrs. B pnssbedm, jgnnsmtwn bfabjms c 4 h sujg tglt l c anms fnt abvbam l h , lfa rmpdljm l c lfa l h ey ija( l c ,l h ) lfadjo( l c ,l h ), rmspmjtbvmdy. [rnvm tglt btgbs prnjmss bs rmpmltma, bt oust mvmftulddy stnp, lfatgm ﬌fld smqumfjm anms fnt ampmfa nf tgm jgnbjms olam.<<. Zmvmf squlrms n lf 7 Ù 7 irba lrm sglama. Lt mljg stmp, wm sglam bf mljg ufsglama squlrmtglt gls lt dmlst twn sglama fmbigenrbfi squlrms (gnrbznftlddy nr vmrtbjlddy). [rnvm tglt tgbsprnjmss jlffnt mfa bf tgm mftbrm irba embfi sglama. 5 Lpprnljg #<3 J n f s b a m r t g m f u o e m r n — b f v m r s b n f s ‟ ( p l b r s n b f t m i m r s t g l t l r m n u t n ` n r a m r ) l t m l j g s t m p . : Lpprnljg #53 E m i b f e y p r n v b f i t g m r m s u d t ` n r t g m s p m j b l d j l s m w g m r m n f d y l a c l j m f t b f t m i m r s j l f e m s w l p p m a . 5 Share this document Share on Facebook, opens a new window Share on LinkedIn, opens a new window Share with Email, opens mail client Millions of documents at your fingertips, ad-free Subscribe with a free trial You might also like Design of Cable Trench 78% (9) Design of Cable Trench 4 pages Sumac Exam No ratings yet Sumac Exam 4 pages Mc21 The Mad Titans Shadow Rulebook-Compressed No ratings yet Mc21 The Mad Titans Shadow Rulebook-Compressed 28 pages Awesomemath Admission Test A No ratings yet Awesomemath Admission Test A 9 pages An Assignment On Social Change & Development No ratings yet An Assignment On Social Change & Development 16 pages 421 Questions Ccba Exam 1 No ratings yet 421 Questions Ccba Exam 1 145 pages PROMYS 2023 Application-Boston 100% (1) PROMYS 2023 Application-Boston 4 pages NVS Lab Attendant Notes No ratings yet NVS Lab Attendant Notes 4 pages Chapter 5 - Short-Term and Working Memory 100% (1) Chapter 5 - Short-Term and Working Memory 32 pages How To Be An (Not Aggressive) : Assertive No ratings yet How To Be An (Not Aggressive) : Assertive 340 pages 50 bài đọc cô MP (không có lời giải) No ratings yet 50 bài đọc cô MP (không có lời giải) 46 pages Exploring Grammar in Writing Article No ratings yet Exploring Grammar in Writing Article 4 pages LSM6DS3 Datasheet No ratings yet LSM6DS3 Datasheet 100 pages Classical Inequalities in Olympiad Math No ratings yet Classical Inequalities in Olympiad Math 10 pages Proof Techniques 0% (1) Proof Techniques 3 pages Masculinity vs. Femininity in Cultures No ratings yet Masculinity vs. Femininity in Cultures 6 pages Immediate Access Marketing Management 4th Edition Marshall Verified PDF Download 0% (1) Immediate Access Marketing Management 4th Edition Marshall Verified PDF Download 408 pages School Based Management (SBM) As Correlates To Academic Performance of Secondary Schools in Quezon City No ratings yet School Based Management (SBM) As Correlates To Academic Performance of Secondary Schools in Quezon City 30 pages Irmo No ratings yet Irmo 59 pages Exercises TECHNICAS ANALÍTICAS No ratings yet Exercises TECHNICAS ANALÍTICAS 23 pages Olym Notes 100% (1) Olym Notes 86 pages ED-Course Plan 2024 EEE No ratings yet ED-Course Plan 2024 EEE 6 pages Exterior Render Settings (Vray 3.4 For Sketchup) No ratings yet Exterior Render Settings (Vray 3.4 For Sketchup) 14 pages Capriles y Albarracin-Jordan 2012 - Earliest Human Occupations in Bolivia No ratings yet Capriles y Albarracin-Jordan 2012 - Earliest Human Occupations in Bolivia 14 pages PTM Remarks May Class-6-D 3 No ratings yet PTM Remarks May Class-6-D 3 6 pages Berkeley Math Circle - Monthly Contests - Solutions (1999-00) No ratings yet Berkeley Math Circle - Monthly Contests - Solutions (1999-00) 4 pages Parity Slide No ratings yet Parity Slide 19 pages Subudhi Techno Engineers PVTLTD Checklist For Reiki Survey: SN Details Remarks Option Choice (Y/N) No ratings yet Subudhi Techno Engineers PVTLTD Checklist For Reiki Survey: SN Details Remarks Option Choice (Y/N) 16 pages MOHS Hardness PDF No ratings yet MOHS Hardness PDF 16 pages NCM 103 Fundamentals No ratings yet NCM 103 Fundamentals 6 pages School of Law, Narsee Monjee Institute of Management Studies, Bangalore No ratings yet School of Law, Narsee Monjee Institute of Management Studies, Bangalore 13 pages Essay Hazel No ratings yet Essay Hazel 5 pages The S Point - Michal Rolinek, Josef Tkadlec No ratings yet The S Point - Michal Rolinek, Josef Tkadlec 8 pages Maikop Series Geochemical Analysis No ratings yet Maikop Series Geochemical Analysis 4 pages Exhumation Report: Etafuleni Cemetery No ratings yet Exhumation Report: Etafuleni Cemetery 5 pages Hansen's Right Triangle Theorem, Its Converse and A Generalization No ratings yet Hansen's Right Triangle Theorem, Its Converse and A Generalization 8 pages Civil Engineering Homework Guide No ratings yet Civil Engineering Homework Guide 5 pages Mastery 2 (Etech) No ratings yet Mastery 2 (Etech) 4 pages 2 (R R) AH BH CH No ratings yet 2 (R R) AH BH CH 1 page Ai Yapping No ratings yet Ai Yapping 2 pages Hints For Tutorials MTL180 Preminor No ratings yet Hints For Tutorials MTL180 Preminor 13 pages Ahsashagarwal PDF No ratings yet Ahsashagarwal PDF 44 pages GR 6 - Sa 1 - Date Sheet No ratings yet GR 6 - Sa 1 - Date Sheet 1 page Diophantine PDF No ratings yet Diophantine PDF 5 pages 04 Number Theory I No ratings yet 04 Number Theory I 9 pages Number Theory and Combinatorics Problems No ratings yet Number Theory and Combinatorics Problems 4 pages 8 Vertical Stresses Below Applied Loads No ratings yet 8 Vertical Stresses Below Applied Loads 13 pages MATH 3790 - Assignment 2 Solutions: Due Oct 16 October 17, 2003 No ratings yet MATH 3790 - Assignment 2 Solutions: Due Oct 16 October 17, 2003 3 pages Combinatorics Revision Assignment 1 No ratings yet Combinatorics Revision Assignment 1 6 pages Mathematical Problem Solving Techniques No ratings yet Mathematical Problem Solving Techniques 1 page Problems No ratings yet Problems 4 pages Putnam No ratings yet Putnam 24 pages Yoni Miller - Invariance-Principle No ratings yet Yoni Miller - Invariance-Principle 4 pages Well Logging Data Acquisition and Applications Serra Oberto Serra Download No ratings yet Well Logging Data Acquisition and Applications Serra Oberto Serra Download 39 pages MIT6 042JF10 Assn02 No ratings yet MIT6 042JF10 Assn02 6 pages Baltic Way 1995 Math Problems & Solutions No ratings yet Baltic Way 1995 Math Problems & Solutions 6 pages SUMaC 2023 Admissions Exam 100% (1) SUMaC 2023 Admissions Exam 5 pages Tran Quang Hung and Pham Huy Hoang No ratings yet Tran Quang Hung and Pham Huy Hoang 4 pages Level 7 Assignment 43 No ratings yet Level 7 Assignment 43 6 pages MIT6 042JF10 Assn02 PDF No ratings yet MIT6 042JF10 Assn02 PDF 6 pages Math Contest Solutions for Students No ratings yet Math Contest Solutions for Students 11 pages Junior Problem Seminar - 1 To 100 No ratings yet Junior Problem Seminar - 1 To 100 4 pages Invariants 2 No ratings yet Invariants 2 2 pages Invariants3 2 No ratings yet Invariants3 2 2 pages CS 499 Homework 1 Solutions No ratings yet CS 499 Homework 1 Solutions 13 pages Chapter 3 No ratings yet Chapter 3 7 pages More Problems No ratings yet More Problems 2 pages Problems I 100% (1) Problems I 19 pages PROM 2 2024 Final Copy No ratings yet PROM 2 2024 Final Copy 5 pages SUMaC 2014 Admission Exam 100% (2) SUMaC 2014 Admission Exam 2 pages Strategies No ratings yet Strategies 3 pages Parity 16 No ratings yet Parity 16 3 pages Problems of Year 2024 No ratings yet Problems of Year 2024 5 pages wp-contentuploads202212PROMYS India 2023 Application No ratings yet wp-contentuploads202212PROMYS India 2023 Application 4 pages Mock Aime III 2010 No ratings yet Mock Aime III 2010 6 pages Exercises On Invariants No ratings yet Exercises On Invariants 2 pages A Study Guide No ratings yet A Study Guide 6 pages Level 7 Assignment 42 (Parity) PDF No ratings yet Level 7 Assignment 42 (Parity) PDF 2 pages Stanford Math Circle Sunday May 30, 2010 Problem Solving Using Invariants and Monovariants No ratings yet Stanford Math Circle Sunday May 30, 2010 Problem Solving Using Invariants and Monovariants 4 pages Problems 2015 0% (1) Problems 2015 3 pages Invariants and Monovariants: Adithya Bhaskar January 24, 2016 No ratings yet Invariants and Monovariants: Adithya Bhaskar January 24, 2016 5 pages Skibidi Tyler 3 No ratings yet Skibidi Tyler 3 18 pages Polya Problem-Solving Seminar Week 1: Induction and Pigeonhole No ratings yet Polya Problem-Solving Seminar Week 1: Induction and Pigeonhole 2 pages Tmcsoln 2 R No ratings yet Tmcsoln 2 R 2 pages TJUSAMO 2011 - Invariants/Monovariants: Mitchell Lee and Andre Kessler No ratings yet TJUSAMO 2011 - Invariants/Monovariants: Mitchell Lee and Andre Kessler 2 pages PROMYS Europe 2025 Application Problem Set - J25 No ratings yet PROMYS Europe 2025 Application Problem Set - J25 3 pages Combipset No ratings yet 14 pages PROMYS India 2025 Application No ratings yet PROMYS India 2025 Application 4 pages Problems Week No ratings yet Problems Week 8 pages Invariants and Monovariants TAMO 5 No ratings yet Invariants and Monovariants TAMO 5 11 pages PROMYS India 2025 Application No ratings yet PROMYS India 2025 Application 4 pages Pigeonhole Principle Teach - Gabriel Carroll - MOP (Green) 2010 No ratings yet Pigeonhole Principle Teach - Gabriel Carroll - MOP (Green) 2010 4 pages Math Puzzles for Problem Solvers No ratings yet Math Puzzles for Problem Solvers 3 pages Stanford University Mathematics Camp (Sumac) 2022 Admissions Exam No ratings yet Stanford University Mathematics Camp (Sumac) 2022 Admissions Exam 5 pages scribd. scribd. scribd. scribd. scribd. scribd. scribd. scribd. scribd. scribd. scribd. scribd.
14910
https://math.stackexchange.com/questions/2397468/the-sum-of-odd-powered-complex-numbers-equals-zero-implies-they-cancel-each-othe
algebra precalculus - The sum of odd powered complex numbers equals zero implies they cancel each other in pairs - Mathematics Stack Exchange Join Mathematics By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google OR Email Password Sign up Already have an account? Log in Skip to main content Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Loading… Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site About Us Learn more about Stack Overflow the company, and our products current community Mathematics helpchat Mathematics Meta your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Home Questions Unanswered AI Assist Labs Tags Chat Users Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more The sum of odd powered complex numbers equals zero implies they cancel each other in pairs Ask Question Asked 8 years, 1 month ago Modified8 years, 1 month ago Viewed 468 times This question shows research effort; it is useful and clear 5 Save this question. Show activity on this post. Show that if a set of complex numbers z 1,z 2,…,z n z 1,z 2,…,z n satisfy z l 1+z l 2+⋯+z l n=0 z 1 l+z 2 l+⋯+z n l=0 for everyoddl l, then for any z i z i we can always find some z j z j such that z i+z j=0 z i+z j=0. The question has been answered here for real numbers but not for complex numbers algebra-precalculus complex-numbers induction Share Share a link to this question Copy linkCC BY-SA 3.0 Cite Follow Follow this question to receive notifications edited Aug 17, 2017 at 23:33 AstorAstor asked Aug 17, 2017 at 23:00 AstorAstor 424 2 2 silver badges 10 10 bronze badges 0 Add a comment| 2 Answers 2 Sorted by: Reset to default This answer is useful 5 Save this answer. Show activity on this post. Let P(z)=∑k=0 n(−1)k e k z n−k P(z)=∑k=0 n(−1)k e k z n−k be the n t h n t h degree polynomial with roots z k∣k=1,⋯,n z k∣k=1,⋯,n, where by Vieta's formulase k e k are the elementary symmetric polynomials. Let p i=∑k=1 n z i k p i=∑k=1 n z k i, where it is given that p l=0 p l=0 for all odd l l. From Newton's identitiesk e k=∑i=1 k(−1)i−1 e k−i p i k e k=∑i=1 k(−1)i−1 e k−i p i it follows (by induction, for example) that e l=0 e l=0 for all odd l l. Therefore, the polynomial P(z)P(z) has every other coefficient 0 0, so it contains either only even powers of z z, or only odd powers of z z, depending on the parity of n n. In the first case P(z)P(z) is an even function, in the second case an odd one. In both cases P(z)=0⟺P(−z)=0 P(z)=0⟺P(−z)=0 so the roots of P(z)P(z) can be grouped in pairs of mutually opposites. Share Share a link to this answer Copy linkCC BY-SA 3.0 Cite Follow Follow this answer to receive notifications answered Aug 18, 2017 at 0:30 dxivdxiv 78k 6 6 gold badges 69 69 silver badges 127 127 bronze badges 0 Add a comment| This answer is useful 5 Save this answer. Show activity on this post. We prove instead the following Lemma. Using this Lemma, your claim follows by an immediate (strong) induction. Lemma Let a 1,..,a k a 1,..,a k be complex numbers, not all of them zero, and z 1,..,z k z 1,..,z k non-zero, pairwise distinct complex numbers. If a 1 z l 1+...+a k z l k=0 a 1 z 1 l+...+a k z k l=0 for all odd integers l l, then, there exists some i≠j i≠j such that z i+z j=0 z i+z j=0 Proof: Consider the determinant Δ=∣∣∣∣∣∣∣∣z 1 z 3 1 z 5 1...z 2 k−1 1 z 2 z 3 2 z 5 2...z 2 k−1 2 z 3 z 3 3 z 5 3...z 2 k−1 3................z k z 3 k z 5 k...z 2 k−1 k∣∣∣∣∣∣∣∣Δ=|z 1 z 2 z 3...z k z 1 3 z 2 3 z 3 3...z k 3 z 1 5 z 2 5 z 3 5...z k 5................z 1 2 k−1 z 2 2 k−1 z 3 2 k−1...z k 2 k−1| First, since a 1 col 1+...+a k col k=0 a 1 col 1+...+a k col k=0 we get Δ=0 Δ=0. Next, using the Vandermonde formula, we get 0=Δ=z 1 z 2...z k∣∣∣∣∣∣∣∣1 z 2 1 z 4 1...z 2 k−2 1 1 z 2 2 z 4 2...z 2 k−2 2 1 z 2 3 z 4 3...z 2 k−2 3................1 z 2 k z 4 k...z 2 k−2 k∣∣∣∣∣∣∣∣=z 1 z 2..z k∏1≤i<j≤k(z 2 j−z 2 i)0=Δ=z 1 z 2...z k|1 1 1...1 z 1 2 z 2 2 z 3 2...z k 2 z 1 4 z 2 4 z 3 4...z k 4................z 1 2 k−2 z 2 2 k−2 z 3 2 k−2...z k 2 k−2|=z 1 z 2..z k∏1≤i<j≤k(z j 2−z i 2) It follows from the hypothesis that there exists some i<j i<j such that z 2 j−z 2 i=0 z j 2−z i 2=0 and hence z i+z j=0 z i+z j=0. Share Share a link to this answer Copy linkCC BY-SA 3.0 Cite Follow Follow this answer to receive notifications edited Aug 21, 2017 at 13:17 answered Aug 19, 2017 at 21:39 N. S.N. S. 135k 12 12 gold badges 150 150 silver badges 271 271 bronze badges 0 Add a comment| You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions algebra-precalculus complex-numbers induction See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Report this ad Linked 12Prove that ∏1≤i,j≤n 1+a i a j 1−a i a j≥1∏1≤i,j≤n 1+a i a j 1−a i a j≥1 for n n real numbers a i∈(−1,1)a i∈(−1,1) 2The sum of odd powered real numbers equals zero implies the numbers are inverses 1Products of zero diagonal matrices and spectrum symmetry 0minimization on sphere Related 2The sum of odd powered real numbers equals zero implies the numbers are inverses 3Equality in generalized triangle inequality 16Maximum value c c s.t. ∃∃ a subset S S of {z 1,z 2,…,z n}{z 1,z 2,…,z n} s.t. ∣∣∑z∈S z∣∣≥c|∑z∈S z|≥c (∑n i=1|z i|=1∑i=1 n|z i|=1). 0Proof of the Contour Integral Formula 8Super hard complex numbers problem: there do not exist n>1 n>1 complex numbers z 1,z 2,…,z n z 1,z 2,…,z n, no two equal, such that for all 1≤k≤n 1≤k≤n 4Generalized Vandermonde-Matrix 9Proving |1+z 1|2+|1+z 2|2+…+|1+z n|2=2 n|1+z 1|2+|1+z 2|2+…+|1+z n|2=2 n Hot Network Questions How different is Roman Latin? Is existence always locational? Who is the target audience of Netanyahu's speech at the United Nations? Riffle a list of binary functions into list of arguments to produce a result how do I remove a item from the applications menu With line sustain pedal markings, do I release the pedal at the beginning or end of the last note? Gluteus medius inactivity while riding Suggestions for plotting function of two variables and a parameter with a constraint in the form of an equation An odd question What is the feature between the Attendant Call and Ground Call push buttons on a B737 overhead panel? Repetition is the mother of learning в ответе meaning in context Where is the first repetition in the cumulative hierarchy up to elementary equivalence? Why does LaTeX convert inline Python code (range(N-2)) into -NoValue-? Weird utility function Why include unadjusted estimates in a study when reporting adjusted estimates? Vampires defend Earth from Aliens The rule of necessitation seems utterly unreasonable Does a Linux console change color when it crashes? Numbers Interpreted in Smallest Valid Base Checking model assumptions at cluster level vs global level? What NBA rule caused officials to reset the game clock to 0.3 seconds when a spectator caught the ball with 0.1 seconds left? Fundamentally Speaking, is Western Mindfulness a Zazen or Insight Meditation Based Practice? For every second-order formula, is there a first-order formula equivalent to it by reification? more hot questions Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Mathematics Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547 By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Cookies Details‎ Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Cookies Details‎ Targeting Cookies [x] Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. Cookies Details‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Necessary cookies only Confirm my choices
14911
https://www.khanacademy.org/math/ap-calculus-ab/ab-differential-equations-new/ab-7-6/e/separable-equations
Use of cookies Cookies are small files placed on your device that collect information when you use Khan Academy. Strictly necessary cookies are used to make our site work and are required. Other types of cookies are used to improve your experience, to analyze how Khan Academy is used, and to market our service. You can allow or disallow these other cookies by checking or unchecking the boxes below. You can learn more in our cookie policy Privacy Preference Center When you visit any website, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized web experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and change our default settings. However, blocking some types of cookies may impact your experience of the site and the services we are able to offer. More information Manage Consent Preferences Strictly Necessary Cookies Always Active Certain cookies and other technologies are essential in order to enable our Service to provide the features you have requested, such as making it possible for you to access our product and information related to your account. For example, each time you log into our Service, a Strictly Necessary Cookie authenticates that it is you logging in and allows you to use the Service without having to re-enter your password when you visit a new page or new unit during your browsing session. Functional Cookies These cookies provide you with a more tailored experience and allow you to make certain selections on our Service. For example, these cookies store information such as your preferred language and website preferences. Targeting Cookies These cookies are used on a limited basis, only on pages directed to adults (teachers, donors, or parents). We use these cookies to inform our own digital marketing and help us connect with people who are interested in our Service and our mission. We do not use cookies to serve third party ads on our Service. Performance Cookies These cookies and other technologies allow us to understand how you interact with our Service (e.g., how often you use our Service, where you are accessing the Service from and the content that you’re interacting with). Analytic cookies enable us to support and improve how our Service operates. For example, we use Google Analytics cookies to help us measure traffic and usage trends for the Service, and to understand more about the demographics of our users. We also may use web beacons to gauge the effectiveness of certain communications and the effectiveness of our marketing campaigns via HTML emails.
14912
https://ximera.osu.edu/electromagnetics/electromagnetics/electrostatics/digInElectricFieldBoundaryConditions
Electrostatic Boundary Conditions - Ximera Statistics Get Help Contact my instructor Request help using Ximera Report bug to programmers Another Math Editor Failed Saved! Saving… Reconnecting… Save Update Erase Me Profile Supervise Logout Sign In Sign In with Google Sign In with Twitter Sign In with GitHub Sign In Warning × You are about to erase your work on this activity. Are you sure you want to do this? No, keep my work. Yes, delete my work. Updated Version Available × There is an updated version of this activity. If you update to the most recent version of this activity, then your current progress on this activity will be erased. Regardless, your record of completion will remain. How would you like to proceed? Keep the old version. Delete my work and update to the new version. Mathematical Expression Editor × +–× ÷ _x_ⁿ √ⁿ√ πθ φ ρ ( )| | sincostan arcsinarccosarctan _e_ˣlnlog [?]\blue{[?]}[?] Cancel OK 1Sinusoidal signals 1.1Basic Parameters of Sinusoidal Signals Review of Sinusoidal Signals 1.2Leading and Lagging Signals Review of Sinusoidal Signals 1.3eLi the iCe Man is CIVIL Review of Sinusoidal Signals 1.4Signal Delay on Transmission Lines Review of Sinusoidal Signals 1.5Engineering Design The purpose of this section is to show one application of sinusoidal signals to engineering design. 2Complex numbers 2.1Review of Complex Numbers This section aims to introduce two different ways we represent complex numbers: Cartesian coordinates, and Polar coordinates. We often use complex numbers in Cartesian coordinates when we discuss impedance or admittance. We often use complex numbers in polar coordinates to discuss magnitude and phase of voltages, currents, transfer functions, and Bode Plots. We can also represent sinusoidal signals with complex numbers with phasors. It is critically important that we understand this chapter. 2.2Operations with Complex Numbers The purpose of this section is to review arithmetic operations with complex numbers. We use complex numbers to describe circuits. When we solve circuits to find voltages, currents, and power, we often encounter addition, subtraction, multiplication, division, and the complex conjugate of complex numbers. 2.3Euler’s Formula The purpose of this section is to relate sinusoidal signals and complex numbers using Euler’s formula. 3Phasors 3.1Review of Phasors Phasors are essential tool in circuit analysis, used in many applications. Phasors are a special case of superposition, that simplifies circuit analysis. 3.2Kirchoff’s Laws Phasors are essential tool in circuit analysis. 3.3Example of circuit analysis with phasors Phasors are essential tool in circuit analysis. 4Waves on Transmission Lines 4.1Types of Transmission Lines 4.2Wave Equation 4.3Visualization of waves on lossless transmission lines 4.4Propagation constant and loss 4.5Transmission Line Impedance 4.6Reflection Coefficient 4.7Input impedance of a transmission line 4.8Forward voltage on a transmission line 4.9Traveling and Standing Waves 4.10Example Transmission Line Problem 5Smith Chart 5.1Smith Chart 5.2Impedance and admittance circles on the Smith Chart 5.3Impedance and Admittance on Smith Chart 5.4Electrical Length 5.5Input Reflection Coefficient and Impedance on Smith Chart 6Impedance Matching 6.1Power 6.2Power Transfer on a transmission line 6.3Simple impedance matching case 6.4Mixed Impedance Matching 6.5Transmission-line impedance matching 6.6Lumped element impedance matching 7Electrostatics 7.1Electrostatic Force 7.2Electrostatic Field 7.3Electrostatic Potential 7.4Electrostatic fields from distributed charges 7.5Calculation of electric field using Gauss’s Law 7.6Electrostatic Boundary Conditions 7.7Capacitance 7.8Method of images 8Magnetostatics 8.1Charged particles in static electric and magnetic fields 8.2Force on Conductors 8.3Force on Conductors 8.4Biot-Savart’s Law 8.5Ampere’s Law 8.6Inductance 9Changing electromagnetic fields 9.1Faraday’s Law 9.2Lenz’s Law 9.3Transformers 9.4Magnetic Coupling 9.5Flying ring 9.6Falling Magnet 9.7Voltage droop electromagnetics Electromagnetics Electrostatics Electrostatic Boundary Conditions Milica Markovic Conductors in the electrostatic field Conductors conduct current well because the atoms of good conductors have many loosely bound electrons that can leave the atoms in the presence of an external electric field. In the absence of an external electric field, a piece of metal is shown in Figure 1 on the left. When there is no electric field, the electrons are close to the nucleus. On the right, the same metal piece is placed in an electric field of the battery. Under the influence of the external field, EvEv, electrons can freely move away from the atoms in the direction opposite to the direction of the external field. This type of current is called conduction current. The point form of Ohm’s law states that E\=J/σE\=J/σ , where J is the current density, E is the electric field, and σσ is the conductivity of the material. Conductors have very high conductivity, and so the electric field inside the conductors is zero. In electrostatics, we assume that the charges are not moving, so there is no conduction current. The electric field inside the conductors is zero. As we will see later in this section, the charges on the conductor in electrostatic fields can exist only on its surface, and the vector of the electric field must be perpendicular to the surface of the metal. The tangential electric field is zero. All points on a conductor in electrostatic fields have the same potential, and so the conductor is an equipotential surface. Figure 1: Conductor in Electric Field. Dielectrics in the electrostatic field As shown in Figure 2, in dielectrics, in the absence of an electric field, the electrons are close to the nucleus. The difference here is that the electrons are tightly bound to the nucleus, and they cannot escape in the presence of an electric field. When a battery establishes electric field EvEv inside the dielectric, the atoms of the dielectric stretch because the nucleus is pulled in the direction of the field, and electrons in the opposite direction and the atom can be represented by a dipole. On the other hand, the free electrons in the wire connected to the dielectric start bunching up on top of the dielectric piece, and the dipole’s positive charge is attracted to electrons. The negative dipole’s bound charge pushes electrons away from the bottom conductor. Looking from the outside, the current flows, but the electrons are not flowing through the dielectric. This type of current is called a displacement current. If the battery is removed, the free negative and positive charges are trapped on the top and bottom of the dielectric piece. Figure 2: Dielectric in Electric Field. The electrons in the metal on top of the dielectric establish an electric field across it, as shown in Figure 3. This field, in turn, produces electric dipoles in the dielectric, as explained above. The internal positive and negative charges cancel each other, and the positive bound charge from the dielectric on top and negative on the bottom produce their own field, which is in the opposite direction from the external field, as shown in Figure 4. Figure 3: Polarization of a dielectric in external electric Field. Each oval represents one atom. Figure 4: Two fields acting inside the dielectric. The external field EvEv from the free charges in the metal on top and bottom, and the polarized dielectric field EpEp. The inner part of the dielectric is removed to show clearly the fields. The total field in the dielectric is the sum of the electric fields from free charges on top and bottom metal pieces EvEv, and the electric field from the separated polarization charges of the dielectric EpEp, as shown in Equation 1. The induced field EvEv is a fraction of the external field, and we can represent it in terms of the external field as Ep\=mEvEp\=mEv, where mm is some constant. We can then express the total field as a fraction of the external field in Equation 3. Etotal\=Ev−EpEtotal\=Ev−mEvEtotal\=Ev(1−m)(1)(2)(3)(1)Etotal\=Ev−Ep(2)Etotal\=Ev−mEv(3)Etotal\=Ev(1−m) Relative dielectric permittivity of the material εε is defined as 1−m\=1εr1−m\=1εr. Therefore the total field inside the dielectric is lower than if no dielectric is present. Etotal\=Evεr(4)(4)Etotal\=Evεr Dielectric permittivity of a material is defined as the relative permittivity multiplied by the permittivity of free space ε0\=8.8510−12F/mε0\=8.8510−12F/m. Relative dielectric constant Relative dielectric constant is in general a complex number εr\=εr′+jεr′′εr\=εr′+jεr″. εr′εr′ in data sheets is called a dielectric constant, or design dielectric constant, and it varies from 1 in the air to 13 in GaAs. An outlier is a dielectric constant of distilled water, εr′\=80εr′\=80. We can sketch the complex relative dielectric constant on a complex plane. The angle between the magnitude of the dielectric constant and the x-axis is called tanδtan⁡δ , and it is used to describe the losses in the dielectric material. In datasheets for PC boards, you can see that tanδtan⁡δ is from about 0.001 for microwave substrates, such as Rogers Duroid, to 0.02 for low-frequency FR4 substrates. Boundary conditions at a dielectric-dielectric boundary In many electrical structures, more than one dielectric is used so that the electric field exists in different dielectrics. In such cases, we are interested in how will the electric field change from one dielectric to the other. Figure 5 shows the boundary between the two dielectrics with permittivities ε1ε1 and ε2ε2, and the electric fields E1E1 in material 1 and E2E2 in material 2. At the boundary between two materials, we may have surface charge density ρsρs. At the boundary between any two dielectrics, the tangential components of the electric field E1t,E2tE1t,E2t are continuous, and the normal components E1n,E2nE1n,E2n are discontinuous and equal to the surface charge density. E1t\=E2tε1E1z−ε2E2z\=ρs(5)(6)(5)E1t\=E2t(6)ε1E1z−ε2E2z\=ρs If the free surface charge density at the boundary is zero, then the normal components of the electric field at the boundary are E1t\=E2tε1E1z\=ε2E2z(7)(8)(7)E1t\=E2t(8)ε1E1z\=ε2E2z We can also write electric flux density vectors at the boundary. Since D1\=ε1E1D1\=ε1E1 and D2\=ε2E2D2\=ε2E2, the above equations can be re-written as ε2D1t\=ε1D2tD1z\=D2z(9)(10)(9)ε2D1t\=ε1D2t(10)D1z\=D2z Figure 5: Boundary Conditions for Electric Field. The four equations below show the tangential and normal electric field at the boundary of two dielectrics. Dielectric 1 is a Teflon with a relative dielectric constant of 2.2, and dielectric 2 is Silicon with a relative dielectric constant of 11.2. Which set of equations represents a possible electric field? 2.2E1t\=11.2E2t2.2E1t\=11.2E2t and E1z\=E2zE1z\=E2z E1t\=E2tE1t\=E2t and 2.2E1z\=11.2E2z2.2E1z\=11.2E2z 2.2E1t\=11.2E2t2.2E1t\=11.2E2t and E1z\=E2zE1z\=E2z E1t\=E2tE1t\=E2t and 11.2E1z\=2.2E2z11.2E1z\=2.2E2z Correct Try again Check work Boundary conditions at a conductor-dielectric boundary The electric field inside perfect conductors σ→∞σ→∞ is zero. Ohm’s law states that E\=Jσ(11)(11)E\=Jσ When σ→∞σ→∞ , from the above equation, we see that the electric field is zero. This means that at the boundary of the dielectric and metal, the tangential field in the dielectric must be zero as well, and the only field at the boundary of a metal is the normal electric field DnDn, and it is equal to the induced charge at the surface of the conductor. Dn\=ρs(12)(12)Dn\=ρs Figure 6 shows the field at the boundary of the metallic sphere. Watch this demonstration of separation of charges on a metallic sphere in the electric field of the VanDenGraaff generator. Figure 6: Metallic sphere in an external electric field. Shielding with Faraday’s Cage The electric field is zero inside the closed metallic conductor, even if the conductor is hollow, as shown in Figure 7, and no charge is induced inside a metallic shield. This is Faraday’s cage. Figure 7: Electric field inside hollow metallic conductor (Faraday’s Cage). Watch a demonstration of zero electric fields, and no charge, inside a hollow conductor by Prof Emeritus of MIT Walter Lewin. Grounding In Figure 8, we introduce a charge inside a hollow conductor, and the electric field forms inside the conductor. The charge in the metallic shell will redistribute so that the field is zero inside the metal. The charge on the surface of the conductor will be uniformly distributed, regardless of the position of the charge inside the hollow part. Figure 8: Hollow conductor with a charge inside. Figure 9 shows a grounded hollow conductor with a charge inside it. In this case, the positive charge on the outside of the conductor will attract negative charges from the ground that neutralize the positive charge inside the shell, so there will be no field outside the shell. Figure 9: Grounded hollow conductor with a charge inside. Watch a demonstration of Faraday’s Cage by Prof Emeritus of MIT Walter Lewin. He will enter the Faraday’s Cage with a tinsel, a transmitter (his wireless microphone that likely works at a frequency of a few GHz), and a receiver (a radio that works at a couple of hundred Megahertz frequency). The Faraday’s cage is likely not grounded. He cannot receive the radio signal as the outside radio waves cannot enter the Faraday’s cage, but the waves his microphone transmitter generates inside the cage still reach the receiver that is placed somewhere in the classroom since the cage is not grounded. Proof of boundary conditions We will now use Maxwell’s equations to derive the electrostatic boundary conditions. First, we will use Gauss’s law to find the normal component of the fields at the boundary between two dielectrics, as shown in Figure 10. As we can see from the figure, the flux of the electric field exists through both bases and the side of the cylinder. We can find the components of the fields in both dielectrics, one parallel to the boundary x and one perpendicular to the boundary, in the direction of y. E1⇀\=E1x⇀+E1y⇀\=E1xx⇀+E1yy⇀E2⇀\=E2x⇀+E2y⇀\=E2xx⇀+E2yy⇀(13)(14)(13)E1⇀\=E1x⇀+E1y⇀\=E1xx⇀+E1yy⇀(14)E2⇀\=E2x⇀+E2y⇀\=E2xx⇀+E2yy⇀ The tangential components of the fields produce flux through the sides, and normal components produce flux through the bases. Since we are interested in what happens at the boundary, we will let the height of the cylinder be infinitesimally small h→0h→0. Because the height of the cylinder is zero, and therefore the surface area is zero, the flux through the side surface S3 is zero. The flux through the top and bottom surfaces will only exist due to the normal components of the field. ∮SD⇀⋅dS⇀\=QinS∫S1D⇀⋅dS⇀+∫S2D⇀⋅dS⇀+∫S3D⇀⋅dS⇀\=QinS∫S1D⇀⋅dS⇀+∫S2D⇀⋅dS⇀+0\=QinS∫S1(ε1E1ny⇀+ε1E1tx⇀)⋅dSy⇀+(ε2E2ny⇀+ε2E2tx⇀)⋅dSy⇀\=QinS−ε1E1nS+ε2E2nS\=QinSε2E2n−ε1E1n\=QinSS(15)(16)(17)(18)(19)(20)(15)∮SD⇀⋅dS⇀\=QinS(16)∫S1D⇀⋅dS⇀+∫S2D⇀⋅dS⇀+∫S3D⇀⋅dS⇀\=QinS(17)∫S1D⇀⋅dS⇀+∫S2D⇀⋅dS⇀+0\=QinS(18)∫S1(ε1E1ny⇀+ε1E1tx⇀)⋅dSy⇀+(ε2E2ny⇀+ε2E2tx⇀)⋅dSy⇀\=QinS(19)−ε1E1nS+ε2E2nS\=QinS(20)ε2E2n−ε1E1n\=QinSS Figure 10: Derivation of equation for normal components of electric field on the boundary of two dielectrics. The tangential components of the field can be obtained from the equation for Faraday’s law for static fields, as shown in Figure 11. We choose a rectangular contour, as shown in the figure with length l and width w. Since again we are interested in the boundary, we will let the width of the contour go to zero. The integral along the w-pieces will then be zero. The integral along the l-pieces of contour will depend on the orientation of contour, and we will pick a counter-clockwise path. Because of the counter-clockwise path, the x-component of the E1E1 field will be negative, and we have that the x-components of the fields in two dielectrics have to be the same. ∮CE⇀⋅dl⇀\=0∫l1(E1xx⇀+E1yy⇀)⋅dyy⇀+∫l2(E2xx⇀+E2yy⇀)⋅dyy⇀\=0−E1xl+E2xl\=0E1x\=E2x(21)(22)(23)(24)(21)∮CE⇀⋅dl⇀\=0(22)∫l1(E1xx⇀+E1yy⇀)⋅dyy⇀+∫l2(E2xx⇀+E2yy⇀)⋅dyy⇀\=0(23)−E1xl+E2xl\=0(24)E1x\=E2x Figure 11: Derivation of equation for tangential components of electric field on the boundary of two dielectrics. Charge distribution around sharp edges Theshape of the conductive material impacts the chargde distribution and charge density, as shown in Figure 12. We can see that the charge distribution and electric field on round objects are uniform. The highest charge density and strongest electric fields are produced on sharp edges of condutive bodies. Figure 12: Electric field and charge distribution close to sharp edges. Demonstration of higher charge density near sharp edges by Prof. Emeritus at MIT, Walter Lewin. ← Previous Next → Courses Calculus OneCalculus TwoCalculus Three About FAQDevelopment TeamWorkshopContact Us Social Facebook Twitter Google Plus GitHub Built at The Ohio State UniversityOSU with support from NSF Grant DUE-1245433, the Shuttleworth Foundation, the Department of Mathematics, and the Affordable Learning ExchangeALX. © 2013–2025, The Ohio State University — Ximera team 100 Math Tower, 231 West 18th Avenue, Columbus OH, 43210–1174 Phone: (773) 809–5659 | Contact If you have trouble accessing this page and need to request an alternate format, contact ximera@math.osu.edu. Start typing the name of a mathematical function to automatically insert it. (For example, "sqrt" for root, "mat" for matrix, or "defi" for definite integral.) Controls Press... ...to do left/right arrows Move cursor shift+left/right arrows Select region ctrl+a Select all ctrl+x/c/v Cut/copy/paste ctrl+z/y Undo/redo ctrl+left/right Add entry to list or column to matrix shift+ctrl+left/right Add copy of current entry/column to to list/matrix ctrl+up/down Add row to matrix shift+ctrl+up/down Add copy of current row to matrix ctrl+backspace Delete current entry in list or column in matrix ctrl+shift+backspace Delete current row in matrix × Start typing the name of a mathematical function to automatically insert it. (For example, "sqrt" for root, "mat" for matrix, or "defi" for definite integral.) Symbols Type... ...to get norm ∣∣[?]∣∣||\blue{[?]}||∣∣[?]∣∣ text [?]\text{\blue{[?]}}[?] sym_name \[?]\backslash\texttt{\blue{[?]}}\[?] abs ∣[?]∣\left|\blue{[?]}\right|∣[?]∣ sqrt [?]\sqrt{\blue{[?]}}[?]​ paren ([?])\left(\blue{[?]}\right)([?]) floor ⌊[?]⌋\lfloor \blue{[?]} \rfloor⌊[?]⌋ factorial [?]!\blue{[?]}![?]! exp [?][?]{\blue{[?]}}^{\blue{[?]}}[?][?] sub [?][?]{\blue{[?]}}_{\blue{[?]}}[?][?]​ frac [?][?]\dfrac{\blue{[?]}}{\blue{[?]}}[?][?]​ int ∫[?]d[?]\displaystyle\int{\blue{[?]}}d\blue{[?]}∫[?]d[?] defi ∫[?][?][?]d[?]\displaystyle\int_{\blue{[?]}}^{\blue{[?]}}\blue{[?]}d\blue{[?]}∫[?][?]​[?]d[?] deriv dd[?][?]\displaystyle\frac{d}{d\blue{[?]}}\blue{[?]}d[?]d​[?] sum ∑[?][?][?]\displaystyle\sum_{\blue{[?]}}^{\blue{[?]}}\blue{[?]}[?]∑[?]​[?] prod ∏[?][?][?]\displaystyle\prod_{\blue{[?]}}^{\blue{[?]}}\blue{[?]}[?]∏[?]​[?] root [?][?]\sqrt[\blue{[?]}]{\blue{[?]}}[?][?]​ vec ⟨[?]⟩\left\langle \blue{[?]} \right\rangle⟨[?]⟩ mat ([?])\left(\begin{matrix} \blue{[?]} \end{matrix}\right)([?]​) ⋅\cdot⋅ infinity ∞\infty∞ arcsin arcsin⁡([?])\arcsin\left(\blue{[?]}\right)arcsin([?]) arccos arccos⁡([?])\arccos\left(\blue{[?]}\right)arccos([?]) arctan arctan⁡([?])\arctan\left(\blue{[?]}\right)arctan([?]) sin sin⁡([?])\sin\left(\blue{[?]}\right)sin([?]) cos cos⁡([?])\cos\left(\blue{[?]}\right)cos([?]) tan tan⁡([?])\tan\left(\blue{[?]}\right)tan([?]) sec sec⁡([?])\sec\left(\blue{[?]}\right)sec([?]) csc csc⁡([?])\csc\left(\blue{[?]}\right)csc([?]) cot cot⁡([?])\cot\left(\blue{[?]}\right)cot([?]) log log⁡([?])\log\left(\blue{[?]}\right)log([?]) ln ln⁡([?])\ln\left(\blue{[?]}\right)ln([?]) alpha α\alphaα beta β\betaβ gamma γ\gammaγ delta δ\deltaδ epsilon ϵ\epsilonϵ zeta ζ\zetaζ eta η\etaη theta θ\thetaθ iota ι\iotaι kappa κ\kappaκ lambda λ\lambdaλ mu μ\muμ nu ν\nuν xi ξ\xiξ omicron ο\omicronο pi π\piπ rho ρ\rhoρ sigma σ\sigmaσ tau τ\tauτ upsilon υ\upsilonυ phi ϕ\phiϕ chi χ\chiχ psi ψ\psiψ omega ω\omegaω Gamma Γ\GammaΓ Delta Δ\DeltaΔ Theta Θ\ThetaΘ Lambda Λ\LambdaΛ Xi Ξ\XiΞ Pi Π\PiΠ Sigma Σ\SigmaΣ Phi Φ\PhiΦ Psi Ψ\PsiΨ Omega Ω\OmegaΩ × Global settings: Settings ×
14913
https://commons.wikimedia.org/wiki/Category:Hexadecimal
Category:Hexadecimal - Wikimedia Commons Jump to content [x] Main menu Main menu move to sidebar hide Navigate Main page Welcome Community portal Village pump Help center Language select Participate Upload file Recent changes Latest files Random file Contact us Special pages In Wikipedia Afrikaans العربية Azərbaycanca Беларуская Беларуская (тарашкевіца) Български বাংলা Brezhoneg Bosanski Català Čeština Cymraeg Dansk Deutsch Ελληνικά English Esperanto Español Eesti Euskara فارسی Suomi Français Frysk Gaeilge Galego עברית हिन्दी Hrvatski Kreyòl ayisyen Magyar Հայերեն Արեւմտահայերէն Bahasa Indonesia Ido Íslenska Italiano 日本語 La .lojban. Jawa ქართული Қазақша 한국어 Кыргызча Latina Lëtzebuergesch Lingua Franca Nova Lombard Latviešu 文言 Македонски മലയാളം Монгол मराठी Bahasa Melayu Mirandés 閩南語 / Bân-lâm-gí Nederlands Norsk nynorsk Norsk bokmål Sesotho sa Leboa Polski Português Română Русский Srpskohrvatski / српскохрватски Simple English Slovenčina Slovenščina Shqip Српски / srpski Svenska தமிழ் ไทย Tagalog Türkçe Українська اردو Oʻzbekcha / ўзбекча Tiếng Việt 吴语 ייִדיש 粵語 中文 75 more Edit links Search Search English [x] Appearance Appearance move to sidebar hide Text Small Standard Large This page always uses small font size This page always uses small font size Width Standard Wide The content is as wide as possible for your browser window. Color (beta) Automatic Light Dark This page is always in light mode. Donate Create account Log in [x] Personal tools Donate Create account Log in Pages for logged out editors learn more Contributions Talk Category:Hexadecimal Good pictures Advanced... All images Featured pictures Featured videos Quality images Valued images In this category and in... In this category but not in... About FastCCI... Help Category Discussion [x] English Read Edit View history [x] Tools Tools move to sidebar hide Actions Read Edit View history General What links here Related changes Permanent link Page information Get shortened URL Download QR code Expand all RSS feed Nominate category for discussion Print/export Create a book Download as PDF Printable version In other projects Wikipedia Wikidata item From Wikimedia Commons, the free media repository sistema hexadecimal; Letlase la lesometshela; Sextánundakerfi; Nombor perenambelasan; Шестнадесетична бройна система; Sistem hexazecimal; اساس سولہ کا نظام; Šestnástková sústava; шістнадцяткова система числення; 十六進制; Hexadecimala nombrosistemo; 십육진법; санаудың оналтылық жүйесі; deksesuma nombrosistemo; шеснаесетеречен броен систем; heksadecimalni numerički sistem; ষোড়শিক সংখ্যা পদ্ধতি; système hexadécimal; Hèksadèsimal; Heksadekadski brojevni sustav; העקסדעצימאל; षोडशमान; Hệ thập lục phân; Heksadecimālā skaitīšanas sistēma; Heksadesimale stelsel; хексадецимални; Sistema de numeração hexadecimal; Арван зургаатын тооллын систем; sekstentalssystemet; sekstentallsystemet; Onaltılıq say sistemi; 十六進制; hexadecimal; نظام عد ستة عشري; Diazez c'hwezekred; ၁၆အခြေပြုကိန်း; 十六進制; tizenhatos számrendszer; Zenbaki-sistema hamaseitar; sistema hexadecimal; Hecsadegol; Sistema esadecimal; шаснаццаткавая сістэма злічэння; хексадецимални; 十六進制; Hexadecimale talsystem; თვლის თექვსმეტობითი სისტემა; 十六進法; Heksadecimalni sistem; Δεκαεξαδικό σύστημα αρίθμησης; шаснаццатковая сыстэма зьлічэньня; ju’u pi’e; בסיס הקסדצימלי; Systema numericum sedecimale; เลขฐานสิบหก; हेक्साडेसिमल; 十六进制; heksadesimaalijärjestelmä; Տասնվեցային համակարգ; Sistem exadesimal; hexadecimala talsystemet; பதினறும எண் முறைமை; sistema numerico esadecimale; šestnáctková soustava; On altılı sayı sistemi; Sistèm ekzadesimal; kuueteistkümnendsüsteem; hexadecimaal; 十六進制; Oʻn oltilik sanoq sistemasi; Heksadesimaal systeem; Hexadezimalsystem; шестнадцатеричная система счисления; sistema de numeração hexadecimal; Sistemi i numrave heksadecimal; Sistema de numeraçon heixadecimal; códú heicsidheachúlach; Հաշվարկման տասնվեցական համակարգ; šestnajstiški številski sistem; Hexadecimal; Cha̍p-la̍k-chìn-hoat; уналтарлы санау систимы; Heksadesimal; szesnastkowy system liczbowy; ഷോഡശസംഖ്യാസമ്പ്രദായം; 十六進制; heksadesimála vuogádat; 十六进制; 十六进制; دستگاه اعداد پایه ۱۶; Código hexadecimal; он алтылык эсептөө тутуму; sistema ezadezemałe; Hexadezimalsystem; sistema numerico in base 16; tata wilangan adhedhasar enem welas angka; позиционная система записи чисел, используя 16 цифр; Zahlensystem mit der Basis 16; sistema de numeração posicional em base 16; бројевни систем са основом 16; 以16為基底之進位制; бројевни систем са основом 16; pozicijski številski sistem z osnovo 16; 16を底とし、底およびその冪を基準にして数を表す方法; ကိန်း၁၆ပေါ် အခြေပြု တည်ဆောက်ထားသော ကိန်းစနစ်; เลขสัญลักษณ์ที่มีเลขฐาน 16 ตัว; Stellewäertsystem mat der Basis 16; pozycyjny system liczbowy o podstawie 16; שיטת ספירה על בסיס 16; 以 16 為基底之進位制; sistem bilangan basis-16; आधार 16 संख्यात्मक प्रणाली; talsystem med basen 16; kantalukujärjestelmä, jonka kantaluku on 16; numeral system with 16 as its base; sistema de numeración que tiene dieciséis cifras como base (numerales y de la A a la F); číselná soustava se základem 16; sistema numerich posizzional a bas 16; sistema de numeración hexadecimal; sistema numérico hexadecimal; número en base dieciséis; hexadecimal; base dieciséis; base 16; numeración hexadecimal; código hexadecimal; эсептөөнүн он алтылык тутуму; Sextandakerfi; Sextánundarkerfi; Sextándakerfi; Sextándakerfið; Hamaseitar; 0x; Hexadecimal; шестнадцатиричная система счисления; шестнадцатеричная система; 0x; Base16; Hexadezimal; Sedezimales Zahlensystem; Sedezimal; Sedezimalsystem; Sechzehnersystem; Шаснаццатковая сістэма злічэння; хексадецимални систем; 十六进制; 16進位; 十六進位; 16進制; 十六進; 16基底; Hex; Sekstentalssystemet; Hexadecimal; Hex; Heksadesimal; Hexadecimal; On altılık sayı sistemi; Hexadecimal; 16進法; 十六進記数法; 十六進数; 16進数; ヘキサデシマル; 16進; hexadecimalt; hexadecimalsystemet; hexadecimal; sedecimal; sedecimala talsystemet; hexkod; hexadecimala tal; Sekstentalsystemet; Heksadesimaltal; Шістнадцятерична система числення; HEX; 16шрл.; уналтышарлы система; һәкс; Hexadecimal; 0x; هگزادسیمال; مبنای شانزده; دستگاه پایه 16; دستگاه پایه ۱۶; دستگاه اعداد شانزده‌شانزدهی; دستگاه اعداد شانزده شانزدهی; شانزده‌شانزدهی; دستگاه اعداد پایه 16; षोडशाधारी संख्या पद्धति; Шаснаццацітковая сыстэма зьлічэньня; Шаснаццатковая сыстэма лічэньня; heksadesimaaliluku; heksadesimaali; heksakoodi; санаудың он алтылық жүйесі; Deksesuma; Deksesuma sistemo; 0x; hexadecimální; hexadecimální číslo; hexadecimální soustava; heksadecimalni sistem; heksadecimalni brojni sistem; heksadecimalni brojevni sistem; Esadecimale; Hexadecimale; Base 16; Numerazione esadecimale; Sistema hexadecimal; Hexadecimal; 0x; base 16; hexadécimal; hexadecimal; systeme hexadecimal; hexadécimale; héxadécimal; numération hexadécimale; Hèksadésimal; Heksadesimal; Hexadesimal; heksadetsimaalsüsteem; seksadetsimaalsüsteem; Heksadekadski broj; Heksadecimalni broj; Heksadecimalni sustav; Heksadekadski; Heksadecimalni brojevni sustav; Heksadekadski sustav; בסיס 16; 16진수; 16진 기수법; 16진법; 십육진수; העקס; Per-16-an; Perenambelasan; hexadecimale; Sistema hexadecimal; Hexadecimal; ഹെക്സാഡെസിമൽ സംഖ്യാവ്യവസ്ഥ; Hexadecimal; ഹെക്സാഡെസിമൽ; Heksadecimāli; Sešpadsmitnieku skaitīšanas sistēma; Heksadesimale; Хексадекадни систем бројева; Хексадекадни бројчани систем; Хексадецимални систем бројева; Хексадецимални бројни систем; Хексадекадни бројеви; Хексадецимални бројевни систем; Хексадецимални бројчани систем; Хексадекадни бројевни систем; Хексадекадни систем бројања; Хексадецимални систем бројања; Хексадекадни систем; Хексадецимални код; šestnajstiški sestav; heksadecimalni številski sistem; šestnajstiški številski sestav; osnova 16; Hexadecimal; Hexadesimal; Hệ đếm thập lục phân; Hệ đếm cơ số 16; Sedezimalsystem; System heksadecymalny; Base16; System szesnastkowy; Kod szesnastkowy; Układ szesnastkowy; Heksadecymalność; Heksadesimalt; Heksadesimaltall; Heksadesimal; Hexadesimal; Hexadesimal regning; Heksadekadni sustav; Heksadecimalni sustav; Heksadekadni sistem; เลขฐาน 16; Hex; ตัวเลขฐาน16; hex; Representació hexadecimal; Base16; Hexadecimal; Hexazecimal; Hexadecimal; хексадецимален броен систем; hex; base 16; Hex; Sexadecimal; Hex format; hexadecimal system; نظام عد سداسي عشر; نظام عد سته عشري; نظام سداسي عشر; Δεκαεξαδικό σύστημα; Systema numericum sextidecimalis; Systema numericum sextidecimale hexadecimalCollapse numeral system with 16 as its base Upload media Wikipedia Instance of positional numeral system Has use computation Follows pentadecimal Followed by heptadecimal | Collapse Authority file | | Q82828 | | Reasonator Scholia Wikidocumentaries PetScan statistics WikiMap Locator tool KML file Search depicted | This is a main category requiring frequent diffusion and maybe maintenance. As many pictures and media files as possible should be moved into appropriate subcategories. Subcategories This category has the following 6 subcategories, out of 6 total. H Hexadecimal time(6 F) Hexenary(8 F) S Sets of 16 symbols for binary Boolean functions(3 C) Seven segment displays showing hexadecimal characters or values (0-9, A-F)(7 F) Sixteen segment displays(18 F) T Tecel signs(16 F) Category Slideshow Media in category "Hexadecimal" The following 48 files are in this category, out of 48 total. A hexidecimal multiplication table.svg 513 × 513; 9 KB Base-16 digits.svg 945 × 495; 32 KB Brain virus.jpg 1,280 × 1,263; 225 KB Brain-virus.jpg 1,238 × 646; 439 KB Bruce Martin hexadecimal notation proposal.png 787 × 545; 7 KB Bruce Martin hexadecimal notation proposal.svg 787 × 545; 22 KB Carte hexadecimale.png 598 × 609; 54 KB Convert-Dec Hex.jpg 841 × 401; 125 KB Convert-Deci Hexa.jpg 841 × 401; 134 KB Division-A-Complete-Run-Through-Of-The-Shifted-Subtraction-Algorithm.png 940 × 2,904; 136 KB EPROM Data.jpg 3,648 × 2,736; 4.38 MB Hexadecimal compass by Nystrom.jpg 561 × 689; 118 KB Hexadecimal digit.png 842 × 401; 22 KB Hexadecimal digits proposed by Valdis Vitolins.png 657 × 105; 7 KB Hexadecimal digits.png 634 × 139; 20 KB Hexadecimal multiplication table.PNG 290 × 290; 12 KB Hexadecimal multiplication table.svg 720 × 720; 239 KB Hexadecimal studiVZ-URL´s.jpg 548 × 96; 40 KB Hexadecimal-counting.jpg 412 × 331; 55 KB Hexadecimal-multiplication-table.svg 945 × 791; 306 KB HexadecimalElectric145.JPG 3,968 × 2,976; 2.43 MB Hexary 9 Segment DIsplay.png 1,024 × 512; 58 KB Hexary Symbols.png 724 × 82; 6 KB IBM Hexa.jpg 3,486 × 1,214; 476 KB Joining hexadecimal digits in one character ligature.png 130 × 98; 2 KB MbledhjaHEX.PNG 75 × 64; 1 KB MbledhjaHEX1.PNG 527 × 202; 6 KB NTT-Multiplication-A-Complete-Run-Through-Of-The-Iterative-Algorithm-High-Resolution.png 3,760 × 14,584; 1.19 MB NTT-Multiplication-A-Complete-Run-Through-Of-The-Iterative-Algorithm.png 940 × 3,646; 259 KB Nystrom tonal system.jpg 612 × 748; 104 KB Positionalnotationtable.jpg 949 × 541; 100 KB Sad mac.png 297 × 234; 261 bytes Speicher642.PNG 838 × 344; 12 KB Table de correspondance entre le Bibinaire et les autres notations.svg 890 × 141; 252 KB Table of 8-bit hexadecimal digits.png 559 × 852; 79 KB TKAT Hexadecimal Characters.png 640 × 696; 8 KB VirtualBox HaikuOS R1Beta4 2023 17 11 2023 20 05 37.png 1,024 × 768; 157 KB Wiki ASCII UNI TableAll Pag1.jpg 2,482 × 3,507; 880 KB Wiki ASCII UNI TableAll Pag2.jpg 2,482 × 3,507; 1.06 MB Wiki ASCII UNI TableAll Pag3.jpg 2,482 × 3,507; 599 KB Wiki ASCII UNI TableHex Pag1.jpg 2,482 × 3,507; 1.03 MB Wiki ASCII UNI TableHex Pag2.jpg 2,482 × 3,507; 1.06 MB Wikipedia favicon hexdump.svg 290 × 250; 2 KB Xevi figures.png 635 × 173; 15 KB ZbritjaHEX.PNG 71 × 60; 845 bytes ZbritjaHEX2.PNG 586 × 192; 8 KB Шестнадцатеричная система счисления из фильма Марсианин.svg 1,250 × 600; 78 KB あ 教科書体 png hexdump.svg 339 × 221; 5 KB Retrieved from " Categories: Positional numeral systems 16 (number) Non-topical/index: Uses of Wikidata Infobox Categories requiring permanent diffusion This page was last edited on 21 March 2025, at 07:00. Files are available under licenses specified on their description page. All structured data from the file namespace is available under the Creative Commons CC0 License; all unstructured text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you agree to the Terms of Use and the Privacy Policy. Privacy policy About Wikimedia Commons Disclaimers Code of Conduct Developers Statistics Cookie statement Mobile view Search Search Category:Hexadecimal Add topic Cancel Edit Delete Preview revert Text of the note (may include Wiki markup) Could not save your note (edit conflict or other problem). Please copy the text in the edit box below and insert it manually by editing this page. Upon submitting the note will be published multi-licensed under the terms of the CC BY-SA 4.0 license and of the GFDL (unversioned, with no invariant sections, front-cover texts, or back-cover texts). See our terms of use for more details. Add a note Draw a rectangle onto the image above (press the left mouse button, then drag and release).This file has annotations. Move the mouse pointer over the image to see them.To edit the notes, visit page X.Why do you want to remove this note? SaveHelp about image annotations To modify annotations, your browser needs to have the XMLHttpRequest object. Your browser does not have this object or does not allow it to be used (in Internet Explorer, it may be in a switched off ActiveX component), and thus you cannot modify annotations. We're sorry for the inconvenience. $1$1$1
14914
https://education.ti.com/-/media/C2FC6ABE3F014D819C7D583DB7FB3E51
Birthday Problem TEACHER NOTES MATH NSPIRED ©2011 Texas Instruments Incorporated 1education.ti.com Math Objectives  Students will be able to describe how sampling a population with N objects works.  Students will be able to use the complement of an event to determine probability of an event not occurring.  Students will use appropriate tools strategically (CCSS Mathematical Practice). Vocabulary  event  series  probability About the Lesson  This lesson involves investigating the probability of two people having the same birthday in a crowd of a given size. Topics covered include basic probability theory, sam pling distributions and infinite series approximations.  As a result, students will:  Be able to use probability to determine the likelihood of two people having the same birthday in a crowd of a given size.  Be able to justify the probability of two people h aving the same birthday. TI -Nspire™ Navigator ™ System  Screen Capture and Live Presenter can be used to monitor student progress and allow students to share their answers.  The student TI -Nspire document file could be sent with TI - Navigator to effectively b egin the lesson. TI -Nspire™ Technology Skills:  Download a TI -Nspire document  Open a document  Move between pages  Use a slider  Enter data in a spreadsheet Tech Tips:  Make sure the font size on your TI -Nspire handheld is set to Medium.  On Graphs p age, you can retrieve the entry line by pressing / G. Lesson Materials: Student Activity Birthday_Problem.pdf Birthday _Problem.doc TI -Nspire document Birthday_Problem.tns Visit www.mathnspired.com for le sson upda tes and tech tip videos. Birthday Problem TEACHER NOTES MATH NSPIRED ©2011 Texas Instruments Incorporated 2education.ti.com Discussion Points and Possible Answers TI -Nspire Navigator Opportunity: Transfer See Note 1 at the end of this lesson. Move to page 1.2. The Birthday Problem refers to the probability that in a set of randomly chose n people , some pair of them will have the same birthday. a. What if someone offered to bet you that any two people in your math class had the same birthday? Would you take the bet? Answer: Students should immediately jump to the idea that it depends on how many students are in the class. They should recognize that as the number of students increases, it becomes more likely to have a match. b. If there were only one other person in your math class, would you be surprised to find out that they had the s ame birthday as you? Explain. Answer: Students will probably say they would be surprised, as only about 1 in every 365 people will share their birthday. Suppose you are in a classroom of 25 students . How likely do you think it is that two of the stud ents in this class have the same birthday? It probably seems unlikely, since there are 365 days in the year and only 25 students. Write down your guess (as a percentage) of the likelihood that there will be two people that have the same birthday. Answer: Answers may vary. The discussion within the classroom should focus on how the probability increases as the number of students in the class increases. Many students may immediately under stand that the probability is quite high , while others will need to co mplete this activity to gain a full understanding. TI -Nspire Navigator Opportunity: Quick Poll See Note 2 at the end of this lesson. Birthday Problem TEACHER NOTES MATH NSPIRED ©2011 Texas Instruments Incorporated 3education.ti.com Make a conjecture about the probability of having at least one birthday match in a class of 25 students. Answer: Ans wers will vary . The actual probability is greater than .5 and should be somewhat surprising for students. Question 4 further explores this issue. To solve the birthday problem, we need to use one of the basic rules of probability: the sum of the proba bility that an event will happen and the probability that the event won't happen is always 1. (In other words, the chance that anything might or might not happen is always 100%.) If we can work out the probability that no two people will have the same birt hday, we can use this rule to find the probability that at least two people will share a birthday. Try this process: P(event happens) + P(event doesn't happen) = 1, so, P(two people share birthday) + P(no two people share birthday) = 1 , and P(two people sh are birthday) = 1 – P(no two people share birthday). a. Assuming 365 days in a year, w hat is the probability that two people will not share a birthday? Answer: The answer here should help students develop a better understanding of the problem. Understandi ng that the first person can have any birthday, the second person's birthday has to be different. Assuming 365 days in a year, all 365 are open for the first person and 364 are open for the second. Divide the “open” days by the total possible days to find the probability of a unique birthday for each student. Then, multiply t o find the probability for both: 365 364 364 364 1365 365 365 365     The probability of a unique birthday for two students is 364 365 . b. What is the probability that three or four people will all have different birthdays? Answer: Building on 4a, there are 363 birthdays out of 365 open for the third person. To find the probability that all three students have unique birthdays, we have to multiply: 365 364 363 132,132 1 .9918 365 365 365 133,225      . If we want to know the probability that four students will all have unique birthdays, we multiply again: 365 364 363 362 47,831,784 1 .9836 365 365 365 365 48,627,125       .Birthday Problem TEACHER NOTES MATH NSPIRED ©2011 Texas Instruments Incorporated 4education.ti.com c. Using this same process, what is the probability of NO birthday matches in a class of 25 students? Answer: Using t he same process from part b, the students should end up with the idea that the formula looks something like   365! 365 ! 365 n n  . Naturally, students are not expected to fully develop the formula at this point, but accept any correct discrete variati on of the formula that leads to an answer that the probability is close to .43 . This should be an opportunity to discuss the complement of the event and discuss that since the probability of no two students having a matchin g birthday in a class of 25 is .4 3, then the probability of at least two students sharing a birthday is therefore .57. TI -Nspire Navigator Opportunity: Screen Capture or Live Presenter See Note 3 at the end of this lesson. Move to page 1.3. Page 1.3 contains a simulation of numb er of trials and a frequency distribution of the students that have a matching birthday. Set the number of students at the number of students in your class by pressing the slider up or down. The number of trials represents the number of independent classro oms of that size that you surveyed looking for at least one birthday match. You are not actually asking birthdays, just simulating. For example, if there are 25 students in the class, the number of students should be set at 25. You may conduct as many tria ls as you like. After pressing the arrow for a simulation, record on paper whether there was a match or no match. Press / ^ to reset the number of trials. a. Record the number of trials with a birthday match. Answer: Answers for each student will vary , but will probably be close to half of the total number of trials. Birthday Problem TEACHER NOTES MATH NSPIRED ©2011 Texas Instruments Incorporated 5education.ti.com b. Using your results from part a, calculate the probability of two students in your classroom having the same birthday. Answer: Answers will vary based on the information gathered in part a. The probability of a match can be computed by hand and should mirror that of the formula developed in problem For example, if a student sets the class size at 20, then conducts 20 trials and has 9 matches, the probability (from this simulation) of a match is 920 , or .45. Teacher Tip: Page 1.4 contains a spreadsheet using the formula for calculating the probability of having two birthdays match for a given number of students. Students should experiment with different num bers of students to ascertain the probability of a match. Move to page 1.4. Page 1.4 uses the formula you found in question 4 to calculate the probability that two students in your class will have the same birthday given the number of students in the class. Type in the number of students to see the probability of two students having a match. a. How many students need to be in a class to find a probability of a match of more than .50? Answer: 23 students. Students should experiment with varying numbers of students to determine the answer . b. How many students need to be in a class to find a probability of a match of more than .99 ? Answer: 57 students . Students should experiment with varying numbers of students to determine the answer . c. Is there anything surprising about the probability of two students having the same birthday from the simulation you conducted? Answer: Students will probably find it surprising that the number of students needed for a match is so relatively low, just 23 stud ents for a probability > 50% and 57 students for a probability > 99%. Birthday Problem TEACHER NOTES MATH NSPIRED ©2011 Texas Instruments Incorporated 6education.ti.com Teacher Tip: As an extension, you could ask the students to discuss the shape of the graph and conjecture to the type of regression model needed to model the data. The growth is logisti c. Is it possible to have a probability of a match be 100% or 1? Explain your reasoning. Answer: Yes. If there are 366 or more students, at least two must share a birthday because there are only 365 possible birthdays. Wrap Up Students should be able to discuss the idea of the probability of having a match of two birthdays for a given number of students. They should also have an understanding of the concept of how a series is needed to calculate the complement of the probability of getting a match. TI-Nspire Navigator Note 1 Beginning of lesson, File Transfer : If available, you can send the file to students using Navigator. If not you may use Teacher Software to distribute the TI-Nspire document file. Note 2 Questions 2 and throughout lesson, Quick Poll (Open Response): Send a Quick Poll to have students answer any of the questions, starting with question 2. As students are answering, discuss how the answers might be different and why. Note 3 Page 1.3, Screen Capture or Live Presenter : Starting with page 1.3, use Screen Capture or Live Presenter to make sure students understand how to use the file. Screen Captures of page 1.4 can also help students visually see the possible graph.
14915
https://kskedlaya.org/putnam-archive/2023s.pdf
Solutions to the 84th William Lowell Putnam Mathematical Competition Saturday, December 2, 2023 Manjul Bhargava, Kiran Kedlaya, and Lenny Ng A1 If we use the product rule to calculate f ′′ n (x), the result is a sum of terms of two types: terms where two distinct factors cos(m1x) and cos(m2x) have each been differen-tiated once, and terms where a single factor cos(mx) has been differentiated twice. When we evaluate at x = 0, all terms of the first type vanish since sin(0) = 0, while the term of the second type involving (cos(mx))′′ be-comes −m2. Thus |f ′′ n (0)| = − n ∑ m=1 m2 = n(n+1)(2n+1) 6 . The function g(n) = n(n+1)(2n+1) 6 is increasing for n ∈N and satisfies g(17) = 1785 and g(18) = 2109. It follows that the answer is n = 18. A2 The only other real numbers with this property are ±1/n!. (Note that these are indeed other values than ±1,...,±n because n > 1.) Define the polynomial q(x) = x2n+2 −x2np(1/x) = x2n+2 −(a0x2n + ··· + a2n−1x + 1). The statement that p(1/x) = x2 is equivalent (for x ̸= 0) to the state-ment that x is a root of q(x). Thus we know that ±1,±2,...,±n are roots of q(x), and we can write q(x) = (x2 +ax+b)(x2 −1)(x2 −4)···(x2 −n2) for some monic quadratic polynomial x2 + ax + b. Equating the coefficients of x2n+1 and x0 on both sides gives 0 = a and −1 = (−1)n(n!)2b, respectively. Since n is even, we have x2 + ax + b = x2 −(n!)−2. We con-clude that there are precisely two other real numbers x such that p(1/x) = x2, and they are ±1/n!. A3 The answer is r = π 2 , which manifestly is achieved by setting f(x) = cosx and g(x) = sinx. First solution. Suppose by way of contradiction that there exist some f,g satisfying the stated conditions for some 0 < r < π 2 . We first note that we can assume that f(x) ̸= 0 for x ∈[0,r). Indeed, by continuity, {x|x ≥ 0 and f(x) = 0} is a closed subset of [0,∞) and thus has a minimum element r′ with 0 < r′ ≤r. After replacing r by r′, we now have f(x) ̸= 0 for x ∈[0,r). Next we note that f(r) = 0 implies g(r) ̸= 0. In-deed, define the function k : R →R by k(x) = f(x)2 + g(x)2. Then |k′(x)| = 2|f(x)f ′(x) + g(x)g′(x))| ≤ 4|f(x)g(x)| ≤2k(x), where the last inequality fol-lows from the AM-GM inequality. It follows that d dx(logk(x)) ≤2 for x ∈[0,r); since k(x) is continu-ous at x = r, we conclude that k(r) ̸= 0. Now define the function h: [0,r) →(−π/2,π/2) by h(x) = tan−1(g(x)/ f(x)). We compute that h′(x) = f(x)g′(x)−g(x)f ′(x) f(x)2 +g(x)2 and thus |h′(x)| ≤|f(x)||g′(x)|+|g(x)||f ′(x)| f(x)2 +g(x)2 ≤|f(x)|2 +|g(x)|2 f(x)2 +g(x)2 = 1. Since h(0) = 0, we have |h(x)| ≤x < r for all x ∈[0,r). Since r < π/2 and tan−1 is increasing on (−r,r), we conclude that |g(x)/ f(x)| is uniformly bounded above by tanr for all x ∈[0,r). But this contradicts the fact that f(r) = 0 and g(r) ̸= 0, since limx→r−g(x)/ f(x) = ∞. This contradiction shows that r < π/2 cannot be achieved. Second solution. (by Victor Lie) As in the first solu-tion, we may assume f(x) > 0 for x ∈[0,r). Combining our hypothesis with the fundamental theorem of calcu-lus, for x > 0 we obtain |f ′(x)| ≤|g(x)| ≤ Z x 0 g′(t)dt ≤ Z x 0 |g′(t)|dt ≤ Z x 0 |f(t)|dt. Define F(x) = R x 0 f(t)dt; we then have f ′(x)+F(x) ≥0 (x ∈[0,r]). Now suppose by way of contradiction that r < π 2 . Then cosx > 0 for x ∈[0,r], so f ′(x)cosx+F(x)cosx ≥0 (x ∈[0,r]). The left-hand side is the derivative of f(x)cosx + F(x)sinx. Integrating from x = y to x = r, we obtain F(r)sinr ≥f(y)cosy+F(y)siny (y ∈[0,r]). We may rearrange to obtain F(r)sinrsec2 y ≥f(y)secy+F(y)sinysec2 y (y ∈[0,r]). The two sides are the derivatives of F(r)sinrtany and F(y)secy, respectively. Integrating from y = 0 to y = r and multiplying by cos2 r, we obtain F(r)sin2 r ≥F(r) which is impossible because F(r) > 0 and 0 < sinr < 1. 2 A4 The assumption that all vertices of the icosahedron cor-respond to vectors of the same length forces the center of the icosahedron to lie at the origin, since the icosa-hedron is inscribed in a unique sphere. Since scaling the icosahedron does not change whether or not the stated conclusion is true, we may choose coordinates so that the vertices are the cyclic permutations of the vec-tors (± 1 2,± 1 2φ,0) where φ = 1+ √ 5 2 is the golden ratio. The subgroup of R3 generated by these vectors contains G × G × G where G is the subgroup of R generated by 1 and φ. Since φ is irrational, it generates a dense sub-group of R/Z; hence G is dense in R, and so G×G×G is dense in R3, proving the claim. A5 The complex numbers z with this property are −31010 −1 2 and −31010 −1 2 ± √ 91010 −1 4 i. We begin by noting that for n ≥1, we have the follow-ing equality of polynomials in a parameter x: 3n−1 ∑ k=0 (−2)f(k)xk = n−1 ∏ j=0 (x2·3 j −2x3 j +1). This is readily shown by induction on n, using the fact that for 0 ≤k ≤3n−1 −1, f(3n−1 + k) = f(k) + 1 and f(2·3n−1 +k) = f(k). Now define a “shift” operator S on polynomials in z by S(p(z)) = p(z+1); then we can define Sm for all m ∈Z by Sm(p(z)), and in particular S0 = I is the identity map. Write pn(z) := 3n−1 ∑ k=0 (−2)f(k)(z+k)2n+3 for n ≥1; it follows that pn(z) = n−1 ∏ j=0 (S2·3 j −2S3j +I)z2n+3 = S(3n−1)/2 n−1 ∏ j=0 (S3 j −2I +S−3 j)z2n+3. Next observe that for any ℓ, the operator Sℓ−2I + S−ℓ acts on polynomials in z in a way that decreases degree by 2. More precisely, for m ≥0, we have (Sℓ−2I +S−ℓ)zm = (z+ℓ)m −2zm +(z−ℓ)m = 2 m 2  ℓ2zm−2 +2 m 4  ℓ4zm−4 +O(zm−6). We use this general calculation to establish the follow-ing: for any 1 ≤i ≤n, there is a nonzero constant Ci (depending on n and i but not z) such that i ∏ j=1 (S3n−j −2I +S−3n−j)z2n+3 = Ci  z2n+3−2i + (2n+3−2i)(n+1−i) 6 (∑i j=1 9n−j)z2n+1−2i +O(z2n−1−2i). (1) Proving (1) is a straightforward induction on i: the induction step applies S3n−i−1 −2I + S−3n−i−1 to the right hand side of (1), using the general formula for (Sℓ−2I +S−ℓ)zm. Now setting i = n in (1), we find that for some Cn, n−1 ∏ j=0 (S3 j −2I +S−3 j)z2n+3 = Cn  z3 + 9n −1 16 z  . The roots of this polynomial are 0 and ± √9n−1 4 i, and it follows that the roots of pn(z) are these three numbers minus 3n−1 2 . In particular, when n = 1010, we find that the roots of p1010(z) are as indicated above. A6 (Communicated by Kai Wang) For all n, Bob has a win-ning strategy. Note that we can interpret the game play as building a permutation of {1,...,n}, and the number of times an integer k is chosen on the k-th turn is exactly the number of fixed points of this permutation. For n even, Bob selects the goal “even”. Divide {1,...,n} into the pairs {1,2},{3,4},...; each time Al-ice chooses an integer, Bob follows suit with the other integer in the same pair. For each pair {2k −1,2k}, we see that 2k−1 is a fixed point if and only if 2k is, so the number of fixed points is even. For n odd, Bob selects the goal “odd”. On the first turn, if Alice chooses 1 or 2, then Bob chooses the other one to transpose into the strategy for n −2 (with no moves made). We may thus assume hereafter that Alice’s first move is some k > 2, which Bob counters with 2; at this point there is exactly one fixed point. Thereafter, as long as Alice chooses j on the j-th turn (for j ≥3 odd), either j +1 < k, in which case Bob can choose j +1 to keep the number of fixed points odd; or j+1 = k, in which case k is even and Bob can choose 1 to transpose into the strategy for n −k (with no moves made). Otherwise, at some odd turn j, Alice does not choose j. At this point, the number of fixed points is odd, and on each subsequent turn Bob can ensure that neither his own move nor Alice’s next move does not create a fixed point: on any turn j for Bob, if j +1 is available Bob chooses it; otherwise, Bob has at least two choices available, so he can choose a value other than j. B1 The number of such configurations is m+n−2 m−1  . Initially the unoccupied squares form a path from (1,n) to (m,1) consisting of m−1 horizontal steps and n−1 vertical steps, and every move preserves this property. This yields an injective map from the set of reachable configurations to the set of paths of this form. Since the number of such paths is evidently m+n−2 m−1  (as one can arrange the horizontal and vertical steps in any order), it will suffice to show that the map we just wrote down is also surjective; that is, that one can reach any path of this form by a sequence of moves. 3 This is easiest to see by working backwards. Ending at a given path, if this path is not the initial path, then it contains at least one sequence of squares of the form (i, j) →(i, j−1) →(i+1, j−1). In this case the square (i+1, j) must be occupied, so we can undo a move by replacing this sequence with (i, j) →(i + 1, j) →(i + 1, j −1). B2 The minimum is 3. First solution. We record the factorization 2023 = 7·172. We first rule out k(n) = 1 and k(n) = 2. If k(n) = 1, then 2023n = 2a for some a, which clearly cannot happen. If k(n) = 2, then 2023n = 2a + 2b = 2b(1 + 2a−b) for some a > b. Then 1+2a−b ≡0 (mod 7); but −1 is not a power of 2 mod 7 since every power of 2 is congruent to either 1, 2, or 4 (mod 7). We now show that there is an n such that k(n) = 3. It suffices to find a > b > 0 such that 2023 divides 2a + 2b + 1. First note that 22 + 21 + 1 = 7 and 23 ≡1 (mod 7); thus if a ≡2 (mod 3) and b ≡1 (mod 3) then 7 divides 2a +2b +1. Next, 28 +25 +1 = 172 and 216·17 ≡1 (mod 172) by Euler’s Theorem; thus if a ≡8 (mod 16·17) and b ≡5 (mod 16·17) then 172 divides 2a +2b +1. We have reduced the problem to finding a,b such that a ≡2 (mod 3), a ≡8 (mod 16 · 17), b ≡1 (mod 3), b ≡5 (mod 16 · 17). But by the Chinese Remainder Theorem, integers a and b solving these equations exist and are unique mod 3·16·17. Thus we can find a,b sat-isfying these congruences; by adding appropriate mul-tiples of 3·16·17, we can also ensure that a > b > 1. Second solution. We rule out k(n) ≤2 as in the first solution. To force k(n) = 3, we first note that 24 ≡−1 (mod 17) and deduce that 268 ≡−1 (mod 172). (By writing 268 = ((24 +1)−1)17 and expanding the bino-mial, we obtain −1 plus some terms each of which is divisible by 17.) Since (28 −1)2 is divisible by 172, 0 ≡216 −2·28 +1 ≡216 +2·268 ·28 +1 = 277 +216 +1 (mod 172). On the other hand, since 23 ≡−1 (mod 7), 277 +216 +1 ≡22 +21 +1 ≡0 (mod 7). Hence n = (277 + 216 + 1)/2023 is an integer with k(n) = 3. Remark. A short computer calculation shows that the value of n with k(n) = 3 found in the second solution is the smallest possible. For example, in SageMath, this reduces to a single command: assert all((2^a+2^b+1) % 2023 != 0 for a in range(1,77) for b in range(1,a)) B3 The expected value is 2n+2 3 . Divide the sequence X1,...,Xn into alternating increas-ing and decreasing segments, with N segments in all. Note that removing one term cannot increase N: if the removed term is interior to some segment then the number remains unchanged, whereas if it separates two segments then one of those decreases in length by 1 (and possibly disappears). From this it follows that a(X1,...,Xn) = N + 1: in one direction, the endpoints of the segments form a zigzag of length N + 1; in the other, for any zigzag Xi1,...,Xim, we can view it as a sequence obtained from X1,...,Xn by removing terms, so its number of segments (which is manifestly m −1) cannot exceed N. For n ≥3, a(X1,...,Xn)−a(X2,...,Xn) is 0 if X1,X2,X3 form a monotone sequence and 1 otherwise. Since the six possible orderings of X1,X2,X3 are equally likely, E(a(X1,...,Xn)−a(X1,...,Xn−1)) = 2 3. Moreover, we always have a(X1,X2) = 2 because any sequence of two distinct elements is a zigzag. By lin-earity of expectation plus induction on n, we obtain E(a(X1,...,Xn)) = 2n+2 3 as claimed. B4 The minimum value of T is 29. Write tn+1 = t0 +T and define sk = tk −tk−1 for 1 ≤k ≤ n + 1. On [tk−1,tk], we have f ′(t) = k(t −tk−1) and so f(tk)−f(tk−1) = k 2s2 k. Thus if we define g(s1,...,sn+1) = n+1 ∑ k=1 ks2 k, then we want to minimize ∑n+1 k=1 sk = T (for all pos-sible values of n) subject to the constraints that g(s1,...,sn+1) = 4045 and sk ≥1 for k ≤n. We first note that a minimum value for T is in-deed achieved. To see this, note that the constraints g(s1,...,sn+1) = 4045 and sk ≥1 place an upper bound on n. For fixed n, the constraint g(s1,...,sn+1) = 4045 places an upper bound on each sk, whence the set of (s1,...,sn+1) on which we want to minimize ∑sk is a compact subset of Rn+1. Now say that T0 is the minimum value of ∑n+1 k=1 sk (over all n and s1,...,sn+1), achieved by (s1,...,sn+1) = (s0 1,...,s0 n+1). Observe that there cannot be another (s1,...,sn′+1) with the same sum, ∑n′+1 k=1 sk = T0, satis-fying g(s1,...,sn′+1) > 4045; otherwise, the function f for (s1,...,sn′+1) would satisfy f(t0 + T0) > 4045 and there would be some T < T0 such that f(t0 +T) = 4045 by the intermediate value theorem. We claim that s0 n+1 ≥1 and s0 k = 1 for 1 ≤k ≤n. If s0 n+1 < 1 then g(s0 1,...,s0 n−1,s0 n +s0 n+1)−g(s0 1,...,s0 n−1,s0 n,s0 n+1) = s0 n+1(2ns0 n −s0 n+1) > 0, 4 contradicting our observation from the previous para-graph. Thus s0 n+1 ≥1. If s0 k > 1 for some 1 ≤k ≤n then replacing (s0 k,s0 n+1) by (1,s0 n+1 +s0 k −1) increases g: g(s0 1,...,1,...,s0 n+1 +s0 k −1)−g(s0 1,...,s0 k,...,s0 n+1) = (s0 k −1)((n+1−k)(s0 k +1)+2(n+1)(s0 n+1 −1)) > 0, again contradicting the observation. This establishes the claim. Given that s0 k = 1 for 1 ≤k ≤n, we have T = s0 n+1 +n and g(s0 1,...,s0 n+1) = n(n+1) 2 +(n+1)(T −n)2. Setting this equal to 4045 and solving for T yields T = n+ r 4045 n+1 −n 2. For n = 9 this yields T = 29; it thus suffices to show that for all n, n+ r 4045 n+1 −n 2 ≥29. This is evident for n ≥30. For n ≤29, rewrite the claim as r 4045 n+1 −n 2 ≥29−n; we then obtain an equivalent inequality by squaring both sides: 4045 n+1 −n 2 ≥n2 −58n+841. Clearing denominators, gathering all terms to one side, and factoring puts this in the form (9−n)(n2 −95 2 n+356) ≥0. The quadratic factor Q(n) has a minimum at 95 4 = 23.75 and satisfies Q(8) = 40,Q(10) = −19; it is thus positive for n ≤8 and negative for 10 ≤n ≤29. B5 The desired property holds if and only if n = 1 or n ≡2 (mod 4). Let σn,m be the permutation of Z/nZ induced by mul-tiplication by m; the original problem asks for which n does σn,m always have a square root. For n = 1, σn,m is the identity permutation and hence has a square root. We next identify when a general permutation admits a square root. Lemma 1. A permutation σ in Sn can be written as the square of another permutation if and only if for every even positive integer m, the number of cycles of length m in σ is even. Proof. We first check the “only if” direction. Suppose that σ = τ2. Then every cycle of τ of length m remains a cycle in σ if m is odd, and splits into two cycles of length m/2 if m is even. We next check the “if” direction. We may partition the cycles of σ into individual cycles of odd length and pairs of cycles of the same even length; then we may argue as above to write each partition as the square of another permutation. Suppose now that n > 1 is odd. Write n = pek where p is an odd prime, k is a positive integer, and gcd(p,k) = 1. By the Chinese remainder theorem, we have a ring isomorphism Z/nZ ∼ = Z/peZ×Z/kZ. Recall that the group (Z/peZ)× is cyclic; choose m ∈Z reducing to a generator of (Z/peZ)× and to the identity in (Z/kZ)×. Then σn,m consists of k cycles (an odd number) of length pe−1(p −1) (an even number) plus some shorter cycles. By Lemma 1, σn,m does not have a square root. Suppose next that n ≡2 (mod 4). Write n = 2k with k odd, so that Z/nZ ∼ = Z/2Z×Z/kZ. Then σn,m acts on {0}×Z/kZ and {1}×Z/kZ with the same cycle structure, so every cycle length occurs an even number of times. By Lemma 1, σn,m has a square root. Finally, suppose that n is divisible by 4. For m = −1, σn,m consists of two fixed points (0 and n/2) together with n/2 −1 cycles (an odd number) of length 2 (an even number). By Lemma 1, σn,m does not have a square root. B6 The determinant equals (−1)⌈n/2⌉−12⌈n 2⌉. To begin with, we read off the following features of S. – S is symmetric: Sij = Sji for all i, j, corresponding to (a,b) 7→(b,a)). – S11 = n + 1, corresponding to (a,b) = (0,n),(1,n−1),...,(n,0). – If n = 2m is even, then Smj = 3 for j = 1,m, cor-responding to (a,b) = (2,0),(1, n 2j),(0, n j ). – For n 2 < i ≤n, Sij = #(Z∩{ n−i j , n j }), correspond-ing to (a,b) = (1, n−i j ),(0, n j ). Let T be the matrix obtained from S by performing row and column operations as follows: for d = 2,...,n−2, subtract Snd times row n −1 from row d and subtract Snd times column n −1 from column d; then subtract row n−1 from row n and column n−1 from column n. Evidently T is again symmetric and det(T) = det(S). 5 Let us examine row i of T for n 2 < i < n−1: Ti1 = Si1 −SinS(n−1)1 = 2−1·2 = 0 Tij = Sij −SinS(n−1)j −Sn jSi(n−1) = ( 1 if j divides n−i 0 otherwise. (1 < j < n−1) Ti(n−1) = Si(n−1) −SinS(n−1)(n−1) = 0−1·0 = 0 Tin = Sin −SinS(n−1)n −Si(n−1) = 1−1·1−0 = 0. Now recall (e.g., from the expansion of a determinant in minors) if a matrix contains an entry equal to 1 which is the unique nonzero entry in either its row or its column, then we may strike out this entry (meaning striking out the row and column containing it) at the expense of mul-tiplying the determinant by a sign. To simplify notation, we do not renumber rows and columns after performing this operation. We next verify that for the matrix T, for i = 2,...,⌊n 2⌋ in turn, it is valid to strike out (i,n −i) and (n − i,i) at the cost of multiplying the determinant by -1. Namely, when we reach the entry (n −i,i), the only other nonzero entries in this row have the form (n−i, j) where j > 1 divides n −i, and those entries are in pre-viously struck columns. We thus compute det(S) = det(T) as: (−1)⌊n/2⌋−1 det   n+1 −1 0 −1 0 1 0 1 0  for n odd, (−1)⌊n/2⌋−1 det    n+1 −1 2 0 −1 −1 1 −1 2 1 0 1 0 −1 1 0   for n even. In the odd case, we can strike the last two rows and columns (creating another negation) and then conclude at once. In the even case, the rows and columns are labeled 1, n 2,n −1,n; by adding row/column n −1 to row/column n 2, we produce (−1)⌊n/2⌋det    n+1 1 2 0 1 1 1 0 2 1 0 1 0 0 1 0    and we can again strike the last two rows and columns (creating another negation) and then read off the result. Remark. One can use a similar approach to compute some related determinants. For example, let J be the matrix with Jij = 1 for all i, j. In terms of an indetermi-nate q, define the matrix T by Tij = qSij. We then have det(T −tJ) = (−1)⌈n/2⌉−1q2(τ(n)−1)(q−1)n−1 fn(q,t) where τ(n) denotes the number of divisors of n and fn(q,t) = ( qn−1t +q2 −2t for n odd, qn−1t +q2 −qt −t for n even. Taking t = 1 and then dividing by (q−1)n, this yields a q-deformation of the original matrix S.
14916
https://gieseanw.wordpress.com/2013/07/19/an-analytic-solution-for-ellipse-and-line-intersection/
Andy G's Blog An Analytic Solution for Ellipse and Line Intersection (Note: this article was originally written in and transcribed to WordPress, so forgive the equation alignment. Get the original). If you have a line and an ellipse, how can you tell where they intersect? This is a relatively simple problem that has worked-out examples all over the web if you Google for “line ellipse intersection”. However, what I’ve come to find is that nobody will actually give you the solution for an arbitrary line and an arbitrary ellipse. I’m here to do just that. Derivation An ellipse is defined by a long axis and a short axis, called the semi-major and semi-minor axes, respectively. Usually people use the variable to represent the length of the semi-major axis, and to represent the length of the semi-minor axis. In this article I’ll use to represent only the horizontal axis and to represent only the vertical axis. That said, the formal equation for an ellipse is this: 1 And the equation for a line is this: To avoid confusion about what means, I’ll use the term to represent the y-intercept instead. 2 To find your potentially two intersecting points, you need to solve for and then use the values you found for (there will be two) to find corresponding values for . That is, you need to simultaneously solve equations 1 and 2. But first, let’s discuss our line. Equation of a Line for Two Points and You’re given two points , and you need to find values for slope and y-intercept like in Eqn. 2. Well slope, , is simply the change in over the change in . 3 The actual order of and in Eqn. 2 doesn’t matter — you can have or vice versa and you’ll get the same slope. Now to find the y-intercept, which we’re referring to as , we just take one of our points (arbitrarily choose ) and plug it into our equation to solve for : subtracting from both sides leaves us with our y-intercept : 4 Now that we know our values for , , , and , we are ready to solve for the intersection points between the line and the ellipse. First, substitute the line equation (Eqn. 2) into the ellipse equation (Eqn. 1) so that we can solve for : Expanding the square: We want to have a common denominator for both fractions on the left-hand side, so we’ll multiply the first term by and the second term by : Now we can multiply both sides of the equation by so we don’t have a fraction on the left-hand side: Notice how the first two terms on the left-hand side have a common term: , let’s factor that out: Now let’s notice that the terms and both consist of only our known constants. To make the rest of our solution simpler, let’s label these constants and . That is: 5 6 With our constant-naming out of the way, let’s re-examine our equation: That’s much cleaner isn’t it? Okay, next let’s move the term to the other side: The left-hand side is very clean now: just a quadratic equation. I’m going to use a trick called completing the square to help us solve for . If we first divide everything by we get: Which is of the form . and in this case refer to constants of a quadratic equation, not the same variables we’re using. Because we have it of this form, we know that if we add to both sides, then we can easily factor the left side: Becomes: Now, since we’re interested in finding the value of we need to take a square root of both sides: Evaluating the left hand side: Now we want to find the value for , not so let’s only keep the positive root: Let’s get by itself on the left-hand side by subtracting from both sides: Because things are getting kind of messy with that big square root, I’m going to notice that it’s simply a square root of constants that we already know, and label the whole thing . That is, 7 This makes our equation much cleaner: Later I’ll resubstitute for those constants, but bear with me as I use them to solve for and . We now know that has two solutions: If we take these values for along with our equation for a line (Eqn. 2), then we can solve for : which yields solutions for : This gives us our final intersection points of and If we resubstitute back in for we can simplify it ever so slightly. From Eqn. 7: Let’s substitute back in for and on the first term (Refer to Eqns. 5 and 6): Notice how we have a common term in the numerator, let’s factor it out: Now let’s substitute on the second term: Notice again how there is a common term in the numerator. Now we can factor an out of both, and get an term outside the radical: Finally, to resubstitute everything back into our point equations, our two potential intersection points are: , 8 And , 9 Conclusions The final equation for the points isn’t really the cleanest is it? I myself prefer to keep the constants and that I defined. Note that the points you’ve discovered won’t necessarily lie on the ellipse if the line doesn’t intersect the ellipse at all; you should be able to substitute your discovered and values into equation1 and see if it still equals 1. So there you have it, an analytic solution for the intersection points of a line with an ellipse in a convenient equation for you to translate into code. Thanks for reading! Share this: Click to share on Facebook (Opens in new window) Facebook Click to share on X (Opens in new window) X Like Loading… 11 responses to “An Analytic Solution for Ellipse and Line Intersection” Dave Weininger Well, you know how you started out by offering “the solution for an arbitrary line and an arbitrary ellipse” … but then in your equation 1 you define an ellipse as xx/aa + yy/bb = 1 … which is right for a “canonical” ellipse, e.g, with major axis aligned with X-axis, minor axis aligned with Y-axis … but that’s certainly not an “arbitrary” ellipse. The formula for a general ellipse is: Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 provided B^2 – 4AC < 0 Humm. I still have some work to do to get the intersection of a line with an arbitrary ellipse. Oh well, not your fault. Cheers,Dave. Reply gieseanw Hi Dave,Thanks for the clarification, and thanks for stopping by! I’ll have to look more into it, but I think a good strategy could be to determine a coordinate transform of the general ellipse such that it’s transformed into canonical form, and then you can go from there.-Andy Reply 2. Matthias The right side of the formula after “This gives us our final intersection points of” should state “- D” twice, not “+ D”. The left side is “+ D” already.Thanks … I am using this for projecteuler.org Problem 144. Reply gieseanw Good catch, Matthias! I’ve updated the post. It seems I also need to dig up that pdf and update it, too. It appears that I did not carry the error over to equations 8 and 9, so use those instead! Glad to hear this is helping you out. Reply 2. gieseanw Hi Matthias, I’ve updated the .pdf Thanks again for pointing this out. I need to write another post!!! Reply 3. Neil Another Project Euler tragic here. Thanks for the equation 🙂 Reply gieseanw Glad to hear it helped, Neil Reply 4. Steve Nic Same here, I’m working problem #144 eulerproject, this is so far the clearest and cleanest explanation for the intersection of a line and an ellipse. Thank you! Reply 3. Representing line intersections as a system of linear equations | Andy G's Blog […] a previous post, I outlined an analytical solution for intersecting lines and ellipses. In this post I’m doing much the same thing but rather with lines on lines. I’ll […] Reply 4. rokenbuzz I’m grateful to have found and successfully used the formulas here. I totally agree with sentiment, “…I’ve come to find is that nobody will actually give you the solution for an arbitrary line and an arbitrary ellipse.” Thank you for spending the time to figure this out and share it!One frustration I have is that the formula assumes the ellipse is centered on the origin. The line’s position is determined by its y-intercept. So I must subtract the ellipse’s center y offset from the line’s y-intercept. I’m not sure if the ellipse’s position could easily be accounted for in the formulas. Reply rokenbuzz I said: “So I must subtract the ellipse’s center y offset from the line’s y-intercept.” Really I must calculate the line’s intercept after subtracting the ellipse’s center X and center Y from the line’s X and Y. Reply Leave a comment Cancel reply Loading Comments... Comment Reblog Subscribe Subscribed Andy G's Blog Already have a WordPress.com account? Log in now. Andy G's Blog Subscribe Subscribed Sign up Log in Copy shortlink Report this content View post in Reader Manage subscriptions Collapse this bar Design a site like this with WordPress.com Get started
14917
https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-science-spring-2015/mit6_042js15_session16.pdf
“mcs” — 2015/5/18 — 1:43 — page 317 — #325 9 Directed graphs & Partial Orders Directed graphs, called digraphs for short, provide a handy way to represent how things are connected together and how to get from one thing to another by following those connections. They are usually pictured as a bunch of dots or circles with arrows between some of the dots, as in Figure 9.1. The dots are called nodes or vertices and the lines are called directed edges or arrows; the digraph in Figure 9.1 has 4 nodes and 6 directed edges. Digraphs appear everywhere in computer science. For example, the digraph in Figure 9.2 represents a communication net, a topic we’ll explore in depth in Chap-ter 10. Figure 9.2 has three “in” nodes (pictured as little squares) representing locations where packets may arrive at the net, the three “out” nodes representing destination locations for packets, and the remaining six nodes (pictured with lit-tle circles) represent switches. The 16 edges indicate paths that packets can take through the router. Another place digraphs emerge in computer science is in the hyperlink structure of the World Wide Web. Letting the vertices x1; : : : ; xn correspond to web pages, and using arrows to indicate when one page has a hyperlink to another, results in a digraph like the one in Figure 9.3—although the graph of the real World Wide Web would have n be a number in the billions and probably even the trillions. At first glance, this graph wouldn’t seem to be very interesting. But in 1995, two students at Stanford, Larry Page and Sergey Brin, ultimately became multibillionaires from the realization of how useful the structure of this graph could be in building a search engine. So pay attention to graph theory, and who knows what might happen! c b d e Figure 9.1 A 4-node directed graph with 6 edges. “mcs” — 2015/5/18 — 1:43 — page 318 — #326 318 Chapter 9 Directed graphs & Partial Orders in1 in2 in3 out1 out2 out3 Figure 9.2 A 6-switch packet routing digraph. x3 x4 x7 x2 x1 x5 x6 Figure 9.3 Links among Web Pages. “mcs” — 2015/5/18 — 1:43 — page 319 — #327 9.1. Vertex Degrees 319 tail f head v w Figure 9.4 A directed edge e D hu!vi. The edge e starts at the tail vertex, u, and ends at the head vertex, v. Definition 9.0.1. A directed graph, G, consists of a nonempty set, V.G/, called the vertices of G, and a set, E.G/, called the edges of G. An element of V.G/ is called a vertex. A vertex is also called a node; the words “vertex” and “node” are used interchangeably. An element of E.G/ is called a directed edge. A directed edge is also called an “arrow” or simply an “edge.” A directed edge starts at some vertex, u, called the tail of the edge, and ends at some vertex, v, called the head of the edge, as in Figure 9.4. Such an edge can be represented by the ordered pair .u; v/. The notation hu!vi denotes this edge. There is nothing new in Definition 9.0.1 except for a lot of vocabulary. Formally, a digraph G is the same as a binary relation on the set, V D V.G/—that is, a digraph is just a binary relation whose domain and codomain are the same set, V . In fact, we’ve already referred to the arrows in a relation G as the “graph” of G. For example, the divisibility relation on the integers in the interval Œ1::12ç could be pictured by the digraph in Figure 9.5. 9.1 Vertex Degrees The in-degree of a vertex in a digraph is the number of arrows coming into it, and similarly its out-degree is the number of arrows out of it. More precisely, Definition 9.1.1. If G is a digraph and v 2 V.G/, then indeg.v/ WWD jfe 2 E.G/ j head.e/ D vgj outdeg.v/ WWD jfe 2 E.G/ j tail.e/ D vgj An immediate consequence of this definition is Lemma 9.1.2. X indeg.v/ outdeg.v/: v D 2V.G/ v2 X V.G/ Proof. Both sums are equal to jE.G/j. ⌅ “mcs” — 2015/5/18 — 1:43 — page 320 — #328 320 Chapter 9 Directed graphs & Partial Orders 4 2 8 10 5 12 6 1 7 3 9 11 Figure 9.5 The Digraph for Divisibility on f1; 2; : : : ; 12g. 9.2 Walks and Paths Picturing digraphs with points and arrows makes it natural to talk about following successive edges through the graph. For example, in the digraph of Figure 9.5, you might start at vertex 1, successively follow the edges from vertex 1 to vertex 2, from 2 to 4, from 4 to 12, and then from 12 to 12 twice (or as many times as you like). The sequence of edges followed in this way is called a walk through the graph. A path is a walk which never visits a vertex more than once. So following edges from 1 to 2 to 4 to 12 is a path, but it stops being a path if you go to 12 again. The natural way to represent a walk is with the sequence of sucessive vertices it went through, in this case: 1 2 4 12 12 12: However, it is conventional to represent a walk by an alternating sequence of suc-cessive vertices and edges, so this walk would formally be 1 h1!2i 2 h2!4i 4 h4!12i 12 h12!12i 12 h12!12i 12: (9.1) The redundancy of this definition is enough to make any computer scientist cringe, but it does make it easy to talk about how many times vertices and edges occur on the walk. Here is a formal definition: Definition 9.2.1. A walk in a digraph, G, is an alternating sequence of vertices and edges that begins with a vertex, ends with a vertex, and such that for every edge hu!vi in the walk, vertex u is the element just before the edge, and vertex v is the next element after the edge. “mcs” — 2015/5/18 — 1:43 — page 321 — #329 9.2. Walks and Paths 321 So a walk, v, is a sequence of the form v WWD v0 hv0 !v1i v1 hv1 !v2i v2 : : : hvk!1 !vki vk where hvi !viC1i 2 E.G/ for i 2 Œ0::k/. The walk is said to start at v0, to end at vk, and the length, jvj, of the walk is defined to be k. The walk is a path iff all the vi’s are different, that is, if i ¤ j, then vi ¤ vj . A closed walk is a walk that begins and ends at the same vertex. A cycle is a positive length closed walk whose vertices are distinct except for the beginning and end vertices. Note that a single vertex counts as a length zero path that begins and ends at itself. It also is a closed walk, but does not count as a cycle, since cycles by definition must have positive length. Length one cycles are possible when a node has an arrow leading back to itself. The graph in Figure 9.1 has none, but every vertex in the divisibility relation digraph of Figure 9.5 is in a length one cycle. Length one cycles are sometimes called self-loops. Although a walk is officially an alternating sequence of vertices and edges, it is completely determined just by the sequence of successive vertices on it, or by the sequence of edges on it. We will describe walks in these ways whenever it’s convenient. For example, for the graph in Figure 9.1, ✏.a; b; d/, or simply abd, is a (vertex-sequence description of a) length two path, ✏.ha!bi ; hb !di/, or simply ha!bi hb !di, is (an edge-sequence de-scription of) the same length two path, ✏abcbd is a length four walk, ✏dcbcbd is a length five closed walk, ✏bdcb is a length three cycle, ✏hb !ci hc !bi is a length two cycle, and ✏hc !bi hb ai ha!di is not a walk. A walk is not allowed to follow edges in the wrong direction. If you walk for a while, stop for a rest at some vertex, and then continue walking, you have broken a walk into two parts. For example, stopping to rest after following two edges in the walk (9.1) through the divisibility graph breaks the walk into the first part of the walk 1 h1!2i 2 h2!4i 4 (9.2) “mcs” — 2015/5/18 — 1:43 — page 322 — #330 322 Chapter 9 Directed graphs & Partial Orders from 1 to 4, and the rest of the walk 4 h4!12i 12 h12!12i 12 h12!12i 12: (9.3) from 4 to 12, and we’ll say the whole walk (9.1) is the merge of the walks (9.2) and (9.3). In general, if a walk f ends with a vertex, v, and a walk r starts with the same vertex, v, we’ll say that their merge, f br, is the walk that starts with f and continues with r.1 Two walks can only be merged if the first ends with the same vertex, v, that the second one starts with. Sometimes it’s useful to name the node v where the walks merge; we’ll use the notation f v r to describe the merge of a walk f that ends at v with a walk r that begins at v. A consequence of this definition is that b Lemma 9.2.2. jfbrj D jfj C jrj: In the next section we’ll get mileage out of walking this way. 9.2.1 Finding a Path If you were trying to walk somewhere quickly, you’d know you were in trouble if you came to the same place twice. This is actually a basic theorem of graph theory. Theorem 9.2.3. The shortest walk from one vertex to another is a path. Proof. If there is a walk from vertex u to another vertex v ¤ u, then by the Well Ordering Principle, there must be a minimum length walk w from u to v. We claim w is a path. To prove the claim, suppose to the contrary that w is not a path, meaning that some vertex x occurs twice on this walk. That is, w D e x f x g for some walks e; f; g where the length of b f is b positive. But then “deleting” f yields a strictly shorter walk e x g from u to v, contradicting the minimality b of w. ⌅ Definition 9.2.4. The distance, dist .u; v/, in a graph from vertex u to vertex v is the length of a shortest path from u to v. 1It’s tempting to say the merge is the concatenation of the two walks, but that wouldn’t quite be right because if the walks were concatenated, the vertex v would appear twice in a row where the walks meet. “mcs” — 2015/5/18 — 1:43 — page 323 — #331 9.3. Adjacency Matrices 323 As would be expected, this definition of distance satisfies: Lemma 9.2.5. [The Triangle Inequality] dist .u; v/ dist .u; x/ C dist .x; v/ for all vertices u; v; x with equality holding iff x is on a shortest path from u to v. Of course, you might expect this property to be true, but distance has a technical definition and its properties can’t be taken for granted. For example, unlike ordinary distance in space, the distance from u to v is typically different from the distance from v to u. So, let’s prove the Triangle Inequality Proof. To prove the inequality, suppose f is a shortest path from u to x and r is a shortest path from x to v. Then by Lemma 9.2.2, f x r is a walk of length dist .u; x/ C dist .x; v/ from u to v, so this sum is an upper bound on the length of the shortest path from u to v by Theorem 9.2.3. b Proof of the “iff” is in Problem 9.3. ⌅ Finally, the relationship between walks and paths extends to closed walks and cycles: Lemma 9.2.6. The shortest positive length closed walk through a vertex is a cycle through that vertex. The proof of Lemma 9.2.6 is essentially the same as for Theorem 9.2.3; see Problem 9.7. 9.3 Adjacency Matrices If a graph, G, has n vertices, v0; v1; : : : ; vn!1, a useful way to represent it is with an n ⇥n matrix of zeroes and ones called its adjacency matrix, AG. The ijth entry of the adjacency matrix, .AG/ij , is 1 if there is an edge from vertex vi to vertex vj and 0 otherwise. That is, .AG/ij WWD ( 1 if ˝ vi !vj ˛ 2 E.G/; 0 otherwise: “mcs” — 2015/5/18 — 1:43 — page 324 — #332 324 Chapter 9 Directed graphs & Partial Orders For example, let H be the 4-node graph shown in Figure 9.1. Its adjacency matrix, AH, is the 4 ⇥4 matrix: a b c d a 0 1 0 1 AH D b 0 0 1 1 c 0 1 0 0 d 0 0 1 0 A payoff of this representation is that we can use matrix powers to count numbers of walks between vertices. For example, there are two length two walks between vertices a and c in the graph H: a ha!bi b hb !ci c a ha!di d hd !ci c and these are the only length two walks from a to c. Also, there is exactly one length two walk from b to c and exactly one length two walk from c to c and from d to b, and these are the only length two walks in H. It turns out we could have read these counts from the entries in the matrix .AH/2: a b c d a 0 0 2 1 .A 2 H/ D b 0 1 1 0 c 0 0 1 1 d 0 1 0 0 More generally, the matrix .AG/k provides a count of the number of length k walks between vertices in any digraph, G, as we’ll now explain. Definition 9.3.1. The length-k walk counting matrix for an n-vertex graph G is the n ⇥n matrix C such that Cuv WWD the number of length-k walks from u to v: (9.4) Notice that the adjacency matrix AG is the length-1 walk counting matrix for G, and that .AG/0, which by convention is the identity matrix, is the length-0 walk counting matrix. Theorem 9.3.2. If C is the length-k walk counting matrix for a graph G, and D is the length-m walk counting matrix, then CD is the length k C m walk counting matrix for G. “mcs” — 2015/5/18 — 1:43 — page 325 — #333 9.3. Adjacency Matrices 325 According to this theorem, the square .AG/2 of the adjacency matrix is the length two walk counting matrix for G. Applying the theorem again to .AG/2AG shows that the length-3 walk counting matrix is .AG/3. More generally, it follows by induction that Corollary 9.3.3. The length-k counting matrix of a digraph, G, is .A k G/ , for all k 2 N. In other words, you can determine the number of length k walks between any pair of vertices simply by computing the kth power of the adjacency matrix! That may seem amazing, but the proof uncovers this simple relationship between matrix multiplication and numbers of walks. Proof of Theorem 9.3.2. Any length .k Cm/ walk between vertices u and v begins with a length k walk starting at u and ending at some vertex, w, followed by a length m walk starting at w and ending at v. So the number of length .k C m/ walks from u to v that go through w at the kth step equals the number Cuw of length k walks from u to w, times the number Dwv of length m walks from w to v. We can get the total number of length .k C m/ walks from u to v by summing, over all possible vertices w, the number of such walks that go through w at the kth step. In other words, #length .k C m/ walks from u to v D X Cuw w2V.G/ ! Dwv (9.5) But the right hand side of (9.5) is precisely the definition of .CD/uv. Thus, CD is indeed the length-.k C m/ walk counting matrix. ⌅ 9.3.1 Shortest Paths The relation between powers of the adjacency matrix and numbers of walks is cool—to us math nerds at least—but a much more important problem is finding shortest paths between pairs of nodes. For example, when you drive home for vacation, you generally want to take the shortest-time route. One simple way to find the lengths of all the shortest paths in an n-vertex graph, G, is to compute the successive powers of AG one by one up to the n # 1st, watch-ing for the first power at which each entry becomes positive. That’s because The-orem 9.3.2 implies that the length of the shortest path, if any, between u and v, that is, the distance from u to v, will be the smallest value k for which .A k G/uv is nonzero, and if there is a shortest path, its length will be n # 1. Refinements of this idea lead to methods that find shortest paths in reasonably efficient ways. The methods apply as well to weighted graphs, where edges are labelled with weights “mcs” — 2015/5/18 — 1:43 — page 326 — #334 326 Chapter 9 Directed graphs & Partial Orders or costs and the objective is to find least weight, cheapest paths. These refinements are typically covered in introductory algorithm courses, and we won’t go into them any further. 9.4 Walk Relations A basic question about a digraph is whether there is a way to get from one particular vertex to another. So for any digraph, G, we are interested in a binary relation, G⇤, called the walk relation on V.G/ where u G⇤v WWD there is a walk in G from u to v: (9.6) Similarly, there is a positive walk relation u GC v WWD there is a positive length walk in G from u to v: (9.7) Definition 9.4.1. When there is a walk from vertex v to vertex w, we say that w is reachable from v, or equivalently, that v is connected to w. 9.4.1 Composition of Relations There is a simple way to extend composition of functions to composition of rela-tions, and this gives another way to talk about walks and paths in digraphs. Definition 9.4.2. Let R W B ! C and S W A ! B be binary relations. Then the composition of R with S is the binary relation .R ı S/ W A ! C defined by the rule a .R ı S/ c WWD 9b 2 B: .a S b/ AND .b R c/: (9.8) This agrees with the Definition 4.3.1 of composition in the special case when R and S are functions.2 Remembering that a digraph is a binary relation on its vertices, it makes sense to compose a digraph G with itself. Then if we let Gn denote the composition of G with itself n times, it’s easy to check (see Problem 9.9) that Gn is the length-n walk relation: a Gn b iff there is a length n walk in G from a to b: 2The reversal of the order of R and S in (9.8) is not a typo. This is so that relational composition generalizes function composition. The value of function f composed with function g at an argument, x, is f .g.x//. So in the composition, f ı g, the function g is applied first. “mcs” — 2015/5/18 — 1:43 — page 327 — #335 9.5. Directed Acyclic Graphs & Scheduling 327 This even works for n D 0, with the usual convention that G0 is the identity relation IdV.G/ on the set of vertices.3 Since there is a walk iff there is a path, and every path is of length at most jV.G/j # 1, we now have4 G⇤D G0 [ G1 [ G2 [ : : : [ GjV.G/j!1 D .G [ G0/jV.G/j!1: (9.9) The final equality points to the use of repeated squaring as a way to compute G⇤ with log n rather than n # 1 compositions of relations. 3The identity relation, IdA, on a set, A, is the equality relation: a IdA b iff a D b; for a; b 2 A. 4Equation (9.9) involves a harmless abuse of notation: we should have written graph.G⇤/ D graph 0 .G / [ graph 1 .G / : : : : MIT OpenCourseWare 6.042J / 18.062J Mathematics for Computer Science Spring 2015 For information about citing these materials or our Terms of Use, visit:
14918
https://homeschoolgiveaways.com/consumer-math-worksheets/
Free Printable Consumer Math Worksheets for Students Skip to content About Contact Over 6,200 homeschool resources and growing! Account/Log in Homeschool 101 Fun Activities Subjects Seasonal Holidays Seasons Freebies ALL ACCESS Library Bible Freebies Fine Arts Freebies Fun Activities Freebies History Freebies Holidays & Seasons Freebies Language Arts Freebies Math Freebies Science Freebies Technology Freebies Template Freebies Subscriber Hub Shop Printables Home/Subject/Math/Free Printable Consumer Math Worksheets for Students Free Printable Consumer Math Worksheets for Students Published: November 7, 2022 Contributor: Sara Dennis Disclosure: This post may contain affiliate links, meaning if you decide to make a purchase via my links, I may earn a commission at no additional cost to you. See my disclosure for more info. ;e.setAttribute('type','text/javascript');e.setAttribute('charset','UTF-8');e.setAttribute('src','//assets.pinterest.com/js/pinmarklet.js?r='+Math.random()99999999);document.body.appendChild(e)%7D)());) Are you looking for consumer math worksheets that will teach your children the skills they need to handle real life situations? Then you’ll love these free consumer math worksheets for elementary, middle school, and high school students. Consumer Math Worksheets These worksheets break down basic math skills into easily digestible consumer math lessons. The lessons will teach your kids about personal finance, sales tax, and how to set up a monthly budget. Plus, you’ll find real-life examples to help your kids develop their problem-solving skills. What is consumer math? Consumer math is a branch of maths that focuses on teaching kids how to use their basic math skills in real world situations. Kids learn financial literacy and what some people call kitchen math. They’ll learn how to budget, create a grocery list, balance their checking account, and how to read account statements. Why Teaching Students Consumer Math is Important Consumer mathematics is a critical branch of math to teach your kids. Kids will learn a variety of topics that they’ll use throughout their life, such as figuring out a sale price, a credit score, hourly rate, and installment loans. Teaching lifeskills like consumer math will set your student up for success. These skills will allow your kids to make wise financial decisions. We used Dave Ramsey’s personal finance course when my daughter was in high school. I plan to take my next child through it when she gets in high school too. Teach Your Children Economics Understanding Economics is the perfect way to help your 3rd-8th graders grasp important economic topics. Kids will learn all about money, capitalism, supply and demand, banking, stocks, the economy, and more in ways they can understand, using examples from their own lives. Consumer Math Worksheets If you’re on board to teach your kids all about consumer math, then these worksheets can help. You’ll find banking worksheets: deposits, withdrawls, balancing a checkbook and writing checks practice pages. You can also print math worksheets for calculating discounts, taxes, price comparison, interest, and more! Banking Worksheets This group of money math worksheets focuses on banking worksheets that will teach your kids how to balance a checkbook, keep track of deposits and withdrawals, and how to write a check as well. You can use these worksheets to create a math course, supplement a homeschool math curriculum, or create a money lesson. Deposits and Withdrawals Worksheets Most people in the world open a bank account at some time. So your kids will need to know how to keep track of a checking account and a savings account. They’ll also need to know how to read account statements. Deposits and Withdrawals Sort – Kids will read through the different task cards to determine if they should deposit the money into their account or make a withdrawal. Withdrawal and Deposit Warmup – This money worksheet focuses on making sure kids understand the vocabulary involved in withdrawals and deposits. Balancing a Checkbook Worksheets Teach your kids how to balance a checkbook using one of these fun consumer math worksheets. The best part is that your kids will use basic math skills to complete these lessons. Balancing a Checkbook – This consumer math worksheet uses basic operations to have kids calculate a variety of deposits and withdrawals. It’s an excellent introduction to balancing a checkbook. Balancing a Checkbook – Give your kids the opportunity to practice balancing a checkbook before they learn the hard way with this fun money worksheet. Writing Checks Worksheets Writing checks is becoming a lost skill as most adults use debit and credit cards to pay for groceries at the grocery store. However, there are still times your children will need to write a check. These worksheets will ensure your kids know how to write a check properly. How to Write a Check – You’ll find various resources to help you teach your kids everything they need to know about checks and checking accounts, including answer keys to the worksheets. How to Write a Check – This website shares a variety of charts and worksheets to help you teach your children how to write a check properly. Sales Purchases Worksheets Do you have kids struggling as they attempt to learn how to calculate discounts, taxes, or tips? Then you’ve come to the right place. These worksheets will help your kids figure out the final prices and the total cost of items. Calculate Discounts Worksheets Help kids figure out how to calculate the final price of an object when they apply a coupon or discount. It’s a valuable skill your kids will use for years to come! How Much Does It Cost With Coupons – Figure out how much items at the store will cost after you apply the coupons. These word problems use an interactive whiteboard that you can download to use in your lesson plan. How to Calculate Discounts – Help your kids learn what 30% off or 50% off actually means using this fun worksheet that comes in a pdf format. Find the Sale Price Worksheets Teach your kids how to calculate the actual price of an item on sale with these money math worksheets.It’s one of the first stepping stones to financial literacy! Finding Discount and Sales Price – Help your kids learn how to calculate both the discount and the sale price when they’re shopping. Finding Sales Price – This math worksheet will help your kids learn how to find the actual price of an item on sale. Sales Tax Worksheets Does your state have sales tax? Then teach your children how to calculate the total amount of money they’ll need when they shop. It’s a real-world problem your kids will use constantly. How to Calculate Tax, Tips, and Sales Worksheet – This freebie pdf will teach your kids how to calculate tax, tiles, and the sale price. It’s a useful money lesson to complete with your children. 6th Grace Shopping Worksheet: Tax, Tips, Discounts – This shopping worksheet uses only the basic math that 6th and 7th graders have learned. It will quickly have the kids calculating taxes, tips, and discounts like a pro. Price Comparison Worksheets Another useful skill children need to learn is price comparison. These sheets will teach your kid how to calculate unit prices so your child can accurately compare the prices of items on a shopping list. Unit Price Comparison – It can take a long time for kids to learn to compare unit prices and purchase the lowest-priced item if you don’t teach them how. And this worksheet does just that. It teaches kids how to figure out the lowest unit price. Grocery Store Price Comparison Activity – Use online grocery stores to compare the prices of the kids’ favorite items. Kids can work in cooperative groups or independently. Earning Interest You don’t need online textbooks to teach your kids about earning interest. Instead, check out these fabulous resources on consumer mathematics that will teach your kids all about interest rates. What’s the Interest Worksheets Do your kids know how to calculate the interest rate on credit cards or savings accounts? These resources will teach your kids about interest rates on credit cards as well as simple interest rates. What Interest Rate Do Consumers Pay on Credit Cards – The worksheet introduces kids to the concept of a credit score as it teaches them about credit cards. Simple and Compound Interest Rate: Savings Account and Car Loan – Learn all about the difference between a simple interest rate and a compound interest rate as they apply to a savings account and a car loan. Monthly Payments Worksheets Monthly payments are a key part of an adult’s monthly budget. If monthly payments are too high, a person will keep sinking further into debt. These worksheets will guide your kids through the real-life situations they need to understand to keep their finances healthy. Installment Loan Payments Worksheets Installment payments are loans that allow a person to borrow a specific amount of money and make fixed payments over time to repay the loan. These worksheets will introduce your kids to the concept and walk them through the pros and cons. Closed-End Credit Installment Loans – This self-teaching worksheet will walk your kids through the steps involved in an installment loan, such as the down payment, the amount financed, and the finance charge. Revolving Credit Versus Installment Loans: You Decide – This paid resource teaches kids about the differences between revolving credit and installment loans. Monthly Payments Worksheets Do your kids understand the concept of monthly payments and calculate what the monthly payment would be on a loan? Then you need these consumer math worksheets! This or That? Calculating Monthly Payments from Simple Interest Loans – Kids will need to determine what monthly payment goes into different lifestyle choices as they choose between different items, such as cars. Let’s Go Car Shopping: Calculating Interest and Monthly Payments – Teach your kids all about purchasing a car with this fabulous guide to car shopping. Financing Worksheets Financing worksheets will help your kids learn about finance charges and car loans. This way, your children will be able to make wise financial decisions as adults. Finance Charges Worksheets Do you know what the finance charges are for items you purchase with a loan? It helps to find out so that you know the actual price of the item you bought. These resources will teach the concept to your kids. Finance Charges Puzzle – This paid puzzle will teach your kids how to calculate the finance charge on large purchases bought with a loan. Finance Charges Practice & Challenge – Another favorite resource, this paid worksheet will give your kids two problems to help them learn how to calculate finance charges. Car Loan Worksheets These car loan worksheets are like having a secret math tutor for your kids. They’ll learn how to calculate the payments on their dream car, never realizing they’re mastering basic math and honing problem-solving skills. Dream Car Loan Activity – This short math activity will have your kids figuring out interest rates and monthly payments. Online Car Loan and Depreciation Tools – Help your kids learn how to make wise purchasing decisions when buying their first cars by walking them through this car loan worksheet. Creating a Budget Worksheet Do your kids know how to set and maintain a monthly budget? These worksheets will walk your kids through real-life situations in order to engage your kids and teach them this critical life skill. Creating a Budget Using a Narrative – Kids will read a narrative and then use the narrative to create a budget for the fictional character. Dream Summer Vacation Budget Worksheet – Combine math, economics, and geography as you assign your children to figure out the budget for their dream summer vacation. In Conclusion Learning about consumer math while still in school will help prepare your student for life as an adult. Building financial literacy now is a key to success later in life. If you are teaching money to younger students, how about playing some money games for kids? You can also use money coloring pages for fun ways to teach money. If you have younger children, doing a color by coin worksheet cab ne fun and helpful. Sara Dennis Sara Dennis is a veteran homeschool mom of six who’s still homeschooling her two youngest kids after the older four have graduated, entered college, and moved on to adult life. She blogs at Classically Homeschooling. read Sara’s posts on HSG Related resources Free 1–20 Number Coloring Pages for Kids + Fun Story Ideas ---------------------------------------------------------- 9 Times Table Made Easy | Free Tricks & Games Kids Will Enjoy ------------------------------------------------------------- FREE 5 Times Table Worksheet Fun for Kids | Print & Go ------------------------------------------------------ 23 Math Life of Fred Books in Order (K-12 Series) ------------------------------------------------- Teaching Math Word Problem Key Words (Free Cheat Sheet) ------------------------------------------------------- Printable Number Playdough Mats 1-10 (Free Download) ---------------------------------------------------- Search Subscriber Freebie of the Week Learning State Capitals Get this week's freebie Subscriber Library Enter your email to gain access to our full library of95+ freedownloads! Plus, never miss out on a giveaway, freebie or deal again. Discover all the opportunities we have to offer right in your inbox. Join over 130,000 subscribers & Get Access to FREE Subscriber Library Enter your email to gain access to our full library of FREE 95+ downloads! Plus,never miss out on a giveaway, freebie or deal again. Discover all the opportunities we have to offer right in your inbox. We respect your privacy. Unsubscribe at any time. Explore our Homeschool resources Find free and low-cost printables that will bring creativity and depth to your teaching approach. Whether you’re looking to expand your existing collection or discover fresh ideas, our shop offers an array of materials designed to support and enrich your homeschooling adventure. Browse our shop Helping busy parents find free and low-cost homeschool resources since 2011. Company About HSG Meet Charis Subscribe Contact Us Advertise Resources Blog How to Homeschool Fun Activities FREEBIE of the WEEK FAQ Shop Printables Monthly Deals Featured Freebies My Account Cart © 2025 Homeschool Giveaways. Site Credit Privacy Policy / Terms & Conditions / Disclaimer Homeschool 101 Fun Activities Subjects SeasonalToggle child menu Holidays Seasons FreebiesToggle child menu ALL ACCESS Library Bible Freebies Fine Arts Freebies Fun Activities Freebies History Freebies Holidays & Seasons Freebies Language Arts Freebies Math Freebies Science Freebies Technology Freebies Template Freebies Subscriber Hub Shop Printables Search for: Search for:
14919
https://www.healthline.com/health/guanfacine-adhd
Guanfacine: ADHD Medication, Effects, and Options Health Conditions Health Conditions All Breast Cancer Cancer Care Caregiving for Alzheimer's Disease Chronic Kidney Disease Chronic Obstructive Pulmonary Disease (COPD) Digestive Health Eye Health Heart Health Menopause Mental Health Migraine Multiple Sclerosis (MS) Parkinson’s Disease Psoriasis Rheumatoid Arthritis (RA) Sleep Health Type 2 Diabetes Weight Management Condition Spotlight All Controlling Ulcerative Colitis Navigating Life with Bipolar Disorder Mastering Geographic Atrophy Managing Type 2 Diabetes Wellness Wellness Topics All CBD Fitness Healthy Aging Hearing Mental Well-Being Nutrition Parenthood Recipes Sexual Health Skin Care Sleep Health Vitamins and Supplements Women's Wellness Product Reviews All At-Home Testing Men's Health Mental Health Nutrition Sleep Vitamins and Supplements Women's Health Featured Programs All Your Guide to Glucose Health Inflammation and Aging Cold & Flu Season Survival Guide She’s Good for Real Tools Featured Video Series Pill Identifier FindCare Drugs A-Z Lessons All Crohn’s and Ulcerative Colitis Essentials Diabetes Nutrition High Cholesterol Taming Inflammation in Psoriasis Taming Inflammation in Psoriatic Arthritis Newsletters All Anxiety and Depression Digestive Health Heart Health Migraine Nutrition Edition Type 2 Diabetes Wellness Wire Lifestyle Quizzes Find a Diet Find Healthy Snacks Weight Management How Well Do You Sleep? Are You a Workaholic? Featured Health News All Can 6-6-6 Walking Workout Help You Lose Weight? This Couple Lost 118 Pounds Together Without Medication 5 Science-Backed Ways to Live a Longer Life Morning Coffee May Help You Live Longer ‘Weekend Warrior’ Workouts for Your 2025 Fitness Goals This Just In 5 Tips for a Healthy Lifestyle How to Disinfect Your House After the Flu Best Vegan and Plant-Based Meal Delivery for 2025 Does Medicare Cover Pneumonia Shots? Chromosomes, Genetics, and Your Health Top Reads Best Multivitamins for Women Best Multivitamins for Men Best Online Therapy Services Online Therapy That Takes Insurance Buy Ozempic Online Mounjaro Overview Video Series Youth in Focus Healthy Harvest Through an Artist's Eye Future of Health Connect Find Your Bezzy Community Bezzy communities provide meaningful connections with others living with chronic conditions. Join Bezzy on the web or mobile app. All Breast Cancer Multiple Sclerosis Depression Migraine Type 2 Diabetes Psoriasis Follow us on social media Can't get enough? Connect with us for all things health. Subscribe #### ADHD Basics Medication & Treatments Holistic Health Nutrition Symptom Management Guide Time & Productivity Executive Function Emotional Regulation Relationships & Communication Anxiety & Depressioin Energy & Sleep Support for Caregivers Explore more in ADHD Symptoms Home Remedies Treatment Types Diagnosis Complications Management Related Conditions Diet Causes & Risk Factors Prognosis ADHD Related Hubs #### Attention Deficit Hyperactivity Disorder Tips, tools, and support for living and thriving with ADHD Related Topics Symptoms Symptoms #### Related Hub #### Symptom Management A guide to ADHD symptoms and how to manage them Related Articles In Adults In Toddlers Symptoms Postural Sway Time Blindness Exhaustion Hyperactivity Speech Patterns Brain Fog Task Switching Vocal Stimming Identifying Triggers Hyperfixation vs. Special Interest Hyperfocus Memory Problems Effects on the Brain Home Remedies Home Remedies #### Related Hub #### Holistic Health Resources and guidance to support your body and well-being with ADHD Related Articles Home Remedies Supplements Herbs Essential Oils Melatonin and Safety Treatment Treatment #### Related Hub #### Medication & Treatments A guide to medications and other treatments for ADHD Related Articles Medications Vyvanse vs. Ritalin Ritalin vs. Adderall Concerta vs. Vyvanse Guanfacine Vyvanse vs. Adderall Strattera vs. Vyvanse Dexedrine vs. Adderall Clonidine Nootropics Tenex Treatment Neurofeedback Types Types Related Articles Types Inattentive Combined Type In Children Diagnosis Diagnosis Related Articles Rating Scales TOVA Overdiagnosis Diagnosis In Adults Self-Diagnosis Complications Complications Related Articles Adderall Crash Anger Effects of Adderall on the Brain Cocaine Use Concerta Crash Suicidal Ideation Impacts on Self-Esteem Motivation Management Management #### Related Hub s #### ADHD Basics Learn the basics of ADHD#### Support for Caregivers Tips and tools for providing support Related Articles In Marriage Waiting Mode Alcohol Use Body Doubling Brown Noise Management Tools Exercise Procrastination Disability Resources Brain Training Maintaining Focus Occupational Therapy Workplace Accommodations Management CBT Emotional Permanence Meditation During Pregnancy Behavioral Therapy Related Conditions Related Conditions Related Articles Vs. ADD Vs. Anxiety Vs. Autism OCD Related Conditions Vs. ODD Vs. Giftedness With Bipolar Disorder Vs. Auditory Processing Disorder Vs. Misphonia and Noise Sensitivity Sleep Disorders BPD Migraine Frontotemporal Dementia Diet Diet #### Related Hub #### ADHD Nutrition Nutrition tips and tools for managing ADHD Related Articles Foods for Stimulation Foods to Avoid Diet Causes & Risk Factors Causes & Risk Factors Related Articles Dopamine Dysfunction Childhood Trauma Genetics Causes and Risk Factors Prognosis Prognosis Related Articles Outlook Can You Outgrow It? What Do I Need to Know About Guanfacine for ADHD? Medically reviewed by Alexandra Perez, PharmD, MBA, BCGP — Written by The Healthline Editorial Team — Updated on March 20, 2025 Guanfacine for ADHD Dosage Side effects Difference Takeaway Guanfacine is a nonstimulant medication that may help support impulse control and attention span in people with ADHD, particularly children 12 and younger. Guanfacine belongs to the class of medications known as central alpha 2A-adrenergic receptor agonists. This class of medications generally helps open Trusted Source your blood vessels, which may lower your heart rate and blood pressure. Researchers have found that guanfacine may help improve the function of the prefrontal cortex, which is the part of your brain that regulates attention and impulse control. For this reason, doctors may prescribe it to treat attention deficit hyperactivity disorder (ADHD). When is guanfacine used for ADHD? For some people with ADHD, stimulant medications aren’t always the best choice, and they may look for an alternative. A doctor might consider Trusted Source using a nonstimulant medication like guanfacine for ADHD: children between 6 and 17 years old stimulants aren’t working well in managing ADHD symptoms stimulants cause too many side effects a child or teen has a substance use disorder a child or teenager has a medical condition for which stimulants should not be used About this medication The Food and Drug Administration (FDA) approved an extended-release version of guanfacine to treat attention deficit hyperactivity disorder (ADHD) in children and teenagers ages 6 to 17 years old. For adults, doctors may prescribe it off-label. Key facts about guanfacine include: It’s more commonly used to treat hypertension and to help prevent serious health conditions, such as heart attack and stroke, in people with higher-than-normal blood pressure. It was previously sold in the United States under the brand name Tenex and is currently still available as a generic guanfacine immediate release (IR). When sold under the name Intuniv, it’s used to treat ADHD. While the generic version and Intuniv contain guanfacine, the recommended dosage differs. Guanfacine is typically only used for ADHD when stimulants like amphetamine-dextroamphetamine (Adderall) are not suitable, not tolerated, or not effective. The medication appears to be most effective in children 12 years old or younger. Intuniv is an extended-release (ER) formulation of guanfacine that may be given in addition to stimulants or as part of a treatment program that also includes psychological counseling and educational measures. Research Trusted Source shows the medication could also be as effective in treating ADHD in adults. Treatment approaches that combine behavioral therapy and medication have shown to be the most effective compared with using either treatment alone. The recommendations may vary depending on the person’s age. HEALTHLINE NEWSLETTER Download our list of ADHD management tips Certain habits can help you manage ADHD symptoms. Join our ADHD newsletter and get a free list of science-backed strategies sent to your inbox! Enter your email JOIN NOW Your privacy is important to us What is the dosage of guanfacine for ADHD? Guanfacine ER or Intuniv should be taken as a tablet by mouth. The tablets should not be crushed, chewed, or broken before swallowing. For Intuniv, your child can often be given a 1-milligram (mg) dose once daily. However, doctors will often begin with the smallest, most effective dose, taking various criteria into consideration. The typical dosage of guanfacine IR for ADHD is 0.5 mg to 1 mg between one and four times daily. It’s important that you speak with your child’s doctor if you want to stop the medication, as discontinuing may require slow tapering to avoid a rise in blood pressure. The dose may be slowly increased over the next 4 to 7 weeks based on the child’s age and weight. During this time, your child will be monitored for any side effects. The maximum dosage is between 5 mg and 7 mg per day, depending on the child’s weight and age. It’s important to note that guanfacine IR and Intuniv cannot be substituted for each other on a mg-per-mg basis. While both drugs contain guanfacine, there are differences in how the pills are formulated. Extended-release medications like Intuniv are released slowly into the body over time. Guanfacine IR is an immediate-release drug that releases the medication into the body right away. Your child’s heart rate and blood pressure will be measured before treatment begins and periodically during the treatment period. What are the side effects of guanfacine? There are some risks with taking guanfacine. The first is potential side effects, and the second is drug interactions. The most commonly reported side effects of guanfacine include: drowsiness headache dry mouth stomach pain constipation fatigue sedation seizures Serious side effects may include: lower-than-normal blood pressure (hypotension) increased blood pressure if the medication is stopped suddenly (hypertension) weight gain fainting slower heart rate trouble breathing, which can quickly become a medical emergency Warning Guanfacine can also interact with other medications, including herbal supplements and over-the-counter medications. Taking guanfacine with any of the following drugs or classes of medications may require adjustments to dosage: CYP3A4/5 inhibitors, such as ketoconazole, which includes grapefruit and grapefruit juice CYP3A4 inducers, such as rifampin (Rifadin), which is an antibiotic valproic acid (Depakene), an anticonvulsant medication medications used to treat hypertension (antihypertensive drugs) central nervous system depressants, including alcohol, benzodiazepines, opioids, and antipsychotics Use caution if you have a history of fainting, heart disease, low blood pressure, depression, or heart block. This medication may complicate your condition or make its symptoms worse. ADVERTISEMENT Find an in-network psychiatrist in 10 minutes Skip the months-long waitlists. Talkiatry matches you with licensed psychiatrists who take your insurance and fit your busy schedule. GET STARTED Guanfacine vs. other treatments The most commonly used medications for ADHD are in a class of compounds known as stimulants. These work by increasing dopamine and norepinephrine in the brain. They include: methylphenidate (Ritalin, Concerta) amphetamine-dextroamphetamine (Adderall) dextroamphetamine (Dexedrine) lisdexamfetamine (Vyvanse) However, some people with ADHD cannot tolerate stimulants. In these cases, a doctor may prescribe nonstimulant medications like guanfacine. Taking these will not increase your dopamine levels, but this means it can take longer to see results. These medications are also less addictive. Other than guanfacine, which is approved for children and adolescents, there are two nonstimulant drugs FDA approved to treat ADHD in adults: atomoxetine (Strattera) clonidine (Kapvay) Other ADHD medications Find out more about other ADHD medications that you might discuss with your doctor and healthcare team. You can also learn more about determining if your current ADHD medication works and what you may discuss with your healthcare team about possible care plan changes. The takeaway Both Guanfacine IR and Intuniv contain guanfacine and may be used to treat ADHD in children, but only Intuniv is FDA approved for this purpose. Though both Guanfacine IR and Intuniv contain guanfacine, their formulations differ, so be sure to talk with your doctor about your child’s dosage and treatment. ADVERTISEMENT Explore online talk therapy options 4.5 FROM TRUSTPILOT Therapy via messaging, phone, or live video chat Great for a large network of licensed therapists Flexible cancellation at any time $65 to 90/week, billed every 4 weeks 20% off your first month LEARN MORE Master your ADHD with proven methods Get daily bite-size strategies that actually work. Learn from expert-designed, science-based programs. Connect with thousands of supportive members. LEARN MORE 4.6 (2.1K+) FROM App Store FDA-approved medication for anxiety and depression available Unlimited check-ins via online messaging $49 to $85/month, billed monthly $49 per month or less than $2 per day. LEARN MORE Combine therapy and prescription management Therapy via messaging, phone, or live video chat Psychiatry and prescription management available Covered members pay an average copay of $30 LEARN MORE How we reviewed this article: Sources History Healthline has strict sourcing guidelines and relies on peer-reviewed studies, academic research institutions, and medical journals and associations. We only use quality, credible sources to ensure content accuracy and integrity. You can learn more about how we ensure our content is accurate and current by reading oureditorial policy. Arnsten AFT. (2020). Guanfacine’s mechanism of action in treating prefrontal cortical disorders: Successful translation across species. Attention-deficit/hyperactivity disorder (ADHD): Parents' medication guide. (n.d.). Bernknopf A, et al. (2011). Guanfacine (Intuniv) for attention-deficit/hyperactivity disorder Giovannitti JA, et al. (2015). Alpha-2 adrenergic receptor agonists: A review of current clinical applications. Guanfacine (Intuniv). (n.d.). Intuniv (guanfacine) extended-release tablets, for oral use. (2019). Iwanami A, et al. (2020). Safety and efficacy of guanfacine extended-release in adults with attention-deficit/hyperactivity disorder: an open-label, long-term, phase 3 extension study. Peterson B, et al. (2024). Treatments for ADHD in children and adolescents: A systematic review. Ota T, et al. (2021). Evaluating guanfacine hydrochloride in the treatment of attention deficit hyperactivity disorder (ADHD) in adult patients: Design, development and place in therapy. Yu S, et al. (2023). Guanfacine for the treatment of attention-deficit hyperactivity disorder: An updated systematic review and meta-analysis. Our experts continually monitor the health and wellness space, and we update our articles when new information becomes available. Current Version Mar 20, 2025 Written By The Healthline Editorial Team Edited By Mike Hoskins Copy Edited By Delores Smith-Johnson Feb 27, 2025 Medically Reviewed By Alexandra Perez, PharmD, MBA, BCGP VIEW ALL HISTORY Share this article Medically reviewed by Alexandra Perez, PharmD, MBA, BCGP — Written by The Healthline Editorial Team — Updated on March 20, 2025 Was this article helpful? YesNo Read this next 9 Tips for Managing ADHD Mood Changes Medically reviewed by Nicole Washington, DO, MPH If you're actively managing your ADHD, you're likely interested in improving your confidence and enhancing how you interact with the world. We'll show… READ MORE Can Tenex Be Used to Treat ADHD? Medically reviewed by Alan Carter, Pharm.D. Wondering if Tenex can be used to treat ADHD? Here’s a look at Tenex and how it’s sometimes used off-label to treat ADHD. READ MORE Concerta vs. Adderall: A Side-by-Side Comparison Concerta and Adderall both treat ADHD, and they’re similar in many ways. However, some differences exist — we'll lay them out for you in detail. READ MORE Is There a Connection Between ADHD and Tics? ADHD and tic disorders like Tourette syndrome are two commonly associated conditions, but one doesn't cause the other. Learn more about the link, ADHD… READ MORE What to Know About the Relationship Between ADHD and Skin Picking Disorder Skin picking disorder can occur alongside ADHD. It causes individuals to repeatedly pick at and damage their skin. Learn more here. READ MORE What is the All-or-Nothing Mindset in ADHD, and How Do I Change It? Medically reviewed by Nicole Washington, DO, MPH Some people with ADHD may have perfectionist tendencies. This all-or-nothing mindset can have negative effects, but you can change this pattern over… READ MORE Is There a Link Between Night Terrors and ADHD? AHDHD is a neurodevelopmental condition that causes symptoms such as inattention or hyperactivity. It has been associated with sleep difficulties such… READ MORE Can ADHD Medications Help With Co-Occuring Anxiety? Medically reviewed by Nicole Washington, DO, MPH ADHD and anxiety are common co-occurring conditions. Certain ADHD medications may help with co-occurring anxiety, but others may worsen symptoms… READ MORE Is Nail Biting a Symptom of ADHD? Nail biting isn't a symptom of ADHD, but it's a behavior that's commonly linked to people with ADHD. Learn more. READ MORE Can ADHD Cause Rejection Sensitivity? Medically reviewed by Nicole Washington, DO, MPH ADHD may cause rejection sensitivity because it affects how your brain processes emotions and responds to feedback. READ MORE Get our wellness newsletter Filter out the noise and nurture your inbox with health and wellness advice that’s inclusive and rooted in medical expertise. SIGN UP Your privacy is important to us © 2025 Healthline Media LLC. All rights reserved. Healthline Media is an RVO Health Company. Our website services, content, and products are for informational purposes only. Healthline Media does not provide medical advice, diagnosis, or treatment. See additional information. About Us Contact Us Terms of Use Privacy Policy Privacy Settings Advertising Policy Health Topics Sitemap Medical Affairs Content Integrity Newsletters Your Privacy Choices © 2025 Healthline Media LLC. All rights reserved. Healthline Media is an RVO Health Company. Our website services, content, and products are for informational purposes only. Healthline Media does not provide medical advice, diagnosis, or treatment. See additional information. © 2025 Healthline Media LLC. All rights reserved. Healthline Media is an RVO Health Company. Our website services, content, and products are for informational purposes only. Healthline Media does not provide medical advice, diagnosis, or treatment. See additional information. AboutCareersAdvertise with us OUR BRANDS HealthlineMedical News TodayGreatistPsych CentralBezzy
14920
https://ocw.mit.edu/courses/2-161-signal-processing-continuous-and-discrete-fall-2008/resources/lecture_14/
MIT OpenCourseWare 2.161 Signal Processing: Continuous and Discrete Fall 200 8 For information about citing these materials or our Terms of Use, visit: n n n F ( z ) Y Z Z h n ∑ ∑ 1 Massachusetts Institute of Technology Department of Mechanical Engineering 2.161 Signal Processing - Continuous and Discrete Fall Term 2008 Lecture 14 1 Reading: • Proakis & Manolakis, Chapter 3 (The z-transform) • Oppenheim, Schafer & Buck, Chapter 3 (The z-transform) The Discrete-Time Transfer Function Consider the discrete-time LTI system, characterized by its pulse response {hn}: c o n vo lu tio n { f n } LT Isystem {y }= {f Ä h } (z)=F(z)H (z) m ultiplication We saw in Lec. 13 that the output to an input sequence {fn} is given by the convolution sum: ∞ ∞ yn = fn ⊗ hn = fkhn−k = hkfn−k, k=−∞ k=−∞ where {hn} is the pulse response. Using the convolution property of the z-transform we have at the output Y (z) = F (z)H(z) where F (z) = Z { fn}, and H(z) = Z { hn}. Then Y (z) H(z) = F (z) is the discrete-time transfer function , and serves the same role in the design and analysis of discrete-time systems as the Laplace based transfer function H(s) does in continuous systems. 1copyright ©cD.Rowell 2008 14–1 ( ) ( ) 2 In general, for LTI systems the transfer function will be a rational function of z, and may be written in terms of z or z−1, for example N (s) b0 + b1z−1 + b2z−2 + . . . + bM z−M H(z) = = D(s) a0 + a1z−1 + a2z−2 + . . . + aN z−N where the bi, i = 0 , . . . , m , ai, i = 0 , . . . , n are constant coefficients. The Transfer Function and the Difference Equation As defined above, let Y (z) b0 + b1z−1 + b2z−2 + . . . + bM z−M H(z) = = F (z) a0 + a1z−1 + a2z−2 + . . . + aN z−N and rewrite as a0 + a1z −1 + a2z −2 + . . . + aN z −N Y (z) = b0 + b1z −1 + b2z −2 + . . . + bM z −M F (z) If we apply the z-transform time-shift property Z {fn−k} = z −kF (z) term-by-term on both sides of the equation, (effectively taking the inverse z-transform) a0yn + a1yn−1 + a2yn−2 + . . . + aN yn−N = b0fn + b1fn−1 + b2fn−2 + . . . + bM fn−M and solve for yn 1 1 yn = − (a1yn−1 + a2yn−2 + . . . + aN yn−N ) + (b0fn + b1fn−1 + b2fn−2 + . . . + bM fn−M ) a0 a0 N ( ) M ( )∑ −ai ∑ bi = yn−i + fn−i a0 a0i=1 i=0 which is in the form of a recursive linear difference equation as discussed in Lecture 13. The transfer function H(z) directly defines the computational dif ference equation used to implement a LTI system. Example 1 Find the difference equation to implement a causal LTI system with a transfer function (1 − 2z−1)(1 − 4z−1) H(z) = z(1 − 1 2 z−1) Solution: z−1 − 6z−2 + 8 z−3 H(z) = 1 − 1 z−1 2 14–2 from which 1 yn − yn−1 = fn−1 − 6fn−2 + 8 fn−3, 2 or 1 yn = yn−1 + ( fn−1 − 6fn−2 + 8 fn−3). 2 The reverse holds as well: if we are given the difference equation, we can define the system transfer function. Example 2 Find the transfer function (expressed in powers of z) for the difference equation yn = 0 .25 yn−2 + 3 fn − 3fn−1 and plot the system poles and zeros on the z-plane. Solution: Taking the z-transform of both sides Y (z) = 0 .25 z −2Y (z) + 3 F (z) − 3z −1F (z) and reorganizing Y (z) 3(1 − z−1) 3z(z − 1) H(z) = = = F (z) 1 − 0.25 z−2 z2 − 0.25 which has zeros at z = 0 , 1 and poles at z = −0.5, 0.5: z p l a n e oo xx 10 . 5- 0 . 5 Á { z }  { z } 14–3 o 1 Á { z } { 3 Introduction to z-plane Stability Criteria The stability of continuous time systems is governed by pole locations - for a system to be BIBO stable all poles must lie in the l.h. s-plane. Here we do a preliminary investigation of stability of discrete-time systems, based on z-plane pole locations of H(z). Consider the pulse response hn of the causal system with z 1 H(z) = = z − a 1 − az −1 with a single real pole at z = a and with a difference equation yn = ay n−1 + fn. z -p la n e a<-1 -1<a<0 0<a<1 a>1 x  { z } p o le lo c a tio n Clearly the pulse response is 1 n = 0 hn = an n ≥ 1 The nature of the pulse response will depend on the pole location: 0 < a < 1: In this case hn = an will be a decreasing function of n and lim n→∞ hn = 0 and the system is stable . a = 1 : The difference equation is yn = yn−1 + fn (the system is a summer and the impulse response is hn = 1, (non-decaying). The system is marginally stable . a > 1: In this case h = an will be a increasing function of n and lim n→∞ h = ∞ and the n n system is unstable . −1 < a < 0: In this case hn = an will be a oscillating but decreasing function of n and lim n→∞ hn = 0 and the system is stable . a = −1: The difference equation is yn = −yn−1 + fn and the impulse response is hn = ( −1) n , that is a pure oscillator. The system is marginally stable . a < −1: In this case hn = an will be a oscillating but increasing function of n and lim n→∞ |hn| = ∞ and the system is unstable . 14–4  { z } w w- 4 This simple demonstration shows that this system is stable only for the pole position −1 < a < 1. In general for a system ∏M (z − zk) H(z) = K k=1 ∏N (z − pk)k=1 having complex conjugate poles (pk) and zeros (zk) : A discrete-time system will be stable only if all of the poles of its transfer function H(z) lie within the unit circle on the z-plane. The Frequency Response of Discrete-Time Systems Consider the response of the system H(z) to an infinite complex exponential sequence fn = A ej ωn = A cos( ωn ) + jA sin( ωn ), where ω is the normalized frequency (rad/sample). The response will be given by the con volution ∞ ∞∑ ∑ yn = hkfn−k = hk (A ej ω(n−k) ) k=−∞ k=−∞ ( )∞∑ = A hk e−j ωk ej ωn k=−∞ = AH (ej ω)ej ωn where the frequency response function H(ej ω) is H(ej ω) = H(z)| j ωz=e that is The frequency response function of a LTI discrete-time system is H(z) evaluated on the unit circle - provided the ROC includes the unit circle. For a stable causal system this means there are no poles lying on the unit circle. w in cre z - p l a n e Á { z } 1 - 1 j 1 - j 1 w e j w w = 0 = p a s i n g = - p 0 < w < p p < w < 0 14–5  { z } 0 ∑  { z } N Alternatively, the frequency response may be based on a physical frequency Ω associated with an implied sampling interval ΔT , and H(ej ΩΔ T ) = H(z)| j ΩΔ Tz=e which is again evaluated on the unit circle, but at angle ΩΔ T . W in cre N yquistfrequenc W = p /D z - p l a n e Á { z } 1 - 1 j 1 - j 1 W D T e j W D T W = 0 T a s i n g < W < p / D T < W < 0 T y W = - p /D - p /D T From the definition of the DTFT based on a sampling interval ΔT ∞ H ∗ (jΩ) = h −mjn ΩΔ T = H(z)|n e z= e −mjn ΩΔ T n=0 we can define the mapping between the imaginary axis in the s-plane and the unit-circle in the z-plane s = j Ω o ←→ z = e j Ω oΔT z p l a n e Á { z } 1- 1 j 1 - j 1 W D T e j W D T W = 0 W = p / D T W = - p / D T N y q u i s t f r e q u e n c y s - p l a n e j p / D T - j p / D T y q u i s t f r e q u e n c y j W s j W o o m a p p i n g " p r i m a r y " s t r i p The periodicity in H( e j ΩΔ T ) can be clearly seen, with the “primary” strip in the s-plane (defined by −π/ ΔT < Ω < π/ ΔT ) mapping to the complete unit-circle. Within the primary strip, the l.h. s-plane maps to the interior of the unit circle in the z-plane, while the r.h. s-plane maps to the exterior of the unit-circle. 14–6 ∮ ∑ 5 Aside: We use the argument to differentiate between the various classes of transfer functions: H(s) H(jΩ) H(z) H( e j ω)     Continuous Continuous Discrete Discrete Transfer Frequency Transfer Frequency Function Response Function Response The Inverse z-Transform The formal definition of the inverse z-transform is as a contour integral in the z-plane, 1 ∮ ∞ F (z)z n−1 dz 2πj −∞ where the path is a ccw contour enclosing all of the poles of F (z). Cauchy’s residue theorem states 1 ∞ ∑ F (z) d z = Res [F (z), p k] 2πj −∞ k where F (z) has N distinct poles pk, k = 1 , . . . , N and ccw path lies in the ROC. For a simple pole at z = zo Res [ F (z), z o] = lim ( z − zo)F (z), z→zo and for a pole of multiplicity m at z = zo 1 dm−1 Res [ F (z), z o] = lim (z − zo)mF (z) z→zo (m − 1)! dzm−1 The inverse z-transform of F (z) is therefore fn = Z−1 {F (z)} = Res [F (z)z n−1 , p k ] . k Example 3 A first-order low-pass filter is implemented with the difference equation yn = 0 .8yn−1 + 0 .2fn. Find the response of this filter to the unit-step sequence {un}. 14–7 [ ] [ ] Solution: The filter has a transfer function Y (z) 0.2 0.2z H(z) = = = F (z) 1 − 0.8z−1 z − 0.8 The input {fn} = {un} has a z-transform z F (z) = z − 1 so that the z-transform of the output is 0.2z2 Y (z) = H(z)U (z) = (z − 1)( z − 0.8) and from the Cauchy residue theorem yn = Res Y (z)z n−1 , 1 + Res Y (z)z n−1 , 0.8 = lim( z − 1) Y (z)z n−1 + lim (z − 0.8) Y (z)z n−1 z→1z→0.8 0.2zn+1 0.2zn+1 = lim + lim z→1 z − 0.8 z→0.8 z − 1 = 1 − 0.8n+1 which is shown below 0 2 4 6 8 1012141618 n(sam ples) 0 0 . 2 0 . 4 0 . 6 0 . 8 1 y n Example 4 Find the impulse response of the system with transfer function 1 z2 z2 H(z) = = = 1 + z−2 z2 + 1 (z + j 1)( z − j 1) 14–8 { Solution: The system has a pair of imaginary poles at z = ±j 1. From the Cauchy residue theorem hn = Z−1 {H(z)} = Res [H(z)z n−1 , j 1 ] + Res [H(z)z n−1 , −j 1 ] n+1 n+1 z z = lim + lim z→j1 z + j 1 z→− j 1 z − j 1 1 1 = (j 1) n+1 − (−j 1) n+1 j 2 j 2 = j n (1 + ( −1) n+1 ) 2 0 n odd hn = (−1) n/ 2 n even = cos( nπ/ 2) where we note that the system is a pure oscillator (poles on the unit circle) with a frequency of half the Nyquist frequency. 2 4 6 8 1 0 1 2 1 4 1 6 1 8 - 1 - 0 . 5 0 0 . 5 1 n y n Example 5 Find the impulse response of the system with transfer function 1 z2 z2 H(z) = = = 1 + 2 z + z−2 z2 + 2 z + 1 (z + 1) 2 Solution: The system has a pair of coincident poles at z = −1. The residue at z = −1 must be computed using 1 dm−1 Res [ F (z), z o] = lim (z − zo)mF (z). z→zo (m − 1)! dzm−1 With m = 2, at z = −1, Res [H(z)z n−1 , −1] = lim 1 d (z − 1) 2H(z)z n−1 z→− 1 (1)! dz d n+1 = lim z z→− 1 dz = (n + 1)( −1) n 14–9 [ ] ( ) ∑ The impulse response is hn = Z −1 {H(z)} = Res H(z)z n−1 , − 1 = ( n + 1)( −1) n . 2 4 6 8 1 0 1 2 1 4 1 6 1 8 - 2 0 - 1 0 0 1 0 2 0 h n n Other methods of determining the inverse z-transform include: Partial Fraction Expansion: This is a table look-up method, similar to the method used for the inverse Laplace transform. Let F (z) be written as a rational function of z−1: ∑M biz−k F (z) = k=0 ∑N −kaizk=0 ∏M −1)(1 − cizk=1 = ∏N (1 − diz−1)k=1 If there are no repeated poles, F (z) may be expressed as a set of partial fractions. N ∑ Ak F (z) = 1 − dkz−1 k=1 where the Ak are given by the residues at the poles Ak = lim (1 − dkz −1)F (z). z→dk Since Z 1 (dk)n un ←→ 1 − dkz−1 N fn = Ak (dk)n un. k=1 14–10 { } { } ∑ Example 6 Find the response of the low-pass filter in Ex. 3 to an input fn = ( −0.5) n Solution: From Ex. 3, and from the z-transform of {fn}, 1 0.2 F (z) = , H(z) = 1 − 0.5z−1 1 − 0.8z−1 so that 0.2 Y (z) = (1 + 0 .5z−1)(1 − 0.8z−1) A1 A2 = + 1 + 0 .5z−1 1 − 0.8z−1 Using residues 0.2 0.1 A1 = lim = z→− 0.5 1 − 0.8z−1 1.3 0.2 0.16 A2 = lim = z→0.8 1 + 0 .5z−1 1.3 and yn = 0.1 Z−1 1 + 0.16 Z−1 1 1.3 1 + 0 .5z−1 1.3 1 − 0.8z−1 0.1 0.16 = (−0.5) n + (0 .8) n 1.3 1.3 Note: (1) If F (z) contains repeated poles, the partial fraction method must be ex tended as in the inverse Laplace transform. (2) For complex conjugate poles – combine into second-order terms. Power Series Expansion: Since ∞ F (z) = fnz −n n=−∞ if F (z) can be expressed as a power series in z−1, the coefficients must be fn. 14–11 ) Example 7 Find Z−1 {log(1 + az −1)}. Solution: F (z) is recognized as having a power series expansion: ∞ a F (z) = log(1 + az −1) = ∑ (−1) n+1 n z −n for |a| < |z| nn=1 Because the ROC defines a causal sequence, the samples fn are ⎧ ⎨0 for n ≤ 0 fn = (−1) n+1 n a⎩ for n ≥ 1. n Polynomial Long Division: For a causal system, with a transfer function written as a rational function, the first few terms in the sequence may sometimes be computed directly using polynomial division. If F (z) is written as N (z−1) −1 −2 −2 F (z) = D(z−1) = f0 + f1z + f2z + f2z + · · · the quotient is a power series in z−1 and the coefficients are the sample values. Example 8 Determine the first few terms of fn for 1 + 2 z−1 F (z) = 1 − 2z−1 + z−2 using polynomial long division. Solution: 1 + 4 z−1 + 7 z−2 + · · · 1 − 2z−1 + z−2 1 + 2 z−1 1 − 2z−1 + z−2 4z−1 − z−2 4z−1 − 8z−2 + 4 z−3 7z−2 − 4z−3 so that 1 + 2 z−1 F (z) = 1 − 2z−1 + z−2 = 1 + 4 z −1 + 7 z −2 + · · · and in this case the general term is fn = 3 n + 1 for n ≥ 0. 14–12 In general, the computation can become tedious, and it may be difficult to recognize the general term from the first few terms in the sequence. 14–13
14921
https://mathematica.stackexchange.com/questions/253618/elementary-number-theory-problem-and-findinstance
equation solving - Elementary number theory problem and FindInstance - Mathematica Stack Exchange Join Mathematica By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google OR Email Password Sign up Already have an account? Log in Skip to main content Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Loading… Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site About Us Learn more about Stack Overflow the company, and our products current community Mathematica helpchat Mathematica Meta your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Home Questions Unanswered AI Assist Labs Tags Chat Users Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more Elementary number theory problem and FindInstance Ask Question Asked 4 years, 1 month ago Modified4 years, 1 month ago Viewed 221 times This question shows research effort; it is useful and clear 2 Save this question. Show activity on this post. Find {m,n}∈Z{m,n}∈Z such that m≠n m≠n and m n=n m m n=n m. This has (unordered) solutions {2,4}{2,4} and {−2,−4}{−2,−4}, as can be easily checked. I'm of course hoping for an analytic approach, but the direct method in Mathematica does not find any solutions: mathematica FindInstance[m^n == n^m \[And] m != n, {m, n}, Integers] I've tried several variants, including Solve, taking Log of both sides, and so on... and none worked. Any suggestions? equation-solving findinstance Share Share a link to this question Copy linkCC BY-SA 4.0 Improve this question Follow Follow this question to receive notifications edited Aug 20, 2021 at 20:26 David G. StorkDavid G. Stork asked Aug 20, 2021 at 19:49 David G. StorkDavid G. Stork 43k 3 3 gold badges 40 40 silver badges 110 110 bronze badges Add a comment| 4 Answers 4 Sorted by: Reset to default This answer is useful 2 Save this answer. Show activity on this post. First we assume m>0, n>0,then the equation is equivalent to Log[m]/m==Log[n]/n. We consider the function f[x]=Log[x]/x. mathematica Solve[D[Log[x]/x, x] == 0, x, Reals] {{x -> E}} mathematica Plot[{Log[x]/x, 1/E, Log/4, Log/2}, {x, 0, 5}, AspectRatio -> 1] It is easy to see that f[x] increasing from 0 0 to E E and decreasing from E E to ∞∞, so we need to set m>=E>=n in such equation. mathematica Solve[{Log[m]/m == Log[n]/n, m >= E >= n}, PositiveIntegers] {{m -> 4, n -> 2}} Share Share a link to this answer Copy linkCC BY-SA 4.0 Improve this answer Follow Follow this answer to receive notifications edited Aug 21, 2021 at 1:09 answered Aug 21, 2021 at 1:03 cvgmtcvgmt 90.7k 6 6 gold badges 112 112 silver badges 193 193 bronze badges 2 Oh... nice. Of course based on the Youtube video, but it was a good idea to map that to code. (✓✓)David G. Stork –David G. Stork 2021-08-21 03:01:14 +00:00 Commented Aug 21, 2021 at 3:01 By the monotonous, m=n is the only solution when {m,n}>E or 0<{m,n}<E ,that is why we only need to consider the case in answer.cvgmt –cvgmt 2021-08-21 05:34:05 +00:00 Commented Aug 21, 2021 at 5:34 Add a comment| This answer is useful 1 Save this answer. Show activity on this post. Here's a weird way to do it with NMinimize - the stuff with Quiet/Check/Boole is to work around the 0^0 and 0^-x issue. I found NMinimize wouldn't avoid these cases even if you added m != n, m != 0, n != 0 to the constraints: ```mathematica f[m_?NumericQ, n_?NumericQ] := Quiet[Check[0/Boole[m != n] + (m^n - n^m)^2, 10^20]] {err, sol} = NMinimize[{f[m, n], m < 0, n < 0}, {n ∈ Integers, m ∈ Integers}] With[{v = Values[sol]}, (f @@ v == 0 && Unequal @@ v)] ( {0., {n -> -4, m -> -2}} True ) ``` Change m < 0, n < 0 to m > 0, n > 0 to get the positive solution. Share Share a link to this answer Copy linkCC BY-SA 4.0 Improve this answer Follow Follow this answer to receive notifications answered Aug 20, 2021 at 20:21 flintyflinty 26.2k 2 2 gold badges 23 23 silver badges 97 97 bronze badges 2 Thanks (+1+1) but I should have mentioned that I solved the problem numerically too. Of course I'm seeking an analytic method. Let's see if someone has a clever approach to this.David G. Stork –David G. Stork 2021-08-20 20:25:33 +00:00 Commented Aug 20, 2021 at 20:25 1 @DavidG.Stork I'm not sure an analytic approach even exists - Diophantine equations are hard and usually boil down to just checking inputs - i.e numerical approaches. See hereflinty –flinty 2021-08-20 20:35:30 +00:00 Commented Aug 20, 2021 at 20:35 Add a comment| This answer is useful 1 Save this answer. Show activity on this post. With variable restrictions up to 100, FindInstance and Reduce do the job. ```mathematica FindInstance[{m^n == n^m, m != n, 0 < m < 100, 0 < n < 100}, {m, n}, Integers] ( {{m -> 2, n -> 4}} ) Reduce[{m^n == n^m, m != n, -100 < m < 0, -100 < n < 0}, {m, n}, Integers] ( (m == -4 && n == -2) || (m == -2 && n == -4) ) ``` Share Share a link to this answer Copy linkCC BY-SA 4.0 Improve this answer Follow Follow this answer to receive notifications answered Aug 20, 2021 at 21:06 Akku14Akku14 17.4k 16 16 silver badges 32 32 bronze badges 1 Hah... halfway between numerical search and algorithmic solution! (+1+1). Let's still wait a while to see if anyone finds an analytic approach. After all, there are Youtube clips on how to solve this problem, but they all involve creativity at the human level.David G. Stork –David G. Stork 2021-08-20 21:13:04 +00:00 Commented Aug 20, 2021 at 21:13 Add a comment| This answer is useful 1 Save this answer. Show activity on this post. Yet an other answer. Plot3D of m^n - n^m == 0 shows, you can divide the (m,n)-area into four quadrants around {m,n} = {E,E}. Three of them can be easily solved. The fourth can be proofen to have only solutions for m == n. ```mathematica Plot3D[{0, m^n - n^m}, {m, 1, 5}, {n, 1, 5}, PlotRange -> 1] Reduce[{0 < n < E, m^n == n^m}, {m, n}, Integers] ( (m == 1 && n == 1) || (m == 2 && n == 2) || (m == 4 && n == 2) ) Reduce[{0 < m < E, m^n == n^m}, {m, n}, Integers] ( (m == 1 && n == 1) || (m == 2 && n == 2) || (m == 2 && n == 4) ) red1 = Reduce[{m^n == n^m, E < n, E < m}, {m, n}, Reals] ( m > E && n == E^-ProductLog[-1, -(Log[m]/m)] ) Reduce[E^-ProductLog[-1, -(Log[m]/m)] == m, m, Integers] ( m [Element] Integers && m >= 3 ) ``` For all m >=3, E^-ProductLog[...] is equal m, therefore n == m. Share Share a link to this answer Copy linkCC BY-SA 4.0 Improve this answer Follow Follow this answer to receive notifications answered Aug 21, 2021 at 5:15 Akku14Akku14 17.4k 16 16 silver badges 32 32 bronze badges 1 Helpful... thanks. (+1+1)David G. Stork –David G. Stork 2021-08-21 15:01:37 +00:00 Commented Aug 21, 2021 at 15:01 Add a comment| Your Answer Thanks for contributing an answer to Mathematica Stack Exchange! Please be sure to answer the question. Provide details and share your research! But avoid … Asking for help, clarification, or responding to other answers. Making statements based on opinion; back them up with references or personal experience. Use MathJax to format equations. MathJax reference. To learn more, see our tips on writing great answers. Draft saved Draft discarded Sign up or log in Sign up using Google Sign up using Email and Password Submit Post as a guest Name Email Required, but never shown Post Your Answer Discard By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions equation-solving findinstance See similar questions with these tags. The Overflow Blog The history and future of software development (part 1) Getting Backstage in front of a shifting dev experience Featured on Meta Spevacus has joined us as a Community Manager Introducing a new proactive anti-spam measure Related 1Finding formulas from other formulas and relations? 6FindInstance fails to find integer solutions to glaringly obvious problems 0How to force NMinimize to find the correct global minimum for equation with many local minima? 0Solving integer equation with an oddness (or related) constraint 5Solving an equation over the integers 0Finding all solutions to coupled algebraic equations without exhaustive enumeration 3Problem with solving a linear ODE with a parameter 3FunctionDomain and FindInstance problem Hot Network Questions What NBA rule caused officials to reset the game clock to 0.3 seconds when a spectator caught the ball with 0.1 seconds left? alignment in a table with custom separator How to locate a leak in an irrigation system? Is it possible that heinous sins result in a hellish life as a person, NOT always animal birth? Lingering odor presumably from bad chicken ICC in Hague not prosecuting an individual brought before them in a questionable manner? Triangle with Interlacing Rows Inequality [Programming] Why include unadjusted estimates in a study when reporting adjusted estimates? Is encrypting the login keyring necessary if you have full disk encryption? Xubuntu 24.04 - Libreoffice Any knowledge on biodegradable lubes, greases and degreasers and how they perform long term? Checking model assumptions at cluster level vs global level? An odd question What's the expectation around asking to be invited to invitation-only workshops? Is there a way to defend from Spot kick? Exchange a file in a zip file quickly Numbers Interpreted in Smallest Valid Base Another way to draw RegionDifference of a cylinder and Cuboid Can a cleric gain the intended benefit from the Extra Spell feat? The rule of necessitation seems utterly unreasonable Making sense of perturbation theory in many-body physics в ответе meaning in context Bypassing C64's PETSCII to screen code mapping I have a lot of PTO to take, which will make the deadline impossible Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. lang-mma Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Mathematica Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547 Mathematica is a registered trademark of Wolfram Research, Inc. While the mark is used herein with the limited permission of Wolfram Research, Stack Exchange and this site disclaim all affiliation therewith. By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Cookies Details‎ Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Cookies Details‎ Targeting Cookies [x] Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. Cookies Details‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Necessary cookies only Confirm my choices
14922
https://www.periodicos.capes.gov.br/index.php/acervo/buscador.html?task=detalhes&id=W1964209388
Entrar Lembrar minha senha Receber meu e-mail de confirmação Portal de Periódicos da CAPES Você tem acesso ao conteúdo gratuito do Portal de Periódicos da CAPES Sua pesquisa será realizada no conteúdo gratuito disponível no acervo do Portal de Periódicos da CAPES. O conteúdo assinado com as editoras científicas está disponível para os IPs identificados das instituições participantes. Caso você esteja acessando fora da rede da sua instituição, é necessário efetuar o login na Comunidade Acadêmica Federada (CAFe). Portal de Periódicos da CAPES Review of absorption and adsorption in the hydrogen–palladium system 2006; Elsevier BV; Volume: 310; Linguagem: Inglês 10.1016/j.apcata.2006.05.012 1873-3875 Linda L. Jewell, Burtron H. Davis, Catalytic Processes in Materials Science The hydrogen–palladium system has been the subject of much study, both experimentally and computationally. In this review article the authors have set out to draw a comparison between the experimentally determined thermodynamic data for this system and the calculated energies, in order to attempt to bridge the gap between computational chemistry and experimental work and so gain insight into the absorption and adsorption of hydrogen on palladium. Rigorous thermodynamic analysis of the data for the absorption of hydrogen into palladium metal shows that although constant volume measurements have been made, the analysis that has been applied in the literature in several instances is valid only for a constant pressure system. Re-analysis of the data has lead to a heat of formation for β-palladium hydride which is not a function of composition and a weak function of temperature. Values for the internal energy of absorption of −36.7, −35.2 and −34.4 kJ/mol of H2 were obtained at 0 °C and in the temperature ranges from 200 to 313 °C and from 366 to 477 °C, respectively. There is a good agreement between these values and the calculated values. The implicit assumptions that underpin the integrated form of the Clausius–Clapeyron equation are that an isobaric system is being analyzed, and that the enthalpy is not a function of composition or temperature. Since heat of adsorption is known to be a function of surface coverage and is generally measured in a constant volume system, the application of the integrated Clausius–Clapeyron equation to determine the enthalpy of adsorption as a function of surface coverage has been questioned and an alternative thermodynamic analysis has been proposed that enables one to calculate the differential change in internal energy of adsorption with surface coverage. It has been found that the internal energy of adsorption varies with increasing surface coverage in a similar manner to the way in which internal energy varies as two atoms approach each other. It is noted that the variation in internal energy with surface coverage (0.1 < θ < 0.94) calculated in this work is of the order of 100 J/mol, while the heat of adsorption in the literature is of the order of −87,000 J/mol. Thus, except at very high coverages, the change in internal energy or enthalpy of adsorption with changes in surface coverage is very small compared to the overall heat of adsorption. The computationally determined energies of adsorption do not reflect this trend and appear to under estimate the electrostatic repulsion (or over estimate the attraction) between gas phase molecules and atoms that are already adsorbed on the surface for this system.
14923
https://flexbooks.ck12.org/cbook/prep-for-8th-grade-math/section/3.2/related/lesson/algebraic-equations-to-represent-words-alg-i-hnrs/
Algebraic Equations to Represent Words | CK-12 Foundation AI Teacher Tools – Save Hours on Planning & Prep. Try it out! Skip to content What are you looking for? Search Math Grade 6 Grade 7 Grade 8 Algebra 1 Geometry Algebra 2 PreCalculus Science Earth Science Life Science Physical Science Biology Chemistry Physics Social Studies Economics Geography Government Philosophy Sociology Subject Math Elementary Math Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Interactive Math 6 Math 7 Math 8 Algebra I Geometry Algebra II Conventional Math 6 Math 7 Math 8 Algebra I Geometry Algebra II Probability & Statistics Trigonometry Math Analysis Precalculus Calculus What's the difference? Science Grade K to 5 Earth Science Life Science Physical Science Biology Chemistry Physics Advanced Biology FlexLets Math FlexLets Science FlexLets English Writing Spelling Social Studies Economics Geography Government History World History Philosophy Sociology More Astronomy Engineering Health Photography Technology College College Algebra College Precalculus Linear Algebra College Human Biology The Universe Adult Education Basic Education High School Diploma High School Equivalency Career Technical Ed English as 2nd Language Country Bhutan Brasil Chile Georgia India Translations Spanish Korean Deutsch Chinese Greek Polski Explore EXPLORE Flexi A FREE Digital Tutor for Every Student FlexBooks 2.0 Customizable, digital textbooks in a new, interactive platform FlexBooks Customizable, digital textbooks Schools FlexBooks from schools and districts near you Study Guides Quick review with key information for each concept Adaptive Practice Building knowledge at each student’s skill level Simulations Interactive Physics & Chemistry Simulations PLIX Play. Learn. Interact. eXplore. CCSS Math Concepts and FlexBooks aligned to Common Core NGSS Concepts aligned to Next Generation Science Standards Certified Educator Stand out as an educator. Become CK-12 Certified. Webinars Live and archived sessions to learn about CK-12 Other Resources CK-12 Resources Concept Map Testimonials CK-12 Mission Meet the Team CK-12 Helpdesk FlexLets Know the essentials. Pick a Subject Donate Sign InSign Up Back To Equivalent Expressions Within a ContextBack 3.2 Algebraic Equations to Represent Words Written by:Brenda Meery |Kaitlyn Spong Fact-checked by:The CK-12 Editorial Team Last Modified: Sep 01, 2025 The sum of two consecutive even integers is 34. What are the integers? Algebraic Expressions to Represent Words To translate a problem from words into an equation, look for key words to indicate the operation used in the problem. Once the equation is known, to solve the problem you use the same rules as when solving equations with one variable. Isolate the variable and then solve for it making sure that whatever you do to one side of the equals sign you do to the other side. Drawing a diagram is also helpful in solving some word problems. Let's practice by writing out in algebraic expression for the following word problems: Two consecutive integers have a sum of 173. What are those numbers? Let x be integer 1. Then x+1= integer 2 (Because they are consecutive, they must be separated by only one number. For example: 1, 2, 3, 4,... all are consecutive.) Translate the sentence into an equation and solve: x+(x+1)=173 x+x+1=173(Remove the parentheses)2 x+1=173(Combine like terms)2 x+1−1=173−1(Subtract 1 from both sides to isolate the variable)2 x=172(Simplify)2 x 2=172 2(Divide both sides by 2 to solve for the variable)x=86(Simplify) Therefore the first integer is 86 and the second integer is (86+1)=87. Check: 86+87=173. When a number is subtracted from 35, the result is 11. What is the number? Let x be the number. Translate the sentence into an equation and solve: 35−x=11 35−35−x=11−35(Subtract 35 from both sides to isolate the variable)−x=−24(Simplify)−x−1=−24−1(Divide both sides by−1 to solve for the variable)x=24(Simplify) Therefore the number is 24. When one third of a number is subtracted from one half of a number, the result is 14. What is the number? Let x be the number. Translate the sentence into an equation and solve: 1 2 x−1 3 x=14 You need to get a common denominator in this problem in order to solve it. For this problem, the denominators are 2, 3, and 1. The LCD is 6. Therefore multiply the first fraction by 3 3, the second fraction by 2 2, and the third number by 6 6. (3 3)1 2 x−(2 2)1 3 x=(6 6)14 3 6 x−2 6 x=84 6(Simplify) Now that the denominator is the same, the equation can be simplified to be: 3 x−2 x=84 x=84(Combine like terms) Therefore the number is 84. Examples Example 1 Earlier, you were given an algebraic expression "The sum of two consecutive even integers is 34" and asked to find the integers described by the statement. Let x= integer 1 Then x+2= integer 2 (Because they are even, they must be 2 numbers apart. For example: 2, 4, 6, 8,... are all consecutive even numbers.) Translate the sentence into an equation and solve: x+(x+2)=34 x+x+2=34(Remove the parentheses)2 x+2=34(Combine like terms)2 x+2−2=34−2(Subtract 2 from both sides to isolate the variable)2 x=32(Simplify)2 x 2=32 2(Divide both sides by 2 to solve for the variable)x=16(Simplify) Therefore the first integer is 16 and the second integer is (16+2)=18. Note that 16+18 is indeed 34. Example 2 What is a number that when doubled would equal sixty? The number is 30. 2 x=60 2 x 2=60 2(Divide by 2 to solve for the variable)x=30(Simplify) Example 3 The sum of two consecutive odd numbers is 176. What are these numbers? The first number is 87 and the second number is (87+2)=89. x+(x+2)=176 x+x+2=176(Remove parentheses)2 x+2=176(Combine like terms)2 x+2−2=176−2(Subtract 2 from both sides of the equals sign to isolate the variable)2 x=174(Simplify)2 x 2=174 2(Divide by 2 to solve for the variable)x=87 Example 4 The perimeter of a square frame is 48 in. What are the lengths of each side? The side length is 12 inches. s+s+s+s=48(Write initial equation with four sides adding to the perimeter)4 s=48(Simplify)4 s 4=48 4(Divide by 4 to solve for the variable)s=12 Review The sum of two consecutive numbers is 159. What are these numbers? The sum of three consecutive numbers is 33. What are these numbers? A new computer is on sale for 30% off. If the sale price is $500, what was the original price? Jack and his three friends are sharing an apartment for the next year while at university (8 months). The rent costs $1200 per month. How much does Jack have to pay if they split the cost evenly? You are designing a triangular garden with an area of 168 square feet and a base length of 16 feet. What would be the height of the triangular garden shape? If four times a number is added to six, the result is 50. What is that number? This week, Emma earned ten more than half the number of dollars she earned last week babysitting. If this week, she earned 100 dollars, how much did she earn last week? Three is twenty-one divided by the sum of a number plus five. What is the number? Five less than three times a number is forty-six. What is the number? Hannah had $237 in her bank account at the start of the summer. She worked for four weeks and now she has $1685 in the bank. How much did Hannah make each week in her summer job? The formula to estimate the length of the Earth's day in the future is found to be twenty–four hours added to the number of million years divided by two hundred and fifty. In five hundred million years, how long will the Earth's day be? Three times a number less six is one hundred twenty-six. What is the number? Sixty dollars was two-thirds the total money spent by Jack and Thomas at the store. How much did they spend total? Ethan mowed lawns for five weekends over the summer. He worked ten hours each weekend and each lawn takes an average of two and one-half hours. How many lawns did Ethan mow? The area of a rectangular pool is found to be 280 square feet. If the base length of the pool is 20 feet, what is the width of the pool? A cell phone company charges a base rate of $10 per month plus 5¢ per minute for any long distance calls. Sandra gets her cell phone bill for $21.20. How many long distance minutes did she use? Review (Answers) Click HERE to see the answer key or go to the Table of Contents and click on the Answer Key under the 'Other Versions' option. Back to Algebraic Equations to Represent Words Ask me anything! CK-12 Foundation is a non-profit organization that provides free educational materials and resources. FLEXIAPPS ABOUT Our missionMeet the teamPartnersPressCareersSecurityBlogCK-12 usage mapTestimonials SUPPORT Certified Educator ProgramCK-12 trainersWebinarsCK-12 resourcesHelpContact us BYCK-12 Common Core MathK-12 FlexBooksCollege FlexBooksTools and apps CONNECT TikTokInstagramYouTubeTwitterMediumFacebookLinkedIn v2.11.10.20250923073248-4b84c670be © CK-12 Foundation 2025 | FlexBook Platform®, FlexBook®, FlexLet® and FlexCard™ are registered trademarks of CK-12 Foundation. Terms of usePrivacyAttribution guide Curriculum Materials License Student Sign Up Are you a teacher? Sign up here Sign in with Google Having issues? Click here Sign in with Microsoft Sign in with Apple or Sign up using email By signing up, I confirm that I have read and agree to the Terms of use and Privacy Policy Already have an account? Sign In Adaptive Practice I’m Ready to Practice! Get 10 correct to reach your goal Estimated time to complete: 13 min Start Practice Save this section to your Library in order to add a Practice or Quiz to it. Title (Edit Title)39/ 100 Save Go Back This lesson has been added to your library. Got It No Results Found Your search did not match anything in . Got It Searching in: CK-12 Looks like this FlexBook 2.0 has changed since you visited it last time. We found the following sections in the book that match the one you are looking for: Go to the Table of Contents Ok Are you sure you want to restart this practice? Restarting will reset your practice score and skill level.
14924
https://www.boddlelearning.com/1st-grade/addition-strategy-making-10
Teaching Resources 1st Grade Addition Strategy (Making 10) Learning about addition strategies for making 10 is a first grade, Common Core math skill: 1.OA.6. Below we show two videos that demonstrate this standard. Then, we provide a breakdown of the specific steps in the videos to help you teach your class. Prior Learnings Your students should be familiar with the Kindergarten skill of understanding the number pairs that equal 10 and knowing all decompositions (e.g. 5=4+1, 5=2+3) of numbers below 10. This skill builds a foundation for strategy development, the understanding of place values, and properties of operations (K.OA.3-4). Your students should also understand that ten 1’s plus more 1’s are considered “teens” (K.NBT.1). Future Learnings Understanding how to perform addition and subtraction within 20 will enable your students to perform similar skills up to 100, eventually extending those skills to work with larger numbers and solve two-step word problems (2.OA.1). They will also be able to apply this skill with problems in a variety of contexts involving length, picture graphs and bar graphs (2.NBT.5). Common Core Standard: 1.OA.6 - Add and subtract within 20, demonstrating fluency for addition and subtraction within 10 Students who understand this principle can: Accurately and efficiently add within 10. Accurately and efficiently subtract within 10. Find hard to recall sums and differences using strategies: counting on, making ten, and doubles. Demonstrate or explain their thinking. ‍2 Videos to Help You Teach Common Core Standard: 1.OA.6 Below we provide and breakdown two videos to help you teach your students this standard. ‍Video 1: Making 10 with Miss D. Gunn Miss D. Gunn demonstrates the “Making 10” strategy through a single problem. This video is used as a resource for parents and teachers, rather than a video to show students in class. She provides a list of materials you can give your students so they can practice this method and/or follow along with the video. List of materials (optional): Paper and Sheet Protectora. Used as a dry erase board. Dry Erase Markers (different colors) Flash Cardsa. Addition cards with at least one add-in higher than 5 and less than 9. The addition problem she uses to demonstrate the “Making 10” strategy is 8 + 5 = ?. Your students can draw circles matching each number in groups of five. a. Eight circles under the 8 and five circles under the 5. Then, identify how many circles are needed to make 10. “Steal” the circles from the other add-in and draw a circle around them. Write 10 underneath to start the new 10 addition equation.10 + __ = ? Count how many circles are remaining and write that number next to 10.a. 10 + 3 = ? The new answer is the same as the original equation. a. 10 + 3 = 13 is the same as 8 + 5 = 13. This practice exercise can help your students understand the “Making 10” strategy. Video 2: Understanding the “Making 10” Strategy The video begins by explaining that when adding larger numbers, the more difficult the tasks becomes; however, there are strategies one can use to make it easier. The “Making 10” strategy is one method that makes adding larger numbers easier. The “Making 10” strategy is when you try to turn one of the numbers in an addition problem into 10. The first example demonstrates this strategy. The problem is 9 + 5. Add a value to 9 to make it 10. 9 + 1 equals 10. The 1 comes from the 5; so, 5 - 1 = 4. So, 9 + 5 = 9 + 1 + 4. 9 + 5 = 10 + 4.a. When adding one-digit values to 10, replace the 0 with the value. 9 + 5 = 14. The video then offers additional problems for your students to practice turning one of the numbers into 10. Those problems are 7 + 5 and 8 + 6. 7 + 5 = ?a. Take 3 from 5 to make 7 a 10. b. 7 + 3 + 2 = 10 + 2 = 12c. So, 7 + 5 = 12 8 + 6 = ?a. Take 2 from 6 to make 8 a 10. b. 8 + 2 + 4 = 10 + 4 = 14c. So, 8 + 6 = 14 Want more practice? Give your students additional standards-aligned practice with Boddle Learning. Boddle includes questions related to Comparing and Measuring Lengths plus rewarding coins and games for your students to keep them engaged. Click here to sign up for Boddle Learning and create your first assignment today. Information on standards is gathered from The New Mexico Public Education Department's New Mexico Instructional Scope for Mathematics and the Common Core website. Select your device
14925
https://arec.tennessee.edu/wp-content/uploads/sites/17/2020/09/algebrakey.pdf
SIMULTANEOUS EQUATIONS – ANSWERS TO PROBLEMS: 1. Find the equilibrium solution for each of the following economic models of supply and demand (a) Qd = 24 – 2P, Qs = -5 + 7P P = (24+5)/(2+7) = 29/9 = 3.22 Q = (-10+168)/(2+7) =158/9 = 17.56 (b) Qd = 51 – 3P, Qs = 6P – 10 P = (51+10)/(3+6) = 61/9 = 6.78 Q = (-20+144)/(3+6) = 124/9 = 13.78 (c) Qd = 30 – 2P, Qs = -6 + 5P P = (30+6)/(2+5) = 36/7 =5.14 Q = (-12+150)/(2+5) = 138/7 = 19.71 LOGS AND EXPONENTS – ANSWERS TO PROBLEMS: 1. Evaluate the following: (a) log10 10,000 = 4 (b) log10 0.01 = -2 (c) ln e2 = 2 (d) ln(1/e3) = -3 (e) ln ex – eln x = x – x = 0 2. Evaluate the following by application of the rules of logarithms: (a) log10 (100)14 = 14 · log10 (100) = 14 · 2 = 28 (b) log10 (100)-1 = -1 · 2 = -2 (c) ln (3/b) = ln 3 – ln b) (d) ln Ae2 = ln A + ln e2 = ln A + 2 (e) ln Abe-4 = ln A + ln b + ln e-4 = ln A + ln b – 4 3. Which of the following are valid? (a) ln u – 2 = ln(u/e2) VALID (b) 3 + ln v = ln (e3/v) INVALID: 3 - ln v (c) ln 3 + ln 5 = 8 INVALID: ln 3 + ln 5 = ln 15 (d) ln u + ln v – ln w = ln(uv/w) VALID ECONOMIC INTERPRETATION OF e – ANSWERS TO PROBLEM: 1. What’s the value of an $100 investment five years from now if the interest rate is r = 0.06 and (a) Interest is compounded annually? $133.8226 (b) Interest is compounded monthly? $134.8850 (c) Interest is compounded continuously? $134.9859
14926
https://brainly.com/question/31225396
[FREE] If a problem says an object "starts at rest," that means: A) \Delta x = 0 B) v_i = 0 C) a = 0 D) v_f = - brainly.com 7 Search Learning Mode Cancel Log in / Join for free Browser ExtensionTest PrepBrainly App Brainly TutorFor StudentsFor TeachersFor ParentsHonor CodeTextbook Solutions Log in Join for free Tutoring Session +37,1k Smart guidance, rooted in what you’re studying Get Guidance Test Prep +28,6k Ace exams faster, with practice that adapts to you Practice Worksheets +7,3k Guided help for every grade, topic or textbook Complete See more / Physics Textbook & Expert-Verified Textbook & Expert-Verified If a problem says an object "starts at rest," that means: A) Δ x=0 B) v i​=0 C) a=0 D) v f​=0 2 See answers Explain with Learning Companion NEW Asked by Chrismitchell0614 • 03/21/2023 0:00 / 0:15 Read More Community by Students Brainly by Experts ChatGPT by OpenAI Gemini Google AI Community Answer This answer helped 478597 people 478K 0.0 0 Upload your school material for a more relevant answer The correct option is B) vi=0. If a problem says that an object “starts at rest,” that means that the initial velocity of the object is zero (vi = 0). This is because velocity is defined as the rate of change of displacement with respect to time, and if an object is at rest, its displacement is not changing with respect to time. Therefore, the initial velocity of the object is zero. The other answer choices are not correct in this context. Delta x = 0 refers to the displacement of an object over a certain distance, a = 0 refers to an object with zero acceleration, and vf = 0 refers to an object with zero final velocity. To know more about acceleration, visit : brainly.com/question/12550364 SPJ1 Answered by SamuelGregory •1.6K answers•478.6K people helped Thanks 0 0.0 (0 votes) Textbook &Expert-Verified⬈(opens in a new tab) This answer helped 478597 people 478K 0.0 0 Physics - Paul Peter Urone, Roger Hinrichs Calculus-Based Physics - Jeffrey W. Schnick Physical Sciences Upload your school material for a more relevant answer The correct answer is B) v i​=0, indicating that the object has an initial velocity of zero when it starts at rest. This means that it is not moving at that moment. Other options related to displacement, acceleration, or final velocity do not accurately reflect the statement given. Explanation In physics, when a problem states that an object "starts at rest," it means that the initial velocity of that object is zero. Therefore, the correct answer is: B) v i​=0 This indicates that the object has not been moving at the beginning of the observation period. To clarify further: Initial Velocity: The notation v i​ refers to the initial velocity of the object. When it is specified that the object starts at rest, it implies there is no motion at that moment, hence v i​=0. Displacement: The statement does not necessarily mean that the displacement Δ x is zero (choice A). An object could start at rest from a position that is not the origin. Displacement measures how far the object travels from its starting point to its ending point. Acceleration: Also, stating that the object is starting from rest does not imply that the acceleration a is zero (choice C). An object can start at rest and then accelerate due to a force acting on it. Final Velocity: Regarding final velocity v f​ (choice D), this value will depend on the forces acting on the object after it starts from rest. The final velocity can be non-zero if the object begins to move after starting from rest. In summary, when we say an object is at rest, we specifically mean it has an initial velocity of zero, which is crucial for solving many problems in kinematics involving motion. Examples & Evidence For example, a ball dropped from a height starts at rest (initial velocity = 0) before it begins to fall due to gravity, which acts as the acceleration. Similarly, a car starting from a stoplight is also at rest until it accelerates forward. In physics, the definition of rest is always associated with zero initial velocity. This aligns with the fundamental concepts of kinematics in classical mechanics. Thanks 0 0.0 (0 votes) Advertisement Community Answer This answer helped 1026233 people 1M 0.0 0 When an object starts at rest, the initial velocity (v i​) is zero. Therefore, the correct answer is B) v i​=0. Understanding 'Starts at Rest' When a problem states that an object "starts at rest," it means that the initial velocity of the object is zero. This is a common condition in physics problems dealing with motion and kinematics. To determine which option is correct, let's analyze each one: A)Δ x=0: This implies no change in position, which is not necessarily true when an object starts from rest. B) v i​=0 : This correctly indicates that the initial velocity is zero. C)a=0: This means no acceleration, which is unrelated to starting at rest. D) v f​=0: This means the final velocity is zero, not relevant to starting at rest. Therefore, the correct answer is B)v i​=0. Example If a car starts at rest and accelerates, its initial velocity (v_) is 0. Over time, its velocity increases as it accelerates. Answered by Qwletter •7.5K answers•1M people helped Thanks 0 0.0 (0 votes) Advertisement ### Free Physics solutions and answers Community Answer let f(x)=g(x)/2x^2-32, where g(x) is a polynomial Community Answer 5.0 1 If a problem says an object has "constant velocity", that means: A) Deltax = 0 B) vi=0 C) a = 0 D) v_{f} = 0 Community Answer 47 Whats the usefulness or inconvenience of frictional force by turning a door knob? Community Answer 5 A cart is pushed and undergoes a certain acceleration. Consider how the acceleration would compare if it were pushed with twice the net force while its mass increased by four. Then its acceleration would be? Community Answer 4.8 2 define density and give its SI unit​ Community Answer 9 To prevent collisions and violations at intersections that have traffic signals, use the _____ to ensure the intersection is clear before you enter it. Community Answer 4.0 29 Activity: Lab safety and Equipment Puzzle Community Answer 4.7 5 If an instalment plan quotes a monthly interest rate of 4%, the effective annual/yearly interest rate would be _______. 4% Between 4% and 48% 48% More than 48% Community Answer When a constant force acts upon an object, the acceleration of the object varies inversely with its mass 2kg. When a certain constant force acts upon an object with mass , the acceleration of the object is 26m/s^2 . If the same force acts upon another object whose mass is 13 , what is this object's acceleration New questions in Physics Planck's constant can be found by: A. energy of a particle x speed of light / wavelength B. energy of a particle x speed of light / frequency of light C. energy of a particle x wavelength / speed of light D. slope of the cut-off voltage vs frequency graph What is the most obvious and important way in which the characteristic spectra of atoms differ from those of molecules? A. Molecular spectra are generally far more complex than the characteristic spectra of the atoms of which they are composed. B. Molecular spectra are much dimmer than the characteristic spectra of the atoms of which they are composed. C. Molecular spectra fade more quickly with distance than do atomic spectra. D. Molecular spectra are generally far simpler than the characteristic spectra of the atoms of which they are composed. E. Molecular spectra are much brighter than the characteristic spectra of the atoms of which they are composed. Which characteristic do Mercury and Mars share? A. many moons B. rocky surfaces C. thick atmospheres D. moderate temperature fluctuations Which quantity does a light-year measure? A. distance B. speed C. time D. volume Observable matter makes up about what percentage of the universe? A. 5% B. 10% C. 50% D. 95% Previous questionNext question Learn Practice Test Open in Learning Companion Company Copyright Policy Privacy Policy Cookie Preferences Insights: The Brainly Blog Advertise with us Careers Homework Questions & Answers Help Terms of Use Help Center Safety Center Responsible Disclosure Agreement Connect with us (opens in a new tab)(opens in a new tab)(opens in a new tab)(opens in a new tab)(opens in a new tab) Brainly.com
14927
https://www.pennmedicine.org/conditions/gastroesophageal-reflux-disease
Gastroesophageal reflux disease Find a doctor Call 800-789-7366 When you eat, food passes from the throat to the stomach through the esophagus. A ring of muscle fibers in the lower esophagus prevents swallowed food from moving back up. These muscle fibers are called the lower esophageal sphincter (LES). When this ring of muscle does not close all the way, stomach contents can leak back into the esophagus. This is called reflux or gastroesophageal reflux. Reflux may cause symptoms. Harsh stomach acids can also damage the lining of the esophagus. The risk factors for reflux include: Use of alcohol (possibly) Hiatal hernia (a condition in which part of the stomach moves above the diaphragm, which is the muscle that separates the chest and abdominal cavities) Obesity Pregnancy Scleroderma Smoking or tobacco use Lying down within 3 hours after eating Heartburn and gastroesophageal reflux can be caused by or made worse by pregnancy. Symptoms can also be caused by certain medicines, such as: Anticholinergics (for example, sea sickness medicine) Beta-blockers for high blood pressure or heart disease Bronchodilators for asthma or other lung diseases Calcium channel blockers for high blood pressure Dopamine-active medicines for Parkinson disease Progestin for abnormal menstrual bleeding or birth control Sedatives for insomnia or anxiety Theophylline (for asthma or other lung diseases) Tricyclic antidepressants Talk to your health care provider if you think one of your medicines may be causing heartburn. Never change or stop taking a medicine without first talking to your provider. Definition Gastroesophageal reflux disease (GERD) is a condition in which the stomach contents leak backward from the stomach into the esophagus (food pipe). Food travels from your mouth to the stomach through your esophagus. GERD can irritate the food pipe and cause heartburn and other symptoms. Exams and Tests You may not need any tests if your symptoms are mild. If your symptoms are severe or they come back after you have been treated, your provider may recommend a test called an upper endoscopy (esophagogastroduodenoscopy). This is a test to examine the lining of the esophagus, stomach, and first part of the small intestine. It is done with a small camera (flexible endoscope) that is inserted down the throat. You may also be recommended to have one or more of the following tests: A test that measures how often stomach acid enters the esophagus. This can be done with a catheter through the nose or with a device clipped to the bottom of your esophagus during an upper endoscopy. A test to measure the pressure inside the lower part of the esophagus (esophageal manometry). A test to measure fluid and air coming up from the esophagus (impedance). A positive stool occult blood test may diagnose bleeding that is coming from the irritation in the esophagus, stomach, or intestines. Outlook (Prognosis) Most people respond to lifestyle changes and medicines. However, many people feel the need to continue taking medicines to control their symptoms. If you have inflammation from your GERD (esophagitis) or precancerous changes (Barrett esophagus), your provider may recommend staying on these medicines. Otherwise speak with your provider about whether you need to stay on medicines long term. Possible Complications Complications may include: Worsening of asthma A change in the lining of the esophagus that can increase the risk of cancer (Barrett esophagus) Bronchospasm (irritation and spasm of the airways due to acid) Long-term (chronic) cough or hoarseness Dental problems Ulcer or inflammation in the esophagus Stricture (a narrowing of the esophagus due to scarring from chronic irritation) Prevention Avoiding factors that cause heartburn may help prevent symptoms. Obesity is linked to GERD. Maintaining a healthy body weight may help prevent the condition. References Falk GW, Katzka DA. Diseases of the esophagus. In: Goldman L, Cooney KA, eds. Goldman-Cecil Medicine. 27th ed. Philadelphia, PA: Elsevier; 2024:chap 124. Katz PO, Dunbar KB, Schnoll-Sussman FH, Greer KB, Yadlapati R, Spechler SJ. ACG Clinical Guideline for the diagnosis and management of Gastroesophageal Reflux Disease. Am J Gastroenterol. 2022;117(1):27-56. PMID: 34807007 pubmed.ncbi.nlm.nih.gov/34807007/. National Institute of Diabetes and Digestive and Kidney Diseases website. Acid reflux (GER & GERD) in adults. www.niddk.nih.gov/health-information/digestive-diseases/acid-reflux-ger-gerd-adults. Updated July 2020. Accessed March 17, 2025. Richter JE, Vaezi MF. Gastroesophageal reflux disease. In: Feldman M, Friedman LS, Brandt LJ, eds. Sleisenger and Fordtran's Gastrointestinal and Liver Disease. 11th ed. Philadelphia, PA: Elsevier; 2021:chap 46. Symptoms Typical symptoms of GERD are: Heartburn or a burning pain in the chest Bringing food back up (regurgitation) Less common symptoms are: Nausea after eating Cough or wheezing Difficulty swallowing (make sure to discuss this with your provider) Hiccups Hoarseness or change in voice Sore throat Symptoms may get worse when you bend over or lie down, or after you eat. Symptoms may also be worse at night. Treatment You can make many lifestyle changes to help treat your symptoms such as avoiding tobacco, alcohol, or foods that cause your symptoms. Other tips include: If you are overweight or obese, in many cases, losing weight can help. Raise the head of the bed if your symptoms get worse at night. Have your dinner 2 to 3 hours before going to sleep. Avoid eating food after dinner. Avoid medicines such as aspirin, ibuprofen (Advil, Motrin), or naproxen (Aleve, Naprosyn). Take acetaminophen (Tylenol) to relieve pain. Take all of your medicines with plenty of water. When your provider gives you a new medicine, ask whether it will make your heartburn worse. You may use over-the-counter antacids after meals and at bedtime, although the relief may not last very long. Common side effects of antacids include diarrhea or constipation. Other over-the-counter and prescription medicines can treat GERD. They work more slowly than antacids, but give you longer relief. Your pharmacist, provider, or nurse can tell you how to take these medicines. Proton pump inhibitors (PPIs) decrease the amount of acid produced in your stomach. H2 blockers also lower the amount of acid released in the stomach. Potassium competitive acid blockers (PCABs) are the newest medicines that decrease stomach acid. Anti-reflux surgery may be an option for people whose symptoms do not go away with lifestyle changes and medicines. Heartburn and other symptoms should improve after surgery. But you may still need to take medicines for your heartburn. Your provider will recommend certain tests before any surgery for GERD to help you get the best outcome. There are also new therapies for reflux that can be performed through an endoscope (a flexible tube passed through the mouth into the stomach). When to Contact a Medical Professional Contact your provider if symptoms do not improve with lifestyle changes or medicine. Also contact your provider if you have: Bleeding Choking (coughing, shortness of breath) Feeling filled up quickly when eating Frequent vomiting Hoarseness Loss of appetite Trouble swallowing (dysphagia) or pain with swallowing (odynophagia) Weight loss A feeling like food or pills are sticking behind the breast bone Schedule an appointment We can help you schedule an appointment or you can search our directory of specialists. Find a doctor Call 800-789-7366 Manage Your Cookie Settings Your Privacy Strictly Necessary Cookies Performance Cookies Functional Cookies Targeting Cookies Your Privacy Penn Medicine websites, including Pennmedicine.org, Lancastergeneralhealth.org, Chestercountyhospital.org, and Princetonhcs.org use cookies and other tracking technologies to collect data and improve your experience. If you continue to use our websites, please note that you are agreeing to our using these cookies and tracking technologies as described in our Cookie Policy. Click on the cookie category headings to find out more about the types of cookies we use and to change the default cookie settings. Note that blocking some types of cookies may impact your experience of the site and services we are able to offer. Penn Medicine's Cookie Policy Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but then some parts of the site will not work. These cookies do not store any personally identifiable information. Performance Cookies Active These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our sites. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies, we will not know when you have visited our site. Also, our ability to monitor site performance is impacted. Functional Cookies Active These cookies enable the website to provide enhanced functionality and personalization. For example, they allow our website to remember information from your previous visit, such as details you submitted before or your previously stated preferences; and when you complete a registration form and you tick the "Remember Me " checkbox, we will use a cookie to remember your details if you complete another form without you having to re-enter them. These cookies may also be used to provide services you request such as newsletters or publications. Such a cookie may also be used to set the background image across the website. This will remember which image was displayed to you the last time you visited our website and will display the next image (in a set order) on your next visit. These cookies may be set by us or by third-party providers whose services we have added to our pages. If you do not allow these cookies, then some or all of these services may not function properly. Targeting Cookies Active These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant advertisements on other sites. They do not store directly personal information but are based on uniquely identifying your browser and internet device. If you do not allow these cookies, you will experience less targeted advertising. Back Consent Leg.Interest label label label
14928
https://www.oed.com/dictionary/trowel_v
Oxford English Dictionary Skip to main content Advanced searchAI Search Assistant Oxford English Dictionary The historical English dictionary An unsurpassed guide for researchers in any discipline to the meaning, history, and usage of over 500,000 words and phrases across the English-speaking world. Find out more about OED Understanding entries Glossaries, abbreviations, pronunciation guides, frequency, symbols, and more Explore resources Personal account Change display settings, save searches and purchase subscriptions Account features Getting started Videos and guides about how to use the new OED website Read our guides Recently added bombo flailing bomba hittee apols bagh declinism carreta short pants close-in woodshop Hitchiti shortward hectarage bee balm woodblocked Word of the day dulcorate verb To sweeten; to soften, soothe, ease. Recently updated hob-job kraal ship news buffoon desk bhang shipful shipwork beatee dissava causant ship-like beeswax bag-wig arcade ship fare Word stories ------------ Read our collection of word stories detailing the etymology and semantic development of a wide range of words, including ‘dungarees’, ‘codswallop’, and ‘witch’. Word lists ---------- Access our word lists and commentaries on an array of fascinating topics, from film-based coinages to Tex-Mex terms. World Englishes --------------- Explore our World Englishes hub and access our resources on the varieties of English spoken throughout the world by people of diverse cultural backgrounds. History of English ------------------ Here you can find a series of commentaries on the History of English, charting the history of the English language from Old English to the present day. About OED Historical Thesaurus Editorial policy Updates Institutional account management How to use the OED Purchasing Help with access World Englishes Contribute Accessibility Contact us Upcoming events Case studies Media enquiries Oxford University Press Oxford Languages Oxford Academic Oxford Dictionary of National Biography Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide Cookie policy Privacy policy Legal notice Copyright © 2025 Oxford University Press AI Search Assistant Search Hello! I am the AI search assistant, here to help you use OED’s advanced search tools. I can't converse or generate answers myself, but I can construct complex searches on your behalf and provide a link to the results. I can also help you find information about the OED itself. If you are interested in looking up a particular word, the best way to do that is to use the search box at the top of every OED page. Example queries I can run are "Which words in English are borrowed from French?", "Which words were first used by Charles Dickens?" or "How are words added to the dictionary?". I cannot search for synonyms yet, but I’m ready to help with other advanced searches. Give me a try! Sign in Personal account Access or purchase personal subscriptions Get our newsletter Save searches Set display preferences Sign inRegister Institutional access Sign in through your institution Sign in with library card Sign in with username / password Recommend to your librarian Institutional account management Sign in as administrator on Oxford Academic Word of the Day Sign up to receive the Oxford English Dictionary Word of the Day email every day. Our Privacy Policy sets out how Oxford University Press handles your personal information, and your rights to object to your personal information being used for marketing to you or being processed as part of our business activities. We will only use your personal information for providing you with this service. Email address (required) First name (required) Last name (required) Submit Oxford University Press uses cookies to enhance your experience on our website. By selecting ‘accept all’ you are agreeing to our use of cookies. You can change your cookie settings at any time. More information can be found in ourCookie Policy. Reject and manage Accept all Privacy Preference Center When you visit any website, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized web experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and change our default settings. However, blocking some types of cookies may impact your experience of the site and the services we are able to offer. More information Allow All Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Targeting Cookies [x] Targeting Cookies These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant adverts on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. If you do not allow these cookies, you will experience less targeted advertising. Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Reject All Confirm My Choices
14929
https://mdedge.com/edermatologynews/article/104601/melanoma/dermatologists-management-melanoma-varies
Dermatologists’ management of melanoma varies | MDedge Dermatology News search All Specialties CME Events REGISTER or LOGIN / Dermatology News LatestQuizzesVideossearch ADVERTISEMENT Dermatologists’ management of melanoma varies November 21, 2015|Dermatology News Patrice Wendling Author and Disclosure Information ✕ AT THE ASDS ANNUAL MEETING CHICAGO – Significant variance exists in management of primary cutaneous melanoma, according to a national survey of 510 dermatologists. Most dermatologists (36%) preferred a shave biopsy for lesions suspected of being melanoma, despite guidelines from the American Academy of Dermatology (AAD) and National Comprehensive Cancer Network (NCCN) guidelines that recommend narrow excision biopsy. ©The National Cancer Institute In all, 31% of dermatologists used a narrow local excision (less than 5 mm margin), 13% saucerization/scoop shave biopsy, 11%, punch biopsy, 3% wide local excision, and 7% other. “The guidelines and academy are all very clear that one of the goals of the biopsy is to obtain tumor depth, so we were surprised that a significant number of providers use shave biopsy or other methods that may leave a risk of not getting the correct depth,” study co-author Dr. Aaron S. Farberg, a melanoma clinical research fellow at the National Society for Cutaneous Medicine in New York City, said in an interview. Notably, dermatologists in academic and dermatology-based group practices were significantly less likely than those in multispecialty or solo practice to use narrow excision (23% vs. 42%; P< .001). Although treatment for melanoma evolves continuously, the authors observed that dermatologists remain at the forefront of melanoma management and play a critical role in patient decision making. “This study suggests that a knowledge gap may exist representing an educational opportunity to more effectively disseminate and implement recommended approaches,” Dr. Farberg and Dr. Darrell Rigel, of New York University School of Medicine, reported in a poster presentation at the annual meeting of the American Society for Dermatologic Surgery. The survey also revealed that dermatologists are going beyond suggested surgical margins when excising melanoma. For malignant melanoma in situ (MMIS), 62% used a 5 mm or less margin, 36% a 6 mm to 10 mm margin, and 2% a 1.1 cm to 1.9 cm margin. For these lesions, the AAD recommends a 0.5 cm-1 cm (5 mm-10 mm) margin and the NCCN a 0.5 cm margin, Drs. Farber and Rigel reported. Academic dermatologists were significantly more likely than all other practice types to refer patients with MMIS out for excision (18% vs. 10%; P< .05). For invasive melanoma less than 1 mm in depth, both the AAD and NCCN recommend a 1 cm margin (10 mm). In all, 61% of dermatologists reported using 6 mm to 10 mm margins, with 34% opting for 1.1 cm to 1.9 cm margins, 3% at least 2 cm margins, and 2% no more than 5 mm margins. No significant difference was found across provider types for treatment of melanomas less than 1 mm in depth. For invasive melanoma greater than 1 mm in depth, 54% of respondents used 1.1 cm to 1.9 cm margins, with most (67%) referring the patient to another provider. Both national guidelines recommend 1 cm to 2 cm margins for melanomas 1 mm to 2 mm in depth and 2 cm margins for melanoma greater than 2 mm in depth. Academic dermatologists were significantly more likely than other dermatologists to treat these lesions rather than to refer the patient out (51% vs. 30%; P< .001). “This is exciting new data that suggests that there still is a variance in early melanoma management,” Dr. Rigel, past president of AAD and ASDS, said in an interview. “The data suggest more studies need to be done to better access why this is occurring.” Dr. Hensin Tsao, who served on the 2011 AAD guideline working group and is co-chairing the AAD’s pending guideline update, said in an interview that, “Dr. Rigel is well respected in the field and the project will undoubtedly be submitted for publication and subject to further review. It is worthwhile to wait on the final published results and conclusions.” He agreed, however, with the authors’ suggestion that there is significant variation in practice. Regarding the finding that 36% of respondents use a shave biopsy for suspicious lesions, the AAD guidelines recommend that the entire lesion be removed with a 1 mm to 3 mm margin, which can be accomplished by an elliptical or punch excision with sutures or shave removal to a depth below the anticipated plane of the lesion, Dr. Tsao, of Massachusetts General Hospital in Boston, said. “It is quite possible that some of the respondents to the questionnaire interpreted ‘shave biopsy’ as a full shave disk excision,” he said. “That said, intentional and routine partial sampling of suspected melanomas would be at odds with the guidelines.” It is not inappropriate to remove a suspicious lesion, if small enough, with a punch biopsy, Dr. Tsao said, adding, “Perhaps again, the respondents failed to distinguish between partial punch biopsy and punch excision.” References 1 2 Recommended reading Spitzoid melanoma mortality on par with conventional melanoma FDA approves T-VEC for metastatic melanoma Biopsy-site photography an easy winner on all counts Ipilimumab approved as adjuvant treatment for resected metastatic melanoma ASTRO: Combined radioimmunotherapy proves promising in metastatic melanoma Higher risk of basal cell carcinoma linked to menopausal hormone therapy Lenalidomide plus rituximab achieves 87% response rate × ADVERTISEMENT ADVERTISEMENT ADVERTISEMENT This Publication About Us Advertise Contact Us Corporate Management Editorial Advisory Board Editorial Staff Editorial Calendar Facebook Twitter Specialty Focus Acne Actinic Keratosis Atopic Dermatitis Psoriasis Top Sections Aesthetic Dermatology Commentary Make the Diagnosis Law & Medicine Highlighted from this site and network Contact ClinicalEdge Manager Contact MD-IQ Manager Custom Programs MedJob Network Privacy Policy Terms of Use Editorial Policy Cookie Policy Ad Policy Medscape Customer Support Frequently Asked Questions Advertise with MDedge See more with MDedge! See our Other Publications Copyright © 2025Frontline Medical Communications Inc., Newark, NJ, USA. All rights reserved. Unauthorized use prohibited.The information provided is for educational purposes only. Use of this Web site is subject to the medical disclaimer.
14930
https://math.stackexchange.com/questions/4851146/show-that-the-recursive-sequence-x-k1-x-k-fracx-k1-2m2x-k2-is
Skip to main content Show that the recursive sequence xk+1=|xk−xk1−2M2x2k| is monotone Ask Question Asked Modified 1 year, 7 months ago Viewed 133 times This question shows research effort; it is useful and clear 3 Save this question. Show activity on this post. I'm doing some exercises for an upcoming exam, and as part of a larger problem, I want to show that the given recursive sequence: xk+1=∣∣∣xk−xk1−2M2x2k∣∣∣ is monotonly increasing if |x0|≥12M and M>1. I'm pretty sure that induction is the right approach, but I can't get the induction step to work. I tried messing around with the inverse triangle equation, but I couldn't get far. Do you have any pointers on how to approach the problem? analysis induction monotone-functions newton-raphson Share CC BY-SA 4.0 Follow this question to receive notifications edited Jan 25, 2024 at 19:16 maibrl asked Jan 25, 2024 at 17:13 maibrlmaibrl 10544 bronze badges 2 2 I don't think the sequence is monotonically increasing with the given constraint. Taking x0=12M, we have x1=12M∣∣∣1−43∣∣∣=13⋅12M<x0 – Sam Commented Jan 25, 2024 at 17:23 1 @Sam Thanks, I forgot a factor two when writing the question, now it should be monotone, at least heuristically according to some example calculations. – maibrl Commented Jan 25, 2024 at 19:20 Add a comment | 1 Answer 1 Reset to default This answer is useful 1 Save this answer. Show activity on this post. ⟺(Mxk+1)2=(Mxk)2(1−11−2(Mxk)2)2⟺yk+1=yk(1−11−2yk)2 where yk:=(Mxk)2 for k∈N. As xk≥0 for k≥1, to prove the sequence (xk)k is strictly increasing is equal to prove the sequence (yk)k is strictly increasing. We prove that yk≥1/4 and yk+1≥yk for all k∈N by induction. For k=0, it's evident that y0=M2x20≥1/4. Suppose the statement holds true for k≥0, we prove that it's true also for k+1: yk+1−14=yk(1−11−2yk)2−14=(yk−14)(4y2k+1)(1−2yk)2≥0⟹yk+1≥14 yk+1−yk=yk(1−11−2yk)2−yk=4yk(yk−14)(1−2yk)2≥0⟹yk+1≥yk Then, we can conclude that the sequence (xk)k is increasing. Q.E.D Share CC BY-SA 4.0 Follow this answer to receive notifications answered Jan 25, 2024 at 20:03 NN2NN2 20.2k22 gold badges1616 silver badges5050 bronze badges Add a comment | You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions analysis induction monotone-functions newton-raphson See similar questions with these tags. Featured on Meta Community help needed to clean up goo.gl links (by August 25) Related 0 Showing that a sequence (defined in terms of the previous sequence term) is increasing and bounded above 1 Use the principle of mathematical induction to prove that n<(32)n for all integers n≥1. 1 Recursive induction for a sequence. 1 Proof by induction for fn+1(x)=x1+(n+1)x2√ 4 A convergence problem with Newton-Raphson iteration 1 Help with induction on a recursive sequence. 0 Show that for a≠1, a>0 the sequence {xn}=n(1−a1n) is increasing 1 Showing that a sequence which satisfies xk+1≤axk+c⋅bk converges linearly to 0? 3 Approximation of 2–√. Hot Network Questions Incoming water pipe has no apparent ground Spectral sequences every mathematician should know tcolorbox/pgfkeys: making an argument the title if it contains no pgfkeys; otherwise, pass the argument as pgfkeys to tcolorbox CSI: Las Vegas episode where a woman uses her son to attract young women for purposes of prostitution NMinimize behaves strangely for this simple problem The point of well-pointed spaces Buck LED driver inductor placement What did Quine mean with the word "ponential" in Mathematical Logic? Other than the tank PSI, what else could cause a Whirlpool 3 stage RO Under Sink Water Filtration System to have low water pressure? Why is muscle cramp called a “charley horse”? Does 我们员工的休假已经达到了150多天 include weekends as holidays? shift option doesn't work with tikz-matrix's nodes? How are the word-searches made for mass-produced puzzle books? Is kernel memory mapped once or repeatedly for each spawned process Confusion about infinity in gravitational potential energy (GPE) How can the UK enforce the Online Safety Act against foreign web sites? How to get code of a program that decrypts itself? Would weekly seasonal changes still allow a habitable planet? How can I estimate a function from its level sets? Having trouble identifying a font Is Berk (1966)'s main theorem standard in Statistics/Probability? Is there a name for it? Why do aviation safety videos mime mouth-inflating the life vest? Example of two topologies with same Borel sets but different τ-additivity How do I fill holes in new pine furniture so that the color will continue to match as the wood ages? more hot questions Question feed By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device.
14931
http://www.pinkeyan.com/news/1095
常见的三种培养基分类 | 品科研-生化科学服务试剂超市 Hi,欢迎来到品科研 电话:4000-685-696 我的订单登录注册 站内 站外 站内 站外 购物车 常见的三种培养基分类 发布日期:2023-05-15 新闻来源:品科研-生物试剂,化学试剂,科学服务试剂超市-河北品科研生物科技有限公司 浏览次数:5932 培养基:是指供给微生物、植物或动物(或组织)生长繁殖的,由不同营养物质组合配制而成的营养基质。通过在体外模拟细胞的生长环境,培养基可以为细胞生长提供合适的 pH、渗透压以及各种营养物质。 目前,培养基广泛应用于重组蛋白/抗体药物、疫苗、基因治疗/细胞治疗药物等生物制品生产及科研机构研发,是生物医药行业发展的基石,对生物医药行业的发展起着至关重要的作用。 一、常见微生物培养基 (一)LB琼脂培养基 特点:主要成分为胰蛋白胨、酵母浸粉、琼脂、氯化钠。 适用:实验室中广泛使用的普通培养基,用来培养大肠杆菌,表达大量的外源蛋白。在基因工程、分子生物学中使用比较多。 (二)麦康凯琼脂培养基(MAC培养基) 特点:主要成分为蛋白胨、脙胨、猪胆盐(或牛、羊胆盐)、氯化钠、琼脂、乳糖、1%结晶紫水溶液、0.5%中性红水溶液。 适用:选择性培养基,用于粪便、分泌物中肠道致病菌及肠球菌的分离培养。 (三)伊红美蓝琼脂培养基 特点:主要成分包含蛋白胨、乳糖、磷酸氢二钾、伊红、美蓝、琼脂。 适用:主要用于分离肠道病原菌,特别是大肠杆菌和粪大肠杆菌群。 (四)R2A琼脂培养基 特点:是一种低营养培养基,支持耐氯细菌恢复生长;培养时间较普通培养基更长,通常为 2 天以上,延长培养时间让微生物得到更好的恢复,提升回收率。 适用:对饮用水中的菌落数进行测定。 (五)巧克力琼脂培养基 特点:主要是蛋白胨和牛肉粉用来提供细菌生长所需的碳源、氮源氨基酸以及维生素。氯化钠维持培养基中的渗透压,琼脂用做凝固剂。 适用:用于奈瑟氏菌属和嗜血杆菌等需氧菌的分离和培养。 (六)BHI琼脂培养基 特点:营养成分丰富,支持各类致病菌生长。 适用:是非选择性培养基,广泛用于霉菌、酵母、细菌的培养。 二、常见的植物组织培养基 (一)MS培养基(属于高无机盐培养基) 特点:是无机盐离子浓度较高,硝酸盐较高,为较稳定的平衡溶液。 适用:广泛地用于植物的器官、花药、细胞和原生质体培养,效果良好。 (二)B5培养基(属于较高硝酸钾培养基) 特点:含有较低的铵,这可能对不少培养物的生长有抑制作用。 适用:如双子叶植物特别是其中的木本植物。 (三)N6培养基(属于较高硝酸钾培养基) 特点:成分较简单,KNO 3 和(NH 4)2 SO 4 含量高。 适用:广泛应用于小麦、水稻及其它植物的花药培养等。 (四)White培养基(属于低无机盐培养基) 特点:无机盐数量较低,KNO 3,KI,烟酸(Vpp),MgSO 4·7H 2 O,CuSO 4·5H 2 O,MnSO 4·H 2 O,盐酸吡哆醇(VB6),盐酸硫胺素(VB1)。 (五)KM-8P培养基 特点:有机成分较复杂,它包括了所有的单糖和维生素。 适用:原生质融合的培养。 (六)SH培养基 特点:与 B5 相似,不用(NH 4)2 SO 4,而改用 NH 4 H 2 PO 4,是无机盐浓度较高的培养基。 适用:在不少单子叶和双子叶植物上使用,效果很好。 (七)Miller 培养基 特点:与 MS 培养基比较,Miller 培养基无机元素用量减少 1/3~1/2,微量元素种类减少。无肌醇。 (八)VW 培养基 特点:总的离子强度稍低些,磷以磷酸钙形式供给,要先用 1mol/L HCl 溶解后再加入混合溶液中。 适用:适合于气生兰的培养。 三、常见的动物细胞培养基分类 (一)天然培养基 种类很多,包括凝固剂(如血浆)、生物性液体(如血清,目前广泛使用的一种天然培养基)、组织浸出液(如胚胎浸出液)、水解乳蛋白等。 (二)合成培养基 1.MEM(最低限量 Eagle 基本培养基) 特点:成分简单,仅含谷氨酰胺、12 种必需氨基酸和 8 种维生素;易于添加或减少某些成分,适用于某些特殊研究的细胞培养工作。 适用:适合多种细胞单层生长,是最基本的、也是应用最广的一种培养基。 2.DMEM 特点:增加了各成分的用量,营养成分浓度高,利于高密度细胞的大量增殖。根据葡萄糖用量可将 DMEM 分为高糖型(4500 mg/L)和低糖型(1000 mg/L)两种。 适用:高糖型特别适于附着性差但希望能贴于原生长位点的肿瘤细胞的生长,如骨髓瘤细胞和 DNA 转染的转化细胞培养。 3.IMDM 特点:较 DMEM 又增加了几种非必需氨基酸及一些维生素。 适用:MDM 为高葡萄糖型,可用于杂交瘤细胞筛选培养,也可作为无血清培养的基础培养基。 4.Ham'sF-12 特点:在 MEM 培养基配方基础上,加入了一些微量元素(如非必需氨基酸、维生素)、无机盐(如 NaHCO 3)和代谢添加剂(如核苷酸)配制而成。 适用:营养成分浓度低,适用于单细胞的低密度克隆化培养,如 CHO 细胞。 5.RPMI-1640 特点:组分比较简单,包括 21 种氨基酸、11 种维生素和其他一些成分,同 MEM 一样,是目前最广泛使用的培养液之一。 适用:适于许多种正常细胞和肿瘤细胞的生长,也用于悬浮细胞的培养。 6.M199 特点:成分复杂,所含营养成分达 69 种之多,几乎包括了所有氨基酸、维生素及核酸衍生物、生长激素、脂类和 Eagle 平衡盐溶液等。 适用:M199 仅能维持体外细胞的短期生存,又加上其成分复杂,故现在已很少使用。 7.Ll5 特点:该培养基所用的 BSS 含有高浓度氨基酸以提高缓冲能力,培养基中使用半乳糖作碳源,以阻止培养基中乳酸形成。 适用:用于外周神经元的培养;适用于快速增殖瘤细胞的培养。 8.McCoy's5A 特点:专门为培养肉瘤细胞设计的培养液。 适用:除适用于原代细胞、组织活检细胞和淋巴细胞的培养外,还适用于较难培养的细胞。 9.Fischer's 适用:主要用于白血病微粒细胞的培养。 以下为附件: 论文查重 关于我们 售后政策 超市订阅 摩尔浓度计算器 稀释计算器 分子量计算器 联系我们 本网站销售的所有产品仅用于工业应用或者科学研究等非医疗目的,不可用于人类或动物的临床诊断或治疗,非药用,非食用。© 2024 河北品科研生物科技有限公司 版权所有 客服邮箱: sales@pinkeyan.com ICP备案号: 冀ICP备20004266号-1 冀公网安备13068102000321号 联系我们 + QQ客服 1491291545 516070781 291475931 2144893854 微信客服 王经理 张经理 刘经理 薛经理 王经理 袁经理 热线电话 4000-685-696
14932
https://tereshenkov.wordpress.com/2017/09/10/dividing-a-polygon-into-a-given-number-of-equal-areas-with-arcpy/
Dividing a polygon into a given number of equal areas with arcpy – Alex Tereshenkov Skip to content Alex Tereshenkov Programming and managing GIS Menu [Geospatial careers] [ Geospatial Python progression path ] [ File geodatabase SQL editor ] [ Geodatabase HTML reporter ] Dividing a polygon into a given number of equal areas with arcpy Alex TereshenkovArcGIS Desktop, ArcPy, PythonSeptember 10, 2017 September 10, 2017 I was recently searching the Internet trying to find any tool that would let me split an arbitrary polygon inside a geodatabase feature class into multiple polygons of equal area. ArcGIS does provide this functionality as a part of the parcel fabric functionality. Unfortunately, there is a lot of work involved in setting up the parcel fabric and there is a lot to learn before you will be able to divide your parcels. So I was looking for a simpler solution that would work directly with the geometry of a polygon. However, I was not able to find any solution that would work and most helpful posts on the forums were pointing either at parcel fabrics or providing some ideas on the implementation of the workflow using multiple geoprocessing tools and some custom code. I found a nice ArcGIS custom script tool called Polygon Bisector which computes a line that bisects, or divides in half, a polygon area along a line of constant latitude or longitude. So, this would work great if you need to split a polygon creating a number that follows the exponent of two (2, 4, 8, 16, 32 and so forth). This is because after dividing a polygon into two polygons of equal area (now you have 2 polygons), you could divide each of them into two parts again (now you have 4 polygons), and so on. Since I want to be able to divide a polygon into an arbitrary number of areas, I had to write my own tool. I have solved this problem this way. Say I want to have the polygon of area 1000 sq. m. divided into 5 equal areas: Get an extent of a polygon. Construct a polyline using the vertices of the polygon’s extent with a tiny shift of coordinates. Cut the polygon into two halves using this line. Find what is the area of the smallest polygon. If the area is smaller than the 200 sq. m. (that is, fifth part of the polygon), the shift the line again and re-run steps 2-4. If the area is 200 sq. m. or larger, than leave this part and keep working with the polygon that is left essentially running through the steps 2-5. When the original polygon has been successfully divided into equal areas, they are inserted into a new feature class along with the source polygon attributes. Advertisement The illustration of the cutting lines with the extent polygon is below. This approach has several disadvantages, though. First, if your polygon is very large and you want the parts of the polygon to have the same area with the minimal difference, the tool execution will take a lot of time as you will need to shift the cutting lines a few centimeters, cut the polygon into halves, and evaluate the result. Second, you can only choose between the North-South or West-East direction of the cutting lines. You won’t be able to specify the angle yourself. However, this tool works great for the target use I kept in mind when writing it. Using 0.5 meters as the step for moving the cutting line, the difference between the largest and the smallest sub-polygons was just around 1%. Running the same code for the same polygon with the shift value set to 0.05 meters (5 cm), I have observed the difference value to be around 0.1%. The illustration of the polygon subdivision is below (West-East to the left, North-South to the right). The code is available at the GitHub Gist as usual. Run this code inside the Python window in ArcMap while having a single polygon selected. This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters. Learn more about bidirectional Unicode characters Show hidden characters import os import sys import arcpy import pythonaddins mxd = arcpy.mapping.MapDocument('CURRENT') poly_lyr = arcpy.mapping.ListLayers(mxd.activeDataFrame, 'polys') num_out_polys = 10 map units (eg meters) and the difference in area between the largest and the smallest polygons 0.005 – 0.02%; 0.01 – 0.03%; 0.05 – 0.1%; 0.1 – 0.3%; step_value = 1 orientation = 'NS' #'WE' / 'NS' number of splits splits = [round(float(100)/float(num_out_polys), 2)] num_out_polys spatial reference of the output fc will be of the polygon layer sr = arcpy.SpatialReference(arcpy.Describe(poly_lyr).spatialReference.factoryCode) source polygon fields fields = [f.name for f in arcpy.ListFields(poly_lyr) if not f.required] if int(arcpy.GetCount_management(poly_lyr).getOutput(0)) != 1: pythonaddins.MessageBox('Need to have exactly one feature selected', 'Error') sys.exit(0) get polygon geometry and extent property with arcpy.da.SearchCursor(poly_lyr, fields + ["SHAPE@"]) as cur: for row in cur: attributes = list(row[:-1]) polygon = row[-1] extent = polygon.extent orient lines either North-South (up-down) or West-East (left to right) if orientation == 'NS': x_max = extent.XMax + step_value x_min = extent.XMin + step_value y_max = extent.YMax y_min = extent.YMin if orientation == 'WE': x_max = extent.XMax x_min = extent.XMin y_max = extent.YMax – step_value y_min = extent.YMin cut_poly = polygon output feature class create/clean mem_path = os.path.join(arcpy.env.scratchGDB, 'cut_polys') if arcpy.Exists(mem_path): arcpy.Delete_management(mem_path) mem = arcpy.CopyFeatures_management(poly_lyr, mem_path) arcpy.DeleteFeatures_management(mem) lines = [] with arcpy.da.InsertCursor(mem, fields + ["SHAPE@"]) as icur: for i in splits[:-1]: #need to get all but the last item tolerance = 0 while tolerance < i: pnt_arr = arcpy.Array() if orientation == 'NS': construct North-South oriented line pnt_arr.add(arcpy.Point(x_min, y_max)) pnt_arr.add(arcpy.Point(x_min, y_min)) if orientation == 'WE': construct West-East oriented line pnt_arr.add(arcpy.Point(x_min, y_max)) pnt_arr.add(arcpy.Point(x_max, y_max)) line = arcpy.Polyline(pnt_arr, sr) lines.append(line) cut polygon and get split-parts cut_list = cut_poly.cut(line) if orientation == 'NS': tolerance = 100 cut_list.area / polygon.area x_min += step_value if orientation == 'WE': tolerance = 100 cut_list.area / polygon.area y_max -= step_value part 0 is on the right side and part 1 is on the left side of the cut if orientation == 'NS': cut_poly = cut_list icur.insertRow(attributes + [cut_list]) if orientation == 'WE': cut_poly = cut_list icur.insertRow(attributes + [cut_list]) insert last cut remainder if orientation == 'NS': icur.insertRow(attributes + [cut_list]) if orientation == 'WE': icur.insertRow(attributes + [cut_list]) for illustration purposes only arcpy.CopyFeatures_management(lines, 'in_memory\lines') evaluation of the areas error done_polys = [f for f in arcpy.da.SearchCursor('cut_polys', 'SHAPE@AREA')] the % of the smallest and the largest areas pythonaddins.MessageBox("{}%".format(round(100 – 100 (min(done_polys) / max(done_polys)), 2)), 'Precision error') view raw divide_polygons_into_areas.py hosted with by GitHub Advertisement Rate this: i 2 Votes Share this: Click to share on Facebook (Opens in new window)Facebook Click to share on X (Opens in new window)X Like Loading... Related Creating convex hull using arcpyApril 18, 2017 In "ArcGIS Desktop" Building concave hulls (alpha shapes) with PyQt, shapely, and arcpyNovember 28, 2017 In "arcgis pro" Multiple Ring Buffer with PostGIS and SQL ServerMarch 23, 2018 In "ArcGIS Desktop" Tagged ArcPy divide parcel fabric polygon split PublishedSeptember 10, 2017 September 10, 2017 Post navigation Previous Post Implementing Copy Parallel with ArcObjects Next Post Access ArcMap UI from Python comtypes 7 thoughts on “Dividing a polygon into a given number of equal areas with arcpy” kate (@pokateo_)says: January 15, 2018 at 8:25 pm This is an awesome tool! Thank you so much. Any thoughts on being able to specify an angle other than N/S or E/W? 0 0 i Rate This Reply 1. Alex Tereshenkovsays: January 20, 2018 at 9:46 am Hi Kate, glad you found it to be useful. It’s been a while since I’ve written this. It should be possible to specify another angle, but I don’t think I will find the time to implement this any soon. Sorry for that. 2 0 i Rate This Reply yangkefengsays: March 29, 2018 at 12:52 am Thank you so much. use shape file execute the script ok, create polygon with function AsShape, polygon.cut get Error: return convertArcObjectToPythonObject(self._arc_object.Cut(gp_fixargs((other,)))) RuntimeError: Cannot perform this operation on non-simple geometry. 0 0 i Rate This Reply 1. Alex Tereshenkovsays: March 29, 2018 at 5:46 am Look here and then run the Check Geometry geoprocessing tool for your input shapefile to find out what are the non-simple geometries you will need to handle; use the Repair Geometry geoprocessing tool to fix them. 0 0 i Rate This Reply 1. yangkefengsays: March 29, 2018 at 8:17 am Thank you. use arcpy.Polygon create polygon to cut, is ok. poly_json = json.loads(poly_json_str) sr = arcpy.SpatialReference(poly_json .get(“spatialReference”).get(“wkid”)) for feature in poly_json .get(“rings”): features.append( arcpy.Polygon( arcpy.Array([arcpy.Point(coord) for coord in feature]), sr)) polygon = features line = arcpy.AsShape(json.loads(line_json_str)) polygon.cut(line) 0 0 i Rate This ehesays: September 16, 2019 at 4:32 am Hi Thank you for your code! I have been able to run it successfully except for one polygon where i get the error: File “c:\program files (x86)\arcgis\desktop10.5\arcpy\arcpy\arcobjects\arcobjects.py”, line 76, in cut return convertArcObjectToPythonObject(self._arc_object.Cut(gp_fixargs((other,)))) RuntimeError: A polygon cut operation could not classify all parts of the polygon as left or right of the cutting line. The resulting polygon (cut_poly) does only up to a certain section of the original polygon. I have run correct geometry on the input polygon. The input polygon is on a projected system. I have done intensive googling and no solution. Do you know by chance? Thank you! 0 0 i Rate This Reply 1. Alex Tereshenkovsays: September 16, 2019 at 7:27 pm Hi there, glad it’s useful to someone. There is a chance something is wrong with the polygon in terms of its validness. Perhaps it has some self intersecting parts or some other weirdness? I don’t work with GIS any longer so can’t really help you running or debugging the code. You can try using some tools such as Check Geometry to see if it would give you any useful insight. Feel free to ask on GIS StackExchange forum as well – someone could help you troublshooting! Good luck! 0 0 i Rate This Reply Leave a comment Cancel reply Δ About me Alexey was part of GIS industry for 10 years building geospatial software products and services. This blog is an archive of his useful findings. Esri Forums MVP Choose category to read… Choose category to read… Blogroll ArcGIS Team Python Bill Dollins Cindy Williams-Jayakumar Geospatial notes Dr. Mike J Smith Esri ArcGIS Esri Australia Technical blog Esri Education Esri Support Center News Esri Training Matters Esri UK Exprodat GIS Geodatabase Geek Gretchen Peterson James Fee Joel McCune Node Dangles Safe Software Blog (FME) Disclaimer The information in this weblog is provided “AS IS” with no warranties, and confers no rights. This is a personal weblog; moreover, the opinions expressed here represent my own and not those of my employer. Blog at WordPress.com. Comment Reblog SubscribeSubscribed Alex Tereshenkov Join 69 other subscribers Sign me up Already have a WordPress.com account? Log in now. Alex Tereshenkov SubscribeSubscribed Sign up Log in Copy shortlink Report this content View post in Reader Manage subscriptions Collapse this bar Loading Comments... Write a Comment... Email (Required) Name (Required) Website Privacy & Cookies: This site uses cookies. By continuing to use this website, you agree to their use. To find out more, including how to control cookies, see here: Cookie Policy %d Design a site like this with WordPress.com Get started Advertisement
14933
https://pubmed.ncbi.nlm.nih.gov/30292055/
The rise and fall of the alveolar process: Dependency of teeth and metabolic aspects - PubMed Clipboard, Search History, and several other advanced features are temporarily unavailable. Skip to main page content An official website of the United States government Here's how you know The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site. The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely. Log inShow account info Close Account Logged in as: username Dashboard Publications Account settings Log out Access keysNCBI HomepageMyNCBI HomepageMain ContentMain Navigation Search: Search AdvancedClipboard User Guide Save Email Send to Clipboard My Bibliography Collections Citation manager Display options Display options Format Save citation to file Format: Create file Cancel Email citation Email address has not been verified. Go to My NCBI account settings to confirm your email and then refresh this page. To: Subject: Body: Format: [x] MeSH and other data Send email Cancel Add to Collections Create a new collection Add to an existing collection Name your collection: Name must be less than 100 characters Choose a collection: Unable to load your collection due to an error Please try again Add Cancel Add to My Bibliography My Bibliography Unable to load your delegates due to an error Please try again Add Cancel Your saved search Name of saved search: Search terms: Test search terms Would you like email updates of new search results? Saved Search Alert Radio Buttons Yes No Email: (change) Frequency: Which day? Which day? Report format: Send at most: [x] Send even when there aren't any new results Optional text in email: Save Cancel Create a file for external citation management software Create file Cancel Your RSS Feed Name of RSS Feed: Number of items displayed: Create RSS Cancel RSS Link Copy Actions Cite Collections Add to Collections Create a new collection Add to an existing collection Name your collection: Name must be less than 100 characters Choose a collection: Unable to load your collection due to an error Please try again Add Cancel Permalink Permalink Copy Display options Display options Format Page navigation Title & authors Abstract Similar articles Cited by Publication types MeSH terms Substances Related information LinkOut - more resources Review Arch Oral Biol Actions Search in PubMed Search in NLM Catalog Add to Search . 2018 Dec:96:195-200. doi: 10.1016/j.archoralbio.2018.09.016. Epub 2018 Sep 28. The rise and fall of the alveolar process: Dependency of teeth and metabolic aspects Grethe Jonasson1,Ingmarie Skoglund2,Marianne Rythén3 Affiliations Expand Affiliations 1 R & D Public Dental Service, Region Västra Götaland, Sweden; Dept. of Behavioral and Community Dentistry, Institute of Odontology at the Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden. Electronic address: grethe.jonasson@gmail.com. 2 R & D Public Dental Service, Region Västra Götaland, Sweden; Department of Public Health and Community Medicine/Primary Health Care, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden. Electronic address: ingmarie.skoglund@vgregion.se. 3 R & D Public Dental Service, Region Västra Götaland, Sweden; Specialist Clinic for Pediatric Dentistry, Public Dental Service, Mölndal, Sweden. Electronic address: marianne.rythen@vgregion.se. PMID: 30292055 DOI: 10.1016/j.archoralbio.2018.09.016 Item in Clipboard Review The rise and fall of the alveolar process: Dependency of teeth and metabolic aspects Grethe Jonasson et al. Arch Oral Biol.2018 Dec. Show details Display options Display options Format Arch Oral Biol Actions Search in PubMed Search in NLM Catalog Add to Search . 2018 Dec:96:195-200. doi: 10.1016/j.archoralbio.2018.09.016. Epub 2018 Sep 28. Authors Grethe Jonasson1,Ingmarie Skoglund2,Marianne Rythén3 Affiliations 1 R & D Public Dental Service, Region Västra Götaland, Sweden; Dept. of Behavioral and Community Dentistry, Institute of Odontology at the Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden. Electronic address: grethe.jonasson@gmail.com. 2 R & D Public Dental Service, Region Västra Götaland, Sweden; Department of Public Health and Community Medicine/Primary Health Care, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden. Electronic address: ingmarie.skoglund@vgregion.se. 3 R & D Public Dental Service, Region Västra Götaland, Sweden; Specialist Clinic for Pediatric Dentistry, Public Dental Service, Mölndal, Sweden. Electronic address: marianne.rythen@vgregion.se. PMID: 30292055 DOI: 10.1016/j.archoralbio.2018.09.016 Item in Clipboard Cite Display options Display options Format Abstract The alveolar bone has a unique capacity to follow the teeth's movements. It is formed around erupting teeth and their periodontal ligaments: the more the teeth have erupted, the larger the alveolar process. Throughout life the teeth erupt and migrate in an occlusal and mesial direction to compensate for attrition, an evolutionary trait. After tooth extraction, the alveolar process is resorbed to varying degrees. The mandibular alveolar bone mirrors skeletal bone condition. Due to fast bone turnover (which is the fastest in the whole skeleton), low bone mass and increased fracture risk may first be seen here. If a periapical radiograph of the mandibular premolars shows a dense trabeculation with well-mineralized trabeculae and small intertrabecular spaces, it is a reliable sign of normal skeletal bone density (BMD) and low skeletal fracture risk, whereas a sparse trabecular pattern indicates osteopenia and high fracture risk. The bone turnover rate in the mandible is twice that of the maxilla, and may, hypothetically, play a role in the development of osteonecrosis of the jaw (ONJ), which has been found mainly in the mandibular alveolar process? Keywords: Alveolar process; Bone fracture; Metabolism; Osteoporosis; Radiography. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved. PubMed Disclaimer Similar articles The impact of occlusal function on structural adaptation in alveolar bone of the growing pig, Sus Scrofa.Yeh KD, Popowics TE.Yeh KD, et al.Arch Oral Biol. 2011 Jan;56(1):79-89. doi: 10.1016/j.archoralbio.2010.08.013. Epub 2010 Sep 19.Arch Oral Biol. 2011.PMID: 20855059 Free PMC article. Alveolar bone loss in osteoporosis: a loaded and cellular affair?Jonasson G, Rythén M.Jonasson G, et al.Clin Cosmet Investig Dent. 2016 Jul 13;8:95-103. doi: 10.2147/CCIDE.S92774. eCollection 2016.Clin Cosmet Investig Dent. 2016.PMID: 27471408 Free PMC article.Review. Histomorphometric evaluation of alveolar bone turnover between the maxilla and the mandible during experimental tooth movement in dogs.Deguchi T, Takano-Yamamoto T, Yabuuchi T, Ando R, Roberts WE, Garetto LP.Deguchi T, et al.Am J Orthod Dentofacial Orthop. 2008 Jun;133(6):889-97. doi: 10.1016/j.ajodo.2006.12.013.Am J Orthod Dentofacial Orthop. 2008.PMID: 18538254 Distribution of BMP6 in the alveolar bone during mouse mandibular molar eruption.Oralová V, Chlastáková I, Radlanski RJ, Matalová E.Oralová V, et al.Connect Tissue Res. 2014 Oct-Dec;55(5-6):357-66. doi: 10.3109/03008207.2014.951441. Epub 2014 Aug 26.Connect Tissue Res. 2014.PMID: 25084210 The effect of osteoporosis on periodontal status, alveolar bone and orthodontic tooth movement. A literature review.Sidiropoulou-Chatzigiannis S, Kourtidou M, Tsalikis L.Sidiropoulou-Chatzigiannis S, et al.J Int Acad Periodontol. 2007 Jul;9(3):77-84.J Int Acad Periodontol. 2007.PMID: 17715839 Review. See all similar articles Cited by Loss of KDM4B impairs osteogenic differentiation of OMSCs and promotes oral bone aging.Deng P, Chang I, Wang J, Badreldin AA, Li X, Yu B, Wang CY.Deng P, et al.Int J Oral Sci. 2022 May 7;14(1):24. doi: 10.1038/s41368-022-00175-3.Int J Oral Sci. 2022.PMID: 35525910 Free PMC article. Effects of Drynaria Total Flavonoid on the Microstructure of the Mandible in Ovariectomized Rats.Zeng H, Zhao X, Wang L, Tang C, Li Z, Xie N, Wang F.Zeng H, et al.Med Sci Monit. 2020 Oct 31;26:e926171. doi: 10.12659/MSM.926171.Med Sci Monit. 2020.PMID: 33128539 Free PMC article. Overview on postmenopausal osteoporosis and periodontitis: The therapeutic potential of phytoestrogens against alveolar bone loss.Jayusman PA, Nasruddin NS, Baharin B, Ibrahim N', Ahmad Hairi H, Shuid AN.Jayusman PA, et al.Front Pharmacol. 2023 Feb 23;14:1120457. doi: 10.3389/fphar.2023.1120457. eCollection 2023.Front Pharmacol. 2023.PMID: 36909165 Free PMC article.Review. Changes in the Dentition of Small Dogs up to 4 Months of Age.Lorászkó G, Rácz B, Ózsvári L.Lorászkó G, et al.Animals (Basel). 2022 May 31;12(11):1417. doi: 10.3390/ani12111417.Animals (Basel). 2022.PMID: 35681881 Free PMC article. Impacts of Development, Dentofacial Disharmony, and Its Surgical Correction on Speech: A Narrative Review for Dental Professionals.Bode C, Ghaltakhchyan N, Silva ER, Turvey T, Blakey G, White R, Mielke J, Zajac D, Jacox L.Bode C, et al.Appl Sci (Basel). 2023 May;13(9):5496. doi: 10.3390/app13095496. Epub 2023 Apr 28.Appl Sci (Basel). 2023.PMID: 37323873 Free PMC article. See all "Cited by" articles Publication types Review Actions Search in PubMed Search in MeSH Add to Search MeSH terms Alveolar Process / metabolism Actions Search in PubMed Search in MeSH Add to Search Alveolar Process / physiology Actions Search in PubMed Search in MeSH Add to Search Animals Actions Search in PubMed Search in MeSH Add to Search Biomarkers / metabolism Actions Search in PubMed Search in MeSH Add to Search Biomechanical Phenomena / physiology Actions Search in PubMed Search in MeSH Add to Search Bisphosphonate-Associated Osteonecrosis of the Jaw / physiopathology Actions Search in PubMed Search in MeSH Add to Search Bone Density / physiology Actions Search in PubMed Search in MeSH Add to Search Bone Remodeling / physiology Actions Search in PubMed Search in MeSH Add to Search Humans Actions Search in PubMed Search in MeSH Add to Search Mandible / metabolism Actions Search in PubMed Search in MeSH Add to Search Mandible / physiology Actions Search in PubMed Search in MeSH Add to Search Osteoporosis / physiopathology Actions Search in PubMed Search in MeSH Add to Search Tooth Eruption / physiology Actions Search in PubMed Search in MeSH Add to Search Tooth Extraction Actions Search in PubMed Search in MeSH Add to Search Tooth Movement Techniques Actions Search in PubMed Search in MeSH Add to Search Substances Biomarkers Actions Search in PubMed Search in MeSH Add to Search Related information MedGen LinkOut - more resources Full Text Sources ClinicalKey [x] Cite Copy Download .nbib.nbib Format: Send To Clipboard Email Save My Bibliography Collections Citation Manager [x] NCBI Literature Resources MeSHPMCBookshelfDisclaimer The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited. Follow NCBI Connect with NLM National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894 Web Policies FOIA HHS Vulnerability Disclosure Help Accessibility Careers NLM NIH HHS USA.gov
14934
https://ocw.mit.edu/courses/18-01-single-variable-calculus-fall-2006/pages/exams/
Browse Course Material Course Info Instructor Prof. David Jerison Departments Mathematics As Taught In Fall 2006 Level Undergraduate Topics Mathematics Calculus Differential Equations Learning Resource Types assignment Problem Sets grading Exams with Solutions notes Lecture Notes theaters Lecture Videos Download Course search GIVE NOW about ocw help & faqs contact us 18.01 | Fall 2006 | Undergraduate Single Variable Calculus Exams Format Students will need both the course textbook ( Simmons, George F. Calculus with Analytic Geometry. 2nd ed. New York, NY: McGraw-Hill, October 1, 1996, ISBN: 9780070576421) and the course reader (18.01/18.01A Supplementary Notes, Exercises and Solutions; Jerison, D., and A. Mattuck. Calculus 1) to complete the assigned problem sets. The course reader is where to find the exercises labeled 1A, 1B, etc. Problem sets have two parts, I and II Part I consists of exercises given in the course reader and solved in section S of the course reader. It will be graded quickly, checking that all is there and the solutions not copied. Part II consists of problems for which solutions are not given; it is worth more points. Some of these problems are longer multi-part exercises posed here because they do not fit conveniently into an exam or short-answer format. See the guidelines below for what collaboration is acceptable, and follow them. To encourage you to keep up with the lectures, both Part I and Part II tell you for each problem on which class session day you will have the needed background for it. Homework Rules Collaboration on problem sets is encouraged, but Attempt each part of each problem yourself. Read each portion of the problem before asking for help. If you don’t understand what is being asked, ask for help interpreting the problem and then make an honest attempt to solve it. Write up each problem independently. On both Part A and B exercises you are expected to write the answer in your own words. Write on your problem set whom you consulted and the sources you used. If you fail to do so, you may be charged with plagiarism and subject to serious penalties. It is illegal to consult materials from previous semesters. Key to Notation 2.1 = Section 2.1 of the Simmons book Notes G = section G of the Notes (Course Reader) 1A-3 = Exercise 1A-3 in Section E (Exercises) of the Notes (solved in section S) 2.4/13; 81/4 = in Simmons, respectively, section 2.4 Problem 13; page 81 Problem 4 Homeworks Problem Set 1 (PDF) Problem Set 2 (PDF 1) (PDF 2) Problem Set 3 (PDF) Problem Set 4 (PDF) Problem Set 5 (PDF) Problem Set 6 (PDF) Problem Set 7 (PDF) Problem Set 8 (PDF 1) (PDF 2) Exams took place in the sessions noted in the table. | SES # | EXAM # | EXAM INFORMATION | PRACTICE EXAMS | EXAMS | --- --- | 8 | 1 | Covers Ses #1-7 Review sheet (PDF) | Practice questions for exam 1 (PDF) Solutions (PDF 1) (PDF 2) Practice exam 1 (PDF) Solutions (PDF) | Exam (PDF) Solution (PDF) | | 17 | 2 | Covers Ses #8-16 Review sheet (PDF) | Practice questions for exam 2 (PDF) Solutions (PDF) Practice exam 2 (PDF) Solutions (PDF) | Exam (PDF) Solution (PDF) | | 26 | 3 | Covers Ses #18-24 Review sheet (PDF) | Practice questions for exam 3 (PDF) Solutions (PDF) Practice exam 3 (PDF) Solutions (PDF) | Exam (PDF) Solution (PDF) | | 33 | 4 | Covers Ses #26-32 Review sheet (PDF) | Sheet of formulas which will be provided on exam 4 (PDF) Practice questions for exam 4 (PDF) Solutions (PDF) Practice exam 4 (PDF) Solutions (PDF) | Exam (PDF) Solution (PDF) | | | Final | Covers the entire semester’s work, including all the material since exam 4 | End of term info (PDF) Practice final (PDF) Solutions (PDF) | | Course Info Instructor Prof. David Jerison Departments Mathematics As Taught In Fall 2006 Level Undergraduate Topics Mathematics Calculus Differential Equations Learning Resource Types assignment Problem Sets grading Exams with Solutions notes Lecture Notes theaters Lecture Videos Download Course Over 2,500 courses & materials Freely sharing knowledge with learners and educators around the world. Learn more © 2001–2025 Massachusetts Institute of Technology Creative Commons License Terms and Conditions Proud member of: © 2001–2025 Massachusetts Institute of Technology You are leaving MIT OpenCourseWare Please be advised that external sites may have terms and conditions, including license rights, that differ from ours. MIT OCW is not responsible for any content on third party sites, nor does a link suggest an endorsement of those sites and/or their content. Continue
14935
https://www.omnicalculator.com/health/isotretinoin
Isotretinoin Dose Calculator LinkedIn Website Aleksandra (Ola) Zając, MD, is a medical doctor with a passion for lifestyle medicine. She wants to never stop learning while sharing what she already knows as a health educator and prophylaxis popularizer. She mainly concentrates on how human lifestyle impacts their health and disease. After hours, she likes weightlifting, foreign affairs podcasts, learning Korean, and dogs (with a special affection for her own pup Zoja). See full profile LinkedIn Website Research Gate Dominik Czernia, PhD, is a physicist at the Institute of Nuclear Physics in Kraków, specializing in condensed matter physics with a focus on molecular magnetism. He has led several national research projects, pioneering innovative approaches to novel materials for high technology. Passionate about making science accessible, Dominik has created various calculators, mostly in physics and math categories. In his free time, he enjoys family walks, city explorations, mountain hiking, and traveling everywhere by bike. See full profile Based on 2 sources If you struggle with skin conditions and your doctor has suggested giving oral retinoids a chance, the isotretinoin dose calculator (or Accutane® dosage calculator) is here to help you. With this tool, you will be able to count your cumulative isotretinoin dose, the daily Accutane® dosage, and the duration of the whole treatment. If you want to know more, check out the sections on isotretinoin's side effects and the drug's mechanism of the action below. We explained what isotretinoin is, how does isotretinoin work, and how to reduce the side effects of isotretinoin. Keep reading! We try our best to make our Omni Calculators as precise and reliable as possible. However, this tool can never replace a professional doctor's assessment. If any health condition bothers you, consult a physician. What is isotretinoin? What is Accutane®? Accutane® is the most popular brand name for a drug called isotretinoin. It is one of the vitamin A derivatives from the family called retinoids. We use isotretinoin to treat severe forms of acne. Accutane® comes as jelly capsules, and you can buy it in doses of 10, 20, and 40 mg of isotretinoin in one capsule. Other brands of isotretinoin also come in a 5 mg dose. How long does Accutane® take to work? Treating acne with the right Accutane® dosage takes time as the therapy usually lasts for months or even more than a year. However, it is supposed to treat your skin for the rest of your life. Sometimes, in cases of relapse, the course can be repeated. How to use the isotretinoin dose calculator Accutane® dosage The isotretinoin dose calculator can be helpful with a few problems regarding Accutane dosage. It has a built-in Accutane® dosage chart. To use it, follow the below instructions: Start by typing in your weight. You can switch between kg and lbs. In the next field of the isotretinoin dose calculator, you can calculate how much time (approximately) it will take to administer the full cumulative dose. It takes a mean value of a cumulative dose, which is 135 mg per kilogram. You will then see the range of your isotretinoin cumulative dose. It varies between 120 and 150 mg of the drug per kilogram of your weight. You can also change the result and switch to counting how much isotretinoin you have to take daily if you want to take the full cumulative dose within a certain amount of time. Alternatively, if you know your full dose (e.g., because your dermatologist has counted it for you), you can choose to input your cumulative isotretinoin dose and the parameter that you know — either the duration of your therapy or your daily dose. The calculator will then provide the other number. How to set a daily dose You usually start with a low introductory dose, and if you feel okay, you can increase it to around 1 mg/kg per day — the recommended maximum dose for most people. However, singular severe cases of adult acne can benefit from increasing it to even 2 mg/kg per day. However, the cumulative isotretinoin dose does not change. Your leading doctor should always have the last word in terms of your daily and cumulative dose. How does Accutane® work? While you now know "What is Accutane®?" you may still be wondering - how does isotretinoin work on my skin? There are three main mechanisms of acne: An individual can experience one, two, or all three phenomena and have acne because of this. There are, of course, multiple drugs and cosmetics for acne. Some of them are antibiotics, and some types of acids, like azelaic acid; you may also meet topical retinoids or topical isotretinoin. However, oral isotretinoin is the only one that works on every level. First, it promotes cell turnover so that keratinization slows down. Then it shrinks the sebaceous (oil) glands of the skin, so they don't have as much sebum. Lastly, using the proper Accutane® dosage chart stops the acne bacteria from multiplying. This results in less inflammation and reduced redness. How to reduce the side effects of isotretinoin Every drug has possible side effects, and Accutane® is no exception. The drug is also quite potent, and it can cause some complications as well. The list is quite long and can be scary sometimes. That's why during treatment, you should check your blood often, be mindful of any symptoms and if you're a woman, take care of effective contraception. The side effects observed in some people due to oral isotretinoin are: You can learn how to reduce the side effects of isotretinoin in the next section in more detail. How to manage your therapy with Accutane® The isotretinoin dose calculator may be helpful to check the proper Accutane® dosage, but you still need to pay attention to the following points: Be cautious of your symptoms and don't forget to take regular blood tests. If you catch the signs early, you might address them immediately and prevent potentially severe conditions. Sometimes a decrease in dose is all it needs. Stay away from alcohol for the duration of the treatment. Alcohol and isotretinoin are not a good match, as taken together, they work synergistically and can damage your liver, pancreas, and blood sugar. Remember about the photosensitivity that your skin experiences. Don't go expose yourself to any unnecessary direct sunlight; avoid sunbathing and solariums too. Use high SPF sunscreens and hats and stay in the shade. Better safe than sorry! Take care of your whole body. You might experience dryness not only on the skin but also of your eyes, lips, and mouth. Use gentle but rich moisturizers for your face and hands. You might want to refrain from contact lenses for some time and may find soothing eye drops beneficial. And buy yourself a nice protective lip balm so you don't end up with cheilosis. You know how does Accutane® works, but do you know it first needs to be absorbed properly? Vitamin A (and its derivatives, like retinoids) belong to the group of fat-soluble vitamins. The group, besides vitamin A, includes vitamins D, E, and K. To effectively absorb those micronutrients, you need to take them preferably with a fatty meal. You don't have to go keto all the way - just eat your isotretinoin dose with a proper dinner, not a fruit snack. Don't play around with your dose unless your dermatologist tells you so. Never exceed your daily dose, even if the Accutane® dosage calculator tells you so. You should listen to the experts who know your individual case. If you're a woman, use effective contraception before, during, and one month after the treatment. It is preferred to use two different methods at once (preferably one physical and the other hormonal). Your doctor can also ask you for a regular pregnancy test. Don't forget about it. Take a look at the few tools that can help you manage Accutane® therapy: FAQs What is isotretinoin? Isotretinoin is a drug derived from vitamin A, commonly used in cases of severe acne. It can be taken orally (which is sold as Accutane®) or topically in the form of a gel or cream. Is isotretinoin an antibiotic? Despite some anti-bacterial properties that stop the acne bacteria from spreading, isotretinoin is not technically an antibiotic. Be sure to consult your dermatologist if you have any doubts about what isotretinoin is to avoid or at least reduce potential side effects. When does isotretinoin start to work? You will notice signs of isotretinoin working in the first couple of weeks - you will feel the dryness of the skin and can experience flares around weeks 4 - 6. The acne usually starts improving in about a month of taking the drug and continues to do so up to two months after the therapy. How long do you take isotretinoin? An isotretinoin therapy usually lasts for 4 - 6 months (16 - 24 weeks). You aim to take your cumulative dose (about 120 - 150 mg/kg of body weight) - no more, no less. Is isotretinoin Accutane? Yes. Accutane® is a drug in the form of jelly capsules that contains the active ingredient - isotretinoin. Although Accutane® is not the only formulation of oral isotretinoin, it is the most widespread one. You can find Accutane® in a dose of 10, 20, or 40 mg per capsule. Related calculators Your weight Do you know your exact cumulative dose? I want to calculate Daily dose Duration Share result Did we solve your problem today? Yes No Check out 27 similar dosage calculators 💊 Adderall dosage Aleve dosage Benadryl
14936
https://www.quora.com/What-is-the-value-of-R-in-Kp-Kc-RT-n
What is the value of R in Kp=Kc(RT) ^∆n? - Quora Something went wrong. Wait a moment and try again. Try again Skip to content Skip to search Sign In Chemistry Kc and Kc Universal Gas Constant Equilibrium Systems Thermodynamics Physical Constant Ideal Gas Equation Basic Physical Chemistry Gas Equilibrium 5 What is the value of R in Kp=Kc(RT) ^∆n? All related (32) Sort Recommended Assistant Bot · 1y In the equation K p=K c(R T)Δ n K p=K c(R T)Δ n, the value of R R is the ideal gas constant. It can be expressed in different units depending on the context of the problem. The most commonly used values are: R=0.0821 L⋅atm/(K⋅mol)R=0.0821 L⋅atm/(K⋅mol) R=8.314 J/(K⋅mol)R=8.314 J/(K⋅mol) When using the equation, make sure to use the appropriate value of R R that is consistent with the units of pressure (atmospheres or pascals) and temperature (Kelvin) in your calculations. Explanation of Terms: K p K p = equilibrium constant in terms of partial pressures K c K c = equilibriu Continue Reading In the equation K p=K c(R T)Δ n K p=K c(R T)Δ n, the value of R R is the ideal gas constant. It can be expressed in different units depending on the context of the problem. The most commonly used values are: R=0.0821 L⋅atm/(K⋅mol)R=0.0821 L⋅atm/(K⋅mol) R=8.314 J/(K⋅mol)R=8.314 J/(K⋅mol) When using the equation, make sure to use the appropriate value of R R that is consistent with the units of pressure (atmospheres or pascals) and temperature (Kelvin) in your calculations. Explanation of Terms: K p K p = equilibrium constant in terms of partial pressures K c K c = equilibrium constant in terms of concentrations T T = temperature in Kelvin Δ n Δ n = change in the number of moles of gas (moles of products - moles of reactants) during the reaction. Upvote · 9 1 Related questions More answers below What is the R value in KP and KC relation? What is the value of Kp when Kc=Kp(RT) ^∆n? Which R (universal gas constant) should we use when converting from Kc to Kp or from Kp to Kc (0.082 or 8.314)? What are the units of Kc and Kp respectively? What are Kp and Kc? What is relation between them? Ah Clem PhD from University of Kentucky (Graduated 1978) · Author has 135 answers and 105.5K answer views ·6y Kc=Kp(RT)Δn The basic equation relates the (Gas Equilibrium Constants in concentration units to pressure units). In the equation presented Δn = (Total moles of gas on the products side) - (Total moles of gas on the reactants side). Hence Δn = (d + c) - (a + b)] [where the lower case numbers are the exponents the chemical equation] For example: If the reaction is A + B^2 = C^2 + D then Δn = (2+1)-(1+2) = 0 R is the gas constant found in the ideal gas law (0.0821 Liter Atm / Kelvin Mole). Therefore, consistent units require that you use T in K. You may of course use other units for R in which case your Continue Reading Kc=Kp(RT)Δn The basic equation relates the (Gas Equilibrium Constants in concentration units to pressure units). In the equation presented Δn = (Total moles of gas on the products side) - (Total moles of gas on the reactants side). Hence Δn = (d + c) - (a + b)] [where the lower case numbers are the exponents the chemical equation] For example: If the reaction is A + B^2 = C^2 + D then Δn = (2+1)-(1+2) = 0 R is the gas constant found in the ideal gas law (0.0821 Liter Atm / Kelvin Mole). Therefore, consistent units require that you use T in K. You may of course use other units for R in which case your Kc and Kp units would be different. Upvote · 99 20 9 1 Promoted by Grammarly Grammarly Great Writing, Simplified ·Aug 18 Which are the best AI tools for students? There are a lot of AI tools out there right now—so how do you know which ones are actually worth your time? Which tools are built for students and school—not just for clicks or content generation? And more importantly, which ones help you sharpen what you already know instead of just doing the work for you? That’s where Grammarly comes in. It’s an all-in-one writing surface designed specifically for students, with tools that help you brainstorm, write, revise, and grow your skills—without cutting corners. Here are five AI tools inside Grammarly’s document editor that are worth checking out: Do Continue Reading There are a lot of AI tools out there right now—so how do you know which ones are actually worth your time? Which tools are built for students and school—not just for clicks or content generation? And more importantly, which ones help you sharpen what you already know instead of just doing the work for you? That’s where Grammarly comes in. It’s an all-in-one writing surface designed specifically for students, with tools that help you brainstorm, write, revise, and grow your skills—without cutting corners. Here are five AI tools inside Grammarly’s document editor that are worth checking out: Docs – Your all-in-one writing surface Think of docs as your smart notebook meets your favorite editor. It’s a writing surface where you can brainstorm, draft, organize your thoughts, and edit—all in one place. It comes with a panel of smart tools to help you refine your work at every step of the writing process and even includes AI Chat to help you get started or unstuck. Expert Review – Your built-in subject expert Need to make sure your ideas land with credibility? Expert Review gives you tailored, discipline-aware feedback grounded in your field—whether you're writing about a specific topic, looking for historical context, or looking for some extra back-up on a point. It’s like having the leading expert on the topic read your paper before you submit it. AI Grader – Your predictive professor preview Curious what your instructor might think? Now, you can get a better idea before you hit send. AI Grader simulates feedback based on your rubric and course context, so you can get a realistic sense of how your paper measures up. It helps you catch weak points and revise with confidence before the official grade rolls in. Citation Finder – Your research sidekick Not sure if you’ve backed up your claims properly? Citation Finder scans your paper and identifies where you need sources—then suggests credible ones to help you tighten your argument. Think fact-checker and librarian rolled into one, working alongside your draft. Reader Reactions – Your clarity compass Writing well is one thing. Writing that resonates with the person reading it is another. Reader Reactions helps you predict how your audience (whether that’s your professor, a TA, recruiter, or classmate) will respond to your writing. With this tool, easily identify what’s clear, what might confuse your reader, and what’s most likely to be remembered. All five tools work together inside Grammarly’s document editor to help you grow your skills and get your writing across the finish line—whether you’re just starting out or fine-tuning your final draft. The best part? It’s built for school, and it’s ready when you are. Try these features and more for free at Grammarly.com and get started today! Upvote · 999 201 99 34 9 3 Filip Cernatič PhD in Chemistry from University of Strasbourg (Graduated 2023) ·4y Related How do you prove that, Kp= Kc (RT) ^∆n? Without loss of generality, let’s assume we are dealing with a simple reaction involving one reactant and one product, both in the gaseous state: a A<=>b B a A<=>b B The equilibrium constant in terms of molar concentrations is written as: K c=c b B c a A K c=c B b c A a while the eq. constant in terms of partial pressures for the same reaction is written as: K p=p b B p a A K p=p B b p A a If we express partial pressures in terms of molar concentrations, we can connect the two equilibrium constants: p I=n I R T V=c I R T p I=n I R T V=c I R T K p=(c B R T)b(c A R T)a K p=(c B R T)b(c A R T)a =\frac{c_{B}^{b}}{c_{A}^{=\frac{c_{B}^{b}}{c_{A}^{ Continue Reading Without loss of generality, let’s assume we are dealing with a simple reaction involving one reactant and one product, both in the gaseous state: a A<=>b B a A<=>b B The equilibrium constant in terms of molar concentrations is written as: K c=c b B c a A K c=c B b c A a while the eq. constant in terms of partial pressures for the same reaction is written as: K p=p b B p a A K p=p B b p A a If we express partial pressures in terms of molar concentrations, we can connect the two equilibrium constants: p I=n I R T V=c I R T p I=n I R T V=c I R T K p=(c B R T)b(c A R T)a K p=(c B R T)b(c A R T)a =c b B c a A R T b R T a=c B b c A a R T b R T a =K c(R T)b−a=K c(R T)b−a where b−a=Δ n b−a=Δ n is the difference between total moles of gas on the product side and the total moles of gas on the reactant side. Hence, the connection between K p K p and K c K c is expressed as K p=K c(R T)Δ n K p=K c(R T)Δ n Upvote · 99 10 Ernest Leung M.Phil. in Chemistry, The Chinese University of Hong Kong · Author has 11.9K answers and 5.8M answer views ·May 20 Related Is Kp=KC(RT) ∆n always correct? Suppose the question is: "Is Kp = Kc(RT)^(Δn) always correct?" The answer is as follows. Continue Reading Suppose the question is: "Is Kp = Kc(RT)^(Δn) always correct?" The answer is as follows. Upvote · 9 6 Related questions More answers below What is R in Kp=(RT) ∆n? What is the relationship between KP, KC, and KX? How does Kc = Kp in an equilibrium position? Are Kp and Keq the same value? If so why are Kc and Kp not the same value? What is the kc and kp of 2HI (g) → H2 (g) + I2 (g)? Anonymous 6y Related What are Kp and Kc? What is relation between them? Continue Reading Upvote · 99 40 9 6 Promoted by London Academy of Trading London Academy of Trading The UK's 1st Accredited Trading Academy ·Updated Feb 28 How do I start a career in trading without prior experience? Starting a career in tradingwith no prior experience is completely possible, and taking a structured course is a great first step. At the London Academy of Trading, we offer a one-week Introduction to Financial Markets and Trading course, perfect for beginners. This course gives you a solid foundation in financial markets, covering everything from a day in the life of a trader to fundamental and technical analysis. You’ll also dive into trading psychology and learn how it can impact performance. The course is offered both online through live interactive sessions and in person at our tra Continue Reading Starting a career in tradingwith no prior experience is completely possible, and taking a structured course is a great first step. At the London Academy of Trading, we offer a one-week Introduction to Financial Markets and Trading course, perfect for beginners. This course gives you a solid foundation in financial markets, covering everything from a day in the life of a trader to fundamental and technical analysis. You’ll also dive into trading psychology and learn how it can impact performance. The course is offered both online through live interactive sessions and in person at our trading floor in central London. With a focus on currencies, commodities, and major indices, you’ll get hands-on experience trading real-time markets on a demo platform. Throughout the course, you'll be fully supported by our expert lecturers, who are experienced traders themselves, ensuring you gain the knowledge and confidence to take your first steps in trading. Our courses are accredited by ABE or certified by CPD, providing you with a recognised qualification to support your career in trading. Upvote · 99 67 9 8 9 2 Ernest Leung M.Phil. in Chemistry, The Chinese University of Hong Kong · Author has 11.9K answers and 5.8M answer views ·Mar 7 Related Which R (universal gas constant) should we use when converting from Kc to Kp or from Kp to Kc (0.082 or 8.314)? Which R (universal gas constant) should we use when converting from Kc to Kp or from Kp to Kc (0.082 or 8.314)? Gas law: PV = nRT V is L, n is in mol, T is in K. When P is in atm, take R = 0.08206 L atm / (mol K) When P is in kPa, take R = 8.314 L kPa / (mol K) = 8.314 J / (mol K) Upvote · 9 2 Anand Shankar Sahay Former Assoc. Professor of Chemistry(Retd) at TNB College (1978–2016) · Author has 232 answers and 1M answer views ·Updated 7y Related What is the value of R in the relation of KP and KC? If standard pressure is 1 bar and standard concentration is 1 mol/L, R = 0.0831 bar.L/K.mol in Kp = Kc(RT)^(delta n) If standard pressure is 1 atm, R = 0.0821 L.atm/K.mol . The relationship between KP and Kc is based on p =cRT, the unit of R is in bar.L/K.mol if pressure is in bar and concentration in mol/L. Similarly , if pressure is in atmosphere, R is in L.atm/K.mol. The equilibrium constants are dimensionless and their expressions are in terms of pressure or concentration relative to the corresponding standard state. Upvote · 99 11 Sponsored by Atlassian What is the Best Way to Start in the Atlassian Community Forums? Update your profile, join discussions, and collect badges along the way that lead to the mega-badge! Learn More 99 51 Gaurav Kumar Influencer, Perfectionist, researcher · Author has 204 answers and 1.4M answer views ·6y Related What are Kp and Kc? What is relation between them? Continue Reading Upvote · 999 149 99 26 9 4 Alka B. Gupta Masters in Pharmaceutical Chemistry ·9y Related What are Kp and Kc? What is relation between them? K c and K p are the equilibrium constants of gaseous mixtures. However, the difference between the two constants is that K c is defined by molar concentrations, whereas K p is defined by the partial pressures of the gasses inside a closed system. The equilibrium constants do not include the concentrations of single components such as liquids and solid, and they do not have any units. Derive the relationship between K p and K c Relationship between Kp and Kc Consider the following reversible reaction: aA + bB ⇌ cC + dD The equilibrium constant for the reaction expressed in terms of the concentration ( Continue Reading K c and K p are the equilibrium constants of gaseous mixtures. However, the difference between the two constants is that K c is defined by molar concentrations, whereas K p is defined by the partial pressures of the gasses inside a closed system. The equilibrium constants do not include the concentrations of single components such as liquids and solid, and they do not have any units. Derive the relationship between K p and K c Relationship between Kp and Kc Consider the following reversible reaction: aA + bB ⇌ cC + dD The equilibrium constant for the reaction expressed in terms of the concentration (mol / litre) may be expressed as: K c = [C] c [D] d / [A] a [B] b If the equilibrium involves gaseous species, then the concentrations may be expressed in terms of partial pressures of the gaseous substance. The equilibrium constant in terms of partial pressures may be given as: K p = pcC pdD / paA pbB Where pA, pB, pC and pD represents the partial pressures of the substance A, B, C and D respectively. If gases are assumed to be ideal, then according to ideal gas equation: pV = nRT p = nRT / V Where p ———-> pressure in Pa n ——————–> amount of gas in mol V ——————–> Volume in m3 T ———————> temperature in Kelvin n/V = concentration, C or p = CRT or [gas] RT If C is in mol dm-3 and p is in bar, then R = 0.0831 bar dm3 mol-1 K-1 Therefore, at constant temperature, pressure of the gas P is proportional to its concentration C, i.e. Let us suppose a general reaction: aA + bB↔ cC + dD The equilibrium constant will be given as: Kp = (pC) c (pD) d / (pA) a (pB) b ……. (1) Now, p = CRT Hence, pA = [A] RT where [A] is the molar concentration of A Similarly, pB = [B] RT pC = [C] RT pD = [D] RT where [B], [C] and [D] are the molar concentration of B, C and D respectively Substituting these values in expression for Kp i.e. in equation (1) Kp = [([C] RT) c ([D] RT) d]/[([A] RT) a ([B] RT) b] = [C] c [D] d (RT) c+d/[A] a [B] b (RT) a+b = [C] c [D] d (RT) c+d – a+b/[A] a [B] b = Kc (RT) c+d – a+b = Kc (RT) ∆n Where ∆n = (c + d) – (a + b) i.e. number of moles of gaseous products – number of moles of gaseous reactants in the balanced chemical reaction. Hence relation between Kp and Kc is given as: Kp = Kc (RT) ∆n Upvote · 999 108 9 5 9 3 Sponsored by Aberdeen IT India: Does tariff turmoil matter for markets? India is facing rising US tariffs, but the direct impact remains limited. Learn More 999 106 Ernest Leung M.Phil. in Chemistry, The Chinese University of Hong Kong · Author has 11.9K answers and 5.8M answer views ·Mar 7 Related How are KP and KC related in PCL5=PCL3+CL2? Suppose the question is: "How are Kp and Kc related in PCl₅ ⇌ PCl₃ + Cl₂?" The answer is as follows. Continue Reading Suppose the question is: "How are Kp and Kc related in PCl₅ ⇌ PCl₃ + Cl₂?" The answer is as follows. Your response is private Was this worth your time? This helps us sort answers on the page. Absolutely not Definitely yes Upvote · 9 4 Michael Mombourquette Retired Chemistry Prof, Church member, Knight of Columbus, · Author has 6.8K answers and 17.7M answer views ·6y Related Which R (universal gas constant) should we use when converting from Kc to Kp or from Kp to Kc (0.082 or 8.314)? Use whichever one you need to cancel the units of pressure and get units of concentration. This is only done if the system at question is a gas phase reaction. The concentration of a gas phase reaction is n/V. the pressure is P. they are related in the ideal gas equation. PV=nRT. if you are using P and V in atmospheres and litres, respectively then you should use R in those units too. 0.08203 L-atm/mol-K to properly cancel out the units. Thus, P = n/V RT is the conversion equation between C and P. If your pressure and volume is using SI units then use R = 8.3145 J/molK as these are SI units. Pers Continue Reading Use whichever one you need to cancel the units of pressure and get units of concentration. This is only done if the system at question is a gas phase reaction. The concentration of a gas phase reaction is n/V. the pressure is P. they are related in the ideal gas equation. PV=nRT. if you are using P and V in atmospheres and litres, respectively then you should use R in those units too. 0.08203 L-atm/mol-K to properly cancel out the units. Thus, P = n/V RT is the conversion equation between C and P. If your pressure and volume is using SI units then use R = 8.3145 J/molK as these are SI units. Personally, I never teach this as it’s really just mathematical gymnastics that no one in their real work world would ever use. The real Thermodynamic K constant has no units and therefore has no conversion. Kc and Kp are really just shortcut versions of K and way too many teachers and professors (and textbooks they use) rely on these false equilibrium constants. Upvote · 9 5 Chandan Chanki Former Student · Author has 96 answers and 375.4K answer views ·7y Related What is the relationship between KP and KC? Kc and Kp are the equilibrium constants of gaseous mixtures. However, the difference between the two constants is that Kc is defined by molar concentrations, whereas Kp is defined by the partial pressures of the gasses inside a closed system. Continue Reading Kc and Kp are the equilibrium constants of gaseous mixtures. However, the difference between the two constants is that Kc is defined by molar concentrations, whereas Kp is defined by the partial pressures of the gasses inside a closed system. Upvote · 99 28 9 1 Ravi Divakaran Studied Chemistry&Science · Author has 1.5K answers and 3.5M answer views ·5y Related Is the equation Kp=Kc(RT) ^Δn valid only for homogeneous gaseous equilibrium? Yes. The equation is valid only for reactions in which the equilibrium constant can be calculated both in terms of partial pressure (Kp) and using concentration in moles per litre (Kc). This is possible only in reaction mixtures consisting entirely of gases. Upvote · 9 3 Related questions What is the R value in KP and KC relation? What is the value of Kp when Kc=Kp(RT) ^∆n? Which R (universal gas constant) should we use when converting from Kc to Kp or from Kp to Kc (0.082 or 8.314)? What are the units of Kc and Kp respectively? What are Kp and Kc? What is relation between them? What is R in Kp=(RT) ∆n? What is the relationship between KP, KC, and KX? How does Kc = Kp in an equilibrium position? Are Kp and Keq the same value? If so why are Kc and Kp not the same value? What is the kc and kp of 2HI (g) → H2 (g) + I2 (g)? What is the relation between Kc & Kp? Under what condition is Kp=Kc? How do I convert from KP to KC? What is relation between Kp/Kc and rate constant K? What is the value of Kc? Related questions What is the R value in KP and KC relation? What is the value of Kp when Kc=Kp(RT) ^∆n? Which R (universal gas constant) should we use when converting from Kc to Kp or from Kp to Kc (0.082 or 8.314)? What are the units of Kc and Kp respectively? What are Kp and Kc? What is relation between them? What is R in Kp=(RT) ∆n? Advertisement About · Careers · Privacy · Terms · Contact · Languages · Your Ad Choices · Press · © Quora, Inc. 2025 Privacy Preference Center When you visit any website, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized web experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and change our default settings. However, blocking some types of cookies may impact your experience of the site and the services we are able to offer. More information Allow All Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Performance Cookies Always Active These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Functional Cookies Always Active These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Targeting Cookies Always Active These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant adverts on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. If you do not allow these cookies, you will experience less targeted advertising. Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Reject All Confirm My Choices
14937
https://www.physicsforums.com/threads/solving-2-equations-and-2-unknowns-with-vectors.971028/
Solving 2 equations and 2 unknowns with vectors • Physics Forums Insights Blog-- Browse All Articles --Physics ArticlesPhysics TutorialsPhysics GuidesPhysics FAQMath ArticlesMath TutorialsMath GuidesMath FAQEducation ArticlesEducation GuidesBio/Chem ArticlesTechnology GuidesComputer Science Tutorials ForumsGeneral MathCalculusDifferential EquationsTopology and AnalysisLinear and Abstract AlgebraDifferential GeometrySet Theory, Logic, Probability, StatisticsMATLAB, Maple, Mathematica, LaTeX Trending Log inRegister What's new General Math Calculus Differential Equations Topology and Analysis Linear and Abstract Algebra Differential Geometry Set Theory, Logic, Probability, Statistics MATLAB, Maple, Mathematica, LaTeX Menu Log in Register Navigation More options Style variation SystemLightDark Contact us Close Menu Forums Mathematics Linear and Abstract Algebra I Solving 2 equations and 2 unknowns with vectors I Thread starter matt382 Start date Apr 28, 2019 TagsUnknownsVectors Apr 28, 2019 1 matt382 2 0 Hi, I have a work-related problem to solve and I'm not sure where to start and a pointer would be appreciated. I have the following two sets of polar equations V1 + V2 = Vx V1 + V2 + V3 = Vy, where Vx, V3, and Vy have been measured with reasonable accuracy, maybe +/-2% Any thoughts on how to approach? If, for example, if I convert to rectangular form and try substitution the entire thing is quickly swimming in a sea of sines and cosines that cannot possibly be solvable My question is this: There should be enough known to solve, is that right? Can this just go into a matrix and get solved that way? thanks for any help Physics news on Phys.org New adaptive optics system promises sharper gravitational-wave observations Physics-informed AI learns local rules behind flocking and collective motion behaviors New perspectives on light-matter interaction: How virtual charges influence material responses Apr 28, 2019 2 BvU Science Advisor Homework Helper 16,212 4,925 hello matt, looks like you do not have enough to solve: all you have is 2 measurements of v1+v2 Apr 28, 2019 3 fresh_42 Staff Emeritus Science Advisor Homework Helper Insights Author 2024 Award 20,671 27,937 The equations don't allow you to separate from . You could set and have the same amount of information coded. There is no way to achieve the values of or . Discover more DIY electronics kits Engineering tools Mathematical Physics Books Physics simulation software Technical writing guides Programming courses Physics Forums Subscription Educational software Internet forum Academic advising services Apr 28, 2019 4 matt382 2 0 Hi BvU and fresh_42, I understand your point if they were scalar numbers. But it intuitively feels to me that because these are vectors that there's an additional constraint present in the form of angles that must be achieved. In the attached, with System 1 of course V1 and V2 have an infinite solution space. But look at System 2: Visually it appears to be completely constrained. If you change the position of V3, then you will break the system 1 constraint that V1 and V2 have a fixed angle between them, for example. And we know both VX and VY. In other words, this looks completely constrained to me--there's no other way to draw the vectors when both systems are considered. Am I just not seeing this correctly? Attachments angles.PNG 14.3 KB · Views: 352 Apr 28, 2019 5 fresh_42 Staff Emeritus Science Advisor Homework Helper Insights Author 2024 Award 20,671 27,937 No. You need an equation which makes two variables. You have only one variable: . Often does the job, but I don't know your system and whether you can measure the difference. Jun 11, 2019 6 HallsofIvy Science Advisor Homework Helper 42,895 984 You have one equation that says V1+ V2= VX and another that says V1+ V2= VY- V3, with VX, VY, and V3 known. If VX= VY- V3 then there are infinitely many solutions. If they are not equal, there is no solution. Last edited: Jun 13, 2019 Likes Greg Bernhardt Jun 11, 2019 7 fresh_42 Staff Emeritus Science Advisor Homework Helper Insights Author 2024 Award 20,671 27,937 HallsofIvy said: You have one equation that says V1+ V2= VX and another that says V1+ V1= VY- V3, with VX, VY, and V3 known. If VX= VY- V3 then there are infinitely many solutions. If they are not equal, there is no solution. The situation has two unknowns and , and two known variables and The equations now read and . This can always uniquely be solved. Jun 11, 2019 8 BvU Science Advisor Homework Helper 16,212 4,925 That's just a typo ! Similar threads IOrthogonal transformations preserve length Dec 15, 2024 Replies 1 Views 2K Question about vector independence Dec 13, 2014 Replies 10 Views 2K Can you rearrange vectors in a set? And another misc questn. Jan 29, 2016 Replies 7 Views 2K Am I Calculating the Components of V3 Correctly? Jan 31, 2021 Replies 9 Views 2K ADiffusion equation and a system of equations with reciprocal unknowns? Sep 15, 2019 Replies 2 Views 2K ILinear dependency of Vectors above R and C and the det May 21, 2016 Replies 5 Views 1K IDetermining elements of Markov matrix from a known stationary vector Jan 2, 2023 Replies 24 Views 2K IIndex notation of vector rotation Dec 3, 2024 Replies 7 Views 2K BCan a matrix be transformed like a vector? Jun 9, 2018 Replies 2 Views 1K Solving a circuit using both the mesh and node analysis May 29, 2018 Replies 3 Views 2K Share: BlueskyLinkedInShare Forums Mathematics Linear and Abstract Algebra Hot Threads E IHow to show ##p(x)=g(x)x\pm 1\in\Bbb{Q}[x]## is irreducible in ##\Bbb{Q}_{\Bbb{Z}}[x]##? Started by elias001 May 20, 2025 Replies: 48 Linear and Abstract Algebra E IShowing ##k[x_1,\ldots,x_n]/\mathfrak{a}## is finite dimensional Started by elias001 May 24, 2025 Replies: 40 Linear and Abstract Algebra ANear-Rings with Noncommutative Addition and Two-Sided Distributivity Started by lpetrich Apr 23, 2025 Replies: 4 Linear and Abstract Algebra E IHow do we distinguish two different notations for cokernel and coimage? Started by elias001 Jun 13, 2025 Replies: 41 Linear and Abstract Algebra E ILocalising a non integral domain at a prime Started by elias001 Jun 12, 2025 Replies: 17 Linear and Abstract Algebra Recent Insights InsightsQuantum Entanglement is a Kinematic Fact, not a Dynamical Effect Started by Greg Bernhardt Sep 2, 2025 Replies: 11 Quantum Physics InsightsWhat Exactly is Dirac’s Delta Function? - Insight Started by Greg Bernhardt Sep 2, 2025 Replies: 3 General Math InsightsRelativator (Circular Slide-Rule): Simulated with Desmos - Insight Started by Greg Bernhardt Sep 2, 2025 Replies: 1 Special and General Relativity P InsightsFixing Things Which Can Go Wrong With Complex Numbers Started by PAllen Jul 20, 2025 Replies: 7 General Math F InsightsFermat's Last Theorem Started by fresh_42 May 21, 2025 Replies: 105 General Math F InsightsWhy Vector Spaces Explain The World: A Historical Perspective Started by fresh_42 Mar 13, 2025 Replies: 0 General Math Change width Contact About Terms Privacy Help RSS 2025 © Physics Forums, All Rights Reserved Back Top
14938
https://artofproblemsolving.com/wiki/index.php/2009_AMC_12A_Problems/Problem_17?srsltid=AfmBOooRmh00pgYtgwn5eN6uZhyTbJsgWMaLIHRaXZGTwql_i1MvH6jT
Art of Problem Solving 2009 AMC 12A Problems/Problem 17 - AoPS Wiki Art of Problem Solving AoPS Online Math texts, online classes, and more for students in grades 5-12. Visit AoPS Online ‚ Books for Grades 5-12Online Courses Beast Academy Engaging math books and online learning for students ages 6-13. Visit Beast Academy ‚ Books for Ages 6-13Beast Academy Online AoPS Academy Small live classes for advanced math and language arts learners in grades 2-12. Visit AoPS Academy ‚ Find a Physical CampusVisit the Virtual Campus Sign In Register online school Class ScheduleRecommendationsOlympiad CoursesFree Sessions books tore AoPS CurriculumBeast AcademyOnline BooksRecommendationsOther Books & GearAll ProductsGift Certificates community ForumsContestsSearchHelp resources math training & toolsAlcumusVideosFor the Win!MATHCOUNTS TrainerAoPS Practice ContestsAoPS WikiLaTeX TeXeRMIT PRIMES/CrowdMathKeep LearningAll Ten contests on aopsPractice Math ContestsUSABO newsAoPS BlogWebinars view all 0 Sign In Register AoPS Wiki ResourcesAops Wiki 2009 AMC 12A Problems/Problem 17 Page ArticleDiscussionView sourceHistory Toolbox Recent changesRandom pageHelpWhat links hereSpecial pages Search 2009 AMC 12A Problems/Problem 17 Contents [hide] 1 Problem 2 Solution 3 Solution 2 4 Solution 3 5 Alternate Solution 6 See Also Problem Let and be two different infinite geometric series of positive numbers with the same first term. The sum of the first series is , and the sum of the second series is . What is ? Solution Using the formula for the sum of a geometric series we get that the sums of the given two sequences are and . Hence we have and . This can be rewritten as . As we are given that and are distinct, these must be precisely the two roots of the equation . Using Vieta's formulas we get that the sum of these two roots is . Solution 2 Using the previous solution we reach the equality . Obviously, since , then so . -Vignesh Peddi Solution 3 We basically have two infinite geometric series whose sum is equivalent to the common ratio. Let us have a geometric series: . The sum is: Thus, and by Vieta's, the sum of the two possible values of ( and ) is . ~conantwiz2023 Alternate Solution Using the formula for the sum of a geometric series we get that the sums of the given two sequences are and . Hence we have and . This can be rewritten as . Which can be further rewritten as . Rearranging the equation we get . Expressing this as a difference of squares we get . Dividing by like terms we finally get as desired. Note: It is necessary to check that , as you cannot divide by zero. As the problem states that the series are different, , and so there is no division by zero error. See Also 2009 AMC 12A (Problems • Answer Key • Resources) Preceded by Problem 16Followed by Problem 18 1•2•3•4•5•6•7•8•9•10•11•12•13•14•15•16•17•18•19•20•21•22•23•24•25 All AMC 12 Problems and Solutions These problems are copyrighted © by the Mathematical Association of America, as part of the American Mathematics Competitions. Retrieved from " Art of Problem Solving is an ACS WASC Accredited School aops programs AoPS Online Beast Academy AoPS Academy About About AoPS Our Team Our History Jobs AoPS Blog Site Info Terms Privacy Contact Us follow us Subscribe for news and updates © 2025 AoPS Incorporated © 2025 Art of Problem Solving About Us•Contact Us•Terms•Privacy Copyright © 2025 Art of Problem Solving Something appears to not have loaded correctly. Click to refresh.
14939
https://arxiv.org/pdf/0708.3661
arXiv:0708.3661v2 [math.CO] 30 Sep 2007 On Kalai’s conjectures concerning centrally symmetric polytopes Raman Sanyal Axel Werner G¨ unter M. Ziegler Institute of Mathematics, MA 6-2 TU Berlin D-10623 Berlin, Germany {sanyal,awerner,ziegler }@math.tu-berlin.de September 30, 2007 Abstract In 1989 Kalai stated the three conjectures A, B, C of increasing strength concerning face numbers of centrally symmetric convex polytopes. The weakest conjecture, A, became known as the “3 d-conjecture”. It is well-known that the three conjectures hold in dimensions d ≤ 3. We show that in dimension 4 only conjectures A and B are valid, while conjecture C fails. Furthermore, we show that both conjectures B and C fail in all dimensions d ≥ 5. 1 Introduction A convex d-polytope P is centrally symmetric , or cs for short, if P = −P . Concerning face numbers, this implies that for 0 ≤ i ≤ d − 1 the number of i-faces fi(P ) is even and, since P is full-dimensional, that min {f0(P ), f d−1(P )} ≥ 2d. Beyond this, only very little is known for the general case. That is to say, the extra (structural) information of a central symmetry yields no substantial additional constraints for the face numbers on the restricted class of polytopes. Not uncommon to the f -vector business, the knowledge about face numbers is concentrated on the class of centrally symmetric simplicial , or dually simple , polytopes. In 1982, B´ ar´ any and Lov´ asz proved a lower bound on the number of vertices of simple cs polytopes with prescribed number of facets, using a generalization of the Borsuk–Ulam theorem. Moreover, they conjectured lower bounds for all face numbers of this class of polytopes with respect to the number of facets. In 1987 Stanley proved a conjecture of Bj¨ orner concerning the h-vectors of simplicial cs polytopes that implies the one by B´ ar´ any and Lov´ asz. The proof uses Stanley-Reisner rings and toric varieties plus a pinch of representation theory. The result of Stanley for cs polytopes was reproved in a more geometric setting by Novik by using “symmetric flips” in McMullen’s weight algebra . For general polytopes, lower bounds on the toric h-vector were recently obtained by A’Campo-Neuen by using combinatorial intersection cohomology. Unfortunately, the toric h-vector contains only limited information about the face numbers of general (cs) polytopes and thus the applicability of the result is limited (see Section 2.1). In , Kalai stated three conjectures about the face numbers of general cs polytopes. Let P 1be a (cs) d-polytope with f -vector f (P ) = ( f0, f 1, . . . , f d−1). Define the function s(P ) by s(P ) := 1 + d−1 ∑ i=0 fi(P ) = fP (1) where fP (t) := fd−1(P ) + fd−2(P )t + · · · + f0(P )td−1 + td is the f -polynomial. Thus, s(P )measures the total number of non-empty faces of P . Here is Kalai’s first conjecture from , the “3 d-conjecture”. Conjecture A. Every centrally-symmetric d-polytope has at least 3 d non-empty faces, i.e. s(P ) ≥ 3d.Is is easy to see that the bound is attained for the d-dimensional cube Cd and for its dual, the d-dimensional crosspolytope C△ d . It takes a moment’s thought to see that in dimensions d ≥ 4these are not the only polytopes with 3 d non-empty faces. An important class that attains the bound is the class of Hanner polytopes . These are defined recursively: As a start, every cs 1-dimensional polytope is a Hanner polytope. For dimensions d ≥ 2, a d-polytope H is a Hanner polytope if it is the direct sum or the direct product of two (lower dimensional) Hanner polytopes H′ and H′′ .The number of Hanner polytopes grows exponentially in the dimension d, with a Catalan-type recursion. It is given by the number of two-terminal networks with d edges, n(d) = 1, 1, 2, 4, 8, 18 , 40 , 94 , 224 , 548 , 1356 , . . . , for d = 1 , 2, . . . , as counted by Moon ; see also . Conjecture B. For every centrally-symmetric d-polytope P there is a d-dimensional Hanner polytope H such that fi(P ) ≥ fi(H) for all i = 0 , . . . , d − 1. For a d-polytope P and S = {i1, i 2, . . . , i k} ⊆ [d] = {0, 1, . . . , d − 1} let fS (P ) ∈ Z2[d] be the number of chains of faces F1 ⊂ F2 ⊂ · · · ⊂ Fk ⊂ P with dim Fj = ij for all j = 1 , . . . , k . Iden-tifying R2[d] with its dual space via the standard inner product, we write α(P ) := ∑ S αS fS (P )for ( αS )S⊆[d] ∈ R2[d] . The set Pd = {(αS )S⊆[d] ∈ R2[d] : α(P ) = ∑ S αS fS (P ) ≥ 0 for all d-polytopes P } is the polar to the set of flag-vectors of d-polytopes, that is, the cone of all linear functionals that are non-negative on all flag-vectors of (not necessarily cs) d-polytopes. Conjecture C. For every centrally-symmetric d-polytope P there is a d-dimensional Hanner polytope H such that α(P ) ≥ α(H) for all α ∈ P d.It is easy to see that C ⇒ B ⇒ A: Define αi(P ) := fi(P ), then αi ∈ P d and the validity of C on the functionals αi implies B; the remaining implication follows since s(P ) is a non-negative combination of the fi(P ). In this paper we investigate the validity of these three conjectures in various dimensions. Our main results are as follows. Theorem 1.1. The conjectures A and B hold for centrally symmetric polytopes of dimension d ≤ 4. Theorem 1.2. Conjecture C is false in dimension d = 4 . Theorem 1.3. For all d ≥ 5 both conjectures B and C fail. 2The paper is organized as follows. In Section 2 we establish a lower bound on the flag-vector functional gtor 2 on the class of cs 4-polytopes. Together with some combinatorial and geometric reasoning this leads to a proof of Theorem 1.1. In Section 3, we exhibit a centrally symmetric 4-polytope and a flag vector functional that disprove conjecture C. In Section 4 we consider centrally symmetric hypersimplices in odd dimensions; combined with basic properties of Hanner polytopes, this gives a proof of Theorem 1.3. We close with two further interesting examples of centrally symmetric polytopes in Section 5. Acknowledgements. We are grateful to Gil Kalai for his inspiring conjectures, and for pointing out the connection to symmetric stresses for Theorem 2.1. 2 Conjectures A and B in dimensions d ≤ 4 In this section we prove Theorem 1.1, that is, the conjectures A and B for polytopes in di-mensions d ≤ 4. The work of Stanley implies A and B for simplicial and thus also for simple polytopes. Furthermore, if f0(P ) = 2 d, then P is linearly isomorphic to a crosspolytope. Therefore, we assume throughout this section that all cs d-polytopes P are neither simple nor simplicial, and that fd−1(P ) ≥ f0(P ) ≥ 2d + 2 . The main work will be in dimension 4. The claims for dimensions one, two, and three are vacuous, clear, and easy to prove, in that order. In particular, the case d = 3 can be obtained from an easy f -vector calculation. But, to get in the right mood, let us sketch a geometric argument. Let P be a cs 3-polytope. Since P is not simplicial, P has a non-triangle facet. Let F be a facet of P with f0(F ) ≥ 4 vertices. Let F0 = P ∩ H with H being the hyperplane parallel to the affine hulls of F and of −F that contains the origin. Now, F0 is a cs 2-polytope and it is clear that every face G of P that has a nontrivial intersection with H is neither a face of F nor of −F . We get s(P ) ≥ s(F ) + s(F0) + s(−F ) ≥ 3 · 32. This type of argument fails in dimensions d ≥ 4. Applying small (symmetric) perturbations to the vertices of a prism over an octahedron yields a cs 4-polytope with the following two types of facets: prisms over a triangle and square pyramids. Every such facet has less than 3 3 faces, which shows that less than a third of the alleged 81 faces are concentrated in any facet. Let’s come back to dimension 4. The proof of the conjectures A and B splits into a combinatorial part ( f -vector yoga ) and a geometric argument. We partition the class of cs 4-polytopes into large and (few) small polytopes, where “large” means that f0(P ) + f3(P ) ≥ 24 . (1) We will reconsider an argument of Kalai that proves a lower bound theorem for polytopes and, in combination with flag-vector identities, leads to a tight flag-vector inequality for cs 4-polytopes. With this new tool, we prove that (1) implies conjectures A and B for dimension 4. We show that the small cs 4-polytopes, i.e. those not satisfying (1), are twisted prisms , to be introduced in Section 2.3, over 3-polytopes. We then establish basic properties of twisted prisms that imply the validity of conjectures A and B for small cs 4-polytopes. 32.1 Rigidity with symmetry and flag-vector inequalities For a general simplicial d-polytope P the h-vector h(P ) is the ordered collection of the coefficients of the polynomial hP (t) := fP (t − 1), the h-polynomial of P . Clearly, hP (t) encodes the same information as the f -polynomial, but additionally hP (t) is a unimodal, palindromic polynomial with non-negative, integral coefficients (see e.g. [28, Sect. 8.3]). This gives more insight in the nature of face numbers of simplicial polytopes and, in a compressed form, this numerical information is carried by its g-vector g(P ) with gi(P ) = hi(P ) − hi−1(P ) for i = 1 , . . . , ⌊ d 2 ⌋.There are various interpretations for the h- and g-numbers and, via the g-Theorem, they carry a complete characterization of the f -vectors of simplicial d-polytopes. For general d-polytopes a much weaker invariant is given by the generalized or toric h-vector htor (P ) introduced by Stanley . In contrast to the ordinary h-vector, the toric h-numbers htor i (P ) are not determined by the f -vector: They are linear combinations of the face numbers and of other entries of the flag-vector of P . For example, gtor 2 = htor 2 − htor 1 = f1 + f02 − 3f2 − df 0 + (d+1 2 ). The corresponding toric h-polynomial shares the same properties as its simplicial relative but, unfortunately, carries quite incomplete information about the f -vector. For example, in the case of P being a quasi-simplicial polytope, i.e. if every facet of P is simplicial, the toric h-vector depends only on the f -numbers fi(P ) for 0 ≤ i ≤ ⌊ d 2 ⌋ and, therefore, does not carry enough information to determine a lower bound on s(P ) for d ≥ 5. However, the information gained in dimension 4 will be a major step in the direction of a proof of Theorem 1.1. To be more precise, for the class of centrally symmetric d-polytopes there is a refinement of the flag-vector inequality gtor 2 = htor 2 − htor 1 ≥ 0. Theorem 2.1. Let P be a centrally symmetric d-polytope. Then gtor 2 (P ) = f1(P ) + f02 (P ) − 3f2(P ) − df 0(P ) + (d+1 2 ) ≥ (d 2 ) − d. With Euler’s equation and the Generalized Dehn-Sommerville equations it is routine to derive the following inequality for the class of cs 4-polytopes. Corollary 2.2. If P is a centrally symmetric 4-polytope, then f03 (P ) ≥ 3f0(P ) + 3 f3(P ) − 8. (2) We will prove Theorem 2.1 using the theory of infinitesimally rigid frameworks . For information about rigidity beyond our needs we refer the reader to Roth for a very readable introduction and to Whiteley and Kalai for rigidity in connection with polytopes. Let d ≥ 1 and let G = ( V, E ) be an abstract simple undirected graph. The edge function associated to G and d is the map Φ : ( Rd)V → RE (pv)v∈V 7 → (‖pu − pv‖2) uv ∈E , which measures the (squared) lengths of the edges of G for any choice of coordinates p =(pv)v∈V ∈ (Rd)V . The pair ( G, p) is called a framework in Rd and the points of Φ p := Φ −1(Φ( p)) give the possible frameworks in Rd with constant edge lengths Φ( p). 4Let v = |V | ≥ d + 1 and let p be a generic embedding. Then the set Φ p ⊂ (Rd)V is a smooth submanifold on which the group of Euclidean/rigid motions E(Rd) acts smoothly and faithfully. Therefore the dimension of Φ p is dim Φp ≥ (d+1 2 ) and in case of equality the framework ( G, p)is infinitesimally rigid .The rigidity matrix R = R(G, p) ∈ (Rd)E×V of ( G, p) is the Jacobian matrix of Φ evaluated at p. Invoking the Implicit Function Theorem, it is easy to see that ( G, p) is infinitesimally rigid if and only if rank R = dv − (d+1 2 ). A stress on the framework ( G, p) is an assignment ω = ( ωe)e∈E ∈ RE of weights ωe ∈ R to the edges e ∈ E such that there is an equilibrium ∑ u:uv ∈E ωuv (pv − pu) = 0 at every vertex v ∈ V . We denote by S(G, p) = {ω ∈ RE : ωR = 0 } the kernel of R⊤, called the space of stresses on ( G, p). Theorem 2.3 (Whiteley [27, Thm. 8.6 with Thm. 2.9]) . Let P ⊂ Rd be a d-polytope. Let G = G(P ) = ( V, E ) be the graph obtained from a triangulation of the 2-skeleton of P without new vertices and let p = p(P ) be the vertex coordinates. Then the resulting framework (G, p) is infinitesimally rigid. The above theorem makes no reference to the triangulation of the 2-skeleton. The important fact to note is that the graph G of Theorem 2.3 will have exactly e := |E| = f1(P ) + f02 (P ) − 3f2(P )edges: In addition to the f1(P ) edges of P , k − 3 edges are needed for every 2-face with k vertices. For the dimension of the space of stresses S(G, p) we get 0 ≤ dim S(G, p) = e − rank R = e − dv + (d+1 2 ) = f1(P ) + f02 (P ) − 3f2(P ) − df 0(P ) + (d+1 2 ) = gtor 2 (P ). Now let P be a centrally symmetric d-polytope, d ≥ 3. Let G = G(P ) = ( V, E ) be the graph in Theorem 2.3 obtained from a triangulation that respects the central symmetry of the 2-skeleton and let p = p(P ) be the vertex coordinates of P . The antipodal map x 7 → − x induces a free action of the group Z2 on the graph G. We denote by V = V / Z2 and E = E/ Z2 the respective quotients and, after choosing representatives, we denote by V = V + ⊎ V − and E = E+ ⊎ E− the decompositions of the set of vertices and edges according to the action. Since the action is free we have |V | = |V ±| = v 2 and |E| = |E±| = e 2 .Concerning the rigidity matrix, it is easy to see that R = ( V + V − E+ R1 R2 E− −R2 −R1 ) ∈ (Rd)V ×E with labels above and to the left of the matrix. The embedding p = p(P ) respects the central symmetry of G and we can augment the edge function by a second component that takes the symmetry information into account: Φsym : (Rd)V + × (Rd)V − → RE × (Rd)V p = ( pV + , pV − ) 7 → (Φ( p), pV + + pV − ) . Thus Φ sym additionally measures the degree of asymmetry of the embedding. By the symmetry of P , Φ sym (p) = (Φ( p), 0) for p = p(P ). The preimage of this point under Φ sym is Φ sym p ⊂ Φp, the 5set of all centrally symmetric embeddings with edge lengths Φ( p). Any small (close to identity) rigid motion that fixes the origin takes p ∈ Φsym p to a distinct centrally symmetric realization p′ ∈ Φsym p . Thus the action of the subgroup O(Rd), the group of orthogonal transformations, on Φsym p locally gives a smooth embedding. It follows that dim Φsym p ≥ dim O(Rd) = (d 2 ) and thus rank Rsym ≤ dv − (d 2 ), (3) where we can compute the rank of Rsym , the Jacobian of Φ sym at p, as rank Rsym = rank  R1 R2 −R2 −R1 IV + IV −  = dv 2 + rank (R1 − R2) . (4) Proof of Theorem 2.1. Consider the space of symmetric stresses , that is, the linear subspace Ssym (G, p) = {ω = ( ωE+ , ω E− ) ∈ S(G, p) : ωE+ = ωE− } ∼= {ω ∈ RE : ω (R1 − R2) = 0 }. From (3) and (4) it follows that dim Ssym (G, p) = e 2 − rank (R1 − R2) ≥ e 2 − dv 2 + (d 2 ) . The theorem follows from noting that Ssym (G, p) ⊆ S(G, p) and therefore e − dv + (d + 1 2 ) ≥ 1 2 (e − dv ) + (d 2 ) . Theorem 2.1 can also be deduced from the following result of A’Campo-Neuen ; see also . Theorem 2.4 ([2, Theorem 2]) . Let P be a centrally symmetric d-polytope and let htor P (t) = ∑di=0 htor i (P ) ti be its toric h-polynomial. Then the polynomial htor P (t) − htor C△ d (t) = htor P (t) − (1 + t)d ∈ Z[t] is palindromic and unimodal with non-negative, even coefficients. In particular, gtor i (P ) = htor i (P ) − htor i−1 (P ) ≥ (di ) − ( di−1 ) for all 1 ≤ i ≤ ⌊ d 2 ⌋ . The proof of Theorem 2.4 relies on the (heavy) machinery of combinatorial intersection co-homology for fans. Theorem 2.1 concerns the special case of the coefficient of the quadratic term. In light of McMullen’s weight algebra , it would be interesting to know whether/how Theorem 2.4 can be deduced by considering (generalized) stresses. A connection between the combinatorial intersection cohomology set-up for fans and rigidity was established by Braden [6, Sect. 2.9]. 2.2 Large centrally symmetric 4-polytopes In order to prove conjectures A and B for large polytopes, we need one more ingredient. 6Proposition 2.5. Let P be a 4-polytope. Then f03 (P ) ≤ 4f2(P ) − 4f3(P )= 4f1(P ) − 4f0(P ). (5) Equality holds if and only if P is center-boolean , i.e. if every facet is simple. Proof. The inequality was first proved by Bayer . Every facet F of P is a 3-polytope satisfying 3f0(F ) ≤ 2f1(F ). By summing up over all facets of P we get 3f03 (P ) = ∑ Ffacet 3f0(F ) ≤ ∑ Ffacet 2f1(F ) = 2 f13 (P ). By one of the Generalized Dehn-Sommerville Equations we have f03 − f13 + f23 = 2 f3, which, together with f23 = 2 f2 immediately implies the asserted inequality. Equality holds if the above inequality for 3-polytopes holds with equality for all facets of P , which means that all facets are simple 3-polytopes. The equality in the assertion is Euler’s equation. Combining the inequalities (2) and (5), we obtain f2 ≥ 1 4 (3 f0 + 7 f3) − 2 = f3 + 3 4 (f0 + f3) − 2 f1 ≥ 1 4 (7 f0 + 3 f3) − 2 = f0 + 3 4 (f0 + f3) − 2. (6) In terms of f0 and f3 this gives s(P ) ≥ 14 4 (f0 + f3) − 3 ≥ 81 where the last inequality holds if P is large. To prove conjecture B for large polytopes, we have to show that the f -vector of every large polytope is component-wise larger than the f -vector of one of the following four Hanner poly-topes: (f0, f 1, f 2, f 3) C4 (16 , 32 , 24 , 8) C△ 4 ( 8 , 24 , 32 , 16) bip C3 (10 , 28 , 30 , 12) prism C△ 3 (12 , 30 , 28 , 10) It suffices to treat the case f0 + f3 = 24. Indeed, for f0 + f3 ≥ 26 and f3 ≥ f0 ≥ 10 we get from (6) that f1 ≥ f0 + 18 ≥ 28 f2 ≥ f3 + 18 ≥ 30 and thus f (bip C3) is componentwise smaller. We claim that the same bounds hold for f0 + f3 = 24. Otherwise, if f1 ≤ 26 or f2 ≤ 28, then by using (5) together with f0 ≥ 10 and f3 ≥ 12 we get in both cases that f03 ≤ 64. In fact, we now get f03 = 64 from (2), which tells us that P is center boolean , i.e. every facet is simple. Granted that every facet of P is simple and has at most 6 vertices, the possible facet types are the 3-simplex ∆ 3 and the triangular prism prism ∆2. Using the assumption that P is not 7simplicial, there is a facet F ∼= prism ∆2. The three quad faces of F give rise to three more prism facets and, due to the number of vertices, no two of them are antipodes. For the same reason, any two prism facets cannot intersect in a triangle face. In total, we note that P has exactly eight prism facets and four tetrahedra. Since every antipodal pair of prism facets give a partition of the vertices, it follows that every vertex is contained in a simplex and exactly 4 prism facets. Therefore, every vertex has degree ≥ 6 and thus 2 f1 ≥ 6 · 12. By Euler’s equation, the same holds for f2. 2.3 Twisted prisms and the small polytopes The class of small cs 4-polytopes consists of all cs 4-polytopes P with 12 ≥ f3(P ) ≥ f0(P ) = 10. Since P is not simplicial, P has a facet F that has 5 = d + 1 = f0(F ) vertices, and P = conv (F ∪ − F ). In particular, F is a 3-polytope with 3 + 2 vertices, which does not leave much diversity in terms of combinatorial types. The facet F is combinatorially equivalent to ◮ a pyramid over a quadrilateral, or ◮ a bipyramid over a triangle. Definition 2.6 (Twisted prism) . Let Q ⊂ Rd−1 be a ( d − 1)-polytope. The centrally symmetric d-polytope P = tprism Q = conv (Q × { 1} ∪ −Q × {− 1}) ⊂ Rd is called the twisted prism over the base Q.The following basic properties of twisted prisms will be of good service. Proposition 2.7. Let Q ⊂ Rd−1 be a (d − 1) -polytope and tprism Q the twisted prism over Q. If T : Rd−1 → Rd−1 is a non-singular affine transformation, then tprism Q and tprism T Q are affinely isomorphic. If Q = pyr Q′ is a pyramid with base Q′, then tprism Q is combinatorially equivalent to bip tprism Q′, a bipyramid over the twisted prism over Q′. The second statement of Proposition 2.7 actually proves the conjectures A and B for half of the small cs 4-polytopes: Let P = tprism Q and Q a pyramid over a quadrilateral. By the second statement P is combinatorially equivalent to bip P ′, where P ′ is a cs 3-polytope. In terms of f -polynomials, it is easy to show that for a bipyramid fbip Q(t) = (2 + t)fQ(t). Thus s(P ) = fbip P ′ (1) = 3 fP ′ (1) ≥ 34. Since B is true in dimension 3 there is a 3-dimensional Hanner polytope H such that fi(P ′) ≥ fi(H) for i = 0 , 1, 2. From the above identity of f -polynomials it follows that fi(bip P ′) ≥ fi(bip H) for 1 ≤ i ≤ 3, where bip H = I ⊕ H is a Hanner polytope. The next lemma shows that the above class already contains all small polytopes, which finally settles A and B for dimension 4. Lemma 2.8. Let d ≥ 4 and let P = tprism F ⊂ Rd be a cs d-polytope with F combinatorially equivalent to ∆i ⊕ ∆d−i−1 and 1 ≤ i ≤ d−1 2 . Then fd−1(P ) ≥ 2(1 + ( i + 1)( d − i)) ≥ 2(2 d − 1) . 8Proof. The facet F in P has ( i + 1)( d − i) ridges and thus F and its neighbors account for 1 + ( i + 1)( d − i) facets. The result now follows by considering −F as soon as we have checked that no facet G shares a ridge with F and with −F . This, however, is impossible, since G would have to have two vertex disjoint ( d − 2)-simplices as maximal faces and, therefore, at least f0(G) ≥ 2d − 2 vertices. Thus 2 d + 2 = f0(P ) ≥ f0(G) + f0(−G) ≥ 4d − 4. Corollary 2.9. If P = tprism Q with Q ∼= bip ∆2, then P is large. 3 Conjecture C in dimension 4 We will refute conjecture C strongly for dimension 4: We exhibit a flag-functional α ∈ P 4 and a cs 4-polytope P such that α(P ) < α (H) for every 4-dimensional Hanner polytope H.Geometrically, this means that there is an oriented hyperplane in the vector space R2[d] that has the flag vector ( fS (P )) S on its negative side, but all the flag-vectors of Hanner polytopes on its positive side, while some parallel hyperplane has the flag-vectors of all (not-necessarily cs) 4-polytopes on its positive side. For this, consider the two functionals ℓ1(P ) = f02 (P ) − 3f2(P ) ℓ2(P ) = f13 (P ) − 3f1(P )= f02 (P ) − 3f1(P ). Let Fk(P ) be the number of 2-faces with exactly k vertices. Then f02 (P ) = ∑ k≥3 k · Fk(P ). Thus ℓ1(P ) = ∑ k≥4 (k − 3) · Fk(P ), which is clearly non-negative for every 4-polytope. In case of equality the polytope is 2 -simplicial . For the second functional note that ℓ2(P ) = ℓ1(P △) ≥ 0and the bound is attained by the 2 -simple polytopes. Thus, the functional α(P ) := 1 2 (ℓ1 + ℓ2) = f02 − 3 2 (f1 + f2)is non-negative for all 4-polytopes; it vanishes exactly for 2-simple 2-simplicial polytopes. (See for examples of such polytopes.) Consider the cs 4-polytope P4 := [−1, +1] 4 ∩ { x ∈ R4 : −2 ≤ x1 + · · · + x4 ≤ 2} which arises from the 4-cube C4 by chopping off the vertices ±1 by hyperplanes that pass through the respective neighbors. It is straightforward to verify that the f -vector of P4 is f (P4) = (10 , 32 , 36 , 14) . Indeed, the only faces that go missing are the 2 · 4 edges incident to the two vertices; the added faces are the faces of strictly positive dimension of the vertex figures at 1 and −1.Concerning the number of vertex–2-face incidences: there are only triangles and quadrilaterals. The number of triangles is twice the number of 2-faces and facets incident to any given vertex. Thus, f02 = 3 · 20 + 4 · 12 = 108 and α(P4) = 6. 9Theorem 1.2 now follows from inspecting the following table, which lists in its first row the data for P4, and then (extended) data for the 4-dimensional Hanner polytopes: (f0, f 1, f 2 f3 ) f02 α P4 (10 , 32 , 36 14 ) 108 6 C4 (16 , 32 , 24 8 ) 96 12 C△ 4 ( 8 , 24 , 32 16 ) 96 12 bip C3 (10 , 28 , 30 12 ) 96 9 prism C△ 3 (12 , 30 , 28 10 ) 96 9 4 The central hypersimplices ˜∆k = ∆( k, 2k) For natural numbers d > k > 0, the ( k, d )-hypersimplex is the ( d − 1)-dimensional polytope ∆( k, d ) = conv { x ∈ { 0, 1}d : x1 + x2 + · · · + xd = k } ⊂ Rd. Hypersimplices were considered as (regular) polytopes in [7, §11.8] (see also [19, Sect. 3.3.2] and [10, Exercise 4.8.16]), as well as in connection with algebraic geometry in , , and . One rather simple observation is that ∆( k, d ) and ∆( d − k, d ) are affinely isomorphic under the map x 7 → 1 − x. In particular, the hypersimplex ˜∆k := ∆( k, 2k) is a centrally symmetric (2 k − 1)-polytope with f0( ˜∆k) = (2kk ) vertices. In a different, full-dimensional realization, the central hypersimplex is given by ˜∆k ∼= conv { x ∈ { +1 , −1}2k−1 : −1 ≤ x1 + x2 + · · · + x2k−1 ≤ 1 } . From this realization it is easy to see that for k ≥ 2 the hypersimplex ˜∆k is a twisted prism over ∆( k, 2k − 1) with f2k−2( ˜∆k) = 4 k = 2(2 k − 1) + 2 facets: Since the above realization lives in an odd-dimensional space, the sum of the coordinates for any vertex is either +1 or −1. The points satisfying ∑ i xi = 1 form a face that is affinely isomorphic to ∆( k, 2k − 1). To verify the number of facets, observe that ˜∆k is the intersection of the 2 k-cube with a hyperplane that cuts all its 4 k facets. We will show that in odd dimensions d = 2 k − 1 ≥ 5 a d-dimensional Hanner polytope that has no more facets than ˜∆k has way too many vertices for conjecture B. In even dimensions d ≥ 6 Theorem 1.3 follows then by taking a prism over ˜∆k. The following proposition gathers the information needed about Hanner polytopes. Proposition 4.1. Let H be a d-dimensional Hanner polytope. Then (a) fd−1(H) ≥ 2d. (b) If fd−1(H) = 2 d, then H is a d-cube. (c) If fd−1(H) = 2 d + 2 , then H = Cd−3 × C△ 3 .Proof. Since all three claims are certainly true for Hanner polytopes of dimension d ≤ 3, let us assume that d ≥ 4. By definition, H is the direct sum or product of two Hanner polytopes H′ and H′′ of dimensions i and d − i with 1 ≤ i ≤ d 2 .If H = H′ ⊕ H′′ , then, by induction on d, we get fd−1(H) = fi−1(H′) · fd−i−1(H′′ ) ≥ 4i(d − i) ≥ 2d + 4 . 10 Therefore, we can assume that H = H′ × H′′ and fd−1(H) = fi−1(H′) + fd−i−1(H′′ ) ≥ 2d which proves (a). The condition in (b) is satisfied if and only if it is satisfied for each of the two factors. Therefore, by induction, both factors are cubes and so is their product. Similarly, the condition in (c) is satisfied iff it is satified for one of the two factors. By using (a) we see that the remaining factor is a cube, which proves (c). Proof of Theorem 1.3. Let d = 2 k − 1 ≥ 5 and let H be a d-dimensional Hanner polytope with fi(H) ≤ fi( ˜∆k) for all i = 0 , . . . , d − 1. Since the hypersimplex ˜∆k has 2 d + 2 facets, it follows from Proposition 4.1 that H is either C2k−1 or C2k−4 × C△ 3 . In either case, the Hanner polytope satisfies f0(H) ≥ 3 · 22k−3 > (2kk ) , where the last inequality holds for k ≥ 3. For even dimensions d = 2 k consider prism ˜∆k = I × ˜∆k, which has 2(2 k − 1) + 4 = 2 d + 2 facets. Again by Proposition 4.1, a Hanner polytope H with componentwise smaller f -vector is of the form I × H′ and the result follows from the odd case. 5 Two more examples We wish to discuss two examples of centrally symmetric polytopes that exhibit some remarkable properties, two of which are being self-dual and being counter-examples to conjecture C. Both polytopes are instances of Hansen polytopes , for which we sketch the construction. Let G = ( V, E ) be a perfect graph on the vertex set V = {1, . . . , d − 1}, that is, a simple, undirected graph without induced odd cycles of length ≥ 5 (cf. Schrijver [21, Chap. 65]). Let Ind (G) ⊆ 2V be the independence complex of G. So Ind (G) is the simplicial complex on the vertices V defined by the relation that S ⊆ V is contained in Ind (G) if and only if the vertex induced subgraph G[S] has no edges. To every independent set S ∈ Ind (G) associate the (characteristic) vector ˜ χS ∈ { +1 , −1}d−1 with ( ˜ χS )i = +1 if and only if i ∈ S. The collection of vectors is a subset of the vertex set of the ( d − 1)-cube. Let PInd (G) = conv { ˜χS : S ∈ Ind (G)} ⊂ [−1, +1] d−1 be the vertex induced subpolytope. The Hansen polytope H(G) associated to G is the twisted prism over PInd (G). In particular, H(G) is a centrally symmetric d-polytope with f0(H(G)) = 2 |Ind (G)| vertices. A graph G = ( V, E ) is self-complementary if G is isomorphic to its complementary graph G = ( V, (V 2 )\E). Proposition 5.1. If G = ( V, E ) is a self-complementary, perfect graph on d − 1 vertices, then H(G) is a centrally symmetric, self-dual d-polytope. Proof. By [12, Thm. 4], the polytope H(G)△ is isomorphic to H(G) = H(G). Example 5.2. Let G4 the path on four vertices v1, v 2, v 3, v 4. This is a self-complementary perfect graph, so H(G4) is a 5-dimensional self-dual cs polytope. We compute its f -vector, and compare it to the f -vectors of the 5-dimensional hypersimplex ˜∆3 and of the eight 5-dimensional Hanner polytopes. This results in the following table (the four Hanner polytopes not listed are 11 the duals of the ones given here, with the corresponding reversed f -vectors): ( f0, f 1, f2, f 3, f 4 ) f0 + f4 s H(G4) ( 16 , 64 , 98 , 64 , 16 ) 32 259 ˜∆3 ( 20 , 90 , 120 , 60 , 12 ) 32 303 C△ 5 ( 10 , 40 , 80 , 80 , 32 ) 42 243 bip bip C3 ( 12 , 48 , 86 , 72 , 24 ) 36 243 bip prism C△ 3 ( 14 , 54 , 88 , 66 , 20 ) 34 243 prism C△ 4 ( 16 , 56 , 88 , 64 , 18 ) 34 243 Thus H(G4) refutes conjecture B in dimension 5 strongly : its value for f0 + f4 is smaller than for any Hanner polytope. Furthermore, H(G4) has a smaller face number sum s than the hypersimplex, so in that sense it is even a better example to look at in view of conjecture A. Example 5.3. Let G5 be the path on five vertices v1, v 2, v 3, v 4, v 5 (in this order), with an additional edge connecting the second vertex v2 to the fourth vertex v4 on the path. This is a self-complementary perfect graph, so we obtain a 6-dimensional self-dual cs polytope H(G5). Again its f -vector can be computed and compared to those of the prism over the 5-dimensional hypersimplex, I × ˜∆3, which we had used for Theorem 1.3 as well as the eighteen Hanner polytopes in dimension 6 (again we do not list the duals explicitly): ( f0, f1, f2, f3, f4, f 5 ) f0 + f5 s H(G5) ( 24 , 116 , 232 , 232 , 116 , 24 ) 48 745 prism ˜∆3 ( 40 , 200 , 330 , 240 , 84 , 14 ) 54 908 C△ 6 ( 12 , 60 , 160 , 240 , 192 , 64 ) 76 729 bip bip bip C3 ( 14 , 72 , 182 , 244 , 168 , 48 ) 62 729 bip bip prism C△ 3 ( 16 , 82 , 196 , 242 , 152 , 40 ) 56 729 bip prism C△ 4 ( 18 , 88 , 200 , 240 , 146 , 36 ) 54 729 bip bip C4 ( 20 , 100 , 216 , 232 , 128 , 32 ) 52 729 prism C△ 5 ( 20 , 90 , 200 , 240 , 144 , 34 ) 54 729 bip prism bip C3 ( 22 , 106 , 220 , 230 , 122 , 28 ) 50 729 prism bip bip C3 ( 24 , 108 , 220 , 230 , 120 , 26 ) 50 729 C3 ⊕ C3 ( 16 , 88 , 204 , 240 , 144 , 36 ) 52 729 Thus H(G5) is a self-dual cs polytope that also refutes conjecture B in dimension 6 strongly .Moreover, also looking at the pair ( f1, f 4) suffices to derive a contradiction to conjecture B. In these respects, H(G5) is the nicest and strongest counter-example that we currently have for conjecture B in dimension 6. Note that there are no self-complementary (perfect) graphs on 6 or on 7 vertices, since (62 ) = 15 and (72 ) = 21 are odd. Thus, we cannot derive self-dual polytopes in dimensions 7 or 8 from Hansen’s construction. The Hansen polytopes, derived from perfect graphs, are subject to further research. For example, H(G4) and H(G5) are interesting examples in view of the Mahler conjecture, since they exhibit only a small deviation from the Mahler volume of the d-cube, which is conjectured to be minimal (see Kuperberg and Tao ). 12 The Hansen polytopes in turn are special cases of weak Hanner polytopes , as defined by Hansen , which are twisted prisms over any of their facets. Greg Kuperberg has observed that all of these are equivalent to ±1-polytopes. References A. A’Campo-Neuen , On generalized h-vectors of rational polytopes with a symmetry of prime order ,Discrete Comput. Geom., 22 (1999), pp. 259–268. A. A’Campo-Neuen , On toric h-vectors of centrally symmetric polytopes , Arch. Math. (Basel), 87 (2006), pp. 217–226. I. B´ ar´ any and L. Lov´ asz , Borsuk’s theorem and the number of facets of centrally symmetric polytopes , Acta Math. Acad. Sci. Hungar., 40 (1982), pp. 323–329. M. M. Bayer , The extended f -vectors of 4-polytopes , J. Combin. Theory Ser. A, 44 (1987), pp. 141– 151. M. M. Bayer and L. J. Billera , Generalized Dehn-Sommerville relations for polytopes, spheres and Eulerian partially ordered sets , Invent. Math., 79 (1985), pp. 143–157. T. Braden , Remarks on the combinatorial intersection cohomology of fans , Pure Appl. Math. Q., 2 (2006), pp. 1149–1186. H. S. M. Coxeter , Regular Polytopes , Dover Publications Inc., New York, third ed., 1973. I. M. Gelfand, M. M. Kapranov, and A. V. Zelevinsky , Discriminants, Resultants, and Multidimensional Determinants , Mathematics: Theory & Applications, Birkh¨ auser Boston Inc., Boston, MA, 1994. I. M. Gelfand and R. D. MacPherson , Geometry in Grassmannians and a generalization of the dilogarithm , Advances in Math., 44 (1982), pp. 279–312. B. Gr¨ unbaum , Convex Polytopes , vol. 221 of Graduate Texts in Mathematics, Springer-Verlag, New York, second ed., 2003. Second edition by V. Kaibel, V. Klee and G. M. Ziegler (original edition: Interscience, London 1967). O. Hanner , Intersections of translates of convex bodies , Math. Scand., 4 (1956), pp. 67–89. A. B. Hansen , On a certain class of polytopes associated with independence systems , Math. Scand., 41 (1977), pp. 225–241. G. Kalai , Rigidity and the lower bound theorem I , Invent. Math., 88 (1987), pp. 125–151. , The number of faces of centrally-symmetric polytopes , Graphs and Combinatorics, 5 (1989), pp. 389–391. G. Kuperberg , From the Mahler conjecture to Gauss linking integrals . Preprint, Oct. 2006, 9 pages, . P. McMullen , Weights on polytopes , Discrete Comput. Geom., 15 (1996), pp. 363–388. J. W. Moon , Some enumerative results on series-parallel networks , in Random graphs ’85 (Pozna´ n, 1985), vol. 144 of North-Holland Math. Stud., North-Holland, Amsterdam, 1987, pp. 199–226. I. Novik , The lower bound theorem for centrally symmetric simple polytopes , Mathematika, 46 (1999), pp. 231–240. A. Paffenholz and G. M. Ziegler , The Et-construction for lattices, spheres and polytopes ,Discrete & Comput. Geometry (Billera Festschrift), 32 (2004), pp. 601–624. B. Roth , Rigid and flexible frameworks , Amer. Math. Monthly, 88 (1981), pp. 6–21. 13 A. Schrijver , Combinatorial optimization. Polyhedra and efficiency. Vol. B , vol. 24 of Algorithms and Combinatorics, Springer-Verlag, Berlin, 2003. Matroids, trees, stable sets, Chapters 39–69. N. J. A. Sloane , Number of series-parallel networks with n unlabeled edges, multiple edges not allowed . Sequence A058387, The On-Line Encyclopedia of Integer Sequences, . R. Stanley , Generalized H-vectors, intersection cohomology of toric varieties, and related results ,in Commutative algebra and combinatorics (Kyoto, 1985), vol. 11 of Adv. Stud. Pure Math., North-Holland, Amsterdam, 1987, pp. 187–213. R. Stanley , On the number of faces of centrally-symmetric simplicial polytopes , Graphs and Com-binatorics, 3 (1987), pp. 55–66. B. Sturmfels , Gr¨ obner Bases and Convex Polytopes , vol. 8 of University Lecture Series, AMS, Providence, RI, 1996. T. Tao , Open question: the Mahler conjecture on convex bodies . Blog page started March 8, 2007, . W. Whiteley , Infinitesimally rigid polyhedra. I. Statics of frameworks , Trans. Amer. Math. Soc., 285 (1984), pp. 431–465. G. M. Ziegler , Lectures on Polytopes , vol. 152 of Graduate Texts in Mathematics, Springer-Verlag, New York, 1995. 14
14940
https://www.sciencedirect.com/science/article/pii/S0140673684923894
MATERNAL SERUM ALPHA-FETOPROTEIN MEASUREMENT: A SCREENING TEST FOR DOWN SYNDROME - ScienceDirect Skip to main contentSkip to article Journals & Books Access throughyour organization Purchase PDF Patient Access Other access options Search ScienceDirect Article preview Abstract References (14) Cited by (384) The Lancet Volume 323, Issue 8383, 28 April 1984, Pages 926-929 MATERNAL SERUM ALPHA-FETOPROTEIN MEASUREMENT: A SCREENING TEST FOR DOWN SYNDROME Author links open overlay panel HowardS Cuckle a, NicholasJ Wald a, RichardH Lindenbaum b Show more Add to Mendeley Share Cite rights and content Abstract The median maternal serum alphafetoprotein (AFP) level at 14-20 weeks' gestation in 61 pregnancies associated with Down syndrome was 0·72 multiples of the median (MoM) value for a series of 36 652 singleton pregnancies unaffected by Down syndrome or neural-tube defect—a statistically significant reduction. The difference is great enought to form the basis of a screening test. By selecting for amniocentesis women with serum AFP levels ≤0·5 MoM at 14-20. weeks' gestation (excluding any of these that ultrasound cephalometry shows to have been due to the overestimation of gestational age) 21% of pregnancies with Down syndrome would be identified as well as 5% of unaffected pregnancies. If amniocentesis were offered to all women aged 38 years or more and, in addition, to younger women with serum AFP below specified maternal age-dependent cut-off levels (≤1·0 MoM at 37 years, ≤0·9 at 36, ≤0·8 at 35,≤0·7 at 34,≤0·6 at 32-33, ≤0·5 at 25-31) 40% of pregnancies with Down syndrome and 6·8% unaffected pregnancies would be selected. Access through your organization Check access to the full text by signing in through your organization. Access through your organization Recommended articles References (14) Nj Wald et al. Congenital talipes and hip malformation in relation to amniocentesis: A case-control study Lancet (1983) Ir Merkatz, Hm Nitowsky, Jn Macri, We Johnson, An association between low maternal serum alpha-fetoprotein (AFP) and... Nj Wald et al. Antenatal screening in Oxford for fetal neural-tube defects Br J Obstet Gynaecol (1979) Nj Wald et al. Effect of estimating gestational age by ultrasound cephalometry on the specificity of alpha-fetoprotein screening for open neural tube defects Br J Obstet Gynaecol (1982) Eb Hook et al. Estimated rates of Down syndrome in livebirths by one year maternal age intervals in a New York State study. Implications of the risk figures for genetic counselling and cost-benefit analysis of prenatal diagnosis programs Office of population Censuses and Surveys Birth Statistics, England and Wales, 1980 (1981) There are more references available in the full text version of this article. Cited by (384) Screening of maternal serum for fetal Down's syndrome in the first trimester 1998, New England Journal of Medicine ### Antenatal screening for Down's syndrome 1997, Journal of Medical Screening ### Prenatal Screening for Down's Syndrome with Use of Maternal Serum Markers 1992, New England Journal of Medicine ### Low second trimester maternal serum unconjugated oestriol in pregnancies with Down's syndrome 1988, BJOG an International Journal of Obstetrics Gynaecology ### Sonographic Identification of Second-Trimester Fetuses with Down's Syndrome 1987, New England Journal of Medicine ### Estimating a woman's risk of having a pregnancy associated with Down's syndrome using her age and serum alpha‐fetoprotein level 1987, BJOG an International Journal of Obstetrics Gynaecology View all citing articles on Scopus View full text Copyright © 1984 Published by Elsevier Ltd. Recommended articles Hepatic Glycogenoses Among Children—Clinical and Biochemical Characterization: Single-Center Study Journal of Clinical and Experimental Hepatology, Volume 10, Issue 3, 2020, pp. 222-227 Sophy Korula, …, Anna Simon ### Altered immune parameters correlate with infection-related hospitalizations in children with Down syndrome Human Immunology, Volume 77, Issue 7, 2016, pp. 594-599 Elizabeth Martínez, …, Martha C.Mesa ### Assessing cytochrome P450-based drug-drug interactions with hemoglobin-vesicles, an artificial red blood cell preparation, in healthy rats Drug Metabolism and Pharmacokinetics, Volume 35, Issue 5, 2020, pp. 425-431 Masahiro Tokuno, …, Masaki Otagiri ### Granulomatous Diseases Affecting Jaws Dental Clinics of North America, Volume 60, Issue 1, 2016, pp. 195-234 Baddam Venkat Ramana Reddy, …, Mel Mupparapu ### Incidental detection of microfilaria in cyst fluid of Mucinous cystadenocarcinoma of ovary: A rare case report International Journal of Surgery Case Reports, Volume 70, 2020, pp. 56-59 Vyshnavi Vasantham, …, Sonam Kumar Pruthi ### Use of pHlurorin-mKate2-human LC3 to Monitor Autophagic Responses Methods in Enzymology, Volume 587, 2017, pp. 87-96 I.Tanida, …, Y.Uchiyama Show 3 more articles Article Metrics Citations Citation Indexes 384 Clinical Citations 3 Policy Citations 8 Captures Mendeley Readers 53 View details About ScienceDirect Remote access Advertise Contact and support Terms and conditions Privacy policy Cookies are used by this site. Cookie Settings All content on this site: Copyright © 2025 Elsevier B.V., its licensors, and contributors. All rights are reserved, including those for text and data mining, AI training, and similar technologies. For all open access content, the relevant licensing terms apply. Cookie Preference Center We use cookies which are necessary to make our site work. We may also use additional cookies to analyse, improve and personalise our content and your digital experience. For more information, see our Cookie Policy and the list of Google Ad-Tech Vendors. You may choose not to allow some types of cookies. However, blocking some types may impact your experience of our site and the services we are able to offer. See the different category headings below to find out more or change your settings. Allow all Manage Consent Preferences Strictly Necessary Cookies Always active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. Cookie Details List‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. Cookie Details List‎ Targeting Cookies [x] Targeting Cookies These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant adverts on other sites. If you do not allow these cookies, you will experience less targeted advertising. Cookie Details List‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Confirm my choices Your Privacy [dialog closed] × Chat with an article Save up to 50% of your literature research time by asking your questions to an article. Try for free
14941
https://pmc.ncbi.nlm.nih.gov/articles/PMC2803260/
Current techniques in postmortem imaging with specific attention to paediatric applications - PMC Skip to main content An official website of the United States government Here's how you know Here's how you know Official websites use .gov A .gov website belongs to an official government organization in the United States. Secure .gov websites use HTTPS A lock ( ) or https:// means you've safely connected to the .gov website. Share sensitive information only on official, secure websites. Service Alert: Planned Maintenance beginning July 25th Most services will be unavailable for 24+ hours starting 9 PM EDT. Learn more about the maintenance. Search Log in Dashboard Publications Account settings Log out Search… Search NCBI Primary site navigation Search Logged in as: Dashboard Publications Account settings Log in Search PMC Full-Text Archive Search in PMC Advanced Search Journal List User Guide New Try this search in PMC Beta Search View on publisher site Download PDF Add to Collections Cite Permalink PERMALINK Copy As a library, NLM provides access to scientific literature. Inclusion in an NLM database does not imply endorsement of, or agreement with, the contents by NLM or the National Institutes of Health. Learn more: PMC Disclaimer | PMC Copyright Notice Pediatr Radiol . 2009 Dec 16;40(2):141–152. doi: 10.1007/s00247-009-1486-0 Search in PMC Search in PubMed View in NLM Catalog Add to search Current techniques in postmortem imaging with specific attention to paediatric applications Tessa Sieswerda-Hoogendoorn Tessa Sieswerda-Hoogendoorn 1 Department of Radiology, Academic Medical Centre Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam Zuid-Oost, Netherlands 2 Department of Pathology and Toxicology, Netherlands Forensic Institute, The Hague, Netherlands Find articles by Tessa Sieswerda-Hoogendoorn 1,2, Rick R van Rijn Rick R van Rijn 1 Department of Radiology, Academic Medical Centre Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam Zuid-Oost, Netherlands 2 Department of Pathology and Toxicology, Netherlands Forensic Institute, The Hague, Netherlands Find articles by Rick R van Rijn 1,2,✉ Author information Article notes Copyright and License information 1 Department of Radiology, Academic Medical Centre Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam Zuid-Oost, Netherlands 2 Department of Pathology and Toxicology, Netherlands Forensic Institute, The Hague, Netherlands ✉ Corresponding author. Received 2009 Oct 12; Revised 2009 Nov 9; Accepted 2009 Nov 16; Issue date 2010. © The Author(s) 2009 This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. PMC Copyright notice PMCID: PMC2803260 PMID: 20013258 Abstract In this review we discuss the decline of and current controversies regarding conventional autopsies and the use of postmortem radiology as an adjunct to and a possible alternative for the conventional autopsy. We will address the radiological techniques and applications for postmortem imaging in children. Keywords: Autopsy, Radiology, Child, Forensics Introduction The number of autopsies has been declining worldwide. In a study performed at the Agency for Healthcare Research and Quality, Shojania et al. found an autopsy rate in the United States below 6%, compared with 40–50% a few decades ago. This trend is supported by other sources verified by the authors . In a review by Burton and Underwood , similar rates were found in other Western countries as well (Table1). Table 1. The worldwide decline in autopsy rates. Autopsy rate is expressed as a percentage of all deaths. Figures in brackets denote the years in which the data were reported (adapted from Burton and Underwood with permission) | | Initial autopsy rate (period) | Subsequent autopsy rate (period) | --- | Australia | 21.0% (1992–93) | 12.0% (2002–03) | | France | 15.4% (1988) | 3.7% (1997) | | Hungary | 100% (1938–51) | 68.9% (1990–02) | | Ireland | 30.4% (1990) | 18.4% (1999) | | Jamaica | 65.3% (1968) | 39.3% (1997) | | Sweden | 81.0% (1984) | 34.0% (1993) | | UK | 42.7% (1979) | 15.3% (2001) | | USA | 26.67% (1967) | 12.4% (1993) | Open in a new tab The paediatric and neonatal autopsy rates have always been higher than those in adults . A general trend towards declining paediatric autopsy rates has been noticed as well [5, 6]. The reason for this decline is twofold—a decrease in willingness of the attending physician to ask permission for an autopsy in combination with a decrease in parental willingness to give consent. Little research has been done on the diminished willingness of doctors to ask permission to perform an autopsy. In a study by Snowdon et al. , neonatologists said that although autopsies are important, they believed it was secondary to parental needs. They felt especially uncomfortable if the cause of death could be determined without an autopsy. This is supported by several other authors [8, 9]. Other factors that are mentioned are budget cuts, fear of legal consequences, lack of communication training, and prior negative experiences with poorly performed autopsies or delayed autopsy reports . Refusal of parents/guardians to agree to an autopsy is influenced by several factors, including religion [11, 12], fear of unethical practices (influenced by the Alder Hey scandal in the UK [13–17]), and an increasingly individualistic culture in which personal life and experience precedes gain in scientific knowledge . An important reason for relatives to refuse permission is the feeling that the deceased “has suffered enough” . Many of these reasons have not changed during the last decades and therefore do not explain the decrease in the number of autopsies. Perhaps the most important reason for the decline in autopsies is the fact that, with increasing diagnostic and imaging techniques, both doctors and parents/guardians are under the assumption that they already know the cause of death. This assumption is known not to be true for adults in about one-third of the cases [19–21]. For neonates and children, discrepancies between clinical diagnosis and diagnosis at autopsy are found in about one-quarter of the cases [6, 22]. The autopsy reveals new information in paediatric deaths in about 44% of the cases . An alternative for conventional autopsy could be postmortem radiology (also known as virtual autopsy or Virtopsy®) or minimally invasive autopsy (CT and MR imaging followed by ultrasonographic-guided biopsies) (www.virtopsy.com) [13, 24–26]. In this review article we address the radiological techniques, conditions and applications for postmortem imaging in children. Techniques Conventional radiography Conventional radiology (CR) is the mainstay of postmortem imaging. The radiographs are reported by a paediatric radiologist and emphasis is placed on skeletal development, both with regard to the gestational age as well as the presence of anomalies, such as skeletal dysplasias. In foetuses up to a gestational age of approximately 24 weeks, we make use of the mammography system, as this has a high resolution and exquisitely depicts the foetal skeleton (Fig.1). In these cases a babygram, which visualizes developmental anomalies of the entire skeletal system in two or more views, is acceptable. Fig.1. Open in a new tab A neonate aborted at 14 weeks gestational age. a Antenatal US showed severe dysmorphological changes. Photograph of the foetus shows the fused lower extremity. The insert depicts the size of the foetus in relation to the fingertip of the pathology assistant (arrow). b Radiography, performed on a mammography system, shows a sirenomelia. The skeleton is exquisitely depicted In older foetuses and neonates a direct digital radiography system (Triathlon DR, Oldelft Benelux, the Netherlands) is used. In babies and toddlers postmortem radiography is reserved for cases of sudden infant death syndrome (SIDS) or suspected child abuse. In these cases a full skeletal survey according to either the American College of Radiology or the Royal College of Radiologists should be performed, even if a whole-body CT is obtained [27–29]. In older children (> 4 years of age) postmortem CR plays a minor role and is only performed on special indications. Finally, pathologists may request radiographs of autopsy specimen. These radiographs should preferably be obtained in close cooperation with the attending pathologist and if size permits should be made on a mammography system. These specimen radiographs can yield additional information that initially was not visible on either CR or CT (Fig.2). Fig.2. Open in a new tab A 3 ½-month-old girl died under suspicious circumstances. Judicial autopsy was warranted. Contact radiograph (performed on mammography system) shows a fracture of the second rib on the left with consolidation (arrow) but also a fresh fracture (arrowhead). The latter was, even in retrospect, not visible on CR or CT (not shown here) (Reprinted with permission from Bilo RA, Robben SG, van Rijn RR Forensic aspects of paediatric fractures: differentiating accidental trauma from child abuse. Springer-Verlag, in press) Conventional angiography Since conventional autopsy examination of the vascular system is difficult to perform, postmortem angiography could be a useful technique . In most cases postmortem angiography will consist of a single organ study in which the organ can be in situ or removed from the body. Whole-body postmortem conventional angiography has been described in foetuses and neonates . A special technique worth mentioning is the art of cast angiography in which a resin is injected into the vasculature and the tissues are removed by maceration, thus yielding a cast of the vasculature. Ultrasonography The use of US in postmortem imaging, to date, has been limited (Fig.3) [32, 33]. Implementation is hindered by a relative lack of knowledge about the possibilities of US by (forensic) pathologists. Not only can US be used as an inexpensive imaging method in the absence of CT and/or MRI, but it can also be used to guide biopsy procedures in case of a minimal invasive autopsy (MIA). Fig.3. Open in a new tab Postmortem US shows portal air (arrow), a common finding in postmortem imaging Computed tomography Postmortem CT is a fast technique that allows imaging of the whole body inside a body bag or coffin. This makes access to the scanner relatively easy and technicians appreciate the fact that they are not confronted with the deceased person. In general, straightforward CT protocols are used. In our hospital we routinely perform these exams and use separate protocols for the brain and the rest of the body (Table2). The use of 3-D reconstructions can be very illustrative. Table 2. Postmortem CT parameters a | Anatomical location | kV | mAs | Slice thickness (mm) | Increment (mm) | Pitch | Collimation (mm) | Rotation time (sec) | --- --- --- --- | | Head & neck | 120 | 285 | 0.9 | 0.45 | 0.392 | 640,625 | 0.75 | | Thorax & abdomen b | 120 | 250 | 3.0 | 2.0 | 0.983 | 640,625 | 1.0 | Open in a new tab a Protocol for Philips Brilliance (64 channel CT, Philips Medical Systems, Best, the Netherlands) b Lower extremities will be scanned upon special request only In all cases, coronal and sagittal reconstructions, using appropriate kernels, are performed In selected cases 3-D SSD reconstructions are performed One of the clear disadvantages of postmortem CT is the absence of blood flow; this makes CT angiography (CTA) difficult. In the Virtopsy® project, postmortem use of CTA has been explored . Using a pressure-controlled modified heart–lung machine and femoral access, postmortem angiography is a feasible option in specialized centres. Magnetic resonance imaging We feel that CR still has an important role and therefore should be obtained in all cases in which postmortem MRI of foetuses and neonates is performed. In some instances pathology will be difficult, if not impossible, to detect on MRI, whereas CR can be diagnostic (Fig.4). Fig.4. Open in a new tab A neonate, born at 41 weeks gestational age, who died shortly after birth. a Antenatal US showed an underdeveloped thorax and short stature (T2-W 3-D, slice thickness: 1 mm, TR: 4000, TE: 80). b Chest radiograph shows a severely constricted thoracic cage, underdeveloped scapulae and flattened vertebrae. c Radiograph of the pelvis shows hypoplastic iliac wings and sciatic notch spurs (arrow). Based on the conventional radiological findings, the diagnosis thanatophoric dysplasia type II is most likely (OMIM #187601) The MRI protocol is divided into two separate parts. First of all the neurocranium (in a significant number of cases neuropathology will be present) (Table3) (Fig.5). The thorax and abdomen are imaged separately (Table3) (Fig.6). The extremities are only imaged upon special request. In general this protocol can be scanned within a 1-h time frame. Protocols should be fine-tuned if specific questions need to be answered. Table 3. Postmortem MRI parameters a | Anatomic location | | Head & neck | Sequence | FoV (mm) | Sl. Thickness (mm) | TR (msec) | TE (msec) | NSA | FA (°) | ATb(min) | | Axial | T2 | 180 | 3 | 4000 | 96 | 4 | 150 | 3:18 | | Sagittal c | T1-3D | 256 | 1 | 1990 | 2.92 | 1 | 15 | 4:06 | | Axial d | Flash 2D | 180 | 3 | 660 | 26 | 2 | 20 | 5:39 | | Bodye | | | | | | | | | | Coronal f | 3D-T2 | 300 | 1 | 750 | 108 | 1 | 150 | 8:26 | | Coronal | 3D-T1 | 300 | 3 | 1500 | 13 | 2 | 90 | 6:27 | Open in a new tab a Protocol for Siemens Magnetom Avanto 1.5 T (Siemens, Erlangen, Germany) b TA = Acquisition time c Coronal and axial reconstructions d In case of clinical suspicion of intracranial haemorrhage only e Lower extremities will be scanned upon special request only f Axial and sagittal reconstructions Fig.5. Open in a new tab A neonate aborted at 31 weeks gestational age. Antenatal US showed abnormal brain development. Autopsy was refused by the parents. T1-W MRI shows asymmetrical development of the brain with overgrowth of the right side, in keeping with hemimegalencephaly. On the right side multiple focal hemorrhagic lesions are seen (arrow) (slice thickness: 1 mm, TR: 9, TE: 4,1) Fig.6. Open in a new tab A neonate with a congenital cyanotic heart disease born at 39 weeks. Maximum support and 100% oxygen did not lead to clinical improvement and the child died. T2-W coronal MRI shows a complete anomalous venous return (arrow) with pulmonary interstitial oedema (insert). A central tendon defect is seen (open arrow) (slice thickness: 2 mm, TR: 5500, TE: 54, FA: 180°). b T2-W coronal MRI shows a persistent left superior caval vein (arrow), a dextrocardia and situs intermedius of the liver. Asplenia was also noted To date, in our hospital, we have only performed postmortem MRI in fresh cadavers that can be placed in the bore without problems. However, it is possible to obtain an MRI while the corpse is in the body bag. Technicians and local guidelines Before a postmortem radiology service is offered in a department of radiology, the radiological technicians and other involved personnel should be informed. It should be made clear to all involved that postmortem imaging is an important aspect of medical care and that the outcome of the exam can seriously impact the life of parents/guardians. In our department we have the policy that these exams are performed on a voluntary basis. However, to date none of our technicians has refused to do these exams. Handling of the deceased foetus, neonate or child is done in all instances by either the mortuary personnel or the attending paediatric radiologist. The referring clinician should be aware that postmortem imaging is not a routine procedure and that normal clinical work will have priority over these exams. In general this means that the exams will be done before or after the normal radiology schedule. Clinical postmortem radiology If postmortem imaging proves to be an adequate alternative for the conventional autopsy, this will probably lead to more investigations of deceased children. This way, major improvements could be made in studying clinically challenging diagnoses, e.g. sudden infant death syndrome. At the present time, postmortem imaging can be an interesting addition to conventional autopsy. Comparison studies show that for certain diagnoses imaging is superior, while in others the cause of death cannot be determined. Most information comes from perinatal (neuro-) imaging and imaging in trauma patients. Conventional radiography It is widely known that conventional radiology plays an important role in the diagnosis of skeletal dysplasias and the detection of fractures in cases of child abuse . Detailed discussion of conventional radiology lies outside the scope of this review as this is a widely accepted and utilized technique [36–38]. Ultrasonography To date only a few articles on postmortem US have been published. Uchigasaki et al. state that although CT and MRI can provide much more information than US (especially in decomposed bodies), US is inexpensive and easy to handle, and therefore might provide some information before an autopsy is performed. Farina et al. studied 100 cases in which US and US-guided biopsy were performed . In this study the concordance rate between US and conventional autopsy with regard to the cause of death and the main pathological findings reached 83%. Computed tomography Compared to autopsy CT is superior in detecting fractures, as it can detect fractures in places that are generally not examined during a conventional autopsy (e.g. the face). Bolliger et al. conclude from their study that CT examination has proven “to be an invaluable tool in three areas of forensic pathology; namely, in the detection and demonstration of fractures, the detection of foreign bodies and the detection of gas” (Fig.7). Furthermore, 3-D reconstructions can be made, which can be helpful in demonstrating the type of injury during court cases, for example. Fig.7. Open in a new tab A 6-month-old boy who died after attempted resuscitation. On postmortem, CT air is seen in all major vessels. On autopsy a positive blood culture for S. aureus was found. The cause of death was a fulminant sepsis Magnetic resonance imaging Obtaining permission for an examination of the brain during an autopsy can be especially difficult because it takes several weeks to fixate the brain properly, and many parents/guardians request that all organs be replaced before burial . It is therefore encouraging that several studies have shown that structural anomalies of the brain can be adequately detected with MRI [26, 41]. Griffiths et al. have been examining neuropathology in foetuses and deceased neonates since 2003. In their first series they found complete agreement between MRI and autopsy in 28 of 32 cases. In 2005 they examined more than 200 foetuses and neonates with similar results. They found that MR provides detailed information about all organ systems, except for the heart (Figs.8 and 9) . Cohen et al. found that although MR is very good in detecting brain and spine anomalies, if it is not combined with the results of autopsy, 71% of essential information will not be detected. Breeze et al. determined kappa values to assess agreement between MRI and autopsy for different organ systems. They were high for the brain (0.83), moderate for lungs (0.56) and fair for the heart (0.33). The relative inability of postmortem MRI to detect cardiac pathology is described by several other authors [43, 44]. This is a major shortcoming because cardiac disease is a major cause of death in the Western world. Fig.8. Open in a new tab A 2-month-old neonate presented at the emergency department in severe cardiac and respiratory distress. Resuscitation was unsuccessful. Coronal T2-W MRI shows hematopericard (open arrow), hematothorax (arrow) and a pleural effusion (arrowhead) (slice thickness: 1 mm, TR: 4000, TE: 80, FA: 90°). There is an abberant pulmonary vein draining into the left ventricle (open arrowhead). The abdomen shows ascites (asterisk) Fig.9. Open in a new tab A neonate aborted at 20 weeks gestational age. a Antenatal US showed a massively dilated bladder and bilateral hydronephrosis. Sagittal T2-W MRI shows a distended bladder (asterisk) and a dilated posterior urethra (open arrow), consistent with posterior urethral valves (slice thickness: 1 mm, TR: 1500, TE: 161, FA: 150°). Note the fluid-fluid level in the heart (arrow) as a result of blood b Coronal T2-W MRI shows a distended bladder (asterisk) and dilated tortuous ureters (open arrow). There is a substantial bilateral pyelocaliceal dilatation (arrow). There is a relative hypoplasia of the lungs as a result of the oligohydramnion The use of ultra-high-field MRI has been evaluated by Thayyil et al. , who concluded that “high-field MRI (and ancillary non-invasive post-mortem investigations) provided all the information that could be obtained with invasive autopsy for all internal organs in cases in which the intrauterine retention period was less than 1 week. Moreover, clinically useful information about the brain could be obtained, even in cases in which maceration and autolysis prevented formal neuropathological examination” . Combined CT and MRI In a recent study, 30 adult study subjects underwent both minimally invasive autopsy (MIA) and conventional autopsy (CA) . Because of a wide variety of causes of death, it is difficult to draw conclusions in such a small group about the sensitivity and specificity for CA of different organ systems, but the overall agreement on cause of death between MIA and CA was 77%. MIA correctly identified common causes of death, such as pneumonia and sepsis, but failed to demonstrate acute myocardial infarction (n = 4). In this study MRI was superior to CT in detecting brain abnormalities and pulmonary embolus. Conversely, CT was superior to MRI for the detection of calcifications and pneumothorax. A head-to-head analysis for CT versus MRI was not presented. Yen et al. performed a retrospective study on postmortem neuroimaging (mostly in adults) as part of the Virtopsy® project, a Swiss research project that aims to eventually replace the standard autopsy by a virtual one combined with minimally invasive procedures. They found that imaging, compared with autopsy as the gold standard, correctly identified the cause of death in almost 80% of the cases where the brain was the primary atrium mortis. The overall agreement between CT and MRI was 69%. A closer look at different findings shows big differences between the accuracy of imaging and autopsy for different types of lesions. Autopsy was superior in detecting scalp lesions, intracranial blood layers and contusions less than 3 mm in thickness, plaque jaunes, dura mater ruptures and brain oedema. Imaging, on the other hand, was better in visualizing gunshot injuries and complex skull fractures and in detecting pneumencephalon, ventricular haemorrhage and facial bone fractures. Furthermore, imaging was better in detecting lesions in a decomposed body. It has to be noted that the radiologists reviewing the CT and MRI data were not particularly trained in the forensic field, but that the outcome of the evaluation depends to a large degree on the previous forensic training of the radiologists . Forensic postmortem radiology The word “forensic” comes from the Latin adjective forensis, meaning “of or before the forum.” Today “forensic” is interchangeably used with “forensic sciences” and implicitly means “related to the court” or “legal.” The use of radiology in the legal system actually dates to 7 February 1896 when in Montreal, Canada, a radiograph (with an exposure time of 45 min) was obtained to locate a bullet lodged in the leg of a gunshot victim. Based on the radiograph the assailant could be convicted . With respect to forensic radiology, one aspect that needs to be addressed is patient confidentiality. Not only are the radiographs part of the evidence, and as such should be kept confidential in order not to compromise the chain of evidence, but there may also be interest from third parties not directly involved in the criminal procedure. It is good practice to store the data anonymously. In the last few decades radiology has been widely used in forensic sciences; however, in most cases radiologists have not been involved. In contrast to the clinical situation, forensic radiology will in most cases be done in children in whom the cause and manner of death are unclear and clinical information is not always available (e.g. in the case of the body of an unknown child). One of the aims of postmortem forensic radiology is to detect the presence of foreign bodies, such as fragments of glass and bullets, and to describe their position and, if applicable, the trajectory they followed. Radiology will also be able to detect more subtle cases of pneumothorax or pneuperitoneum, for example, which can be missed in a conventional autopsy (Fig.10). Furthermore, if radiology is used to describe the corpse prior to autopsy it makes revision possible even after the corpse has been buried. In legal proceedings this is an important advantage of postmortem radiology over the conventional autopsy. Fig.10. Open in a new tab A 10-year-old boy who died in the hospital after a fall. a Postmortem CT shows a small pneumothorax, which was not found at autopsy. There is diffuse airway consolidation in keeping with postmortem pulmonary oedema. b Surface-shaded rendering of the thorax shows an incorrectly positioned left subclavian line with the tip of the line in the jugular vein (arrow). The line was cut and the distal end (arrowhead) was buried subcutaneously There are numerous forensic institutes in the world that have a radiological facility or are contemplating buying a CT and/or MR scanner [49, 50]. In Europe one of the most well known is the Forensic Institute and the Centre for Forensic Imaging at the University of Bern, Switzerland . The Virtopsy® project began at the institute, which is one of the leaders in scientific research and development in forensic radiology . It is important to remember that in the end the aim of all forensic necroscopic examinations is to determine the cause (e.g. hypoxia) and manner (e.g. strangulation) of death in order to decide whether a crime has been committed (Fig.11). Fig.11. Open in a new tab A neonate of unknown gestational age found in a garbage bin. a Coronal T2-W image shows oedema around the right jugular vein (arrow) (slice thickness: 4 mm, TR: 5970, TE: 84, FA: 150°) (Reprinted with permission from Bilo RA, Robben SG, van Rijn RR Differentiating accidental trauma from child abuse. In: Forensic aspects of pediatric fractures. Springer-Verlag, in press). b Autopsy shows a bilateral haematoma (open arrow) around the jugular vein, the carotid artery and the sternocleidoid muscles (arrow). This finding is fitting with strangulation Historical paediatric specimens Although slightly outside the scope of postmortem paediatric radiology, the use of radiological imaging of historical paediatric specimens is worth mentioning. There are many collections of human historical specimens that have been gathered over the centuries by both historians and clinicians. Some of these cases depict diseases and/or disorders that are now very rare and therefore of interest. Radiology can be an excellent tool used to investigate these delicate specimens without destroying them. In this article the use of radiology in this interesting scientific field is demonstrated with two paediatric cases. The first case is a specimen owned by the Academic Medical Centre Amsterdam, which is home to one of the largest teratological collections in Europe (the Vrolik Museum). This collection of more than 2,000 specimens was founded by Gerardus Vrolik (1775–1859) and his son Willem Vrolik (1801–1863). It shows various aspects of human and animal anatomy, embryology, pathology, and congenital anomalies . One of the specimens in this collection is a cephalothoracopagus, estimated to be 100–150 years old (Fig.12). Conjoined twins are classified according to the site of union by using the suffix pagus (fixed), and therefore a cephalothoracopagus is joined by the head, thorax and (part of the) abdomen. Fig.12. Open in a new tab Historical paediatric specimen. a Image of cephalothoracopagus from the Vrolik Museum. Specimen is estimated to be 100–150 years old. b Surface-shaded rendering shows a conjoined skull and chest; the spine, pelvis and extremities are separate. c Coronal T2-W MRI shows individual development of the brain with a clear separation between the right and left side of the craniothoracophagus (open arrow) (slice Thickness: 3 mm, TR: 2500, TE: 68, FA: 90°). The trachea is fused (arrow) and a single diaphragm is present (arrowhead). There is a compound liver (neoaxial orientation), which on further imaging shows two separate gallbladders. Normal renal development is present The second case is from the National Museum of Antiquities in Leiden, the Netherlands. This particular mummy is of a boy, estimated to be 9.5–14.5 years of age, and has been dated to the 3rd century A.D. (Fig.13) . The mummy is one of eight (located in several museums worldwide), forming a homogeneous group based on similarities of both the exterior as well as the embalming techniques . Fig.13. Open in a new tab Second historical specimen. a Mummy of a boy, estimated age 9.5–14.5 years, dated to the 3rd century A.D. b Shaded-surface rendering of the face shows facial features. The nose is slightly depressed, likely as a result of mummification. The ear is relatively large and stands off the skull. c Virtual endoscopy of the abdominal cavity shows absence of both abdominal and thoracic organs. The thorax is partially filled with gauzes (asterisk) (courtesy of the National Museum of Antiquities, Leiden, the Netherlands, reprinted with permission from Raven MJ, Taconis WK (eds) Egyptian mummies: radiological atlas of the collections in the national museum of antiquities in Leiden. Brepols, Turnhout, Belgium, pp 191–195) Conclusion In this review paper we have presented the current state of postmortem imaging in children. This exciting new field in paediatric radiology opens new areas in which close collaboration between radiologists and pathologists is essential. It is important that paediatric radiologists become involved in this field as pathologists are not trained in reading radiographs and may therefore miss essential clues . To date, nearly all radiological studies have looked at either general findings in a relatively small population or specific pathology in larger samples, and in almost all studies the set-up was descriptive. The lack of substantially large studies with statistical power and the wide variety of study designs makes it difficult to perform a meta-analysis of the published data. This is clearly illustrated by a recent systematic review by Scholing et al. of the role of postmortem CT in trauma victims. They included 15 studies with a moderate sample size of 13 patients and a range of agreement between postmortem CT and autopsy of 46–100%. Sebire adds that besides the lack of large-scale studies comparing imaging-based versus autopsy-based diagnoses, data on the accuracy of postmortem-obtained histopathological material has not been published. The combination of postmortem imaging with needle-core biopsy is something that deserves further attention, as most information at a conventional autopsy comes from microscopic histopathological examination. These publications in relation to the diminishing number of autopsies underscore the need for multi-institutional prospective studies in order to assess the full potential of this technique. It is difficult to predict the future, but it seems certain that radiological techniques will play an important role in the future of both clinical and forensic pathology. For paediatric radiologists involved in this field, it completes the circle of life, making it one of the few medical specialties that cares for patients from the cradle to the grave. Perhaps in the future a new subspecialty of forensic radiology will emerge . Acknowledgments Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. References 1.Shojania KG, Burton EC, McDonald KM, et al. The autopsy as an outcome and performance measure. Evid Rep Technol Assess (Summ) 2002;58:1–5. [PMC free article] [PubMed] [Google Scholar] 2.Shojania KG, Burton EC. The vanishing nonforensic autopsy. N Engl J Med. 2008;358:873–875. doi: 10.1056/NEJMp0707996. [DOI] [PubMed] [Google Scholar] 3.Burton JL, Underwood J. Clinical, educational, and epidemiological value of autopsy. Lancet. 2007;369:1471–1480. doi: 10.1016/S0140-6736(07)60376-6. [DOI] [PubMed] [Google Scholar] 4.Maniscalco WM, Clarke TA. Factors influencing neonatal autopsy rate. Am J Dis Child. 1982;136:781–784. doi: 10.1001/archpedi.1982.03970450023005. [DOI] [PubMed] [Google Scholar] 5.Newton D, Coffin CM, Clark EB, et al. How the pediatric autopsy yields valuable information in a vertically integrated health care system. Arch Pathol Lab Med. 2004;128:1239–1246. doi: 10.5858/2004-128-1239-HTPAYV. [DOI] [PubMed] [Google Scholar] 6.Brodlie M, Laing IA, Keeling JW, et al. Ten years of neonatal autopsies in tertiary referral centre: retrospective study. BMJ. 2002;324:761–763. doi: 10.1136/bmj.324.7340.761. [DOI] [PMC free article] [PubMed] [Google Scholar] 7.Snowdon C, Elbourne DR, Garcia J. Perinatal pathology in the context of a clinical trial: attitudes of neonatologists and pathologists. Arch Dis Child Fetal Neonatal Ed. 2004;89:F204–F207. doi: 10.1136/adc.2002.012732. [DOI] [PMC free article] [PubMed] [Google Scholar] 8.Sinard JH. Factors affecting autopsy rates, autopsy request rates, and autopsy findings at a large academic medical center. Exp Mol Pathol. 2001;70:333–343. doi: 10.1006/exmp.2001.2371. [DOI] [PubMed] [Google Scholar] 9.Hinchliffe SA, Godfrey HW, Hind CR. Attitudes of junior medical staff to requesting permission for autopsy. Postgrad Med J. 1994;70:292–294. doi: 10.1136/pgmj.70.822.292. [DOI] [PMC free article] [PubMed] [Google Scholar] 10.McPhee SJ. Maximizing the benefits of autopsy for clinicians and families. What needs to be done. Arch Pathol Lab Med. 1996;120:743–748. [PubMed] [Google Scholar] 11.Dorff EN. End-of-life: Jewish perspectives. Lancet. 2005;366:862–865. doi: 10.1016/S0140-6736(05)67219-4. [DOI] [PubMed] [Google Scholar] 12.Geller SA. Religious attitudes and the autopsy. Arch Pathol Lab Med. 1984;108:494–496. [PubMed] [Google Scholar] 13.Griffiths PD, Paley MN, Whitby EH. Post-mortem MRI as an adjunct to fetal or neonatal autopsy. Lancet. 2005;365:1271–1273. doi: 10.1016/S0140-6736(05)74816-9. [DOI] [PubMed] [Google Scholar] 14.Burton JL, Underwood JC. Necropsy practice after the “organ retention scandal”: requests, performance, and tissue retention. J Clin Pathol. 2003;56:537–541. doi: 10.1136/jcp.56.7.537. [DOI] [PMC free article] [PubMed] [Google Scholar] 15.Burton JL, Wells M. The Alder Hey affair: implications for pathology practice. J Clin Pathol. 2001;54:820–823. doi: 10.1136/jcp.54.11.820. [DOI] [PMC free article] [PubMed] [Google Scholar] 16.Khong TY, Tanner AR. Foetal and neonatal autopsy rates and use of tissue for research: the influence of ‘organ retention’ controversy and new consent process. J Paediatr Child Health. 2006;42:366–369. doi: 10.1111/j.1440-1754.2006.00874.x. [DOI] [PubMed] [Google Scholar] 17.Khong TY, Arbuckle SM. Perinatal pathology in Australia after Alder Hey. J Paediatr Child Health. 2002;38:409–411. doi: 10.1046/j.1440-1754.2002.00022.x. [DOI] [PubMed] [Google Scholar] 18.McHaffie HE, Fowlie PW, Hume R, et al. Consent to autopsy for neonates. Arch Dis Child Fetal Neonatal Ed. 2001;85:F4–F7. doi: 10.1136/fn.85.1.F4. [DOI] [PMC free article] [PubMed] [Google Scholar] 19.Roulson J, Benbow EW, Hasleton PS. Discrepancies between clinical and autopsy diagnosis and the value of post mortem histology; a meta-analysis and review. Histopathology. 2005;47:551–559. doi: 10.1111/j.1365-2559.2005.02243.x. [DOI] [PubMed] [Google Scholar] 20.Shojania KG, Burton EC, McDonald KM, et al. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA. 2003;289:2849–2856. doi: 10.1001/jama.289.21.2849. [DOI] [PubMed] [Google Scholar] 21.Pastores SM, Dulu A, Voigt L, et al. Premortem clinical diagnoses and postmortem autopsy findings: discrepancies in critically ill cancer patients. Crit Care. 2007;11:R48. doi: 10.1186/cc5782. [DOI] [PMC free article] [PubMed] [Google Scholar] 22.Gordijn SJ, Erwich JJ, Khong TY. Value of the perinatal autopsy: critique. Pediatr Dev Pathol. 2002;5:480–488. doi: 10.1007/s10024-002-0008-y. [DOI] [PubMed] [Google Scholar] 23.Kumar P, Taxy J, Angst DB, et al. Autopsies in children: are they still useful? Arch Pediatr Adolesc Med. 1998;152:558–563. doi: 10.1001/archpedi.152.6.558. [DOI] [PubMed] [Google Scholar] 24.Wright C, Lee RE. Investigating perinatal death: a review of the options when autopsy consent is refused. Arch Dis Child Fetal Neonatal Ed. 2004;89:F285–F288. doi: 10.1136/adc.2003.022483. [DOI] [PMC free article] [PubMed] [Google Scholar] 25.Cohen MC, Paley MN, Griffiths PD, et al. Less invasive autopsy: benefits and limitations of the use of magnetic resonance imaging in the perinatal postmortem. Pediatr Dev Pathol. 2008;11:1–9. doi: 10.2350/07-01-0213.1. [DOI] [PubMed] [Google Scholar] 26.Whitby EH, Paley MN, Cohen M, et al. Postmortem MR imaging of the fetus: an adjunct or a replacement for conventional autopsy? Semin Fetal Neonatal Med. 2005;10:475–483. doi: 10.1016/j.siny.2005.05.006. [DOI] [PubMed] [Google Scholar] 27.American College of Radiology (ACR) (2006) ACR practice guideline for skeletal surveys in children. Available via Accessed 13 Nov 2009 28.The Royal College of Radiologists and the Royal College of Paediatrics and Child Health (2008) Standards for radiological investigations of suspected non-accidental injury. Available via Accessed 13 Nov 2009 29.Offiah A, van Rijn RR, Perez-Rossello JM, et al. Skeletal imaging of child abuse (non-accidental injury) Pediatr Radiol. 2009;9:461–470. doi: 10.1007/s00247-009-1157-1. [DOI] [PubMed] [Google Scholar] 30.Grabherr S, Dirnhofer R. Postmortem angiography. In: Thali MJ, Dirnhofer R, Vock P, editors. The Virtopsy approach: 3D optical and radiological scanning and reconstruction in forensic medicine. Boca Raton: CRC; 2009. pp. 443–450. [Google Scholar] 31.Stoeter P, Voigt K. Radiological examination of embryonal and fetal vessels. Technique and method of prenatal, post-mortem angiography in different stages of gestation. Rofo. 1976;124:558–564. doi: 10.1055/s-0029-1230391. [DOI] [PubMed] [Google Scholar] 32.Farina J, Millana C, Fdez-Acenero MJ, et al. Ultrasonographic autopsy (echopsy): a new autopsy technique. Virchows Arch. 2002;440:635–639. doi: 10.1007/s00428-002-0607-z. [DOI] [PubMed] [Google Scholar] 33.Uchigasaki S, Oesterhelweg L, Gehl A, et al. Application of compact ultrasound imaging device to postmortem diagnosis. Forensic Sci Int. 2004;140:33–41. doi: 10.1016/j.forsciint.2003.11.029. [DOI] [PubMed] [Google Scholar] 34.Ross S, Spendlove D, Bolliger S, et al. Postmortem whole-body CT angiography: evaluation of two contrast media solutions. AJR. 2008;190:1380–1389. doi: 10.2214/AJR.07.3082. [DOI] [PubMed] [Google Scholar] 35.Kleinman PK, Marks SC, Jr, Nimkin K, et al. Rib fractures in 31 abused infants: postmortem radiologic-histopathologic study. Radiology. 1996;200:807–810. doi: 10.1148/radiology.200.3.8756936. [DOI] [PubMed] [Google Scholar] 36.OE OE, Espeland A, Maartmann-Moe H, et al. Diagnostic value of radiography in cases of perinatal death: a population based study. Arch Dis Child Fetal Neonatal Ed. 2003;88:F521–F524. doi: 10.1136/fn.88.6.F521. [DOI] [PMC free article] [PubMed] [Google Scholar] 37.Seppanen U. The value of perinatal post-mortem radiography. Experience of 514 cases. Ann Clin Res. 1985;17(Suppl 44):1–59. [PubMed] [Google Scholar] 38.Foote GA, Wilson AJ, Stewart JH. Perinatal post-mortem radiography–experience with 2500 cases. Br J Radiol. 1978;51:351–356. doi: 10.1259/0007-1285-51-605-351. [DOI] [PubMed] [Google Scholar] 39.Bolliger SA, Thali MJ, Ross S, et al. Virtual autopsy using imaging: bridging radiologic and forensic sciences. A review of the Virtopsy and similar projects. Eur Radiol. 2008;18:273–282. doi: 10.1007/s00330-007-0737-4. [DOI] [PubMed] [Google Scholar] 40.Huisman TA. Magnetic resonance imaging: an alternative to autopsy in neonatal death? Semin Neonatol. 2004;9:347–353. doi: 10.1016/j.siny.2003.09.004. [DOI] [PubMed] [Google Scholar] 41.Woodward PJ, Sohaey R, Harris DP, et al. Postmortem fetal MR imaging: comparison with findings at autopsy. AJR. 1997;168:41–46. doi: 10.2214/ajr.168.1.8976917. [DOI] [PubMed] [Google Scholar] 42.Griffiths PD, Variend D, Evans M, et al. Postmortem MR imaging of the fetal and stillborn central nervous system. AJNR. 2003;24:22–27. [PMC free article] [PubMed] [Google Scholar] 43.Breeze AC, Cross JJ, Hackett GA, et al. Use of a confidence scale in reporting postmortem fetal magnetic resonance imaging. Ultrasound Obstet Gynecol. 2006;28:918–924. doi: 10.1002/uog.3886. [DOI] [PubMed] [Google Scholar] 44.Alderliesten ME, Peringa J, van der Hulst VP, et al. Perinatal mortality: clinical value of postmortem magnetic resonance imaging compared with autopsy in routine obstetric practice. BJOG. 2003;110:378–382. doi: 10.1046/j.1471-0528.2003.02076.x. [DOI] [PubMed] [Google Scholar] 45.Thayyil S, Cleary JO, Sebire NJ, et al. Post-mortem examination of human fetuses: a comparison of whole-body high-field MRI at 9.4 T with conventional MRI and invasive autopsy. Lancet. 2009;374:467–475. doi: 10.1016/S0140-6736(09)60913-2. [DOI] [PubMed] [Google Scholar] 46.Weustink AC, Hunink MG, van Dijke CF, et al. Minimally invasive autopsy: an alternative to conventional autopsy? Radiology. 2009;250:897–904. doi: 10.1148/radiol.2503080421. [DOI] [PubMed] [Google Scholar] 47.Yen K, Lovblad KO, Scheurer E, et al. Post-mortem forensic neuroimaging: correlation of MSCT and MRI findings with autopsy results. Forensic Sci Int. 2007;173:21–35. doi: 10.1016/j.forsciint.2007.01.027. [DOI] [PubMed] [Google Scholar] 48.Cox J, Kirkpatrick RC. The new photography with report of a case in which a bullet was photographed in the leg. Montreal Med J. 1896;24:661. [Google Scholar] 49.Rutty GN, Morgan B, O’Donnell C, et al. Forensic institutes across the world place CT or MRI scanners or both into their mortuaries. J Trauma. 2008;65:493–494. doi: 10.1097/TA.0b013e31817de420. [DOI] [PubMed] [Google Scholar] 50.Thomsen AH, Jurik AG, Uhrenholt L, et al. An alternative approach to computerized tomography (CT) in forensic pathology. Forensic Sci Int. 2009;183:87–90. doi: 10.1016/j.forsciint.2008.10.019. [DOI] [PubMed] [Google Scholar] 51.Thali MJ, Braun M, Buck U, et al. VIRTOPSY-scientific documentation, reconstruction and animation in forensic: individual and real 3D data based geo-metric approach including optical body/object surface and radiological CT/MRI scanning. J Forensic Sci. 2005;50:428–442. doi: 10.1520/JFS2004290. [DOI] [PubMed] [Google Scholar] 52.Baljet B, Oostra RJ. Historical aspects of the study of malformations in The Netherlands. Am J Med Genet. 1998;77:91–99. doi: 10.1002/(SICI)1096-8628(19980501)77:2<91::AID-AJMG2>3.0.CO;2-U. [DOI] [PubMed] [Google Scholar] 53.Raven MJ, Taconis WK. Egyptian mummies: radiological atlas of the collections in the national museum of antiquities in Leiden. Belgium: Brepols; 2005. pp. 191–195. [Google Scholar] 54.Kremer C, Racette S, Marton D, et al. Radiographs interpretation by forensic pathologists: a word of warning. Am J Forensic Med Pathol. 2008;29:295–296. doi: 10.1097/PAF.0b013e3181847db0. [DOI] [PubMed] [Google Scholar] 55.Scholing M, Saltzherr TP, Fung Kon Jin PH, et al. The value of postmortem computed tomography as an alternative for autopsy in trauma victims: a systematic review. Eur Radiol. 2009;10:2333–2341. doi: 10.1007/s00330-009-1440-4. [DOI] [PMC free article] [PubMed] [Google Scholar] 56.Sebire NJ. Towards the minimally invasive autopsy? Ultrasound Obstet Gynecol. 2006;28:865–867. doi: 10.1002/uog.3869. [DOI] [PubMed] [Google Scholar] 57.O’Donnell C, Woodford N. Post-mortem radiology-a new sub-speciality? Clin Radiol. 2008;63:1189–1194. doi: 10.1016/j.crad.2008.05.008. [DOI] [PubMed] [Google Scholar] 58.Broumandi DD, Hayward UM, Benzian JM, et al. Best cases from the AFIP: hemimegalencephaly. Radiographics. 2004;24:843–848. doi: 10.1148/rg.243035135. [DOI] [PubMed] [Google Scholar] Articles from Pediatric Radiology are provided here courtesy of Springer ACTIONS View on publisher site PDF (693.2 KB) Cite Collections Permalink PERMALINK Copy RESOURCES Similar articles Cited by other articles Links to NCBI Databases On this page Abstract Introduction Techniques Clinical postmortem radiology Forensic postmortem radiology Historical paediatric specimens Conclusion Acknowledgments References Cite Copy Download .nbib.nbib Format: Add to Collections Create a new collection Add to an existing collection Name your collection Choose a collection Unable to load your collection due to an error Please try again Add Cancel Follow NCBI NCBI on X (formerly known as Twitter)NCBI on FacebookNCBI on LinkedInNCBI on GitHubNCBI RSS feed Connect with NLM NLM on X (formerly known as Twitter)NLM on FacebookNLM on YouTube National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894 Web Policies FOIA HHS Vulnerability Disclosure Help Accessibility Careers NLM NIH HHS USA.gov Back to Top
14942
https://openstax.org/books/elementary-algebra-2e/pages/3-1-use-a-problem-solving-strategy
3.1 Use a Problem-Solving Strategy - Elementary Algebra 2e | OpenStax This website utilizes technologies such as cookies to enable essential site functionality, as well as for analytics, personalization, and targeted advertising purposes. Privacy Notice Customize Reject All Accept All Customize Consent Preferences We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below. The cookies that are categorized as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ...Show more For more information on how Google's third-party cookies operate and handle your data, see:Google Privacy Policy Necessary Always Active Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data. Cookie oxdid Duration 1 year 1 month 4 days Description OpenStax Accounts cookie for authentication Cookie campaignId Duration Never Expires Description Required to provide OpenStax services Cookie __cf_bm Duration 1 hour Description This cookie, set by Cloudflare, is used to support Cloudflare Bot Management. Cookie CookieConsentPolicy Duration 1 year Description Cookie Consent from Salesforce Cookie LSKey-c$CookieConsentPolicy Duration 1 year Description Cookie Consent from Salesforce Cookie renderCtx Duration session Description This cookie is used for tracking community context state. Cookie pctrk Duration 1 year Description Customer support Cookie _accounts_session_production Duration 1 year 1 month 4 days Description Cookies that are required for authentication and necessary OpenStax functions. Cookie nudge_study_guides_page_counter Duration 1 year 1 month 4 days Description Product analytics Cookie _dd_s Duration 15 minutes Description Zapier cookies that are used for Customer Support services. Cookie ak_bmsc Duration 2 hours Description This cookie is used by Akamai to optimize site security by distinguishing between humans and bots Cookie PHPSESSID Duration session Description This cookie is native to PHP applications. The cookie stores and identifies a user's unique session ID to manage user sessions on the website. The cookie is a session cookie and will be deleted when all the browser windows are closed. Cookie m Duration 1 year 1 month 4 days Description Stripe sets this cookie for fraud prevention purposes. It identifies the device used to access the website, allowing the website to be formatted accordingly. Cookie BrowserId Duration 1 year Description Sale Force sets this cookie to log browser sessions and visits for internal-only product analytics. Cookie ph_phc_bnZwQPxzoC7WnmjFNOUQpcKsaDVg8TwnyoNzbClpIsD_posthog Duration 1 year Description Privacy-focused platform cookie Cookie cookieyes-consent Duration 1 year Description CookieYes sets this cookie to remember users' consent preferences so that their preferences are respected on subsequent visits to this site. It does not collect or store any personal information about the site visitors. Cookie _cfuvid Duration session Description Calendly sets this cookie to track users across sessions to optimize user experience by maintaining session consistency and providing personalized services Cookie dmn_chk_ Duration Less than a minute Description This cookie is set to track user activity across the website. Cookie cookiesession1 Duration 1 year Description This cookie is set by the Fortinet firewall. This cookie is used for protecting the website from abuse. Functional [x] Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features. Cookie session Duration session Description Salesforce session cookie. We use Salesforce to drive our support services to users. Cookie projectSessionId Duration session Description Optional AI-based customer support cookie Cookie yt-remote-device-id Duration Never Expires Description YouTube sets this cookie to store the user's video preferences using embedded YouTube videos. Cookie ytidb::LAST_RESULT_ENTRY_KEY Duration Never Expires Description The cookie ytidb::LAST_RESULT_ENTRY_KEY is used by YouTube to store the last search result entry that was clicked by the user. This information is used to improve the user experience by providing more relevant search results in the future. Cookie yt-remote-connected-devices Duration Never Expires Description YouTube sets this cookie to store the user's video preferences using embedded YouTube videos. Cookie yt-remote-session-app Duration session Description The yt-remote-session-app cookie is used by YouTube to store user preferences and information about the interface of the embedded YouTube video player. Cookie yt-remote-cast-installed Duration session Description The yt-remote-cast-installed cookie is used to store the user's video player preferences using embedded YouTube video. Cookie yt-remote-session-name Duration session Description The yt-remote-session-name cookie is used by YouTube to store the user's video player preferences using embedded YouTube video. Cookie yt-remote-fast-check-period Duration session Description The yt-remote-fast-check-period cookie is used by YouTube to store the user's video player preferences for embedded YouTube videos. Cookie yt-remote-cast-available Duration session Description The yt-remote-cast-available cookie is used to store the user's preferences regarding whether casting is available on their YouTube video player. Analytics [x] Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc. Cookie hjSession Duration 1 hour Description Hotjar sets this cookie to ensure data from subsequent visits to the same site is attributed to the same user ID, which persists in the Hotjar User ID, which is unique to that site. Cookie visitor_id Duration 9 months 7 days Description Pardot sets this cookie to store a unique user ID. Cookie visitor_id-hash Duration 9 months 7 days Description Pardot sets this cookie to store a unique user ID. Cookie _gcl_au Duration 3 months Description Google Tag Manager sets the cookie to experiment advertisement efficiency of websites using their services. Cookie _ga Duration 1 year 1 month 4 days Description Google Analytics sets this cookie to calculate visitor, session and campaign data and track site usage for the site's analytics report. The cookie stores information anonymously and assigns a randomly generated number to recognise unique visitors. Cookie _gid Duration 1 day Description Google Analytics sets this cookie to store information on how visitors use a website while also creating an analytics report of the website's performance. Some of the collected data includes the number of visitors, their source, and the pages they visit anonymously. Cookie _fbp Duration 3 months Description Facebook sets this cookie to display advertisements when either on Facebook or on a digital platform powered by Facebook advertising after visiting the website. Cookie ga Duration 1 year 1 month 4 days Description Google Analytics sets this cookie to store and count page views. Cookie pardot Duration past Description The pardot cookie is set while the visitor is logged in as a Pardot user. The cookie indicates an active session and is not used for tracking. Cookie pi_pageview_count Duration Never Expires Description Marketing automation tracking cookie Cookie pulse_insights_udid Duration Never Expires Description User surveys Cookie pi_visit_track Duration Never Expires Description Marketing cookie Cookie pi_visit_count Duration Never Expires Description Marketing cookie Cookie cebs Duration session Description Crazyegg sets this cookie to trace the current user session internally. Cookie gat_gtag_UA Duration 1 minute Description Google Analytics sets this cookie to store a unique user ID. Cookie vuid Duration 1 year 1 month 4 days Description Vimeo installs this cookie to collect tracking information by setting a unique ID to embed videos on the website. Performance [x] Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors. Cookie hjSessionUser Duration 1 year Description Hotjar sets this cookie to ensure data from subsequent visits to the same site is attributed to the same user ID, which persists in the Hotjar User ID, which is unique to that site. Advertisement [x] Advertisement cookies are used to provide visitors with customized advertisements based on the pages you visited previously and to analyze the effectiveness of the ad campaigns. Cookie test_cookie Duration 15 minutes Description doubleclick.net sets this cookie to determine if the user's browser supports cookies. Cookie YSC Duration session Description Youtube sets this cookie to track the views of embedded videos on Youtube pages. Cookie VISITOR_INFO1_LIVE Duration 6 months Description YouTube sets this cookie to measure bandwidth, determining whether the user gets the new or old player interface. Cookie VISITOR_PRIVACY_METADATA Duration 6 months Description YouTube sets this cookie to store the user's cookie consent state for the current domain. Cookie IDE Duration 1 year 24 days Description Google DoubleClick IDE cookies store information about how the user uses the website to present them with relevant ads according to the user profile. Cookie yt.innertube::requests Duration Never Expires Description YouTube sets this cookie to register a unique ID to store data on what videos from YouTube the user has seen. Cookie yt.innertube::nextId Duration Never Expires Description YouTube sets this cookie to register a unique ID to store data on what videos from YouTube the user has seen. Uncategorized [x] Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet. Cookie donation-identifier Duration 1 year Description Description is currently not available. Cookie abtest-identifier Duration 1 year Description Description is currently not available. Cookie __Secure-ROLLOUT_TOKEN Duration 6 months Description Description is currently not available. Cookie _ce.s Duration 1 year Description Description is currently not available. Cookie _ce.clock_data Duration 1 day Description Description is currently not available. Cookie cebsp_ Duration session Description Description is currently not available. Cookie lpv218812 Duration 1 hour Description Description is currently not available. Reject All Save My Preferences Accept All Skip to ContentGo to accessibility pageKeyboard shortcuts menu Log in Elementary Algebra 2e 3.1 Use a Problem-Solving Strategy Elementary Algebra 2e3.1 Use a Problem-Solving Strategy Contents Contents Highlights Table of contents Preface 1 Foundations 2 Solving Linear Equations and Inequalities 3 Math Models Introduction 3.1 Use a Problem-Solving Strategy 3.2 Solve Percent Applications 3.3 Solve Mixture Applications 3.4 Solve Geometry Applications: Triangles, Rectangles, and the Pythagorean Theorem 3.5 Solve Uniform Motion Applications 3.6 Solve Applications with Linear Inequalities Chapter Review Exercises 4 Graphs 5 Systems of Linear Equations 6 Polynomials 7 Factoring 8 Rational Expressions and Equations 9 Roots and Radicals 10 Quadratic Equations Answer Key Index Search for key terms or text. Close Learning Objectives By the end of this section, you will be able to: Approach word problems with a positive attitude Use a problem-solving strategy for word problems Solve number problems Be Prepared 3.1 Before you get started, take this readiness quiz. Translate “6 less than twice x” into an algebraic expression. If you missed this problem, review Example 1.26. Be Prepared 3.2 Solve: 2 3 x=24.2 3 x=24.2 3 x=24. If you missed this problem, review Example 2.16. Solve: 3 x+8=14.3 x+8=14.3 x+8=14. If you missed this problem, review Example 2.27. Approach Word Problems with a Positive Attitude “If you think you can… or think you can’t… you’re right.”—Henry Ford The world is full of word problems! Will my income qualify me to rent that apartment? How much punch do I need to make for the party? What size diamond can I afford to buy my girlfriend? Should I fly or drive to my family reunion? How much money do I need to fill the car with gas? How much tip should I leave at a restaurant? How many socks should I pack for vacation? What size turkey do I need to buy for Thanksgiving dinner, and then what time do I need to put it in the oven? If my sister and I buy our mother a present, how much does each of us pay? Now that we can solve equations, we are ready to apply our new skills to word problems. Do you know anyone who has had negative experiences in the past with word problems? Have you ever had thoughts like the student below? Figure 3.2 Negative thoughts can be barriers to success. When we feel we have no control, and continue repeating negative thoughts, we set up barriers to success. We need to calm our fears and change our negative feelings. Start with a fresh slate and begin to think positive thoughts. If we take control and believe we can be successful, we will be able to master word problems! Read the positive thoughts in Figure 3.3 and say them out loud. Figure 3.3 Thinking positive thoughts is a first step towards success. Think of something, outside of school, that you can do now but couldn’t do 3 years ago. Is it driving a car? Snowboarding? Cooking a gourmet meal? Speaking a new language? Your past experiences with word problems happened when you were younger—now you’re older and ready to succeed! Use a Problem-Solving Strategy for Word Problems We have reviewed translating English phrases into algebraic expressions, using some basic mathematical vocabulary and symbols. We have also translated English sentences into algebraic equations and solved some word problems. The word problems applied math to everyday situations. We restated the situation in one sentence, assigned a variable, and then wrote an equation to solve the problem. This method works as long as the situation is familiar and the math is not too complicated. Now, we’ll expand our strategy so we can use it to successfully solve any word problem. We’ll list the strategy here, and then we’ll use it to solve some problems. We summarize below an effective strategy for problem solving. How To Use a Problem-Solving Strategy to Solve Word Problems. Step 1. Read the problem. Make sure all the words and ideas are understood. Step 2. Identify what we are looking for. Step 3. Name what we are looking for. Choose a variable to represent that quantity. Step 4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Then, translate the English sentence into an algebraic equation. Step 5. Solve the equation using good algebra techniques. Step 6. Check the answer in the problem and make sure it makes sense. Step 7. Answer the question with a complete sentence. Example 3.1 Pilar bought a purse on sale for $18, which is one-half of the original price. What was the original price of the purse? Solution Step 1. Read the problem. Read the problem two or more times if necessary. Look up any unfamiliar words in a dictionary or on the internet. In this problem, is it clear what is being discussed? Is every word familiar? Step 2. Identify what you are looking for. Did you ever go into your bedroom to get something and then forget what you were looking for? It’s hard to find something if you are not sure what it is! Read the problem again and look for words that tell you what you are looking for! In this problem, the words “what was the original price of the purse” tell us what we need to find. Step 3. Name what we are looking for. Choose a variable to represent that quantity. We can use any letter for the variable, but choose one that makes it easy to remember what it represents. Let p=p=p= the original price of the purse. Step 4. Translate into an equation. It may be helpful to restate the problem in one sentence with all the important information. Translate the English sentence into an algebraic equation. Reread the problem carefully to see how the given information is related. Often, there is one sentence that gives this information, or it may help to write one sentence with all the important information. Look for clue words to help translate the sentence into algebra. Translate the sentence into an equation. Restate the problem in one sentence with all the important information. Translate into an equation. Step 5. Solve the equation using good algebraic techniques. Even if you know the solution right away, using good algebraic techniques here will better prepare you to solve problems that do not have obvious answers. Solve the equation. Multiply both sides by 2. Simplify. Step 6. Check the answer in the problem to make sure it makes sense. We solved the equation and found that p=36,p=36,p=36, which means “the original price” was $36. Does $36 make sense in the problem? Yes, because 18 is one-half of 36, and the purse was on sale at half the original price. Step 7. Answer the question with a complete sentence. The problem asked “What was the original price of the purse?” The answer to the question is: “The original price of the purse was $36.” If this were a homework exercise, our work might look like this: Pilar bought a purse on sale for $18, which is one-half the original price. What was the original price of the purse? Let p=p=p= the original price. 18 is one-half the original price. Multiply both sides by 2. Simplify. Check. Is $36 a reasonable price for a purse? Yes. Is 18 one half of 36? 18=?1 2⋅36 18=?1 2⋅36 18=?1 2⋅36 18=18✓18=18✓18=18✓ The original price of the purse was $36. Try It 3.1 Joaquin bought a bookcase on sale for $120, which was two-thirds of the original price. What was the original price of the bookcase? Try It 3.2 Two-fifths of the songs in Mariel’s playlist are country. If there are 16 country songs, what is the total number of songs in the playlist? Let’s try this approach with another example. Example 3.2 Ginny and her classmates formed a study group. The number of girls in the study group was three more than twice the number of boys. There were 11 girls in the study group. How many boys were in the study group? Solution Step 1. Read the problem. Step 2. Identify what we are looking for.How many boys were in the study group? Step 3. Name. Choose a variable to represent the number of boys.Let n=n=n= the number of boys. Step 4. Translate. Restate the problem in one sentence with all the important information. Translate into an equation. Step 5. Solve the equation. Subtract 3 from each side. Simplify. Divide each side by 2. Simplify. Step 6. Check. First, is our answer reasonable? Yes, having 4 boys in a study group seems OK. The problem says the number of girls was 3 more than twice the number of boys. If there are four boys, does that make eleven girls? Twice 4 boys is 8. Three more than 8 is 11. Step 7. Answer the question.There were 4 boys in the study group. Try It 3.3 Guillermo bought textbooks and notebooks at the bookstore. The number of textbooks was 3 more than twice the number of notebooks. He bought 7 textbooks. How many notebooks did he buy? Try It 3.4 Gerry worked Sudoku puzzles and crossword puzzles this week. The number of Sudoku puzzles he completed is eight more than twice the number of crossword puzzles. He completed 22 Sudoku puzzles. How many crossword puzzles did he do? Solve Number Problems Now that we have a problem solving strategy, we will use it on several different types of word problems. The first type we will work on is “number problems.” Number problems give some clues about one or more numbers. We use these clues to write an equation. Number problems don’t usually arise on an everyday basis, but they provide a good introduction to practicing the problem solving strategy outlined above. Example 3.3 The difference of a number and six is 13. Find the number. Solution Step 1. Read the problem. Are all the words familiar? Step 2. Identify what we are looking for.the number Step 3. Name. Choose a variable to represent the number.Let n=n=n= the number. Step 4. Translate. Remember to look for clue words like "difference... of... and..." Restate the problem as one sentence. Translate into an equation. Step 5. Solve the equation. Simplify. Step 6. Check. The difference of 19 and 6 is 13. It checks! Step 7. Answer the question.The number is 19. Try It 3.5 The difference of a number and eight is 17. Find the number. Try It 3.6 The difference of a number and eleven is −7.−7.−7. Find the number. Example 3.4 The sum of twice a number and seven is 15. Find the number. Solution Step 1. Read the problem. Step 2. Identify what we are looking for.the number Step 3. Name. Choose a variable to represent the number.Let n=n=n= the number. Step 4. Translate. Restate the problem as one sentence. Translate into an equation. Step 5. Solve the equation. Subtract 7 from each side and simplify. Divide each side by 2 and simplify. Step 6. Check. Is the sum of twice 4 and 7 equal to 15? 2⋅4+7 15≟=15 15✓2⋅4+7≟15 15=15✓2⋅4+7≟15 15=15✓ Step 7. Answer the question.The number is 4. Did you notice that we left out some of the steps as we solved this equation? If you’re not yet ready to leave out these steps, write down as many as you need. Try It 3.7 The sum of four times a number and two is 14. Find the number. Try It 3.8 The sum of three times a number and seven is 25. Find the number. Some number word problems ask us to find two or more numbers. It may be tempting to name them all with different variables, but so far we have only solved equations with one variable. In order to avoid using more than one variable, we will define the numbers in terms of the same variable. Be sure to read the problem carefully to discover how all the numbers relate to each other. Example 3.5 One number is five more than another. The sum of the numbers is 21. Find the numbers. Solution Step 1. Read the problem. Step 2. Identify what we are looking for.We are looking for two numbers. Step 3. Name. We have two numbers to name and need a name for each. Choose a variable to represent the first number.Let n=1 st n=1 st n=1 st number. What do we know about the second number?One number is five more than another. n+5=2 nd n+5=2 nd n+5=2 nd number Step 4. Translate.Restate the problem as one sentence with all the important information.The sum of the 1 st number and the 2 nd number is 21. Translate into an equation. Substitute the variable expressions. Step 5. Solve the equation. Combine like terms. Subtract 5 from both sides and simplify. Divide by 2 and simplify. Find the second number, too. Step 6. Check. Do these numbers check in the problem? Is one number 5 more than the other?13≟8+5 13≟8+5 13≟8+5 Is thirteen 5 more than 8? Yes.13=13✓13=13✓13=13✓ Is the sum of the two numbers 21?8+13≟21 8+13≟21 8+13≟21 21=21✓21=21✓21=21✓ Step 7. Answer the question.The numbers are 8 and 13. Try It 3.9 One number is six more than another. The sum of the numbers is twenty-four. Find the numbers. Try It 3.10 The sum of two numbers is fifty-eight. One number is four more than the other. Find the numbers. Example 3.6 The sum of two numbers is negative fourteen. One number is four less than the other. Find the numbers. Solution Step 1. Read the problem. Step 2. Identify what we are looking for.We are looking for two numbers. Step 3. Name. Choose a variable.Let n=1 st n=1 st n=1 st number. One number is 4 less than the other.n−4=2 nd n−4=2 nd n−4=2 nd number Step 4. Translate. Write as one sentence.The sum of the 2 numbers is negative 14. Translate into an equation. Step 5. Solve the equation. Combine like terms. Add 4 to each side and simplify. Simplify. Step 6. Check. Is −9 four less than −5?−5−4≟−9−5−4≟−9−5−4≟−9 −9=−9✓−9=−9✓−9=−9✓ Is their sum −14?−5+(−9)≟−14−5+(−9)≟−14−5+(−9)≟−14 −14=−14✓−14=−14✓−14=−14✓ Step 7. Answer the question.The numbers are −5 and −9. Try It 3.11 The sum of two numbers is negative twenty-three. One number is seven less than the other. Find the numbers. Try It 3.12 The sum of two numbers is −18.−18.−18. One number is 40 more than the other. Find the numbers. Example 3.7 One number is ten more than twice another. Their sum is one. Find the numbers. Solution Step 1. Read the problem. Step 2. Identify what you are looking for.We are looking for two numbers. Step 3. Name. Choose a variable.Let x=1 st x=1 st x=1 st number. One number is 10 more than twice another.2 x+10=2 nd 2 x+10=2 nd 2 x+10=2 nd number Step 4. Translate. Restate as one sentence.Their sum is one. The sum of the two numbers is 1. Translate into an equation. Step 5. Solve the equation. Combine like terms. Subtract 10 from each side. Divide each side by 3. Step 6. Check. Is ten more than twice −3 equal to 4?2(−3)+10≟4 2(−3)+10≟4 2(−3)+10≟4 −6+10≟4−6+10≟4−6+10≟4 4=4✓4=4✓4=4✓ Is their sum 1?−3+4≟1−3+4≟1−3+4≟1 1=1✓1=1✓1=1✓ Step 7. Answer the question.The numbers are −3 and 4. Try It 3.13 One number is eight more than twice another. Their sum is negative four. Find the numbers. Try It 3.14 One number is three more than three times another. Their sum is −5.−5.−5. Find the numbers. Some number problems involve consecutive integers. Consecutive integers are integers that immediately follow each other. Examples of consecutive integers are: 1,2,3,4−10,−9,−8,−7 150,151,152,153 1,2,3,4−10,−9,−8,−7 150,151,152,153 1,2,3,4−10,−9,−8,−7 150,151,152,153 Notice that each number is one more than the number preceding it. So if we define the first integer as n, the next consecutive integer is n+1.n+1.n+1. The one after that is one more than n+1,n+1,n+1, so it is n+1+1,n+1+1,n+1+1, which is n+2.n+2.n+2. n n+1 n+2 1 st integer 2 nd consecutive integer 3 rd consecutive integer . . . etc.n 1 st integer n+1 2 nd consecutive integer n+2 3 rd consecutive integer . . . etc.n 1 st integer n+1 2 nd consecutive integer n+2 3 rd consecutive integer . . . etc. Example 3.8 The sum of two consecutive integers is 47. Find the numbers. Solution Step 1. Read the problem. Step 2. Identify what you are looking for.two consecutive integers Step 3. Name each number.Let n=1 st n=1 st n=1 st integer. n+1=n+1=n+1= next consecutive integer Step 4. Translate. Restate as one sentence.The sum of the integers is 47. Translate into an equation. Step 5. Solve the equation. Combine like terms. Subtract 1 from each side. Divide each side by 2. Step 6. Check. 23+24 47≟=47 47✓23+24≟47 47=47✓23+24≟47 47=47✓ Step 7. Answer the question.The two consecutive integers are 23 and 24. Try It 3.15 The sum of two consecutive integers is 95.95.95. Find the numbers. Try It 3.16 The sum of two consecutive integers is −31.−31.−31. Find the numbers. Example 3.9 Find three consecutive integers whose sum is −42.−42.−42. Solution Step 1. Read the problem. Step 2. Identify what we are looking for.three consecutive integers Step 3. Name each of the three numbers.Let n=1 st n=1 st n=1 st integer. n+1=n+1=n+1= 2 nd consecutive integer n+2=n+2=n+2= 3 rd consecutive integer Step 4. Translate. Restate as one sentence.The sum of the three integers is −42. Translate into an equation. Step 5. Solve the equation. Combine like terms. Subtract 3 from each side. Divide each side by 3. Step 6. Check. −13+(−14)+(−15)−42≟=−42−42✓−13+(−14)+(−15)≟−42−42=−42✓−13+(−14)+(−15)≟−42−42=−42✓ Step 7. Answer the question.The three consecutive integers are −13, −14, and −15. Try It 3.17 Find three consecutive integers whose sum is −96.−96.−96. Try It 3.18 Find three consecutive integers whose sum is −36.−36.−36. Now that we have worked with consecutive integers, we will expand our work to include consecutive even integers and consecutive odd integers. Consecutive even integers are even integers that immediately follow one another. Examples of consecutive even integers are: 18,20,22 64,66,68−12,−10,−8 18,20,22 64,66,68−12,−10,−8 18,20,22 64,66,68−12,−10,−8 Notice each integer is 2 more than the number preceding it. If we call the first one n, then the next one is n+2.n+2.n+2. The next one would be n+2+2 n+2+2 n+2+2 or n+4.n+4.n+4. n n+2 n+4 1 st even integer 2 nd consecutive even integer 3 rd consecutive even integer . . . etc.n 1 st even integer n+2 2 nd consecutive even integer n+4 3 rd consecutive even integer . . . etc.n 1 st even integer n+2 2 nd consecutive even integer n+4 3 rd consecutive even integer . . . etc. Consecutive odd integers are odd integers that immediately follow one another. Consider the consecutive odd integers 77, 79, and 81. 77,79,81 n,n+2,n+4 77,79,81 n,n+2,n+4 77,79,81 n,n+2,n+4 n n+2 n+4 1 st odd integer 2 nd consecutive odd integer 3 rd consecutive odd integer . . . etc.n 1 st odd integer n+2 2 nd consecutive odd integer n+4 3 rd consecutive odd integer . . . etc.n 1 st odd integer n+2 2 nd consecutive odd integer n+4 3 rd consecutive odd integer . . . etc. Does it seem strange to add 2 (an even number) to get from one odd integer to the next? Do you get an odd number or an even number when we add 2 to 3? to 11? to 47? Whether the problem asks for consecutive even numbers or odd numbers, you don’t have to do anything different. The pattern is still the same—to get from one odd or one even integer to the next, add 2. Example 3.10 Find three consecutive even integers whose sum is 84. Solution Step 1. Read the problem. Step 2. Identify what we are looking for.three consecutive even integers Step 3. Name the integers.Let n=1 st n=1 st n=1 st even integer. n+2=2 nd n+2=2 nd n+2=2 nd consecutive even integer n+4=3 rd n+4=3 rd n+4=3 rd consecutive even integer Step 4. Translate. Restate as one sentence.The sume of the three even integers is 84. Translate into an equation.n+n+2+n+4=84 n+n+2+n+4=84 n+n+2+n+4=84 Step 5. Solve the equation. Combine like terms.n+n+2+n+4=84 n+n+2+n+4=84 n+n+2+n+4=84 Subtract 6 from each side.3 n+6=84 3 n+6=84 3 n+6=84 Divide each side by 3.3 n=78 3 n=78 3 n=78 n=26 n+2 26+2 28 n+4 26+4 30 1 st integer 2 nd integer 3 rd integer n=26 1 st integer n+2 2 nd integer 26+2 28 n+4 3 rd integer 26+4 30 n=26 1 st integer n+2 2 nd integer 26+2 28 n+4 3 rd integer 26+4 30 Step 6. Check. 26+28+30=?84 84=84✓26+28+30=?84 84=84✓26+28+30=?84 84=84✓ Step 7. Answer the question.The three consecutive integers are 26, 28, and 30. Table 3.1 Try It 3.19 Find three consecutive even integers whose sum is 102. Try It 3.20 Find three consecutive even integers whose sum is −24.−24.−24. Example 3.11 A married couple together earns $110,000 a year. The wife earns $16,000 less than twice what her husband earns. What does the husband earn? Solution Step 1. Read the problem. Step 2. Identify what we are looking for.How much does the husband earn? Step 3. Name. Choose a variable to represent the amount the husband earns.Let h=h=h= the amount the husband earns. The wife earns $16,000 less than twice that.2 h−16,000 2 h−16,000 2 h−16,000 the amount the wife earns. Step 4. Translate.Together the husband and wife earn $110,000. Restate the problem in one sentence with all the important information. Translate into an equation. Step 5. Solve the equation.h + 2h − 16,000 = 110,000 Combine like terms.3h − 16,000 = 110,000 Add 16,000 to both sides and simplify.3h = 126,000 Divide each side by 3.h = 42,000 $42,000 amount husband earns 2h − 16,000 amount wife earns 2(42,000) − 16,000 84,000 − 16,000 68,000 Step 6. Check. If the wife earns $68,000 and the husband earns $42,000 is the total $110,000? Yes! Step 7. Answer the question.The husband earns $42,000 a year. Try It 3.21 According to the National Automobile Dealers Association, the average cost of a car in 2014 was $28,500. This was $1,500 less than 6 times the cost in 1975. What was the average cost of a car in 1975? Try It 3.22 U.S. Census data shows that the median price of new home in the United States in November 2014 was $280,900. This was $10,700 more than 14 times the price in November 1964. What was the median price of a new home in November 1964? Section 3.1 Exercises Practice Makes Perfect Use the Approach Word Problems with a Positive Attitude In the following exercises, prepare the lists described. 1. List five positive thoughts you can say to yourself that will help you approach word problems with a positive attitude. You may want to copy them on a sheet of paper and put it in the front of your notebook, where you can read them often. List five negative thoughts that you have said to yourself in the past that will hinder your progress on word problems. You may want to write each one on a small piece of paper and rip it up to symbolically destroy the negative thoughts. Use a Problem-Solving Strategy for Word Problems In the following exercises, solve using the problem solving strategy for word problems. Remember to write a complete sentence to answer each question. 3. Two-thirds of the children in the fourth-grade class are girls. If there are 20 girls, what is the total number of children in the class? Three-fifths of the members of the school choir are women. If there are 24 women, what is the total number of choir members? 5. Zachary has 25 country music CDs, which is one-fifth of his CD collection. How many CDs does Zachary have? One-fourth of the candies in a bag of M&M’s are red. If there are 23 red candies, how many candies are in the bag? 7. There are 16 girls in a school club. The number of girls is four more than twice the number of boys. Find the number of boys. There are 18 Cub Scouts in Pack 645. The number of scouts is three more than five times the number of adult leaders. Find the number of adult leaders. 9. Huong is organizing paperback and hardback books for her club’s used book sale. The number of paperbacks is 12 less than three times the number of hardbacks. Huong had 162 paperbacks. How many hardback books were there? Jeff is lining up children’s and adult bicycles at the bike shop where he works. The number of children’s bicycles is nine less than three times the number of adult bicycles. There are 42 adult bicycles. How many children’s bicycles are there? 11. Philip pays $1,620 in rent every month. This amount is $120 more than twice what his brother Paul pays for rent. How much does Paul pay for rent? Marc just bought an SUV for $54,000. This is $7,400 less than twice what his wife paid for her car last year. How much did his wife pay for her car? 13. Laurie has $46,000 invested in stocks and bonds. The amount invested in stocks is $8,000 less than three times the amount invested in bonds. How much does Laurie have invested in bonds? Erica earned a total of $50,450 last year from her two jobs. The amount she earned from her job at the store was $1,250 more than three times the amount she earned from her job at the college. How much did she earn from her job at the college? Solve Number Problems In the following exercises, solve each number word problem. 15. The sum of a number and eight is 12. Find the number. The sum of a number and nine is 17. Find the number. 17. The difference of a number and 12 is three. Find the number. The difference of a number and eight is four. Find the number. 19. The sum of three times a number and eight is 23. Find the number. The sum of twice a number and six is 14. Find the number. 21. The difference of twice a number and seven is 17. Find the number. The difference of four times a number and seven is 21. Find the number. 23. Three times the sum of a number and nine is 12. Find the number. Six times the sum of a number and eight is 30. Find the number. 25. One number is six more than the other. Their sum is 42. Find the numbers. One number is five more than the other. Their sum is 33. Find the numbers. 27. The sum of two numbers is 20. One number is four less than the other. Find the numbers. The sum of two numbers is 27. One number is seven less than the other. Find the numbers. 29. The sum of two numbers is −45.−45.−45. One number is nine more than the other. Find the numbers. The sum of two numbers is −61.−61.−61. One number is 35 more than the other. Find the numbers. 31. The sum of two numbers is −316.−316.−316. One number is 94 less than the other. Find the numbers. The sum of two numbers is −284.−284.−284. One number is 62 less than the other. Find the numbers. 33. One number is 14 less than another. If their sum is increased by seven, the result is 85. Find the numbers. One number is 11 less than another. If their sum is increased by eight, the result is 71. Find the numbers. 35. One number is five more than another. If their sum is increased by nine, the result is 60. Find the numbers. One number is eight more than another. If their sum is increased by 17, the result is 95. Find the numbers. 37. One number is one more than twice another. Their sum is −5.−5.−5. Find the numbers. One number is six more than five times another. Their sum is six. Find the numbers. 39. The sum of two numbers is 14. One number is two less than three times the other. Find the numbers. The sum of two numbers is zero. One number is nine less than twice the other. Find the numbers. 41. The sum of two consecutive integers is 77. Find the integers. The sum of two consecutive integers is 89. Find the integers. 43. The sum of two consecutive integers is −23.−23.−23. Find the integers. The sum of two consecutive integers is −37.−37.−37. Find the integers. 45. The sum of three consecutive integers is 78. Find the integers. The sum of three consecutive integers is 60. Find the integers. 47. Find three consecutive integers whose sum is −36.−36.−36. Find three consecutive integers whose sum is −3.−3.−3. 49. Find three consecutive even integers whose sum is 258. Find three consecutive even integers whose sum is 222. 51. Find three consecutive odd integers whose sum is 171. Find three consecutive odd integers whose sum is 291. 53. Find three consecutive even integers whose sum is −36.−36.−36. Find three consecutive even integers whose sum is −84.−84.−84. 55. Find three consecutive odd integers whose sum is −213.−213.−213. Find three consecutive odd integers whose sum is −267.−267.−267. Everyday Math 57. Sale Price Patty paid $35 for a purse on sale for $10 off the original price. What was the original price of the purse? Sale Price Travis bought a pair of boots on sale for $25 off the original price. He paid $60 for the boots. What was the original price of the boots? 59. Buying in Bulk Minh spent $6.25 on five sticker books to give his nephews. Find the cost of each sticker book. Buying in Bulk Alicia bought a package of eight peaches for $3.20. Find the cost of each peach. 61. Price before Sales Tax Tom paid $1,166.40 for a new refrigerator, including $86.40 tax. What was the price of the refrigerator? Price before Sales Tax Kenji paid $2,279 for a new living room set, including $129 tax. What was the price of the living room set? Writing Exercises 63. What has been your past experience solving word problems? When you start to solve a word problem, how do you decide what to let the variable represent? 65. What are consecutive odd integers? Name three consecutive odd integers between 50 and 60. What are consecutive even integers? Name three consecutive even integers between −50−50−50 and −40.−40.−40. Self Check ⓐ After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. ⓑ If most of your checks were: …confidently. Congratulations! You have achieved your goals in this section! Reflect on the study skills you used so that you can continue to use them. What did you do to become confident of your ability to do these things? Be specific! …with some help. This must be addressed quickly as topics you do not master become potholes in your road to success. Math is sequential—every topic builds upon previous work. It is important to make sure you have a strong foundation before you move on. Whom can you ask for help? Your fellow classmates and instructor are good resources. Is there a place on campus where math tutors are available? Can your study skills be improved? …no—I don’t get it! This is critical and you must not ignore it. You need to get help immediately or you will quickly be overwhelmed. See your instructor as soon as possible to discuss your situation. Together you can come up with a plan to get you the help you need. PreviousNext Order a print copy Citation/Attribution This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission. Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax. Attribution information If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution: Access for free at If you are redistributing all or part of this book in a digital format, then you must include on every digital page view the following attribution: Access for free at Citation information Use the information below to generate a citation. We recommend using a citation tool such as this one. Authors: Lynn Marecek, MaryAnne Anthony-Smith, Andrea Honeycutt Mathis Publisher/website: OpenStax Book title: Elementary Algebra 2e Publication date: Apr 22, 2020 Location: Houston, Texas Book URL: Section URL: © Jul 8, 2025 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University. Our mission is to improve educational access and learning for everyone. OpenStax is part of Rice University, which is a 501(c)(3) nonprofit. Give today and help us reach more students. Help Contact Us Support Center FAQ OpenStax Press Newsletter Careers Policies Accessibility Statement Terms of Use Licensing Privacy Policy Manage Cookies © 1999-2025, Rice University. Except where otherwise noted, textbooks on this site are licensed under a Creative Commons Attribution 4.0 International License. Advanced Placement® and AP® are trademarks registered and/or owned by the College Board, which is not affiliated with, and does not endorse, this site.
14943
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0119712
Browse Subject Areas Click through the PLOS taxonomy to find articles in your field. For more information about PLOS Subject Areas, click here. Loading metrics Open Access Research Article Nitrate Reduction to Nitrite, Nitric Oxide and Ammonia by Gut Bacteria under Physiological Conditions Mauro Tiso, Affiliation Molecular Medicine Branch, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland, United States of America ⨯ Alan N. Schechter E-mail: alans@intra.niddk.nih.gov Affiliation Molecular Medicine Branch, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland, United States of America ⨯ Nitrate Reduction to Nitrite, Nitric Oxide and Ammonia by Gut Bacteria under Physiological Conditions Mauro Tiso, Alan N. Schechter x Published: March 24, 2015 Article Authors Metrics Comments Media Coverage Reader Comments Figures Correction 6 May 2015: Tiso M, Schechter AN (2015) Correction: Nitrate Reduction to Nitrite, Nitric Oxide and Ammonia by Gut Bacteria under Physiological Conditions. PLOS ONE 10(5): e0127490. View correction Figures Abstract The biological nitrogen cycle involves step-wise reduction of nitrogen oxides to ammonium salts and oxidation of ammonia back to nitrites and nitrates by plants and bacteria. Neither process has been thought to have relevance to mammalian physiology; however in recent years the salivary bacterial reduction of nitrate to nitrite has been recognized as an important metabolic conversion in humans. Several enteric bacteria have also shown the ability of catalytic reduction of nitrate to ammonia via nitrite during dissimilatory respiration; however, the importance of this pathway in bacterial species colonizing the human intestine has been little studied. We measured nitrite, nitric oxide (NO) and ammonia formation in cultures of Escherichia coli, Lactobacillus and Bifidobacterium species grown at different sodium nitrate concentrations and oxygen levels. We found that the presence of 5 mM nitrate provided a growth benefit and induced both nitrite and ammonia generation in E.coli and L.plantarum bacteria grown at oxygen concentrations compatible with the content in the gastrointestinal tract. Nitrite and ammonia accumulated in the growth medium when at least 2.5 mM nitrate was present. Time-course curves suggest that nitrate is first converted to nitrite and subsequently to ammonia. Strains of L.rhamnosus, L.acidophilus and B.longum infantis grown with nitrate produced minor changes in nitrite or ammonia levels in the cultures. However, when supplied with exogenous nitrite, NO gas was readily produced independently of added nitrate. Bacterial production of lactic acid causes medium acidification that in turn generates NO by non-enzymatic nitrite reduction. In contrast, nitrite was converted to NO by E.coli cultures even at neutral pH. We suggest that the bacterial nitrate reduction to ammonia, as well as the related NO formation in the gut, could be an important aspect of the overall mammalian nitrate/nitrite/NO metabolism and is yet another way in which the microbiome links diet and health. Citation: Tiso M, Schechter AN (2015) Nitrate Reduction to Nitrite, Nitric Oxide and Ammonia by Gut Bacteria under Physiological Conditions. PLoS ONE 10(3): e0119712. Academic Editor: David Jourd'heuil, Albany Medical College, UNITED STATES Received: December 2, 2014; Accepted: January 16, 2015; Published: March 24, 2015 This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication Data Availability: All relevant data are within the paper. Funding: The authors have no support or funding to report. Competing interests: Gladwin, M. T., Cannon, III, R. O., and Schechter, A. N.: Use of nitrite salts for the treatment of cardiovascular conditions. United States Patents 20,060,182,815; 20,070,154,569; 20,100,247,682. This does not alter the authors' adherence to all PLOS ONE policies on sharing data and materials. Introduction Nitric oxide (NO) is a highly diffusible short-lived free radical gas, permeating biomembranes with a wide range of physiological functions . Nitrate (NO3−) and nitrite (NO2−) anions have long been identified as stable products of NO oxidation, but in recent years the nitrate—nitrite—nitric oxide reductive pathway has emerged as an alternative route to the classical enzymatic NO formation by oxidation of L-arginine with molecular oxygen [2,3]. Accumulating evidence indicates that oral commensal bacteria are responsible for the enzymatic reduction of inorganic nitrate to nitrite, which is then reduced to NO in the stomach under acidic conditions by non-enzymatic disproportionation , or in other tissues under physiological and hypoxic/ischemic conditions by several biochemical reactions catalyzed by a variety of enzymes and proteins [5,6]. Nitrate in the human intestine originates both from endogenous synthesis and dietary products rich in nitrate . Recent and past data have demonstrated that nitrate-rich diets increase plasma and tissue levels of nitrite [9,10], but this cannot be accounted solely by nitrate reduction from oral bacteria and other mechanisms have been implicated and are under investigation . In healthy individuals dietary nitrate is usually well absorbed in the upper intestinal tract, however a considerable fraction of the daily nitrate intake (about 1/3) was found to reach the lower intestine while only 1% of it is recovered in the feces . Research studies performed in the early 80’s by Tannenbaum and colleagues on the metabolic pathways of nitrate, both in humans and rats [13,14], showed that after diet supplementation with 15NO3- about 50–60% of the ingested labeled nitrate was recovered in the urine while a small percentage (16% in rats and 3% in humans) appeared as 15NH4+ or [15N] urea and about 35% to 40% of the dose could not be recovered as excreted nitrogen-containing compounds. The metabolic fate of this unaccounted nitrate is still poorly understood. More recently, the normal bacterial flora has been shown to generate NO and gut luminal NO levels have been measured in vivo in rats [15,16]. Many enteric bacteria are also capable of catalytic reduction of nitrate to N2 gas (denitrification) under anaerobic conditions, or to ammonia via two-steps dissimilatory or assimilatory pathways [17,18]. We hypothesize that the nitrogen imbalance detected in the early metabolic studies cited above could be at least in part attributed to the gut microbiota conversion of nitrate to ammonia via nitrite reduction. The ammonia thus generated would likely be carried to the liver via the portal vein, where it can enter the urea cycle and be converted into urea and amino acids. Very little information exists regarding O2 concentrations in vivo in the various fluids of the intestinal tract: typically, the O2 level at the luminal surface has been reported to range from 2% to 7% [19–21]. However, as oxygen diffuses from the tissues underlying the mucosa, microbial activity will reduce its content, and the lumen of the colon has been considered for many aspects an anaerobic region. In this study we investigated the formation of nitrite, NO and ammonia in cultures of representative species of gut bacteria grown with added nitrate under controlled oxygen concentrations existing in the human gastro-intestinal tract. In particular we selected Escherichia coli, the best understood enteric bacteria, and four different species of lactic acid bacteria (listed in Table 1) that have been previously shown to generate a substantial amount of NO when supplemented with 0.1 mM nitrite in anaerobic conditions (16). Our findings suggest that, in the presence of relative high physiological nitrate concentrations, Escherichia coli and Lactobacillus plantarum, two common bacterial species colonizing the human intestine, generate nitrite and subsequently ammonia in an oxygen-dependent fashion. The importance of this pathway in vivo demands further studies. Download: PPT PowerPoint slide PNG larger image TIFF original image Table 1. Bacterial species and strains used in this study. Materials and Methods Growth Media and Reagents The rich medium Luria-Bertani (LB) broth is usually the media of choice for fast growth of E.coli. However, we found that different batches of LB broth (from different vendors) contained a considerable, but variable, amount of ammonia and therefore it was not considered suitable for this study and used exclusively for the preparation of E.coli inoculum. Instead a modified LMRS broth (Lactobacilli de Man, Rogosa and Sharp broth), made without ammonium citrate (Anaerobe System, CA) was used despite E.coli growth rate being considerably lower (about 4–5 fold) than the rate in LB broth. Analyses of the LMRS medium for nitrite and ammonia indicated low concentrations present (respectively less than 1 μM and about 23 μM). Nitrate was added as a filter-sterilized solution. Lactic acid bacteria cultures were supplemented, when indicated, with hemin (stock solution: 0.5 mg/mL in 0.05 M NaOH) to a final concentration of 2.5 μg/mL and vitamin K2 (menaquinone-4) (stock solution: 2 mg/mL in ethanol) to a final concentration of 0.2 μg/mL. All reagents were purchased from Sigma-Aldrich unless otherwise specified. Organisms, Culture Conditions and Sample Preparation A full list of bacteria strains used in this study is in Table 1. Stock culture collections were obtained from ATCC (Manassas, VA). Each strain was grown for individual experiments under the indicated oxygen concentration at 37°C. Bacterial cell concentration was monitored by measuring the optical density (OD) at 600 nm using a 1 cm pathlength cuvette. Typically, one hundred microliters of a 4 to 6 hours old inoculum of each strain with OD at 600 nm between 0.6–0.8 was added aseptically to 30–35 mL of broth in a 125 mL conical sterile flask, with shaking at 200 rpm. For high throughput, 6-well microtiter plates were filled with 4 mL medium/well, covered with breath-seals, shaken at 300 rpm and placed in a glovebox (Coy Laboratory Products, Grass Lake, MI). To vary the oxygen concentrations we supply the glovebox with different O2/N2 mixture adjusted accordingly and an oxygen sensor type Servoflex MiniMP-5200 was used to detect the exact oxygen concentration. At 0% O2, the glove box contained a gaseous atmosphere of about 2% H2 catalyst-deoxygenated nitrogen. The bacteria cultures were agitated using either a magnetic stirrer or a Micromixer Mxi4t. Samples were collected after 24 h or withdrawn at regular intervals, as indicated in the text, during time-course experiments and centrifuged at 10,000 × g for 15 min at 4°C. The cell-free supernatant and the cell pellet obtained were stored at −80°C. For further use the pellet was washed three times with 1 mL distilled water, weighted and re-suspended with enough PBS buffer to 10 mg/mL. The resulting suspension was sealed to prevent ammonia evaporation and used immediately to estimate ammonia and nitrite. The supernatant was used for nitrite and ammonia determination within 14 days. This prevented the loss of ammonia content in the samples as determined by comparison with standards prepared from 10 mM NH4Cl. To determine the colony forming units (CFU) one mL aliquot was collected and the 10−2, 10−3, 10−4 dilutions were plated onto LMRS agar (pH 6.5). All plates were incubated at 37°C until colonies were evident and counted manually. Analytical procedures Determination of nitrite. To accurately measure nitrite concentration in cultures media and pellets after bacterial growth we used an acidic tri-iodide-based gas phase chemiluminescence method with a Sievers NO analyzer instrument (NOA, model 280i, GE Analytical Instruments, Boulder, CO, USA) as described previously . Determination of ammonia. Ammonia concentrations in all culture samples were determined using two commercially available colorimetric assay kits optimized for 96 well plate reader (BioVision Inc., Milpitas, CA) respectively based on ammonia enzymatic conversion (OD at 570 nm) and a modified Berthelot non-enzymatic reaction (OD at 670 nm). Both are more reliable and sensitive that the method based on measuring NADPH oxidation (OD at 340 nm). Modified protocols were designed to limit interference due to low pH by buffering samples with 100 mM Tris-HCl at pH = 7.5 and diluting accordingly to assure the readings were within the standard curve range (prepared every time using NH4Cl standard solutions). For the non-enzymatic reaction, samples were deproteinized prior to testing using a 10kDa cutoff spin column filter. Measurement of NO emission in the gas phase. NO gas liberated from live bacteria was measured in real-time by ozone-based chemiluminescence NO analyzer (CLD88Y; Eco Physics Inc., Ann Arbor, MI). Experiments were carried out as following: 10 to 100 mL of bacteria with their growth media at OD600 approximately 1.0 were placed in a spinner flask kept at 37°C while stirring and purged either with N2 gas for anaerobic conditions or with 2% O2 / 98% N2 gas mixture for low O2 conditions with the flow rate strictly regulated to 50 mL/min. For each experiment the colony-forming unit was obtained as described above and the amount of NO determined was expressed in ppb/109 CFU. Once a stable baseline was established the indicated amount of nitrite was injected in the mixture as previously described . To test for bacterial NOS activity nitrite injection was replaced with arginine or L-NAME as indicated. We verify that the release of NO into the gas phase from the solutions can be used as a continuous measurement for the NO production building a calibration curve with amounts of NO produced by the injection of sodium nitrite standards into an 0.1 M HCl acidified solution containing 10 mM ascorbic acid. Determination of lactic acid. The total amount of D-/L-lactic acids produced during growth was determined enzymatically using a spectrophotometric (absorbance at 340 nm) commercial test kit (NZYTech, Lisbon, Portugal). The assay was performed on the supernatant of cultures obtained after centrifugation at 5000 rpm for 10 min and diluted appropriately. Statistical Analysis Each experiment was performed in triplicate, and values are expressed as mean ± standard deviation (SD) from determinations representative of two or more independently grown bacterial cultures. Data were analyzed using Origin 8.1 (OriginLab Corp., Northampton, MA). To account for the small growth differences between each bacterial batch the values for nitrite and ammonia determined in the cell free supernatant were normalized using the OD at 600 nm measured after 24 h growth. Analysis for statistically significant differences among mean values was done, when applicable, using the one-way analysis of variance. Error bars represent the SD of the measurement. Results Effect of nitrate and oxygen on the growth patterns of E.coli in LMRS We first compared E.coli MG1655 strain growth patterns in aerated, low oxygen (2%) and anaerobic cultures grown in modified LMRS broth supplemented with or without 5 mM nitrate, a concentration compatible with levels found in the upper intestinal tract of healthy volunteers and with values measured in the mouse intestinal mucus . In Fig. 1A we plotted the OD of samples withdrawn at the same time intervals but different O2 levels: the aerobic condition (atmospheric concentration of 200 mbar or 21% O2) shows a shorter lag phase and, as expected, an earlier exponential growth phase starting at lower cell density than the 2% O2 and the anaerobic conditions (black lines with closed symbols), defined as concentrations of O2 below 5 mbar or 0.5% (about 0.1% in our experiments). The presence of 5 mM nitrate provided a clear growth benefit to E.coli cultures maintained at 2% O2 or in anaerobic conditions (red lines with open symbols) and partially restored the growth to levels found in the aerobic conditions. Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 1. Nitrate and oxygen effect on E.coli bacterial cultures growth and formation of nitrite and ammonia. (A) Growth curves for E.coli MG1655 grown in the absence (black closed symbols) or in the presence of 5 mM nitrate (red open symbols) at 37°C in LMRS broth at 21%, 2%, and 0% O2 concentrations (respectively square, circle and diamond symbols). (B) Concentration of nitrite and ammonia (blue and red solid lines) in E.coli pellets after 24 h growth at different oxygen levels with 5 mM nitrate. (C) and (D) Respectively dependence on nitrate (at 0% O2) and oxygen (at 5 mM nitrate) of nitrite and ammonia concentrations in the cell-free culture media after 24h growth. The ammonia content of LMRS alone is indicated by the dashed lines. Values are means ± SD (n = 3). The average SD resulted smaller than the symbols dimensions (0.04 OD) and it is not shown for clarity. Effect of oxygen on E.coli production of nitrite and ammonia from nitrate The E.coli genome encodes at least three distinct nitrate reductase enzymes that are known to be expressed during anaerobic respiration . These enzymes use nitrate as electron acceptor and produce nitrite which become toxic to the cell upon reaching high intracellular concentrations and is therefore transported outside the cell wall . Alongside with transport E.coli expresses two nitrite reductase enzymes (Nrf and Nir) that detoxify NO2- by rapidly converting it to ammonia through a six-electron reduction . However, it is unknown how O2 levels affect these processes. We therefore measured both nitrite and ammonia in cell pellets (Fig. 1B) and cell–free supernatant (Fig. 1D) obtained from E.coli cultures grown for 24 h at 37°C and different oxygen concentrations with 5 mM nitrate added. In Fig. 1B the concentrations of nitrite in cell pellets (nmoles per mg of wet weight) show an increasing production starting at 4% O2 and reaching the maximum at 0% O2. The amount of ammonia detected followed the same trend but was at least 5 to 6 fold lower respect to the nitrite concentrations. The high nitrite/ammonia ratio (compared with values detected in the media) and the large errors in the ammonia measurements could be due to a considerable percentage of ammonia evaporating from the pellets while manipulating the samples at room temperature and for the remainder of this study we determined nitrate metabolites only in the cell-free supernatant. We then determined nitrite and ammonia in the media of E.coli cultures grown anaerobically for 24 h with added nitrate in the range 0 to 20 mM (Fig. 1C). Nitrite and ammonia concentrations remained steady when nitrate concentrations were lower or equal to 1.0 mM. However, when nitrate reached 2.5 mM both metabolites accumulated in the media and their concentrations were greatly increased and reached a maximum in the range 5–20 mM nitrate (to around 260 μM for NO2− and 180 μM for NH4+). Nitrate reduction by lactic acid bacteria cultures Lactic acid bacteria (LAB) are facultative anaerobe organisms that grow in abundance in the digestive tract of vertebrate animals. LAB also represent some of the most commonly used probiotic bacteria and are extensively used for the production of fermented foods (yogurts, cheeses, sausages, pickles, etc.). It was believed that LAB depend strictly on a fermentative mode of metabolism since they do not possess heme containing enzymes essential for the respiratory chain. However, over the past 30 years it has been shown that many Lactobacilli species can incorporate heme from the environment and utilize menaquinones, also known as vitamins K, to eventually perform respiration . In this regards, it is important to note that E.coli synthesize both heme and vitamins K during growth facilitating membranous electron transfer. Brooijmans et al. have also recently reported that exogenous addition of menaquinone 4 (vitamin K2) along with heme stimulate nitrate reduction in L.plantarum. We then grow single cultures of L.rhamnosus, L.acidophilus, L.plantarum and B.longum subsp. infantis species supplemented with heme and vitamin K2 and we compared the effect of oxygen and nitrate variations on the generation of nitrite and ammonia (Fig. 2). Although these strains are categorized as microaerophilic , in our experiments all strains grew well even at 21% O2 except for B.longum subsp. infantis which we confirmed as moderately aero-tolerant. First we compared nitrite and ammonia formation at different nitrate concentrations with the O2 level fixed at 2%, a partial pressure similar to the one found on the luminal surface of the intestinal mucosa (Fig. 2 A and B). Nitrate concentrations equal to or above 2.5 mM had a significant effect on nitrite and ammonia production only in the L.plantarum cultures. A smaller, but still considerable, effect on nitrite generation was observed in L.rhamnosus and L.acidophilus cultures and no significant changes were measured in B.longum infantis. We then fixed the nitrate concentration in the bacterial cultures to 5 mM, a level sufficient to show a clear effect on nitrite and ammonia generation both in E.coli and L.plantarum, and varied the O2 concentrations between 0% and 21% (the remaining being N2 gas) (Fig. 2 C and D). LAB cultures that grew at O2 levels equivalent or greater than 6% showed no significant changes in nitrite or ammonia. However when the O2 tension was 4% or lower nitrite and ammonia were both generated and excreted in the media and reached the highest concentrations in cultures grown under anaerobic conditions. Of note, cultures of B. longum infantis did not grow at, or above, 6% O2 and showed negligible content of nitrite and low ammonia independent of the oxygen level. Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 2. The effect of nitrate and oxygen gradients on the generation of nitrite and ammonia in different LAB cultures. Nitrite (A) and ammonia (B) concentrations were measured in LMRS media after 24 h growth at 2% O2 with supplementation of different nitrate concentrations (0 to 10 mM). Similarly in (C) and (D) nitrate was fixed at 5 mM and we measured nitrite and ammonia dependence on oxygen concentrations (0, 2, 4, 6, 10 and 21%). Each point represents the mean ± SD (n = 3). Time-course of concurrent nitrite and ammonia formation We measured relatively high concentrations of nitrite and ammonia in E.coli and L.plantarum cultures after 24 h growth in the presence of nitrate. Previous works on different E. coli strains have suggested that nitrate reductase molybdo-enzymes induced during anaerobic growth are responsible for nitrite (and possibly NO) formation and that the periplasmic nitrite reductase complex Nrf reduces nitrite directly to ammonium ion . However this enzyme is subject to repression by oxygen and induction by high nitrite concentrations. To clarify the timeline of nitrite and ammonia formation under 2% O2, we determined the concentrations of these metabolites in LMRS supplemented with 5 mM nitrate during 48 h growth of E.coli and L.plantarum cultures, respectively indicated with red and blue solid lines in Fig. 3. In order to limit the effects of the different growth rates between species and batch cultures, in this experiment nitrate was added after the organisms exponential phase of growth upon cultures media reaching OD about 1.0 (time point = 0). We found that nitrite concentrations begun to increase within 3 to 6 hours after nitrate addition and accumulated steadily to reach a maximum in 30–36 h at about 0.4 mM for E.coli and 0.1 mM for L.plantarum (Fig. 3A). Similarly the ammonia concentrations plotted in Fig. 3B show well the almost linear and continuous increase measured between 9 and 30 h in both species but with different rates. To facilitate comparison between the 2 species the ammonia concentrations were normalized to zero at the time of nitrate addition (t = 0) and we reported also the values obtained for cultures without nitrate. Of particular note, nitrite concentration in L.plantarum increased until around 30 h and afterwards begun to decrease while ammonia kept increasing further. The formation of ammonia from nitrate indeed is proposed to occur via nitrite in two successive elementary steps, each with its rate law and characteristic kinetic parameters. However, nitrite can be reduced by other pathways (both chemical and enzymatic) and the correlation between the simultaneous multiple conversions would determine the nitrite/ammonia ratio. We plotted this ratio in Fig. 3C resulting approximately in asymmetric bell-shaped curves for both species. This result suggests that nitrate is first converted to nitrite and after some accumulation it is reduced to ammonia or other reduced nitrogen compounds. Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 3. Time-course curves suggest that nitrate is first converted to nitrite and subsequently to ammonia. Samples from E.coli (red line with circles) and L. plantarum (blue line with triangles) cultures grown at 2% O2 with 5 mM nitrate were collected at regular intervals and analyzed for nitrite (panel A) and ammonia (panel B). (C). Ratio between nitrite and ammonia concentrations measured at each time point. Black lines represent E.coli cultures containing no added nitrate. NO production by bacterial cell suspensions In 1988, Ji and Hollocher first observed the generation of NO from nitrite by E.coli in anaerobic conditions and later concluded that nitrite-dependent NO production was due to the activity of the respiratory membrane-associated nitrate reductase enzymes . However, several following studies on bacterial NO formation have proposed different mechanisms independent of respiratory denitrification such as arginine dependent bacterial NOS enzymatic activity, DNRA and non-enzymatic processes [15,32–34]. Here we used chemiluminescence techniques to measure NO in gas phase generated by a suspension of bacteria cells kept at 37°C under controlled O2 levels as described in “Materials and Methods”. In Fig. 4A we plotted the amount of NO detected after injection of 100 μM exogenous nitrite in a flask containing E.coli bacteria grown for 24 h in media supplemented with the indicated nitrate concentration. Under 2% O2 small amounts of NO (< 40 ppb/109 CFU) were detected almost independently from the concentration of nitrate supplied. Inversely, much larger quantities were measured in analogous experiments after anaerobic growth and the response roughly correlated with the increasing nitrate concentrations. These results indicate that at least 2 processes producing NO are present in E.coli: one predominates at 2% O2 tension and the other in anaerobic atmosphere. Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 4. Bacterial NO generation and correlation with acidity of the growth medium. (A) Chemiluminescence detection of NO emission after injection of 100 μM nitrite in the vessel containing E.coli grown at different nitrate and oxygen conditions in modified LMRS broth for 24h. (B) Comparison of NO emission at 2% O2 as in panel A, but using LAB cultures. Diagonal patter and solid bars indicate respectively the values detected before and after bacteria were re-suspended in fresh media at pH = 6.5 (as described in text). (C) Quantification of the amount of NO detected after injection of 100 μM or 250 μM nitrite in fresh LMRS media at different pH obtained by acidification with concentrated L-lactic acid. All data represent mean ± SD in duplicates. In Fig. 4B we then compared the generation of NO from the LAB species considered in this study and grown as described earlier with 5 mM nitrate at 2% O2. All the cultures produced a considerable but variable amount of NO after 100 μM nitrite injection, while the same addition to fresh LMRS, with or without nitrate, had no effect (bars with oblique lines). LAB cultures are well known to produce substantial acidification of the media due to the fermentation of glucose primarily to lactic acid and we confirmed its formation in large amounts (mM) by direct detection. The concentrations measured at 24 h are reported in Table 1 together with the corresponding culture broth final pH. The initial pH (6.5) decreased variably to values between 3.9 and 5.0 depending on the LAB species and the difference was the least for L. plantarum. To elucidate if this acidification could be responsible for the generation of NO by the known non-enzymatic nitrite disproportionation, each bacterial preparation was split in equal volumes (10 ml) and tested before and after the replacement of its growth media with fresh LMRS by short centrifugation and decantation. This procedure resulted in the almost complete halt of NO generation in all LAB (solid bars in Fig. 4B) but only partially in E.coli cultures. Lysis by brief sonication in fresh media of the LAB cells to extract the cytosolic enzymes did not restored the generation of NO after injection of nitrite. We concluded that LAB production of lactic acid causes sufficient medium acidification to induce the chemical nitrite conversion to NO, instead this process in E.coli is enzymatic and occurs around pH 6.5. To verify our conclusions we measured the NO generation from 100 μM and 250 μM nitrite added to fresh LMRS medium containing sufficient L-lactic acid to adjust its final pH in the range 4.0–6.3 (Fig. 4C). Increasing amounts of NO were produced by the non-enzymatic nitrite disproportionation as the media pH decreased. The logarithmic plot of the total amount of NO detected (ppb) versus the pH revealed a linear correlation, similar to previously reported results for acidified nitrite solution in MRS broth or phosphate buffer . Finally, we examined if NO was generated by bacterial NOS supplementing the cultures above with either 100 μM or 500 μM L-arginine, however this produced no appreciable changes in NO production independently from the oxygen and nitrate concentrations used (data not shown). Furthermore, addition of the NOS inhibitor L-NAME (100 μM) did not produce any decrease in the NO signal, which is expected in the presence of NOS enzymatic conversion (data not shown). Discussion The human microbiota comprises more than a thousand distinct bacterial species and plays a major role in human health by promoting nutrient supply, preventing pathogen colonization and shaping and maintaining normal mucosal immunity. Commensal gut bacteria have recently been appreciated as having a true symbiotic relationship with the host [36,37]; within this large pool of bacteria, probiotic supplements containing LAB (i.e. Lactobacilli and Bifidobacteria) have been claimed to have a variety of beneficial effects on human health, such as prevention of diarrhea and inflammatory bowel disease or prophylaxis of urogenital infections . However, our knowledge of the biochemical roles that specific species and strains play in human health and disease is severely limited. In this study we aimed to advance the understanding of the nitrate reduction pathways in selected common bacterial species colonizing the human intestine using in vitro conditions compatible with nitrate-rich diets and oxygen levels found on the mucosal surfaces of the GI tract. The primary findings of our investigation indicate that: 1) E.coli, a facultative anaerobe, convert nitrate to nitrite and subsequently to ammonia which progressively accumulates in culture media; 2) L.plantarum, a fermentative bacteria, grown in the presence of exogenous heme and vitamin K2 perform similar processes; 3) E.coli enzymes generate significant NO from nitrite only under anaerobic conditions; 4) All LAB cultures examined generate large amounts of lactic acid causing sufficient acidification of culture media to drive nitrite disproportionation to NO. Most eukaryotes derive their energy primarily through oxidative phosphorylation and must breathe O2 for the formation of ATP, however many enteric bacteria, including E.coli K12 strains, can use NO3- as an alternative electron acceptor when O2 is limiting and nitrate is plentiful . E.coli represents the model member of the Enterobacteriaceae and although this family constitutes only a small fraction of the gut microbiota, it is particular important because certain strains can cause illnesses. It has also been recently shown that nitrate generated as a by-product of host inflammation can be used by E.coli during respiration to confer a growth benefit and out-compete microbes residing in the colon that rely only on fermentation . L. plantarum is considered a safe probiotic and is commonly found in the mammalian intestinal tract as well as in the human saliva where nitrates are known to accumulate to millimolar levels due to the entero-salivary cycle of nitrate, which accounts for about 25% of the overall circulating nitrate . This bacterium presents the typical facultative heterofermentative pathway of the LAB family but, unique to this species, genes that encode a putative nitrate-reductase system (narGHJI) were recently identified in the L. plantarum WCFS1 genome, which suggests that it is capable of using nitrate as an electron acceptor . Indeed a recently published genetic analysis of L.plantarum has highlighted its enormous diverse and versatile metabolic capability . In our experiments significant nitrate reductase activity was detected both in E.coli, and L. plantarum as the oxygen tension decreases from atmospheric level towards zero. On the contrary, B.longum infantis, a micro-aero tolerant anaerobe of infant gastrointestinal tract origin, showed no ability to reduce nitrate even at high concentrations. Bifidobacteria represent up to 90% of the bacteria of an infant’s GI tract and our results are in accordance with the observation that human breast milk, which presents particularly high levels of nitrite , provides a dietary source for nitrite prior to the establishment of lingual and gut microbiota capable of nitrate reduction that are normally found in the adult flora. In Fig. 1 we showed that E.coli cultures containing 5 mM NO3− had a competitive growth advantage with respect to cultures with no nitrate added and then we determined the effect of oxygen and nitrate gradients on the production of nitrite and ammonia. Our results indicate that approximately 2.5 mM NO3− at either 4% or lower O2 is sufficient to induce the expression of nitrate reductase enzymes and that after 24 h a considerable amount of nitrite accumulated both inside E.coli cells as well as in the culture media. A detailed molecular analysis of the regulation of the bacterial enzymatic activities transcends the scope of this study, however it is well known that E.coli K12 strains express three molybdenum-containing nitrate reductases and that tungsten can deactivate these enzymes by replacing the molybdenum atom at the active site . We found that the addition of 300 μM tungsten oxide to cultures grown as in experiments reported in Fig. 1 almost completely abolished the formation of nitrite (data not shown). Thus, we believe that molybdenum dependent nitrate reductases are responsible for the crucial step in the formation of nitrite. It is also important to note that E.coli, as well as many other species of bacteria, are susceptible to nitrite toxicity due to the formation of metal-nitrosyl complexes and they minimize this toxicity by the coordinate induction of a nitrite membrane transporter and other enzymes that mediate nitrite reduction. A complete description of the E.coli nitrate and nitrite reductase enzymes genes and operons regulation and expression can be find in excellent publications by Stewart and Cole . Nitrogen oxides reduction pathways in the human gut The presence of nitrate and nitrite in the lower GI tract depends on numerous aspects including the types of bacteria colonizing the gut and the intricate balance between diet and the nitrogen oxides metabolic pathways. However the endogenous production of nitrate from NO oxidation (mainly via reaction with oxy-hemoglobin) has long been recognized to be an order of magnitude greater than dietary intake as shown in the late 1970s and more recently in studies using eNOS-deficient mice . In the schematic representation of Fig. 5 we have summarized the link between bacterial respiratory denitrification, nitrogen oxides reduction to ammonia, the endogenous L-arginine/NO synthase pathway and the non-enzymatic nitrite reduction to NO. In the denitrification process (red box), nitrate is reduced to nitrogen gas (N2) in a four steps process in which nitrite, NO and nitrous oxide are electron acceptors in energy generating reactions. Recently a complete denitrification pathway, leading to production of N2, has been proved to exist in human dental plaque and while it is still consider to be of minor importance in humans, we speculate that it might have an important role under very low oxygen tension in the presence of nitrate and the formation of N2 cannot be excluded in the human gut. Denitrification and dissimilatory nitrate reduction to ammonia (DNRA, blue box) share the first nitrate to nitrite reduction step and several classes of nitrate reductases have been associated with this reaction. In DNRA the second step is the direct nitrite to ammonia six electrons reduction, which does not provide energy but is a fairly common detoxification process in facultative anaerobic bacteria. DNRA has been suggested to represent the major route of nitrate metabolism in the rumen of mammals . This study identified ammonia as a product of nitrate reduction in E.coli and Lactobacilli bacteria grown in the presence of mM nitrate at 4% oxygen or lower levels. Download: PPT PowerPoint slide PNG larger image TIFF original image Fig 5. Schematic representation of the link between different pathways of nitrogen oxides reduction in the human gut and the fate of ammonia. Each colored box represents a distinct pathway: Bacterial respiratory denitrification to dinitrogen in red box: the dissimilatory nitrate reduction to ammonia (DNRA) in blue box and the non-enzymatic conversion of nitrite to NO in green box (this route become significant only at pH<5.5). The endogenous L-arginine/NO synthase pathway from epithelial cells of the intestinal mucosa lining is also noted. Respiratory denitrification can also generate small but relevant amounts of NO as an intermediate product and has been implicated in the bacterial NO production in the gut [32,33]. Other possible routes yielding NO are the acidic conversion of nitrite (green box) and the oxidation of L-arginine by NOS enzymes (brown circle). In Fig. 4C we examined the proton dependency of the non-enzymatic disproportionation of nitrite and showed that it becomes relevant only when the intracellular or body fluids pH is lower than 5.5. Our results also excluded the presence of active NOS enzymes in E.coli and L.plantarum however intestinal epithelial cells are known to produce NO through expression of both the endothelial and inducible NOS isoforms. Interestingly, NO production in the gut could be triggered also by the enzymatic processes of peroxidases, which are abundant on the cells of the gut mucosa and have been shown to use nitrite as a substrate to produce NO as part of their antibacterial action. We believe that all these different nitrate reduction pathways may coexist and occur simultaneously however it is likely that only one metabolite would predominate depending on the specific physiological conditions. Physiological significance of NO formation by bacterial nitrate reduction Dietary nitrate and nitrite are still pictured as possible toxic substances in many studies despite the mounting evidence that NO production from these ions has important beneficial implications for cardiovascular, immune and gastrointestinal functions [8,48,49]. In the gut NO serves several physiological functions such as regulation of mucosal blood flow, intestinal mobility and mucus thickness. The chronic overproduction of NO has also been associated with inflammatory bowel disease and is likely to inhibit growth of a wide variety of bacterial species. Previous studies left unclear how gut bacteria produced NO, however Sobko et al. showed that in contrast to conventional rats, NO levels in the intestine of germ-free rats are extremely low and when inoculated with normal bacteria flora the observed NO production increased 10 fold. In our experiments oxygen and proton concentrations determined the specific route of nitrate reduction to NO. The results presented in Fig. 4A indicate that E.coli is capable of enzymatic NO activity under anaerobic conditions with nitrate concentrations greater than 1 mM possibly via denitrification or the periplasmic cytochrome c nitrite reductase enzyme (Nrf) as proposed by Corker and Poole . This NO generation, however is greatly reduced at 2% oxygen and become nitrate independent. Importantly, our data are consistent with the report by Sobko and colleagues that E. coli generated insignificant NO levels during 24 h incubation with 0.1 mM nitrate . LAB produced considerable amounts of NO in response to the acidification of the media due to the accumulation of lactic acid. Replacing the growth media with fresh LMRS (pH = 6.5) almost completely blocked the LAB cultures ability to convert nitrite to NO but not in E.coli. Measurements of the intestine pH ranges between 5.7 and 7.5, thus in vivo nitrite disproportionation is probably a minor and localized aspect of the NO production. Inversely, this path is a well-established phenomenon in the acidic environment of the stomach (pH about 3). In summary we suggest that the NO generated by gut bacteria in proximity of the intestinal mucosa may either exert the beneficial effects noted above or at higher levels, interfere with these functions. Thus bacterial NO formation in the gut can be regarded as modulator of both physiological and pathological effects. Physiological implications of bacterial ammonia formation for health Colonic bacteria have been known to produce ammonia from amino acid deamination or via urease, the hydrolysis of urea into carbon dioxide and ammonia since the seminal studies of Vince et al. in the early 1970s . More recently Cole and colleagues reported that the major product of nitrite reduction in E.coli is ammonia with about only 1% being reduced to NO at neutral pH . The results obtained in our study suggest that at least certain common intestinal bacteria primarily reduce nitrite to ammonia rather than NO. In healthy subjects, under ordinary physiological conditions, the bulk of ammonia generated in the lower GI tract is then excreted in the body fluids and metabolized by the liver hepatocytes where ammonia and carbon dioxide are enzymatically converted to carbamoyl phosphate, which enters the series of reactions called the “Urea Cycle” leading to urea formation and its elimination by the kidney (see Fig. 5). The normal plasma concentration of ammonia is in the range of 10–35 μM, however, when ammonia production is excessive, portal blood-carrying ammonia can bypass the liver leading to hyperammonemia [52,53]. Ammonia in the blood freely permeates through the blood-brain barrier and high levels (>100 μM) have toxic effects on the central nervous system leading to encephalopathy and eventually coma. Patients with liver cirrhosis very frequently develop hepatic encephalopathy (HE) . In the absence of liver failure, hyperammonemic coma has been attributed to sepsis by urease capable microorganisms such as Klebsiella pneumonia . Classic therapeutic approaches for HE involve the reduction of systemic ammonia levels via antibiotic treatment (to kill intestinal ammonia producing bacteria) and administration of non-absorbable sugars, such as lactulose and lactitol . In the large intestine lactulose is broken down by the action of colonic bacteria primarily to lactic acid, and also to small amounts of formic and acetic acids [56,57]. This acidification favors the formation of the non-absorbable ammonium ion from ammonia and reduces its concentration in plasma. It is unclear in which measure dietary nitrate contributes to the ammonia concentration in the gut and blood, however we suggest the alternative hypothesis that the increased acidification of the colonic content due to the presence of lactulose favors the microbiota conversion of nitrite to NO instead of ammonia by the known acid-dependent mechanism. Conclusions For over 30 years the biological fate of exogenous nitrate could not be accounted for in the excreted nitrogen-containing compounds which amount to approximately 60% of an ingested nitrate dose in humans. Our results support the idea that nitrate is converted to nitrite and then to other reduced nitrogen biomolecules such as NO, ammonia, urea and possibly nitrogen gas by bacteria in the saliva, stomach, small and large intestine. Questions such as how much ammonia is generated from the nitrate-nitrite reduction versus other important processes, such as deamination and the bacterial ureases activity, demands detailed metabolic studies in animals and/or humans. The biological significance of the conversion of dietary nitrates at the intestinal lumen remains to be established. Nevertheless traditional Japanese and Mediterranean diets, which are known to have cardiovascular protective effects, have a mean intake of nitrate per person 2 to 3-fold higher than the typical Western diet (in United States corresponding to about 40–100 mg/day nitrate). Further investigation on the link between symbiotic bacteria, nitrogen oxides metabolism and human health are needed; however it is clear that the biological pathways of nitrogen metabolism in mammals are more complex and more important that envisioned even a few years ago. Acknowledgments We thank Dr. Barbora Piknova and Dr. Ji Won Park for helpful advice and discussions. Author Contributions Conceived and designed the experiments: MT ANS. Performed the experiments: MT. Analyzed the data: MT. Contributed reagents/materials/analysis tools: MT. Wrote the paper: MT ANS. References Ignarro LJ. Nitric Oxide: Biology and Pathobiology. San Diego, CA: Academic Press; 2000. Lundberg JM, Weitzberg E, Gladwin MT. The nitrate–nitrite–nitric oxide pathway in physiology and therapeutics. Nat Rev Drug Discov. 2008 Feb;7(2):156–67. pmid:18167491 View Article PubMed/NCBI Google Scholar Nathan S Bryan JL. Nitrite and Nitrate in Human Health and Disease (Nutrition and Health). 2013 Jul 17;:1–330. Lundberg JM, Weitzberg E, Cole J, Benjamin N. Nitrate, bacteria and human health. Nat Rev Micro. 2004 Jul 1;2(7):593–602. pmid:15197394 View Article PubMed/NCBI Google Scholar Van Faassen EE, Bahrami S, Feelisch M, Hogg N, Kelm M, Kim-Shapiro DB, et al. Nitrite as regulator of hypoxic signaling in mammalian physiology. Med Res Rev. 2009 Sep;29(5):683–741. pmid:19219851 View Article PubMed/NCBI Google Scholar Castiglione N, Rinaldo S, Giardina G, Stelitano V, Cutruzzolà F. Nitrite and nitrite reductases: from molecular mechanisms to significance in human health and disease. Antioxidants & Redox Signaling. 2012 Aug 15;17(4):684–716. View Article Google Scholar Tannenbaum SR, Fett D, Young VR, Land PD, Bruce WR. Nitrite and nitrate are formed by endogenous synthesis in the human intestine. Science. 1978 Jun 30;200(4349):1487–9. pmid:663630 View Article PubMed/NCBI Google Scholar Lidder S, Webb AJ. Vascular effects of dietary nitrate (as found in green leafy vegetables and beetroot) via the nitrate-nitrite-nitric oxide pathway. 2013 Mar;75(3):677–96. pmid:22882425 View Article PubMed/NCBI Google Scholar Govoni M, Jansson EA, Weitzberg E, Lundberg JM. The increase in plasma nitrite after a dietary nitrate load is markedly attenuated by an antibacterial mouthwash. Nitric Oxide. 2008 Dec;19(4):333–7. pmid:18793740 View Article PubMed/NCBI Google Scholar Raat NJH, Noguchi AC, Liu VB, Raghavachari N, Liu D, Xu X, et al. Dietary nitrate and nitrite modulate blood and organ nitrite and the cellular ischemic stress response. Free Radical Biology and Medicine. 2009 Sep 1;47(5):510–7. pmid:19464364 View Article PubMed/NCBI Google Scholar Weitzberg E, Lundberg JM. Novel aspects of dietary nitrate and human health. Annu Rev Nutr. 2013;33(1):129–59. View Article Google Scholar Bartholomew B, Hill MJ. The pharmacology of dietary nitrate and the origin of urinary nitrate. Food Chem Toxicol. 1984 Oct;22(10):789–95. pmid:6541617 View Article PubMed/NCBI Google Scholar Green LC, Tannenbaum SR, Goldman P. Nitrate synthesis in the germfree and conventional rat. Science. 1981 Apr 3;212(4490):56–8. pmid:6451927 View Article PubMed/NCBI Google Scholar Wagner DA, Schultz DS, Deen WM, Young VR, Tannenbaum SR. Metabolic Fate of an Oral Dose of 15N-labeled Nitrate in Humans: Effect of Diet Supplementation with Ascorbic Acid. Cancer Res. American Association for Cancer Research; 1983 Apr 1;43(4):1921–5. pmid:6831427 View Article PubMed/NCBI Google Scholar Sobko T, Reinders CI, Jansson E, Norin E, Midtvedt T, Lundberg JM. Gastrointestinal bacteria generate nitric oxide from nitrate and nitrite. Nitric Oxide. 2005 Dec;13(4):272–8. pmid:16183308 View Article PubMed/NCBI Google Scholar Sobko T, Huang L, Midtvedt T, Norin E, Gustafsson LE, Norman M, et al. Generation of NO by probiotic bacteria in the gastrointestinal tract. Free Radical Biology and Medicine. 2006 Sep;41(6):985–91. pmid:16934682 View Article PubMed/NCBI Google Scholar Sparacino-Watkins C, Stolz JF, Basu P. Nitrate and periplasmic nitrate reductases. Chem Soc Rev. The Royal Society of Chemistry; 2013;43(2):676–706. View Article Google Scholar Simon J. Enzymology and bioenergetics of respiratory nitrite ammonification. FEMS Microbiol Rev. 2002 Aug;26(3):285–309. pmid:12165429 View Article PubMed/NCBI Google Scholar Espey MG. Role of oxygen gradients in shaping redox relationships between the human intestine and its microbiota. Free Radical Biology and Medicine. Elsevier; 2013 Feb 1;55(C):130–40. View Article Google Scholar Marteyn B, Scorza FB, Sansonetti PJ, Tang C. Breathing life into pathogens: the influence of oxygen on bacterial virulence and host responses in the gastrointestinal tract. Cellular Microbiology. Blackwell Publishing Ltd; 2010 Dec 19;13(2):171–6. pmid:21166974 View Article PubMed/NCBI Google Scholar He G, Shankar RA, Chzhan M, Samouilov A, Kuppusamy P, Zweier JL. Noninvasive measurement of anatomic structure and intraluminal oxygenation in the gastrointestinal tract of living mice with spatial and spectral EPR imaging. PNAS. 1999 Apr 13;96(8):4586–91. pmid:10200306 View Article PubMed/NCBI Google Scholar MacArthur PH, Shiva S, Gladwin MT. Measurement of circulating nitrite and S-nitrosothiols by reductive chemiluminescence. J Chromatogr B Analyt Technol Biomed Life Sci. 2007 May 15;851(1–2):93–105. pmid:17400039 View Article PubMed/NCBI Google Scholar Tiso M, Tejero J, Kenney CT, Frizzell S, Gladwin MT. Nitrite reductase activity of nonsymbiotic hemoglobins from Arabidopsis thaliana. Biochemistry. 2012 Jul 3;51(26):5285–92. pmid:22620259 View Article PubMed/NCBI Google Scholar Jones SA, Chowdhury FZ, Fabich AJ, Anderson A, Schreiner DM, House AL, et al. Respiration of Escherichia coli in the mouse intestine. Infection and Immunity. 2007 Oct 1;75(10):4891–9. pmid:17698572 View Article PubMed/NCBI Google Scholar Jia W, Cole J. Nitrate and nitrite transport in Escherichia coli. Biochem Soc Trans. 2005 Feb;33(Pt 1):159–61. View Article Google Scholar Wolf G, Arendt EK, Pfähler U, Hammes WP. Heme-dependent and heme-independent nitrite reduction by lactic acid bacteria results in different N-containing products. Int J Food Microbiol. 1990 May;10(3–4):323–9. pmid:2397162 View Article PubMed/NCBI Google Scholar Brooijmans RJW, de Vos WM, Hugenholtz J. Lactobacillus plantarum WCFS1 electron transport chains. Applied and Environmental Microbiology. American Society for Microbiology; 2009 Jun;75(11):3580–5. pmid:19346351 View Article PubMed/NCBI Google Scholar Bueno E, Mesa S, Bedmar EJ, Richardson DJ, Delgado MJ. Bacterial Adaptation of Respiration from Oxic to Microoxic and Anoxic Conditions: Redox Control. Antioxidants & Redox Signaling. 2012 Apr 15;16(8):819–52. View Article Google Scholar Stewart V. Regulation of nitrate and nitrite reductase synthesis in enterobacteria. Antonie Van Leeuwenhoek. 1994;66(1–3):37–45. pmid:7747936 View Article PubMed/NCBI Google Scholar Ji XB, Hollocher TC. Reduction of nitrite to nitric oxide by enteric bacteria. Biochem Biophys Res Commun. 1988 Nov 30;157(1):106–8. pmid:3058123 View Article PubMed/NCBI Google Scholar Ji XB, Hollocher TC. Nitrate Reductase of Escherichia-Coli as a No-Producing Nitrite Reductase. Biochemical Archives. 1989 Feb;5(1):61–6. View Article Google Scholar Xu J, Verstraete W. Evaluation of nitric oxide production by lactobacilli. Appl Microbiol Biotechnol. Springer-Verlag; 2001 Aug 1;56(3–4):504–7. View Article Google Scholar Corker H, Poole RK. Nitric oxide formation by Escherichia coli. Dependence on nitrite reductase, the NO-sensing regulator Fnr, and flavohemoglobin Hmp. Journal of Biological Chemistry. American Society for Biochemistry and Molecular Biology; 2003 Aug 22;278(34):31584–92. pmid:12783887 View Article PubMed/NCBI Google Scholar Vermeiren J, Van de Wiele T, Verstraete W, Boeckx P, Boon N. Nitric oxide production by the human intestinal microbiota by dissimilatory nitrate reduction to ammonium. Journal of Biomedicine and Biotechnology. Hindawi Publishing Corporation; 2009;2009(4):284718–10. pmid:19888436 View Article PubMed/NCBI Google Scholar Sears CL. A dynamic partnership: Celebrating our gut flora. Anaerobe. 2005 Oct;11(5):247–51. pmid:16701579 View Article PubMed/NCBI Google Scholar O'Hara AM, Shanahan F. The gut flora as a forgotten organ. EMBO Rep. 2006 Jul;7(7):688–93. pmid:16819463 View Article PubMed/NCBI Google Scholar Flint HJ, O'Toole PW, Walker AW. Special issue: The Human Intestinal Microbiota. Microbiology (Reading, Engl). 2010 Nov;156(Pt 11):3203–4. pmid:21045216 View Article PubMed/NCBI Google Scholar Leser TD, Mølbak L. Better living through microbial action: the benefits of the mammalian gastrointestinal microbiota on the host. Environmental Microbiology. Blackwell Publishing Ltd; 2009 Sep;11(9):2194–206. pmid:19737302 View Article PubMed/NCBI Google Scholar Unden G, Bongaerts J. Alternative respiratory pathways of Escherichia coli: energetics and transcriptional regulation in response to electron acceptors. Biochim Biophys Acta. 1997 Jul 4;1320(3):217–34. pmid:9230919 View Article PubMed/NCBI Google Scholar Winter SE, Winter MG, Xavier MN, Thiennimitr P, Poon V, Keestra AM, et al. Host-derived nitrate boosts growth of E. coli in the inflamed gut. Science. American Association for the Advancement of Science; 2013 Feb 8;339(6120):708–11. pmid:23393266 View Article PubMed/NCBI Google Scholar Siezen RJ, van Hylckama Vlieg JET. Genomic diversity and versatility of Lactobacillus plantarum, a natural metabolic engineer. Microb Cell Fact. 2011 Aug 30;10 Suppl 1(Suppl 1):S3. pmid:21995294 View Article PubMed/NCBI Google Scholar Jones JA, Ninnis JR, Hopper AO. Nitrite and Nitrate Concentrations and Metabolism in Breast Milk, Infant Formula, and Parenteral Nutrition. Journal of Parenteral …. 2013. Gates AJ, Hughes RO, Sharp SR, Millington PD, Nilavongse A, Cole J, et al. Properties of the periplasmic nitrate reductases from Paracoccus pantotrophus and Escherichia coli after growth in tungsten-supplemented media. FEMS Microbiol Lett. 2003 Mar 28;220(2):261–9. pmid:12670690 View Article PubMed/NCBI Google Scholar Cole J. Nitrate reduction to ammonia by enteric bacteria: redundancy, or a strategy for survival during oxygen starvation? FEMS Microbiol Lett. 1996 Feb 1;136(1):1–11. pmid:8919448 View Article PubMed/NCBI Google Scholar Carlström M, Larsen FJ, Nyström T, Hezel M, Borniquel S, Weitzberg E, et al. Dietary inorganic nitrate reverses features of metabolic syndrome in endothelial nitric oxide synthase-deficient mice. Proc Natl Acad Sci USA. 2010 Oct 12;107(41):17716–20. pmid:20876122 View Article PubMed/NCBI Google Scholar Schreiber F, Stief P, Gieseke A, Heisterkamp IM, Verstraete W, de Beer D, et al. Denitrification in human dental plaque. BMC Biol. BioMed Central Ltd; 2010;8(1):24. View Article Google Scholar Jones GA. Dissimilatory metabolism of nitrate by the rumen microbiota. Can J Microbiol. 1972;18(12):1783–7. pmid:4675328 View Article PubMed/NCBI Google Scholar Sindelar JJ, Milkowski AL. Human safety controversies surrounding nitrate and nitrite in the diet. Nitric Oxide. 2012 May;26(4):259–66. pmid:22487433 View Article PubMed/NCBI Google Scholar McKnight GM, Duncan CW, Leifert C, Golden MH. Dietary nitrate in man: friend or foe? British Journal of Nutrition. Cambridge University Press; 1999 May 1;81(05):349–58. View Article Google Scholar Vince A, Dawson AM, Park N, O'Grady F. Ammonia production by intestinal bacteria. Gut. 1973 Mar;14(3):171–7. pmid:4573343 View Article PubMed/NCBI Google Scholar Cole J. Independent pathways for the anaerobic reduction of nitrite to ammonia by Escherichia coli. Biochem Soc Trans. 1982 Dec;10(6):476–8. pmid:6295832 View Article PubMed/NCBI Google Scholar Auron A, Brophy PD. Hyperammonemia in review: pathophysiology, diagnosis, and treatment. Pediatr Nephrol. Springer-Verlag; 2012 Feb 1;27(2):207–22. pmid:21431427 View Article PubMed/NCBI Google Scholar Hadjihambi A, Khetan V, Jalan R. Pharmacotherapy for hyperammonemia. Expert Opin Pharmacother. 2014 Aug;15(12):1685–95. pmid:25032885 View Article PubMed/NCBI Google Scholar Kundra A, Jain A, Banga A, Bajaj G, Kar P. Evaluation of plasma ammonia levels in patients with acute liver failure and chronic liver disease and its correlation with the severity of hepatic encephalopathy and clinical features of raised intracranial tension. Clin Biochem. 2005 Aug;38(8):696–9. pmid:15963970 View Article PubMed/NCBI Google Scholar Strauss E, Gomes de Sá Ribeiro M de F. Bacterial infections associated with hepatic encephalopathy: prevalence and outcome. Ann Hepatol. 2003 Jan;2(1):41–5. pmid:15094705 View Article PubMed/NCBI Google Scholar Patil DH, Westaby D, Mahida YR, Palmer KR, Rees R. Comparative modes of action of lactitol and lactulose in the treatment of hepatic encephalopathy. Gut. 1987. Vince A, Zeegen R, Drinkwater JE, O'Grady F, Dawson AM. The effect of lactulose on the faecal flora of patients with hepatic encephalopathy. J Med Microbiol. 1974 May;7(2):163–8. pmid:4600290 View Article PubMed/NCBI Google Scholar
14944
https://www.appinio.com/en/blog/market-research/statistical-significance
How to Calculate Statistical Significance? (+ Examples) Appinio Research · 05.12.2023 · 39min read Play How to Calculate Statistical Significance? (+ Examples) | Appinio Blog AI-generated audio 36:38 Content What is Statistical Significance? Fundamentals of Hypothesis Testing Sampling and Data Collection How to Calculate Statistical Significance? Basic Statistical Tests for Significance Understanding Confidence Intervals Advanced Topics in Significance Testing Common Statistical Significance Mistakes and Pitfalls How to Report and Communicate Significance? Statistical Significance Examples Conclusion How to Determine Statistical Significance in Minutes? Have you ever wondered how to distinguish between mere chance and genuine insights when analyzing data? Statistical significance holds the key to unlocking the true importance of your findings. In this guide, we will delve deep into statistical significance, uncovering its definition, importance, practical applications, advanced concepts, and the art of effectively communicating your results. Whether you're a researcher, data analyst, or decision-maker, understanding statistical significance is a vital skill for making informed choices and drawing meaningful conclusions from data. What is Statistical Significance? Statistical significance is a critical concept in data analysis and research that helps determine whether the observed results are likely due to a real effect or merely the result of chance variation. It quantifies the likelihood that an observed difference or relationship in data is not a random occurrence. Statistical significance is typically expressed in terms of p-values or confidence intervals, allowing researchers to make informed decisions based on data. The Importance of Statistical Significance Statistical significance serves several essential purposes: Validating Hypotheses: It helps researchers assess whether the findings support or contradict their hypotheses, enabling them to draw meaningful conclusions. Informed Decision-Making: It provides a basis for decision-making in various fields, from healthcare to business, by distinguishing between genuine effects and random fluctuations. Reducing Uncertainty: Statistical significance reduces uncertainty in research and data-driven decision-making, enhancing the reliability of results. Scientific Discovery: In scientific research, it guides scientists in identifying and investigating relationships, trends, and phenomena. Why Statistical Significance Matters in Data Analysis Statistical significance is crucial in data analysis because it: Separates Signal from Noise: It helps differentiate between patterns or differences in data that are likely meaningful and those that may occur by chance. Aids in Inference: By assessing statistical significance, data analysts can make inferences about populations based on sample data. Supports Generalization: It enables the generalization of findings from samples to larger populations, extending the relevance of research. Enhances Credibility: In both scientific research and practical decision-making, statistical significance adds credibility and rigor to the analysis. Common Statistical Significance Applications Statistical significance is widely used across various fields and applications, including: Clinical Trials: Assessing the efficacy of new medical treatments. Market Research: Analyzing consumer behavior and preferences. Quality Control: Ensuring product quality and consistency. A/B Testing: Comparing the effectiveness of different marketing strategies. Social Sciences: Investigating social phenomena and behaviors. Environmental Studies: Assessing the impact of environmental factors on ecosystems. Statistical significance is a versatile tool that empowers professionals and researchers to make data-driven decisions and draw reliable conclusions across diverse domains. Fundamentals of Hypothesis Testing Hypothesis testing is a critical aspect of statistical significance analysis, helping you determine the validity of your findings. We'll start by delving deeper into the fundamental concepts and components of hypothesis testing. Formulating Hypotheses Formulating clear and testable hypotheses is the first step in hypothesis testing. You start with two hypotheses: the null hypothesis (H0) and the alternative hypothesis (H1 or Ha). Null Hypothesis and Alternative Hypothesis The null hypothesis (H0) suggests that there is no significant difference or effect in your data. It represents the status quo or the absence of an effect. The alternative hypothesis (H1 or Ha), on the other hand, asserts that there is a significant difference or effect in your data, challenging the null hypothesis. Significance Level (Alpha) and P-Values The significance level, often denoted as alpha (α), plays a critical role in hypothesis testing. It determines the threshold at which you consider a result statistically significant. Commonly used significance levels are 0.05 and 0.01. P-Value: The p-value quantifies the strength of evidence against the null hypothesis. A lower p-value indicates stronger evidence against H0, suggesting that you should reject it in favor of the alternative hypothesis. Type I and Type II Errors In hypothesis testing, two types of errors can occur: Type I and Type II errors. Type I Error: This error occurs when you incorrectly reject a true null hypothesis. In other words, you conclude there's an effect when there isn't one. Type II Error: Type II errors happen when you fail to reject a false null hypothesis. In this case, you conclude there's no effect when there actually is. Understanding these error types is crucial for making informed decisions and interpreting the results of hypothesis tests. Power of a Statistical Test The power of a statistical test measures its ability to correctly reject a false null hypothesis. It's influenced by several factors: Sample Size: A larger sample size generally increases the power of a test, making it more likely to detect true effects. Effect Size: A larger effect size, which represents the magnitude of the difference or effect, also enhances the power of a test. Significance Level (Alpha): Lowering the significance level (α) increases the chance of making a Type II error but decreases the chance of making a Type I error, affecting the test's power. Variability in the Data: Higher variability in the data may reduce the power of a test because it can make it harder to detect an effect. Understanding and manipulating the power of a statistical test is crucial for designing experiments and studies that can effectively detect meaningful effects or differences. Sampling and Data Collection Sampling and data collection are crucial steps in the statistical significance analysis process. These steps ensure that your data is representative and free from bias, laying the foundation for reliable results. Random Sampling Random sampling is the process of selecting a subset of individuals or items from a larger population in a way that each member has an equal chance of being chosen. This technique helps minimize bias and ensure that your sample fairly represents the entire population. Simple Random Sampling: In this method, each member of the population has an equal probability of being selected. It can be accomplished using random number generators or drawing lots. Stratified Sampling: Stratified sampling divides the population into subgroups (strata) based on specific characteristics (e.g., age, gender). Samples are then randomly selected from each stratum to ensure representation. Cluster Sampling: Cluster sampling involves dividing the population into clusters and randomly selecting a few clusters for sampling. It's particularly useful when it's difficult to create a complete list of the population. Sample Size Determination Determining the appropriate sample size is a critical consideration in statistical significance analysis. An insufficient sample size can lead to unreliable results, while an excessively large sample may be resource-intensive without providing much additional benefit. Factors influencing sample size determination include: Population Variability: Higher variability in the population typically requires a larger sample size to detect significant differences. Desired Confidence Level: Increasing the desired confidence level (e.g., 95% or 99%) necessitates a larger sample size. Margin of Error: Smaller margins of error require larger sample sizes. Expected Effect Size: The magnitude of the effect you want to detect influences sample size; larger effects require smaller samples. Various statistical formulas and software tools are available to calculate sample sizes based on these factors. It's essential to strike a balance between the precision of your results and the practicality of obtaining the required sample. Data Collection Methods Selecting the appropriate data collection method is essential to gather accurate and relevant information. The choice of method depends on your research objectives and the nature of the data. Popular data collection methods include: Surveys and Questionnaires: Surveys involve asking individuals a set of structured questions to collect data on their opinions, attitudes, or behaviors. Experiments: Experimental studies involve controlled interventions to examine cause-and-effect relationships. They are common in scientific research. Observational Studies: Observational studies involve observing and recording data without intervening. They are often used in fields like psychology and sociology. Secondary Data Analysis: Secondary data analysis involves using existing data sources, such as databases or publicly available datasets, to answer research questions. Each data collection method has its strengths and limitations, and the choice should align with your research objectives and resources. Data Preprocessing and Cleaning Data preprocessing and cleaning are essential steps to ensure the quality and reliability of your data before conducting statistical significance tests. Key tasks in data preprocessing and cleaning include: Data Validation: Check for accuracy and completeness of data. Identify and handle missing values, outliers, and errors. Data Transformation: Transform data as needed, such as normalizing or standardizing variables, to meet the assumptions of statistical tests. Data Imputation: If there are missing values, consider imputation techniques to fill in the gaps, maintaining the integrity of your dataset. Data Encoding: Encode categorical variables into numerical formats, as many statistical tests require numerical inputs. Data Scaling: Scale or normalize variables to ensure they have the same units or magnitudes, especially when working with different measurement scales. Investing time in data preprocessing and cleaning can enhance the accuracy and reliability of your statistical analysis results, ultimately leading to more robust conclusions. How to Calculate Statistical Significance? Calculating statistical significance involves several key steps and depends on the type of data and hypothesis you're testing. Here, we'll provide a general overview of the process and highlight standard methods for different scenarios. 1. Formulate Your Hypotheses Before calculating statistical significance, you need to define your null hypothesis (H0) and alternative hypothesis (H1). The null hypothesis typically represents the absence of an effect, while the alternative hypothesis states what you're trying to prove or find evidence for. 2. Choose the Appropriate Statistical Test Select the statistical test that matches your research question and data type. Common tests include t-tests for comparing means, chi-square tests for independence, ANOVA for comparing multiple groups, and correlation tests for assessing relationships. 3. Collect and Organize Data Collect your data in a systematic and structured manner. Ensure you have a clear plan for data collection, data entry, and data cleaning to minimize errors and biases. 4. Perform the Statistical Test The specific steps for performing a statistical test depend on the chosen method. However, the general process involves: Calculating Test Statistics: Compute the test statistic (e.g., t, chi-square, F) based on your data and the chosen formula for the test. Determining Degrees of Freedom: Calculate the degrees of freedom associated with your test, which is critical for finding critical values from tables or statistical software. Finding Critical Values: Determine the critical values for your chosen significance level (alpha) from statistical tables or use statistical software to find them. Calculating p-Values: For many tests, calculate the p-value associated with the test statistic. The p-value represents the probability of observing the results under the null hypothesis. 5. Compare Results to Alpha Level Compare the calculated p-value to your predetermined significance level (alpha). If the p-value is less than or equal to alpha (p ≤ α), you reject the null hypothesis in favor of the alternative hypothesis, indicating statistical significance. 6. Interpret the Results Interpret the results in the context of your research question. If your findings are statistically significant, it suggests that the observed effect or relationship is unlikely to occur by random chance. If not significant, it implies that there's insufficient evidence to reject the null hypothesis. 7. Report the Findings In your research report or analysis, clearly state the statistical test you used, the calculated test statistic, degrees of freedom, p-value, and whether the results were statistically significant. Additionally, provide context, effect size measures, and practical implications. 8. Use Statistical Software Many statistical tests and calculations are complex and require specialized software like R, Python, SPSS, or Excel. These tools can automate calculations, provide critical values, and generate p-values, making the process more efficient and accurate. Remember that the specific steps and equations vary based on the chosen statistical test. It's crucial to consult relevant statistical resources or seek assistance from a statistician when dealing with complex analyses or unfamiliar tests. Calculating statistical significance correctly ensures the validity and reliability of your research findings. Basic Statistical Tests for Significance In statistical significance analysis, various tests are used to assess the significance of differences or relationships within data. Here, we explore five fundamental tests: the t-Test, Chi-Square Test, ANOVA (Analysis of Variance), Z-Test, and Mann-Whitney U Test and Wilcoxon Signed-Rank Test. t-Test The t-Test is used to compare the means of two groups and determine if the difference between them is statistically significant. There are three main types of t-tests. Independent Samples t-Test Used when comparing the means of two independent groups or samples. The formula for the t-statistic is: t = (x̄1 - x̄2) / √(s^2 / n1 + s^2 / n2) where: x̄1 and x̄2 are the sample means of the two groups. s^2 is the pooled variance of the two groups. n1 and n2 are the sample sizes of the two groups. Paired Samples t-Test Used when comparing the means of two related groups (e.g., before and after measurements on the same subjects). The formula is similar to the independent samples t-test but accounts for the paired nature of the data. One-Sample t-Test Used when comparing the mean of a single sample to a known population mean. The formula is: t = (x̄ - μ) / (s / √n) where: x̄ is the sample mean. μ is the population mean. s is the sample standard deviation. n is the sample size. Example: Suppose you want to determine if there is a significant difference in the test scores of two groups of students, Group A and Group B. You can use an independent samples t-test to analyze the data and calculate the t-statistic. Chi-Square Test The Chi-Square Test is used to assess the association between categorical variables and determine if the observed frequencies differ significantly from the expected frequencies. There are two main types of Chi-Square tests. Chi-Square Test for Independence Used to test the independence of two categorical variables in a contingency table. The formula for the Chi-Square statistic is: χ² = Σ [(O - E)² / E] where: O is the observed frequency. E is the expected frequency. Chi-Square Goodness-of-Fit Test Used to determine if the observed categorical data fits a specific expected distribution (e.g., a uniform distribution). The formula is similar to the Chi-Square test for independence. Example: Imagine you have data on the preferences of two age groups (under 30 and 30 and above) for three different types of beverages (coffee, tea, and juice). You can use a Chi-Square Test for Independence to assess if there is a significant association between age group and beverage preference. ANOVA (Analysis of Variance) ANOVA is used when you have more than two groups to compare means and determine if there are significant differences among them. One-way ANOVA is used for a single categorical independent variable, while two-way ANOVA involves two independent variables. The formula for the one-way ANOVA F-statistic is: F = (MSB / MSW) where: MSB is the mean square between groups (explained variance). MSW is the mean square within groups (unexplained variance). Example: Suppose you have data on students' test scores from three different schools. You can use one-way ANOVA to test if there are significant differences in the mean test scores among the schools. Z-Test The Z-Test is similar to the t-Test but is often used when dealing with larger sample sizes or when the population standard deviation is known. It is used to compare a sample mean to a known population mean. The formula for the Z-Test statistic is: Z = (x̄ - μ) / (σ / √n) where: x̄ is the sample mean. μ is the population mean. σ is the population standard deviation. n is the sample size. Example: If you want to determine if the mean height of a sample of individuals differs significantly from the known population mean height, you can use a Z-Test. Mann-Whitney U Test and Wilcoxon Signed-Rank Test These non-parametric tests are used when your data doesn't meet the assumptions of parametric tests like the t-Test. Mann-Whitney U Test: Used to compare two independent groups or samples to assess if one group has significantly higher values than the other. It ranks all data points and calculates the U statistic. Wilcoxon Signed-Rank Test: Used to compare two related groups, typically when dealing with paired data. It ranks the differences between paired observations and calculates the test statistic. Example: When you have ordinal or non-normally distributed data and want to determine if there's a significant difference between two groups, you can use either the Mann-Whitney U Test (for independent samples) or the Wilcoxon Signed-Rank Test (for paired samples). Understanding Confidence Intervals Confidence intervals (CIs) are essential tools in statistical significance analysis. They provide a range of values within which a population parameter is likely to fall. What is a Confidence Interval? A confidence interval is a range of values calculated from your sample data that likely contains the true population parameter with a specified level of confidence. It quantifies the uncertainty associated with estimating a population parameter from a sample. The formula for calculating a confidence interval for the population mean (μ) using a t-distribution is: CI = x̄ ± (t (s / √n)) where: CI is the confidence interval. x̄ is the sample mean. t is the critical value from the t-distribution corresponding to your chosen confidence level and degrees of freedom. s is the sample standard deviation. n is the sample size. How to Calculate Confidence Intervals? To calculate a confidence interval: Choose a confidence level (e.g., 95% or 99%) and determine the corresponding critical value from the t-distribution table or use statistical software. Calculate the sample mean (x̄) and sample standard deviation (s) from your data. Determine the sample size (n). Plug these values into the formula for the confidence interval. How to Interpret Confidence Intervals? Interpreting confidence intervals involves understanding that they provide a range of plausible values for the population parameter. Here are key points to consider: Confidence Level: If you calculate a 95% confidence interval, it means that in repeated sampling, you would expect the true population parameter to fall within the interval in 95% of cases. Overlap of Intervals: If you have two groups with non-overlapping confidence intervals for their means, it suggests a statistically significant difference between the groups. Width of the Interval: A narrower confidence interval indicates a more precise estimate, while a wider interval indicates more uncertainty. Relationship Between Confidence Intervals and Significance Testing Confidence intervals and significance testing are closely related. In fact, the concepts of confidence intervals and hypothesis testing share similarities. Null Hypothesis Rejection: If a confidence interval does not include a particular value, it suggests that the null hypothesis is rejected for that value in significance testing. Effect Size: The width of a confidence interval provides information about the effect size. A narrow interval indicates a larger effect size, while a wide interval suggests a smaller effect. Example: Suppose you want to estimate the average time it takes for customers to complete a specific task on your website. You collect a sample of data and calculate a 95% confidence interval, which turns out to be (12.5, 15.2) seconds. This means you are 95% confident that the true population average time falls within this interval. If a competitor claims their website's task completion time is 10 seconds, and this value is outside your confidence interval, you have evidence to reject their claim in favor of your own data. Advanced Topics in Significance Testing In significance testing, several advanced topics and techniques can help you navigate complex scenarios and draw more nuanced conclusions. Multiple Comparisons Problem When you conduct multiple hypothesis tests on the same dataset, you increase the likelihood of making Type I errors (false positives). This issue is known as the multiple comparisons problem. Solution: To address this problem, you can employ various methods, such as the Bonferroni correction or the False Discovery Rate (FDR) correction. These methods adjust the significance level (alpha) for individual tests to control the overall familywise error rate. Example: Imagine you're testing the effectiveness of several drug treatments on a specific condition. If you perform separate tests for each drug without adjusting for multiple comparisons, you might mistakenly conclude that some drugs are effective when, in reality, they are not. Bonferroni Correction The Bonferroni correction is a widely used method to control the familywise error rate in multiple comparisons. It adjusts the significance level (alpha) for individual tests to maintain an overall alpha level. The Bonferroni-corrected alpha (α_corrected) is calculated as: α_corrected = α / k where: α_corrected is the corrected significance level. α is the desired overall significance level (e.g., 0.05). k is the number of comparisons or tests. Example: If you are conducting 5 hypothesis tests and want to maintain an overall significance level of 0.05, the Bonferroni-corrected significance level for each test would be 0.05 / 5 = 0.01. Effect Size and Practical Significance While statistical significance tells you if an effect exists, effect size measures the magnitude of that effect. Practical significance, on the other hand, considers whether the effect is meaningful in a real-world context. Effect Size Metrics: Common effect size metrics include Cohen's d for comparing means, odds ratios for binary data, and correlation coefficients for relationships between variables. Example: If a new drug reduces blood pressure by 1 mmHg, it may be statistically significant with a large sample size, but it might not be practically substantial for clinical purposes. Non-Parametric Tests Non-parametric tests are used when your data doesn't meet the assumptions of parametric tests, such as normal distribution or homogeneity of variances. Non-parametric tests include: Mann-Whitney U Test: Used for comparing two independent groups when the assumptions for the t-test are not met. Wilcoxon Signed-Rank Test: Used for comparing two related groups or paired samples when assumptions for the t-test are violated. Kruskal-Wallis Test: An analog of one-way ANOVA for comparing more than two independent groups with non-normally distributed data. Chi-Square Test of Independence: Used for testing the independence of categorical variables when parametric assumptions are not met. Example: Non-parametric tests are valuable in scenarios where data distributional assumptions are not met, such as when dealing with ordinal or skewed data. Understanding and applying these advanced topics in significance testing can significantly enhance the quality and reliability of your statistical analyses, especially in complex research or decision-making contexts. Common Statistical Significance Mistakes and Pitfalls Avoiding common errors and pitfalls in significance testing is crucial for obtaining accurate and meaningful results. Misinterpreting P-Values One of the most common mistakes in significance testing is misinterpreting p-values. A p-value represents the probability of observing a result as extreme as, or more extreme than, the one obtained under the null hypothesis. Common pitfalls include: P-Hacking: Repeatedly testing multiple hypotheses until a significant result is found, increasing the risk of Type I errors. Overemphasis on Small P-Values: Assuming that a small p-value (e.g., p < 0.05) implies a strong practical or scientific effect. Mitigation: Understand that p-values alone do not indicate the size or importance of an effect. Always consider effect size, confidence intervals, and practical significance alongside p-values. Not Considering Sample Size Sample size plays a critical role in the reliability of your results. Insufficient sample sizes can lead to underpowered tests, making it challenging to detect real effects. Common pitfalls include: Ignoring Power Analysis: Failing to perform power analysis to determine the required sample size before conducting a study. Drawing Conclusions from Small Samples: Making solid claims based on small samples, which can lead to spurious results. Mitigation: Conduct power analysis to determine the appropriate sample size for your study and avoid drawing meaningful conclusions from small samples. Ignoring Assumptions of Tests Many statistical tests rely on specific assumptions about the data, such as normal distribution or homogeneity of variances. Ignoring these assumptions can lead to inaccurate results. Common pitfalls include: Applying Parametric Tests to Non-Normal Data: Using parametric tests like t-tests or ANOVA on data that do not follow a normal distribution. Assumption Violations in ANOVA: Not checking for homogeneity of variances in one-way or two-way ANOVA. Mitigation: Always assess whether your data meets the assumptions of the chosen statistical test. If assumptions are violated, consider non-parametric alternatives or transformations to meet the assumptions. Data Snooping and Overfitting Data snooping, or data dredging, occurs when you explore your data extensively, increasing the risk of finding spurious patterns. Overfitting happens when a model is too complex and fits the sample data closely, leading to poor generalization of new data. Common pitfalls include: Testing Multiple Hypotheses Without Correction: Conducting numerous tests without adjusting alpha levels for multiple comparisons. Complex Models with Many Parameters: Fitting models with too many parameters to limited data. Mitigation: Use appropriate correction methods for multiple comparisons, collect new data for model validation, or use simpler models to avoid overfitting. By recognizing and mitigating these common mistakes and pitfalls, you can ensure more robust and reliable results in your significance testing endeavors. How to Report and Communicate Significance? Effectively presenting and communicating your results is essential in significance testing to convey your findings clearly and facilitate decision-making. In this section, we'll delve into various aspects of reporting and communication. Presenting Results Effectively Presenting your results in a clear and organized manner is crucial for others to understand and interpret your findings. Consider the following tips: Use Clear Language: Avoid jargon and complex terminology. Explain statistical concepts in plain language. Provide Context: Explain the context and relevance of your findings. How do they relate to the research question or problem? Highlight Key Results: Focus on the most important results. Use concise and informative headings and subheadings to guide the reader. Creating Visualizations Visualizations, such as charts and graphs, are powerful tools for conveying complex statistical results in an understandable way. Choose the right type of visualization for your data: Histograms: Display the distribution of data. Bar Charts: Compare categories or groups. Line Charts: Show trends or changes over time. Scatter Plots: Display relationships between variables. Box Plots: Visualize the spread and central tendency of data. Decision Trees: Illustrate decision-making processes and classification outcomes. Ensure your visualizations are well-labeled, have clear legends, and are easy to interpret. Writing a Results Section A well-structured results section in a research paper or report is crucial for presenting your findings effectively. Follow these guidelines: Start with a Summary: Begin with a brief summary of the main results. Use Headings: Organize your results using clear headings and subheadings. Include Tables and Figures: Present key data in tables and figures for easy reference. Report Effect Sizes: Include effect size measures to provide a sense of the practical importance of your results. Discuss Statistical Significance: Mention when results are statistically significant, but avoid overemphasizing p-values. Conveying Practical Implications It's essential to go beyond statistical significance and discuss the practical implications of your findings: Explain Real-World Significance: Discuss how the results can be applied in practice and their implications for decision-making. Consider Stakeholders: Consider the perspectives and needs of stakeholders who may use your findings. Address Limitations: Acknowledge the limitations of your study and potential sources of bias or error. Recommendations: Offer recommendations or suggestions based on your results. Effectively reporting and communicating significance not only ensures that your findings are understood but also contributes to their meaningful application in various fields and decision-making processes. Needing a summary for Statistical Significance? If you still have open questions and need visualizations of statistical significance calculation we got you! Watch our Research Director Louise Leitsch giving an insightful and easy to understand talk on statistical significance in our Webinar! Statistical Significance Examples Understanding statistical significance is best achieved through concrete examples illustrating its practical application. Here are a few scenarios where statistical significance plays a crucial role: Medical Research In clinical trials, statistical significance determines whether a new drug or treatment is effective. Researchers compare the treatment group to a control group, analyzing outcomes like symptom improvement or recovery rates. If the results show statistical significance, it suggests that the treatment has a real and positive effect on patients' health. Example: A clinical trial for a new pain-relief medication finds that patients who received the drug reported significantly lower pain levels compared to those who received a placebo. This statistical significance indicates the drug's effectiveness. Marketing Campaigns Businesses use statistical significance in A/B testing to evaluate the impact of different marketing strategies. By randomly assigning customers to two groups—one exposed to the new strategy and one to the old—the company can determine if the new strategy leads to statistically significant improvements in metrics like click-through rates, conversions, or revenue. Example: An e-commerce company tests two different email subject lines for a promotional campaign. The subject line with a higher open rate, statistically significant over a larger sample size, is chosen for the main campaign. Quality Control Manufacturers use statistical significance to ensure product quality and consistency. Through process control charts and hypothesis testing, they can detect significant deviations from established quality standards, leading to timely corrective actions. Example: A car manufacturer measures the tensile strength of steel used in car frames. If a batch of steel shows a statistically significant drop in strength, the manufacturer investigates and addresses the issue to maintain safety standards. These real-world examples showcase the diverse applications of statistical significance in various fields, highlighting its importance in making data-driven decisions, conducting meaningful research, and achieving desired outcomes. Conclusion for Statistical Significance Statistical significance is a powerful tool that helps us separate meaningful insights from random noise in data. It plays a crucial role in scientific research, decision-making, and various fields like medicine, business, and social sciences. By understanding its definition, importance, and applications, you can make more informed choices and draw reliable conclusions based on data. Remember, statistical significance is just one piece of the puzzle. It should always be considered alongside effect sizes, practical implications, and contextual factors to make well-rounded decisions. So, whether you're analyzing data, conducting experiments, or interpreting research findings, keep the principles of statistical significance in mind to enhance the credibility and validity of your results. How to Determine Statistical Significance in Minutes? Introducing Appinio, the real-time market research platform that makes statistical significance analysis a breeze. Appinio empowers businesses to obtain instant consumer insights, enabling lightning-fast, data-driven decisions. Forget the heavy lifting in research and tech; with Appinio, you can focus on what truly matters – making rapid, informed choices for your business, backed by real-time consumer data. Say goodbye to the stigma of dull, intimidating, or expensive market research. Swift Insights: From questions to insights in minutes, Appinio accelerates your path to statistical significance. User-Friendly: No need for a Ph.D. in research – our intuitive platform is designed for everyone. Global Reach: Reach your target group with precision from 1200+ characteristics, and survey them in over 90 countries. Get facts and figures 🧠 Want to see more data insights? Our free reports are just the right thing for you! Go to reports
14945
https://tm.substack.com/p/sat-vocab-pedestrian
Most Important SAT Vocabulary Pedestrian: Definition & Meaning for the SAT ⚡️ PEDESTRIAN most nearly means: (A) elevated; (B) dangerous; (C) memorable; (D) unimaginative. 👉 Answer + examples, pronunciation, and full SAT explanation inside. Jul 21, 2025 3 Share TL;DR: Pedestrian means ordinary or dull; lacking imagination. Note that the word is related to the noun pedestrian, which means someone who is walking. Learn more about the word roots of the word pedestrian. ℹ️ Part of Speech of Pedestrian pedestrian is an ADJECTIVE. 🗣️ Pronunciation of Pedestrian pedestrian is pronounced /pə.ˈdɛs.tɹi.ən/ or puh-DES-tree-uhn. 📚️ Definition of Pedestrian Lacking imagination, excitement, or originality; dull and ordinary. Example: pedestrian prose that puts readers to sleep. Performed on foot; related to walking. Example: a pedestrian crosswalk. 📰 Examples of Pedestrian Here are some examples of the word pedestrian: The rom-com's pedestrian plot and forced cuteness seemed trite to most audiences, who had seen a dozen similar movies with predictable endings. The presentation of the food was photo-worthy, but the actual taste of the food itself was disappointingly pedestrian, tasting like something from a traveling carnival. The graduation speech was so pedestrian that even the principal struggled to stay awake during the endless recitation of clichés about a bright future and the importance of courage. ✅ Quiz answer Answer to the question above: D, unimaginative. Explanations: A doesn't work; elevated means raised or lofty, which is the opposite of ordinary. B is incorrect; dangerous has nothing to do with being dull or ordinary. C is wrong; memorable would be the opposite of pedestrian—something that stands out rather than being ordinary. 🚀 Learn more! My colleague Robert’s SAT Vocab book on Amazon Dr. P’s guide to building a perfect college list Vocab list of the words that appeared on the Nov 2024 SAT test Quizlet SAT vocab flashcards for Nov 2024 3 Share Discussion about this post No posts Ready for more? © 2025 Erin Billy Privacy ∙ Terms ∙ Collection notice Start writingGet the app Substack is the home for great culture
14946
https://pweb.fbe.hku.hk/~pingyu/6066/Slides/LN3_Convex%20Sets%20and%20Concave%20Functions_slides.pdf
Ch03. Convex Sets and Concave Functions Ping Yu Faculty of Business and Economics The University of Hong Kong Ping Yu (HKU) Convexity 1 / 21 1 Convex Sets 2 Concave Functions Basics The Uniqueness Theorem Sufficient Conditions for Optimization 3 Second Order Conditions for Optimization Ping Yu (HKU) Convexity 2 / 21 Overview of This Chapter We will show uniqueness of the optimizer and sufficient conditions for optimization through convexity. To study convex functions, we need to first define convex sets. Ping Yu (HKU) Convexity 2 / 21 Convex Sets Convex Sets Ping Yu (HKU) Convexity 3 / 21 Convex Sets Convex Combination, Interval and Convex Set Given two points x,y 2 Rn, a point z = tx+ (1t)y, where 0  t  1, is called a convex combination of x and y. The set of all possible convex combinations of x and y, denoted by [x,y], is called the interval with endpoints x and y (or, the line segment connecting x and y), i.e., [x,y] = ftx+ (1t)y j 0  t  1g. - This definition is an extension of the interval in R1. Definition A set S  Rn is convex iff for any points x and y in S the interval [x,y]  S. [Figure here] A set is convex if it contains the line segment connecting any two of its points; or A set is convex if for any two points in the set it also contains all points between them. Ping Yu (HKU) Convexity 4 / 21 Convex Sets Examples of Convex and Non-Convex Sets Figure: Convex and Non-Convex Set Convex sets in R2 include triangles, squares, circles, ellipses, and hosts of other sets. The quintessential convex set in Euclidean space Rn for any n > 1 is the n-dimensional open ball Br(a) of radius r > 0 about point a 2 Rn, where recall from Chapter 1 that Br(a) = fx 2 Rn j kxak < rg. In R3, while a cube is a convex set, its boundary is not. (Of course, the same is true of the square in R2.) Ping Yu (HKU) Convexity 5 / 21 Convex Sets Example Prove that the budget constraint B = fx 2 X : p0x  yg is convex. Proof. For any two points x1,x2 2 B, we have p0x1  y and p0x2  y. Then for any t 2 [0,1], we must have p0 [tx1 + (1t)x2] = t p0x1 + (1t) p0x2   y. This is equivalent to say that tx1 + (1t)x2 2 B. So the budget constraint B is convex. Ping Yu (HKU) Convexity 6 / 21 Concave Functions Concave Functions Ping Yu (HKU) Convexity 7 / 21 Concave Functions Basics Concave and Convex Functions For uniqueness, we need to know something about the shape or curvature of the functions f and (g,h). A function f : S ! R defined on a convex set S is concave if for any x,x0 2 S with x 6= x0 and for any t such that 0 < t < 1 we have f(tx+ (1t)x0)  tf(x) + (1t)f(x0). The function is strictly concave if f(tx+ (1t)x0) > tf(x) + (1t)f(x0). [Figure here] A function f : S ! R defined on a convex set S is convex if for any x,x0 2 S with x 6= x0 and for any t such that 0 < t < 1 we have f(tx+ (1t)x0)  tf(x) + (1t)f(x0). The function is strictly convex if f(tx+ (1t)x0) < tf(x) + (1t)f(x0). [Figure here] Why don’t we check t = 0 and 1 in the definition? Why the domain of f must be a convex set? (Exercise) The negative of a (strictly) convex function is (strictly) concave. (why?) There are both concave and convex functions, but only convex sets, no concave sets! Ping Yu (HKU) Convexity 8 / 21 Concave Functions Basics Figure: Concave Function A function is concave if the value of the function at the average of two points is greater than the average of the values of the function at the two points. Ping Yu (HKU) Convexity 9 / 21 Concave Functions Basics Figure: Convex Function A function is convex if the value of the function at the average is less than the average of the values. Ping Yu (HKU) Convexity 10 / 21 Concave Functions Basics Calculus Criteria for Concavity and Convexity Theorem Let f 2 C2(U), where U  Rn is open and convex. Then f is concave iff the Hessian D2f(x) = 0 B B B @ ∂2f(x) ∂x2 1  ∂2f(x) ∂x1∂xn . . . ... . . . ∂2f(x) ∂xn∂x1  ∂2f(x) ∂x2 n 1 C C C A is negative semidefinite for all x 2 U. If D2f(x) is negative definite for all x 2 U, then f is strictly concave on U. Conditions for convexity are obtained by replacing "negative" by "positive". The conditions for strict concavity in the theorem are only sufficient, not necessary. - if D2f(x) is not negative semidefinite for all x 2 U, then f is not concave; - if D2f(x) is not negative definite for all x 2 U, then f may or may not be strictly concave (see the example below). Notations: For a matrix A, A > 0 means it is positive definite, A  0 means it is positive semidefinite. Similarly for A < 0 and A  0. Ping Yu (HKU) Convexity 11 / 21 Concave Functions Basics Positive (Negative) Definiteness of A Matrix An n n matrix H is positive definite iff v0Hv > 0 for all v 6= 0 in Rn; H is negative definite iff v0Hv < 0 for all v 6= 0 in Rn. Replacing the strict inequalities above by weak ones yields the definitions of positive semidefinite and negative semidefinite. - Usually, positive (negative) definiteness is only defined for a symmetric matrix, so we restrict our discussions on symmetric matrices below. Fortunately, the Hessian is symmetric by Young’s theorem. The positive definite matrix is an extension of the positive number. To see why, note that for any positive number H, and any real number v 6= 0, v0Hv = v2H > 0. Similarly, the positive semidefinite matrix, negative definite matrix, negative semidefinite matrix are extensions of the nonnegative number, negative number and nonpositive number, respectively. Ping Yu (HKU) Convexity 12 / 21 Concave Functions Basics Identifying Definiteness and Semidefiniteness For an n n matrix H, a k k submatrix formed by picking out k columns and the same k rows is called a kth order principal submatrix of H; the determinant of a kth order principal submatrix is called a kth order principal minor. The k k submatrix formed by picking out the first k columns and the first k rows is called a kth order leading principal submatrix of H; its determinant is called the kth order leading principal minor. A matrix is positive definite iff its n leading principal minors are all > 0. A matrix is negative definite iff its n leading principal minors alternate in sign with the odd order ones being < 0 and the even order ones being > 0. A matrix is positive semidefinite iff its 2n 1 principal minors are all  0. A matrix is negative semidefinite iff its 2n 1 principal minors alternate in sign so that the odd order ones are  0 and the even order ones are  0. Ping Yu (HKU) Convexity 13 / 21 Concave Functions Basics Examples f(x) = x4 is strictly concave, but its Hessian is not negative definite for all x 2 R since D2f(0) = 0. The particular Cobb-Douglas utility function u(x1,x2) = px1 px2, (x1,x2) 2 R2 +, is concave but not strictly concave. First check that it is concave. D2f(x) = 0 B @ 1 2  1 2  px2 p x3 1 1 2 1 2 1 px1 px2 1 2 1 2 1 px1 px2 1 2  1 2  px1 p x3 2 1 C A. Since 1 2  1 2  px2 q x3 1  0, 1 2  1 2  px1 q x3 2  0 and 0 @1 2  1 2  px2 q x3 1 1 A 0 @1 2  1 2  px1 q x3 2 1 A1 2 1 2 1 px1 px2 2 = 0 for (x1,x2) 2 R2 +, u(x1,x2) is concave. Let x2 = x0 2 = 0, x1 6= x0 1; then u(tx1 + (1t)x0 1,0) = 0 = tu(x1,0) + (1t)u(x0 1,0), so u(x1,x2) is not strictly concave. Ping Yu (HKU) Convexity 14 / 21 Concave Functions The Uniqueness Theorem Local Maximum is Global Maximum Consider the mixed constrained maximization problem, i.e., max x f(x) s.t. x 2 G   x 2 Rnjg(x)  0,h(x) = 0 . Theorem If f is concave, and the feasible set G is convex, then (i) Any local maximum of f is a global maximum of f. (ii) The set argmaxff(x)jx 2 Gg is convex. In concave optimization problems, all local optima must also be global optima; therefore, to find a global optimum, it always suffices to locate a local optimum. Ping Yu (HKU) Convexity 15 / 21 Concave Functions The Uniqueness Theorem The Uniqueness Theorem Theorem If f is strictly concave, and the feasible set G is convex, then the maximizer x is unique. Proof. Suppose f has two maximizers, say, x and x0; then tx+ (1t)x0 2 G, and by the definition of strict concavity, for 0 < t < 1, f(tx+ (1t)x0) > tf(x) + (1t)f(x0) = f(x) = f(x0). A contradiction. If a strictly concave optimization problem admits a solution, the solution must be unique. So finding one solution is enough. Ping Yu (HKU) Convexity 16 / 21 Concave Functions The Uniqueness Theorem Example: Consumer’s Problem - Revisited Does the consumer’s problem max x1,x2 px1 px2 s.t. x1 + x2  1,x1  0,x2  0 have a solution? Is the solution unique? The feasible set G = fx1 + x2  1,x1  0,x2  0g is compact (why?) and px1 px2 is continuous, so by the Weierstrass Theorem, there exists a solution. The solution is unique, (x 1,x 2) =  1 2, 1 2  . But from the discussion above, px1 px2 is not strictly concave for (x1,x2) 2 R2 +. Actually, even if we restrict (x1,x2) 2 R2 ++, where R++  fxjx > 0g, px1 px2 is NOT strictly concave. Check for t 2 (0,1),x1 6= x0 1 and/or x2 6= x0 2, q tx1 + (1t)x0 1 q tx2 + (1t)x0 2  tpx1x2 + (1t) q x0 1x0 2 ( ) tx1 + (1t)x0 1 tx2 + (1t)x0 2    tpx1x2 + (1t) q x0 1x0 2 2 ( ) x1x0 2 + x0 1x2  2 q x1x2x0 1x0 2 ( ) q x1x0 2 q x0 1x2 2  0 with equality holding when x2/x1 = x0 2/x0 1 (what does this mean?). In summary, the theorem provides only sufficient (but not necessary) conditions. Ping Yu (HKU) Convexity 17 / 21 Concave Functions The Uniqueness Theorem Sufficient Conditions for Convexity of G Problem: how to guarantee that G is convex? Given a concave function g, for any a 2 R, its upper contour set fxjg(x)  ag is convex. Why? Given two poitns x and x0 such that g(x)  a and g(x0)  a, we want to show that for any t 2 [0,1], g(tx+ (1t)x0)  a. Since g is concave, g(tx+ (1t)x0)  tg(x) + (1t)g(x0)  ta+ (1t)a = a. Given a function h, to guarantee that fxjh(x) = ag is convex, we require h to be both concave and convex. - A function h is both concave and convex iff it is linear (or, more properly, affine), taking the form h(x) = a+ b0x for some constants a and b. In summary, since G = \J j=1  xjgj(x)  0 \\K k=1 fxjhk(x) = 0g  , if gj, j = 1,:::,J, is concave, and hk, k = 1, ,K, is affine, then G is convex.1 1It is not hard to show that intersection of arbitrarily many convex sets is convex. Ping Yu (HKU) Convexity 18 / 21 Concave Functions Sufficient Conditions for Optimization Theorem (Theorem of Kuhn-Tucker under Concavity) Suppose f, gj and hk, j = 1, ,J, k = 1, ,K, are all C1 function, f is concave, gj is concave, and hk is affine. If there exists (λ ,µ) such that (x,λ ,µ) satisfies the Kuhn-Tucker conditions, then x solves the mixed constrained maximization problem. We do not need the NDCQ for this sufficient condition of optimization; the NDCQ is only required for necessary conditions. Example In the consumer’s problem above, g1 (x) = x1,g2 (x) = x2 and g3 (x) = 1x1 x2 are all affine, so G is convex. Since u(x1,x2) = px1 px2 is concave, the solution to the Kuhn-Tucker conditions is the global maximizer. Ping Yu (HKU) Convexity 19 / 21 Second Order Conditions for Optimization Second Order Conditions for Optimization Ping Yu (HKU) Convexity 20 / 21 Second Order Conditions for Optimization Second Order Conditions for Optimization In the LN, we use the "bordered Hessians" to check a solution to the FOCs is a local maximizer or a local minimizer. In practice, this may be quite burdensome. As an easy (although less general) alternative, we can employ the concavity of the objective function f to draw the conclusion. - if f is strictly concave at x (or more restrictively, if D2f(x) < 0), then x is a strict local maximizer. - if f is strictly convex at x (or more restrictively, if D2f(x) > 0), then x is a strict local minimizer. Ping Yu (HKU) Convexity 21 / 21
14947
http://oerior.uniud.it/wp-content/uploads/2018/12/Koekoek1992.pdf
JOURNAL OF APPROXIMATION THEORY 69, 55-83 (19%) Generalizations of a g-Analbgue of Laguerre Polynomials ROELOF KOEKOBK Delft University of Technology, Faculty of Technical Mathematics nw! Infonnatics, Mekelweg 4, 2628 CD Delft, The Netherlands Communicated by Alphonse I-‘. Magnus Received January 9, 1990; revised March 5, 1991 We study polynomials {L~““M1~“‘~“N(x; q))FzO orthogonal with. respect to the inner product where c( > - 1, N is an integer, and M, > 0 for all Y E {O, 1, 2, . . . . N). These polyno- mials are y-analogues of the polynomials (L2”~.M’. ,““(n)],“=,, orthogonal with respect to the (Sobolev) inner product x%-“f(x) g(x) dx+ ; M,f’“‘(O) g”‘(O). v-0 We prove the orthogonality relation for which we give a discrete form (q-integral) too. We give a representation as a basic hypergcomctric series, a recurrence relation is derived, a Christoffel-Darboux type formula and a second order q-difference equation satisfied by these new basic orthogonal polynomials. 0 1992 Academic Press. Inc. 1. INTRODUCTION In [I23 we studied the polynomials {L~Mo~M1,...MN(~) > ,“= O which are orthogonal with respect to the (Sobolev) inner product where o( > - 1, N is an integer, and M, > 0 for all v E {O, 1, 2, . . . . N]. OO21-9045/E! $3.043 Copyright G 1992 by Academic Press, Inc. All rights of reproduction in any form reserved. 56 ROELOFKOEKOEK polynomials are generalizations of the classical Laguerre polynomials {L?‘(x)},“= 0 and can be defined by for certain coefficients { Ak}kN_tgl. The special case N= 1 was treated in [1.5]. Note that for N> 0 the inner product defined by (1.1) cannot be obtained from a weight function. That is why the polynomials iL ~~o&“>-.~~N(~)},~ h ave some properties which differ from the well- known properties of the classical orthogonal polynomials (see for instance [3, 181). For N = 0 these polynomials reduce to the polynomials { L:“(~)}zz 0 found by Koornwinder in [ 161. The most important proper- ties of Koornwinder’s generalized Laguerre polynomials can be found in [lo]. In [S] J. Koekoek and R. Koekoek proved that these polynomials bTwlLl . g m eneral satisfy an infinite order differential equation. For integer values of a this differential equation is of order 2a + 4. In we studied a q-analogue of Koornwinder’s generalized Laguerre polynomials. These polynomials ( L;“(~; q)},“ 0 are generalizations of Moak’s q-Laguerre polynomials described in [ 171. In [ 111 we studied further generalizations of these q-Laguerre polyno- mials. The polynomials described in [ 111 are q-analogues of the polyno- mials (L~Mo,M’,...,MN(~)}nm_O in the special case N= 1. Now it is the aim of the present paper to find the q-analogues of the polynomials {L y40,Ml,...3 MN(X)};sO in the general case. These q-orthogonal polynomials will be denoted by (L:Mo,M’,-.,MN(~; q)}zzo. 2. SOME BASIC FORMULAS First we summarize some definitions and formulas we need from the q-theory. For details the reader is referred to . We always take 0 < q < 1 in the sequel. The q-shifted factorial is defined by (a; qh = 1 (a; q)n= (1 --a)(1 -aq)(l -aq2) ..+(l -aq+l), n= 1,2, 3, . . . . For negative subscripts the q-shifted factorial is defined by 1 (a;q)-n=(l-aq-n)(l-aq-“+l)...(l-aq-’)’ a # q, q2, q3, . . . . q”, n = 1, 2, 3, . . . . (2.1) q-ANALOGUES OF LAGUERRE POLYNOMIALS Further we have for all integers IE (6 q)m (a; q)n = (aq”; q)m’ where (a; q)m := Jy (1 - aqk), k=O We will use two simple formulas involving these q-shifted factorials: (ai q)ntk= (a; q)n WY qhc, k, Iz = 0, 1, 2, . . . (2.2) and (a-‘q’-“; q),l= (--a-‘)” q-(+I; q),, a#O, n=o, I,&.... Wt We have a q-analogue of the binomial coefficient given by n [I (4; 4)n k, = (4; q)k (4; q)n- k’ It is easy to see that ‘k’:: [;I,= (L) The basic hypergeometric series or q-hypergeometric series is define where (al, a,, . . . . 4; 4) ?I := (a,; 91, ia2;4);~(%; 4)n. The q-hypergeometric series is a q-analogue of the hypergeometric series since 58 ROELOF KOEKOEK The q-binomial theorem is a q-analogue of Newton’s binomial series. If a = 0 this leads to e,(z) := 1 @cl (I 1 1 ” q;z =f2.Lzp n=O(4; 4L (2; 4Lo’ I4 < 1, (2.5) which can be seen as a q-analogue of the exponential function since li: e,((l -q)z)=e’. We will use another summation formula (2.6) which is often referred to as the q-Vandermonde summation formula. The q-difference operator D, is defined by f(x) -.04x) D,f(x):= i Cl-qb ’ x#O (2.7) f’(O), x = 0, where the functionfis differentiable in a neighbourhood of x = 0. We easily see that $e D&4 =.7(x). For functions f analytic in a neighbourhood of x = 0 this implies (D;f)(O) := (D,W- ‘f))(O) J-‘“‘(o) (4; 4)n n! (1-q)“’ n=l,2,3, . . . . (2.8) An easy consequence of the definition (2.7) is D;L-.fWl = Y”(D;~)(Yx), y real and n = 0, 1, 2, . . . . Further, we easily find from (2.7) OLtW s(x)1 =f(qx) D, g(x) +&I RJW (2.9) (2.10) +ANALOGUES OF LAGUERRE POLYNOMIALS 59 which is often referred to as the q-product rule. This q-product rule can generalized to a q-analogue of Leibniz’ rule where [z], denotes the q-binomial coefficient defined by (2.4). The q-integral of a function f on (0, co) is defined by J cc f(t) d,t := (l-q) f bkkk? 0 k=--co (2.12) provided that the sum on the right-hand side converges. This definition of the q-integral on (0, co) is due to F. H. Jackson. See [S]. For more details concerning q-integrals the reader is referred to Section 1.11 of the book . It can be shown that ‘d; lom f(t) d,t = j-” f(t) dt 0 for functions f which satisfy suitable conditions. For details the reader is referred to and to references given in . In [S] Jackson defined a q-analogue of the gamma function: (2.13) Note that this q-gamma function r,(x) satisfies the functional equation r,(x+l)=z&(l), &(l)=I. Jackson also showed that $n& z-,(x) = T(x). For details the reader is referred to [l] and to Section 8.10 of . In R. Askey gave a proof of the following integral formula which is due to Ramanujan: s cc XZ 0 (-(l-qkq), dx~r(-ww- u,o r,c-~, ’ a> - 1. (214) 60 ROELOFKOEKOEK If CI = k is a nonnegative integer we have to take the analytic continuation lim r(-a)r(a+ 11, lim (-m+k)r(-a) a-k rqc-4 .-k(-a+k)r,(-ct)T(CI+l) _(-l)k(q-k;q)klnq-lT(k+l) k! (1 -qp+1 = (q; q)k qfck’l)ln q-l (l-qp+l . For the residue of the q-gamma function the reader is referred to formula (1.10.6) in . We remark that we have in view of (2.5) 1 (-(I -4b;qL = e4( - (1 - q)x) + eCx as qT1. Finally we have a basic bilateral series which is defined by The special case r = s = 1 can be summed: = (4, a -lb, az, a-‘z-‘q; q)m (6 a-‘q, z, a -‘z-lb; q)m ’ (a-lb/ < IzI < 1. (2.15) This summation formula is due to Ramanujan. A proof of this summation formula can be found in [2,4]. 3. THE DEFINITION AND PROPERTIES OF THE q-L.4GuEnRE POLYNOMIALS In this section we state the definition and some properties of the q-Laguerre polynomials (,Cp’(x; q)} ,“= 0. These q-Laguerre polynomials were studied in detail by D. S. Moak in . For more details concerning these polynomials the reader is referred to [9, 171. Let CI > - 1. The q-Laguerre polynomials {Lr)(x; q)}rso are defined by y=ANALOGUES OF LAGUERRE POL~Q~~AL§ pyx. q) = (@+ I; q)n n ) (9; 4L We easily see that lim L(‘(x. q) = L’“‘(x) 4t1 n 7 II ) where L:‘(x) denotes the classical Laguerre polynomial. By using (2.3) we obtain (l-4)” L’qx. q) = ( _ 1)” q”‘” + a) ___ y n 3 (4; q)n +lower order terms, n = 0, I, 2, ..~. The orthogonality relation for these q-Laguerre ~o~yn~rn~a~s (Lff’(x; q)>,“, can be written as rqc-4 m s Xa L’“‘(x. q) L”““(x; q) dx T(-a)T(a+l) 0 (-(I-q)x;q), m ’ n = (@+l; 4) 6 (4; q)n 4” mn (3.3 1 This orthogonality relation can also be written as f,_il, (-c(1$‘y;4,;q)Lp’(~4k:4!LP’~cuk~~~ = (f + l; 4L 6 (4; 4) 4” mn’ c > 0, (3.4) where the normalization factor A equals A= 2 kmtk kChrn Fcllo~k~4)n~ This can be shown by proving that Tq(6-a) m i X T(--a)T(a+l) 0 (-(n-q)x;q), P(x) dx 62 ROELOF KOEKOEK for every polynomial P. To do this take for instance P(x) = ( - (1 - q)x; q)m where m is a nonnegative integer. Then we easily see that both sides of (3.5) equal q-fM+‘)M. By using the fact that we obtain from Ramanujan’s sum (2.15) with a = - c(1 -q), b = 0, and z=qa+l, = (4, -41 -qkP1, -c-y1 -4)-l q-E; q)oo a+1 (4 7 -c(l -q), -c-1(1 -4)-l q; q), . (3.6) Note that (3.4) can also be stated in terms of the q-integral defined by (2.12): 1 co s ta - W(ct; q) L’“‘(ct; q) d,t A o (-c(l -q)t;q), m n = (4”+ ‘; q)n 6 (4; q)n 4” mn’ c > 0, where A equals ta A:=Jom (-c(l-q)t;q), d, t. (3.8) (3.7) We remark that the orthogonality relations (3.3), (3.4), and (3.7) are the same since we work in the space of polynomials. This allows us to define for polynomials f and g, CL g> = q--cl) m s X U r(-a)T(ol+l) 0 (-(1-4)x;q)m f(x) g(x) dx =- ; ,_ii, ,vc(~~q;;k; q), f(Cqk) g(cqk) 1 cc s ta =- A” o (-c(l-q)t;q),f(ct)g(ct)dqt’ c > 0, (3.9) q-ANALQGUES OF LAGUERRE POLYNOMIALS 63 where A and A are defined by (3.6) and (3.8), respectively. However, since c is an arbitrary positive constant, the relations (3.4) and (37) give rise to infinite many different weight functions. So the Stieltjes moment problem for the q-Laguerre polynomials {Li,a)(x; q)) zzO is details the reader is referred to . (In particular, see ak’s remarks on page 21 and page 2.5 in .) As a q-analogue of L@‘(O) = (I: “) we have ?I Lc$(o;q)Jq”+l;q)., (4; 4)n n=O, 1,2,.... The q-Laguerre polynomials satisfy a second order q-difference equation which can be stated in terms of the q-difference operator defined by (2.7) as x D;L;‘(x; q) + i I-@+’ l-q -q 1-q” + l-q q E + Tp(qx; q) = 0. Further we have a three term recurrence relation l-f+’ -4l%; 4)= (1 -q)q2n+or+l -ei 16x; 4) [ 1-q n+u+l l-q” - (1 -q)q2n+l+’ + (1 -q)qn+a 1 -(f+” + (1 -q)qpf+= JT! 1(x; 4) and a Christoffel-Darboux formula (x _ y) ‘“;(y $lL? jy 4%; 411, ;p;; ;; LP’(Yi 4) 7 n k=O k 1 -qn+j = (1 -q)qn+a+i lx% 4) Lj,2 l(Yi 41 ' - L:i 1(x; 4) L~'(y; q)]. (3.12) If we divide by x - y and let y tend to x we obtain the confluent form of the Christoffel-Darboux formula 640/69/1-S ROELOF KOEKOEK (4 a+ l; q), i qk(4; q)k VPk 4) > (4; q)n k=O w+l; q)k 1 -qn+l = (1 -q)qn+a+l [ qy 1(x; 4) i -qYx; 4) - L’“‘(x; q) 2. Lj$ ,(x; q) n dx 1 . (3.13) The q-analogue of the well-known differentiation formula @I,?)(X) = (- l)k LFJ,$(x) yields D;LIp”‘(x; q) = (- l)k qk(a+%r?;)(qkX; q), k = 0, 1, 2, . . . . n, n = 0, 1, 2, . . . . (3.14) 4. THE DEFINITION AND THE ORTH~GONALITY We will try to determine the polynomials { L~Mo’M1~~.~,MN(~; q)},“= 0 which are orthogonal with respect to the inner product i (f, g>,= + f w@~fww~ g)(O), V=O CI> - 1, NE (0, 1, 2, . ..}. and M,aO for all VE (0, 1, 2, . . . . N}, (4.1) where the inner product ( , ) is defined by (3.9). We will show that these orthogonal polynomials can be defined by Ntl ~~,Mo,MI....,M~‘(~; q) = C q-k’a+k)~k(~;~($)(q kX; q), n k=O n=O, 1,2, . . . (4.2) for some real coefficients {Ak}kN,+OI . Moreover, we will prove the orthogonality relation ;$I qnk-(‘)Ak) &Y,,, m,n=0,1,2 ,.... (4.3) First we will determine the polynomials {I,: MO,M’...,MN(~; q) },“ o which are orthogonal with respect to the inner product (4.1). The Gram-Schmidt fj-ANALOGUES OF LAGUERRE POLYNOMIALS $5 orthogonalization process assures us that such a set of polynomials exists with degreeCL~Mo’M1’...‘MN(x; q)] = IZ. So we may write by using (3.14) k(a+k)Ak(D~L~))(q~kX; q), n=o, 1, 2, ...9 (4.4) where Lr)(x; q) denotes the q-Laguerre polynomial defined by (3.1) the coefficients {Ak)& are real constants which may depend on n, oe, M Mm 1, ...” and q. Moreover, each polynomial I,~A40~M1’...‘MN(~; q) is unique except for a multiplicative constant. We will choose this constant such that L y>o . ...> yx; q) = Lp)(& q), By using the representation (4.4) and (3.2) we easily see that the coef- ficient k, of Y in the polynomial L;MoM1~..~~Mn(~; q) equals k,=(-lyq”‘“+“‘- (l-dnA (Ipi q), a~ This implies that A, # 0. Let p(x) = xm. First of all we choose L~Mo’M’,...,~N(~; q) = 1 for the moment and we will try to determine the polynomials {jyQ.~l....> wyx; q)},“=l in such a way that (p(x), k>l”O,M1’...‘MN(~; q))4 = 0 for ail m E (0, 1, 2, . . . . n - 1). We use the definition (3.1) of the q-Laguerre polynomials and amanujan’s integral formula (2.14) to obtain for k = 0, I, 2, “..9 N and m, n = 0, 1, 2, . . . i m x”+” LjpL+,“‘(x; q) dx 0 (-(I -41% q~‘zc Jq X s co yim-tj 0 (-Cl-qb;q), dx 66 ROELOF KOEKOEK Now we use the definition (2.13) of the q-gamma function and the identities (2.2) and (2.3) to find I-,(-a)r(-a-m-j)T(a+m+j+l) r(--)T(a+l)r,(-a---j) = (1 -q)-m-jq -(~+i)~-(~)q-(~+m+l)i-(~)(q.+l;q) (q~+m+i;q), m I’ Hence, by using the summation formula (2.6) we find l-,(-cc) m s Xafm r(-a)T(cr+l) 0 (-u-q)x;q), L!f_+,k’(x; q) dx k = 0, 1, 2, . . . . n, m, II = 0, 1, 2, . . . . (4.6) Now we have by using (4.4) and (4.6) q-4 a, s X”+M r(-a)T(a+ 1) ,rd’h~~,...&v(~; q) dx 0 (-(I -q)x;q), =(4”+1;4)mq-(l+l)m-(~) n (1-q)” k;. C-1)” x (qk-“; q)n-k A (4; q)n-k k’ m, n = 0, 1, 2, . . . . First we consider the case that n > N + 2 and N+ 1 < m < n - 1. Then it is clear that (qP)(o) = 0 for all v E (0, 1, 2, . . . . N}. Since (4k-m;qLk=o for k=O, 1, 2, ..,, m and mN+2. Hence, the expression (4.4) reduces to (4.2) for n 2 N + 2. For pz d N-I- I, (4.2) is trivial. In that case the coefficients {Ak)FLj+ r can be chosen arbitrarily. This proves that the polynomials { L>Mo’“‘3--MN(~; q) j ,“= o can be defined by (4.2) for all n E (0, 1, 2, . ..>. In order to define the coefficients (Ak)kN,+i we now have to consider for n = 1, 2, 3, .~. a M&M!,..., (P(X), L, MN(x; 9) )4 = 0 for m = 0, I, 2, .~., min(n - 1, N). (4.7) Since p(x) = xm we have by using (2.8) v=O, 1, 2, . . . . N, Hence, (4.7) implies, by using (4.1), (4.2), (4.6), (3.14) and (3.10), that (4 for in = 0, 1, 2, . . . . min(n - 1, N). We remark that the definition (2.1) implies that (qY; 4)-n (1-q-“+‘)(l-q-“+2)...(&q~) (q;q)-, =(l-q”-“)(l-qY~n+l)...(l-qy--l)= 640/69/l-6 68 ROELOF KOEKOEK for y - n > 0 and IZ = 1, 2, 3, . . . . Hence (qk-“; qL-k (4;4)n-k = w+k+‘E-+l; q)n-k--m = o (4; q)n-k--m for k 2 n + 1 and m = 0, 1, 2, . . . . min(n - 1, N). Note that we have by using (2.4) (4k-m; q)n-k (4; q)n-k =[yy],=[;:;I:], = wk+‘; 4)k-m-1 (4; q)k--m- 1 ’ m -c n. This allows us to write (4 a+l; 4Ll (1-q)” q -ca+l,m-(y) “y (-l)k ‘““;qy’,;iq)kI,, Ak k=m+l > k m 1 (4; qL?z N+l i-(-l)” (1-q)“4 m(m + “,j$f m k&o (-Uk dq z+k+m+l; q)n-&m (4; q)n-k--m qmkA k = 0, for m= 0, 1,2, . . . . min(n - 1, N). However, we will define the coefficients {Ak}f’zt in such a way that (4 =+ I; 4L (s;q)nz q -(a++-(~) N+l kz;+l (-l)k(qn;q-;;4)~m-1 A, 3 km1 N+l +(-l)“q”‘“+“‘M, c (-ly(q u+k+m+l; q)n-&-m (4; !?)n-k--m qmkA, = 0, k=O (4.8) for m = 0, 1, 2, . . . . N is valid for all n E (0, 1, 2, . . . }. For n 9 N + 1 this is the same system of equations. For n < N we have added the following conditions on the arbitrary coefficients {Ak)c$+ 1 : (4 =+I; 4)m -(+l)m-(l;) Ni’ (-1)” (f;q;;;4)kIm-l Ak k=m+l 2 k m 1 N+l +(-lrq m(m+~)Mm 1 (qk (qn+k+m+l;q)~~k-~qmk~~k=o, k=O (4; q)n-k--m q-ANALOGUES OF LAGUERRE POLYNOMIALS where m = rr, n + 1, n + 2, . . . . N. Since we have by using (2.3) for k b n + 1 Wkfl; q)k-n-1 = (-l)k--n-l q-(k’n’((i;q)k-n-l =(-l)k-“-lq “k-(~)-ol:l)(q;q)k-n~l this implies (4 ‘+l; qh q-b+l)n-(;)-(“;l) N$1 (4; q)n qk-(;)Ak=qn(n+l)M A 72 0 k=n+l (4 a+l; 4)n+i (4;4)n+i q -((a+l)(n+i)-(n~i) AJ+1 x C i = 1, 2, 3, . ..) N-n. k=n+i+l (-l,(q~~i'd,:"~li-'A,=O, , k n I 1 This implies for n.... My& q) = 1 such that (4.2) also holds for n = 0. We remark that the system (4.8) of equations for the coefficients (Ak)kN=+d can be solved for every N. For instance, in [ 111 we found an exnlicit representation in the case N = 1. It would be a nice result to find an explicit formula for each coefficient Ak in general. However, in this paper we only nee the property (4.9). To complete the proof of the orthogonality relation (4.3) we note that it follows from (4.2), (3.2), and the orthogonahty we just proved that 70 ROELOFKOEKOEK Now we obtain from (4.1), (4.2), and (4.6) for m =n>N+ 1 (Xn, p%‘Ml,...> M.yx; q))q = (qa+l; 4111 q-bx+1)“-(;) IV+1 ,c, (-l) k (l-4)” (qk-“; q)n-k A (4; q)n-k k =(-I) ,, (@+l; q)n (lmq)” q --n(n+a+l) “i’ q,rk-($A k. k=O This proves (4.3) in the case that n>N+ 1. For IZ 6 N we find by using (4.9) SL q-(.+1)“-(;) n k:. (-l) k (qk-“; q)n-k (l-4)” (4; q)n-k Ak n(n+dMnAo =(-I) -~(n+u+~)N~lq‘+;)A k. k=O This proves (4.3). 5. ANOTHER REPRESENTATION The polynomials (L :Mo,M1,-,MN(~; q)}F=, given by (4.2) can also be written as Nfl L;Mo,M',-.,"'~N(~;~)= 1 q-k(a+2k)BkXk(D;:LlPL+k))(q-kX;q), (5.1) k=O where the coefficients {Bk}kN,igl are related to the coefficients {Ak}kN_fgl found in the preceding section in the following way Aizqti:‘)~~: q-kI~+k+il[k] i 4 x (qnek+‘; q)k--i (qa+k; q)i B (l-dk k, i = 0, 1, 2, . . . . N + 1 and q-ANALOGUESOF LAGUERRE POLYNOMIALS Cl- dk -(k;‘)+k(a+Zk) N+l Bk=(qu+k;q)kq ,4, w)“+k 4 (q”-j+‘; q)j-k A x(4 a+Zk+l; q)j-k j’ k=Q, 1, 2, . . . . M-t- 1, where the q-binomial coefficient is defined by (2.4). This can be shown by first proving and then using the following two relations involving q-Laguerre polynomials (qnmk+‘; q)k-i (@+k; q)i and xq-(;)-(u+k)i (D;Lf’)(q-“x; q), q-k’“+k)(D;L$))(q-kx; q) respectively, for k, n = 0, 1,2, . . . . The proof can be found in [13, 141. 6. REPRESENTATION AS BASIC HYPERGEOMETRIC SERIES If we write then it follows from (4.2) and (3.1), by using (2.2) and (2.3) that (q-“; 4L N+ l = w+l; 4) m+N+l k;. (~-n+m;~)k(~cl+k+m+l~~)~+I-k~nk-(”Ak~ Note that N+l F(z) := 1 (q-“2; q)k (qa+k+lz; q)N+l-k qnk-(‘;),A k=O 72 ROELOF KOEKOEK is a polynomial in z of degree at most N+ 1. The coefftcient of P+ ’ in F(z) equals (N+lKor+l)+yy) Nfl (-l)N+‘q c4 -(a+l)k--($)& k=O Note that it follows from (4.3) that k=O This implies that all zeros of F(z) can be written as (complex) powers of q. If N+l cq -@+l)k-(;)AkZo, k=O (6.1) then the polynomial F(z) has degree N + 1. In that case we may write Ntl F(qm)= c (q-“+“; q)k (qa+k+m+l; q),,T+l-k qnk-(%& k=O (l-qP”)(l-qfl’)...(l-qPN) xl4 Do+ 1; q)m (qP’+ 1; q), . . . (qh+ ‘; q)m (q? 4) m w; 9) . . . (4k 4) m m for some complex pi, j= 0, 1, 2, . . . . N. Hence, by using (4 a+l; 9) m+N+l= (@+l; q)N+l kf+N+2; q)m which follows directly from (2.2), we have qJw41....>MN(X; q) =(1-q80)(1-qp1)‘..(1-q8y) (4 ‘+l; q)N+l x tcf+ ‘; q)n (4; 4)n XN+2 Nf2 4 (6.2) q-ANALOGUESOFLAGUERREPOLYNQMIALS 73 If (6.1) is not satisfied, then F(z) is a polynomial of a degree less than N+ 1. In that case we find a representation as a ktjk basic hy~ergeometr~~ series where k < N+ 2 in a similar way. 7. A SECOND ORDER q-DIFFERENCE EQUATION In this section we will show that the polynomials (L:M03M1,...,MN(x; q)}F=,, satisfy a second order q-difference equation. The method found in can be applied in this case too. We prove the following theorem. THEOREM 1. The polynomials {L >Mo~M1-,MN(~; q))r=, satisfy a second order q-difference equation of the for, ,.. . x92(x) D;L;“O’M17..~~MN(X; q) - Pl(x)(D,L~MO,Ml....,MIV)(qX; q) 1-q” + l-q - PO(X) L y4o.m 1...) M.?qqx; q) =o, where PO(x), PI(x), and P(x) are polynomials with ( N+l Po(x)=q”+lAo c 4 nkP(‘)Ak XN+l flower order terms k=O ) ( N+l Pdx)=f+2Ao 1 q “k-(:)A, ~~+~+lower order terms k=O ) N+l P2(x) = A0 1 qnk- (:)Ak > XN+l + lower order terms k=O and 1-q afN+2 1-q P2(x). Pro04 We consider the q-difference equation (3.11) for the q-kaguerre polynomials. By using the fact that LF’(q-lx; q) = Lff’( x; 4) + 4-v -q) x(~qL;))(q-‘x; 9) which follows directly from (2.7), we write this q-difference equation (3.11) in the form q-2x(D;L:))(q-2x; q) + [ 1 -qa+l l-q -q (D L’“‘)(q-5. q) 4 n > 1-q” +--- l-qq OL + ‘L?‘(x; q) = 0. (7.4) 74 ROELOF KOEKOEK If we let D$ act on (7.4) and use the q-analogue of Leibniz’ rule (2.11) we obtain 1-q a+k+l x(D$+%p)(q-k-2x; q) + qk+2 n+a l-q -q x 1 x (D;+‘Lp)(q-k-‘x; q) + ’ -q”-k l-q q n+3k+3 x (lQp)(q-5; q) =o, k=O, 1, 2, . . . . (7.5) Now we consider the definition (4.2). We multiply by x and use (7.5) for k=N-1 to find xL~“O’M1,-.,MN(X; q) = 5 b/&)(D$;))(q-kX; q), (7.6) k=O where i bk(X) = q--k@+k)AkX, k=O, 1, 2 ,..., N-2 b,-,(x) = 4- (N-lI)(a+N--l)AN-lX --cl a+3N-(N+l)(a+N+l) l-qn-N+l AN,, 1-q bN(X) = 4- N(a+N)ANX---(N+l)(or+N) 1 -qm+N l-q -q “+‘x A,,,. 1 Now we multiply (7.6) by x and use (7.5) for k = N- 2 _I to obtain N-l X2L;Mo,M1,-,MN(X; q) = 1 b”,(x)(Dflp)(q k=O kx; 41, where Sk(x) = xbk(x), k=O, 1,2, . . . . N-3 &L&4=&,-&)- 1 - qn-N+2 1-q qa+3N-3bN(x) 1-q u+N-1 8,-1(x) = xbN- l(x) - qN l-q -q n+a~ bJx). 1 Repeating this process we finally obtain by using (7.5) for k=O x~Lyfo~~~~-JyX; q) =po(x) Lf’(x; q) +pJx)(D,Lp)(q-lx; q) (7.7) q-ANALOGUES OF LAGUERRE POLYNOMIALS for some polynomials pO(x) and pi(x) which satisfy pO(x) = A,xN + lower order terms 87.8) > xN + lower order terms. Now we use the q-product rule (2.10) to obtain from (7.7) 1 -qN XN-l~a,Mo.M~ . . . . . MN l-q n (qx; q) + XN D,L~“~~M’~.~++(X; q) = RJb(4 a%x; 4) + CPdX) + ~~Pl(x)~(~~~~))(x; 4) We multiply by x and replace x by q-lx to obtain l-qN XN~Of~.M~,...,M~ (l-qfqN n GGq)+q- N-lxN+ “(D,L yffolML....Iw.J)(q-lX; q) = q-lx(~,PcJ(q-‘x) Jy’(x; 4) +~-‘-mw’-4 + (~,Pl)(q-lx)l(~,L~‘)(q-“x; 4) + q-2xp1(q~lx)(D~L~))(q-2x; q). Now we use (7.4) and (7.7) to find .J+ l(o,~~“o,“~,...,M~)(~-l~; q) = rdx) LfVx; 4) + rl(xW,~9(q-‘x; qh (7.9) c 1-q” 4-1x(~,P,)(q~‘x)-l--y4~+1Pl(4-i”i l-qN --4l-qPo(X) Fl(X) = 4NQdq-‘x) + (qAk-‘~)l Nf, l-qa+l I-qN I -4 I l-q -q “+aX pl(q-lx)-q--- 1 1 -q Plb). 76 ROELOF KOEKOEK By using (7.8) and (7.10) we easily see that N+l nk--(& cq k xNf lower order terms k=l (7.11) xN+l +lower order terms. In the same way we obtain from (7.9) 1 -qN+l 1-q X N D LwWLML...,MN 4 n (x;q)+q- ‘xN+l(D2L 4 ;Mom...,MN)(q-lX; q) = D,ro(x) L?“(qx; q) + [rob) + D,G41 D,L%; q) + q-lrl(x)(D$L~‘)(q-‘x; q). Multiplying by x and applying (7.4) again gives us by using (7.9) XNf2 ( 4 D~L~Mo,MI,....MN)(~--~; q) = so(x) L;‘(x; q) + G9(D4L~‘)(q-‘x; q), (7.12) where so(x) = qN+2 1-q” x(D,ro)W’x)-- l-q q a+2rl(q-1X) I -4 2 l-qN+i l 1-q ro(x) (7.13) Sl(X) = 4 N+2xCroW’x) + (DqMq-141 -4 n+“X 1 r,(q-‘x) - q2 l-qN+l \ 1-q Tl(X). By using (7.11) we easily see that 1-q” a+3 IV+1 so(x) = - - l-q q cq nk-(‘)Ak > xNfl + lower order terms k=O N+l (7.14) si(x) = qn+at2 1 qnkp(:)Ak xN+2 +lower order terms. k=O > Elimination of (D,LF))(q-‘x; q) from (7.7), (7.9), and (7.12) gives us in view of (3.10) q-ANALOGUES OF LAGUERRE POLYNOMIALS 77 PO(X) Sl(X) -h(X) %(X) = x”Wx) ye(x) Sl(X) - f‘l(X) %(X1 = l-q” - XN+ ‘B(fyx) 1-q (7.15) for some polynomials P;(x), P:(x), and P:(x). Here we used the fact that for n = 0 it follows from (7.7) that pO(x) = AOxN. Therefore we have from (7.10) and (7.13), Y,Jx) = so(x) = 0. Now we conclude from (7.7), (7.9), and (7.12), by using (7.95) XNLyfo,Ml,.-.,MN(X; q) Ia) PI(X) 0 = XN+l(DqL~MO'ikfI,..-,MN)(q~lX; q) YJX) r:(x) XN+2(D~L~M0,M',...,MN)(q-X; q) so(x) 31(x) =x'"+2p~(X)(~~~~"o.~~.....MH)(q-2X; q) _ X2N+ “PI(~)(D,L~Mo,“I,...,~N)(~-~~; q) 1-q" +1-q __ X2N+ ‘P$(~) ,rAfo~~~wWv(~; q). We divide by x2N+ ’ to obtain xP2(x)(o~L~““,“‘,...,~N)(q-2x; q) - PI(X)(D,L~“o,M’....,~~)(q-IX; q) l-q" + E-q -PO(x) L ymfl>.... hyx; q)=o. We replace x by q2x and use the fact that qMo,M11.... MN(q2X; q) = qA40,MI....3 Myqx; q) - q( 1-q) X(D,L~MO'M',...,MN)(qx; q) which follows directly from (2.7), to find q2xp2(q2x) p&M0.m....4wyX; q) - Pf(q2x) + 4(1-q”) xPo(q'x)(qX; q) l-q" +- 1-q %(q2x) L ~~O,~l.-..~N(qx; q) = 0 which proves (7.1) if we define 4 N+4P2(X) := q9J2(q2x) qzN+v,(X) := P2(q2x) + 4(1-q”) xPo(qS) 4 2N+4PO(X) := P$(q2x). (7.16) 78 ROELOF KOEKOEK It easily follows from (7.15), (7.8), (7.11), and (7.14) that Ntl PO(x) = qa+Qo c qnk- (%ik XN+l + lower order terms k=O P1(x)=q”++%4, c q ( N+l nk-(5)Ak xN+2-t-lower order terms (7.17) k=O > N+l G(x)=Ao c 4 xN+l+lower order terms. k=O > Now (7.2) follows from (7.16) and (7.17). It remains to show that (7.3) is true. To prove this we note, by using (2.7) and (7.16), that (7.3) is equivalent to Cl- 4)CMqx) + Cl- 4”) XeY~X)l = a+Nf4P;(x)-q2P;(qx)+(1-q)qa+N+4xPf(x). 4 (7.18) Now we will prove (7.18). From (7.10) it follows by using the definition (2.7) that (1 -4)ro(4X)=qN+1~o(X)-q~o(qx)- (1 -4nbf+N+2Pl(x) (1 -4)rl(qx)=(1 -4)4N+1x~0(x)+q”+N+2~1(x)-q4P1(4x) (7.19) + Cl- 4) 4 n+N+N+2Xp1(X). Now we use (7.15) and (7.19) to see that xNlx(qx) + (I- 4”) xGYqx)l =4-NCPo(4x) s,(qx) --P1(P) so( + (1 - 4) 4-N-1[Ir0(v) sl(qx) - r,(P) so( = CPo(X) - (1 - 4”) qa+ h(41 S,(P) - cc1 - 4) XPo(X) + q=+h(x) + (l-4) 4”+~+‘-vI@)l %(qx). (7.20) By using (7.13) and (2.7) we find i Cl- 4) So(P) = 4 N+3rO(x)-q2ro(qx)-(l-q9”)q”+N+4rI(x) (1 -q)s,(qx)= (l-q) qN+3xrO(x)+qa+N+4r1(x)-q2rl(qx) (7.21) + (I- 4) 4 n+a+N+4xr1(x). q-ANALOGUESOF LAGUERREPOLYNOMIALS 79 Hence, by using (7.20) and (7.21) we obtain (I- 9) XNCfm4 + (I- 4”) xexsx)l = 4 “+“+“cP0(x) r1(x)--P16) ro(x)l + (1 - 4) 4 “+“+“xlIP0(x~ rl(x)-Pl(x) rob)1 + cc1 -4) 42XPo(x)+q~+3Pl(x)+ (1 -4h”+2+3xP1(x)l rdqx) - c42Po(x)- (l-4”) qR+3Pl(x)1 rr(4x). Finally, we use (7.19) and (7.15) to find Cl- 4) xNCc%P) + (1 - 4”) xfxqx)l =4 a+“+4cPow r1(x) -P1@) r,(x)1 + (1 - 4) 4 “+N+4xCPO(x) r1(x) -P1(X) rob91 +C(l-4)4- N+lrl(qx) + q- N+2P1(4xH ro(4x) - C(1 - 4) 4-N+1ro@) + 4-N+ZPo(4x)1 r,(qx) =4 “+N+4CPo(x) Tl(X) -P1(X) r,(x)1 + (l-q) q=+N+4 XCPo(X) r1(x) -P,(X) ro(x)l -4 -N+2[Po(44 r1(4x)--P1(4x) ro(qx)l = XN[q g+N+4P2(X) + (l-q) q”+N+4XP;(X) - q!P:(qx)J This proves (7.18) and therefore (7.3). This completes the proof of the theorem. 8. RECURRENCE RELATION In this section we will prove the following theorem. THEOREM 2. The polynomials {L2M0,M1’...‘MN(~; q)>z=, satisfy a (2N + 3)-term recurrence relation of the form XN+ ~LU,MO,MI,...,MN n (x; 4) n+N+l = 1 Er)L2Ma,M1g...,MM(~; q), n =O, I, 2, .~~. (8.4) k=max(o,n~N-l) 80 ROELOF KOEKOEK ProoJ Since xN+lL~Mo,M1,...,MN(~; q) is a polynomial of degree n+N+ 1 we have n+N+l XN+ lq&foA,...,M~(~; q) = C E’d,T~~O.~lv~~~~N(X; q), 72 = 0, 1, 2, . . . k=O (8.2) for some real coefficients Ep’, k = 0, 1, 2, . . . . IZ + N+ 1. Taking the inner product with L~Mo,M’,--MN(~; q) on both sides of (8.2) we find by using (4.1) for n = 0, 1, 2, . . . and m = 0, 1, 2, . . . . y1+ N+ 1, <L;Mo,MI ,l.., Wyx; q), L~WI,MI ,... ,MN(~; q))q .E;’ = (x ” N+IL~,Mo,MI ,..., MN (x; q), L~“~~M~~~~~~MJyX; q)& = (x N+~I;~Mo,MI,...,MN(~; q), ,T~Mo~M~,...~MN(~; q))q. (8.3) In view of the orthogonality property of the polynomials {L ;Mo,Mt ,... ,MN(~; q)},“_. we conclude that EE) = 0 for m + N + 1 < ~1. This proves (8.1). The coefficients {Ak}kN,+i in the definition (4.2) depend on 12. To dis- tinguish two coefficients with the same index, but depending on a different value of II we will write A,(n) instead of A,. Comparing the leading coef- ficients on both sides of (8.2) we obtain by using this notation and (4.5) E’“’ k =-.A.-=(-,)Nfl q -(N+l)(zn+ci+N+l) n+N+l k n+N+l x (f+l; q)N+l (1 -qy+1 A,(n) z. A,(n+N+l) ’ n=o, 1,2, . . . . If we define A, := (L ;Mo,MI, . . . . MyX; q), L:MO,Ml,..., ‘+fN(x; q))q =(qa+h)nA (4; q)n 4” then we find by using (8.3), (4.5), and the orthogonality that E’“’ kn-,-,A, n-N-1 =k,A,_N-l f 0, n=N+l,N+2,.... q-ANALOGUESOF LAGUERREPOLYNOMIALS 81 9. A CHRISTOFFEL-DARBOUX TYPE FORMULA From the recurrence relation (8.1) we easily obtain lx N+i -Y N+l)L ;Mo,.W, . ../ M,yX; q) L~%>MI . . . . .Mv(~; q) k+N+l = c E:‘[L ) ~Jmfl.....M.yY; q) m=max(O,k-N-l) _ ppMl,...> M.yy; q) qAM . . . . . MN@; q)], k=O, 1, 2, . . . . (9.4) We divide by Ak and sum over k = 0, 1, 2, . . . . n: (X N-t1 _ yN+l) i LJ~o.MI,.-.&~~; q;~f”~.M~s.-.X;IY; q) k=O k+N+l =i c ;MoJC/.... M,yX; q) L;Mo,MI . . ..I MN(~; q) k=O m=max(O,k-N-l) for IZ = 0, 1, 2, . . . Now we use (8.3) to see that E(k) ELM’ -EL=--- Ak & k-N-l<mN+l we have k=O m=n+l k=O m=rz+! k+N+l min(n,k+ N+ 1) k=O m=max(O,k-N-l) k=O m=max(O,k--N-l) =k$N l:$;:’ So it follows from (9.1) by using this observation that (X N+i n L~~o.~I.-.,~N(~; q) L;Mo,MI....,MN(~; q) -YNfl) c k=O Ak = i ‘+;+’ F [~;Mo,Mi,...rWv(~; q) ~~~oG%..A%v(~; 4) k=max(O,n-N) m=n+l k 82 ROELOF KOEKOEK for IZ = 0, 1, 2, . . . . This can be considered as a generalization of the Christoffel-Darboux formula (3.12) for the q-Laguerre polynomials. If we divide the Christoffel-Darboux type formula (9.2) by x - y and let y tend to x then we find the confluent form (N+ l)XN i {L;“L”~;Mqx; q)}2 k=O k k+N+l E(k) m = = --iI k=max(O,n-N) m=n+l Ak _ L%JWL...&‘N(~; q) $ L;Mo,MI.-..MN(~; q)] for y1 =O, 1, 2, . . . . This formula can be considered as a generalization of (3.13). ACKNOWLEDGMENTS I thank the four referees and the editor for their suggestions and comments on the first version of this paper. Further I thank J. Koekoek for all our discussions concerning this paper and especially concerning my thesis . REFERENCES 1. R. ASKEY, The q-gamma and q-beta functions, Appl. Anal. 8 (1978), 125-141. 2. R. ASKEY, Ramanujan’s extension of the gamma and beta function, Amer. Math. Monthly 87 (1980), 346-359. 3. T. S. CHIHARA, An introduction to orthogonal polynomials, in “Mathematics and Its Applications,” Vol. 13, Gordon & Breach, New York, 1978. 4. G. GASPER AND M. RAHMAN, Basic hypergeometric series, in “Encyclopedia of Mathe- matics and its Applications,” Vol. 35, Cambridge Univ. Press, London/New York, 1990. 5. F. H. JACKSON, A generalization of the functions r(n) and x”, Proc. Roy. Sot. London 74 (1904), 64-72. 6. F. H. JACKSON, On q-definite integrals, Quart. J. Pure Appl. Math. 41 (1910), 193-203. 7. J. KOEKOEK AND R. KOEKOEK, “A Simple Proof of a Differential Equation for Generaliza- tions of Laguerre Polynomials,” Faculty of Technical Mathematics and Informatics, Report No. 89-15, Delft University of Technology, 1989. 8. J. KOEKOEK AND R. KOEKOEK, On a differential equation for Koornwinder’s generalized Laguerre polynomials, Proc. Amer. Math. Sot. 112 (1991), 10451054. 9. R. KOEKOEK, “Koornwinder’s Generalized Laguerre Polynomials and Its q-Analogues,” Faculty of Technical Mathematics and Informatics, Report No. 88-87, Delft University of Technology, 1988. 10. R. KOEKOEK, Koornwinder’s Laguerre polynomials, Derft Progr. Rep. 12 (1988), 393404. 11. R. KOEKOEK, A generalization of Moak’s q-Laguerre polynomials, Canad. J. Math. 42 (1990), 280-303. q-ANALOGUES OF LAGUERRE POLYNOMIALS 83 12. R. KOEKOEK, Generalizations of Laguerre polynomials, J. Math. Anal. Api. 1% (1990) 576-5964. 13. R. KOEKOEK, “Generalizations of the Classical Laguerre Polynomials and Some q-Analogues,” Thesis, Delft University of Technology, 1990. 14. R. KOEKOEK, On q-analogues of generalizations of the Laguerre polynomials, Z’Z “Orthogonal Polynomials and their Applications,” Vol. 9, IMACS Annals on Computing and Applied Mathematics (C. Brezinski, L. Gori, and A. Ronveaux, Eds.), 3. C. Baltzer AG, Basel, 1991, 315-320. 15. R. KOEKOEK AND H. G. MEIJER, A generalization of Laguerre polynomials, SiXpA 3. Math. Anai., in press. 16. T. 11. KOORNWINDER, Orthogonal polynomials with weight function (1 - x)” (1 + x)” + M~(x + 1) + N&x - l), Canad. Math. Bull. 27, No. 2 (1984), 205-214. 17. D. S. MOAK, The q-analogue of the Laguerre polynomials, J. Maath. Anal. Appl. $1 (1981), 2Q-47. 18. G. sZEGi$ “Orthogonal Polynomials,” 4th ed., American Mathematical Society Colloquium Publications, Vol. 23, Amer. Math. Sot., Providence, RI, 1975.
14948
https://www.scribd.com/presentation/384029717/343785670-Linear-programming-Graphical-Method-ppt
Linear Programming Graphical Method | PDF | Linear Programming | Loss Function Opens in a new window Opens an external website Opens an external website in a new window This website utilizes technologies such as cookies to enable essential site functionality, as well as for analytics, personalization, and targeted advertising. To learn more, view the following link: Privacy Policy Open navigation menu Close suggestions Search Search en Change Language Upload Sign in Sign in Download free for 30 days 100%(2)100% found this document useful (2 votes) 5K views 52 pages Linear Programming Graphical Method The document discusses linear programming and the graphical method for solving linear programming problems with two variables. It provides examples of linear programming problems involving m… Full description Uploaded by Lorenz AI-enhanced title and description Go to previous items Go to next items Download Save Save 343785670-Linear-programming-Graphical-Method.ppt For Later Share 100%100% found this document useful, undefined 0%, undefined Print Embed Ask AI Report Download Save 343785670-Linear-programming-Graphical-Method.ppt For Later You are on page 1/ 52 Search Fullscreen LINEAR PROGRAMMING : GRAPHICAL METHOD Aguila, Mary Rose D.Alvar, Gladies Mae L.Cabral, Daisy Anne M.ChE 5201 adDownload to read ad-free Th e li ne ar pr og r am mi ng mo de l co ns is ts of l i n e a r o b j ec t i v e s a n d l i n ea r c o n s t r a i n t s, w h i ch m e a n s t h a t t h e v a r i a b l e s i n a m o d e l h a v e a proportionate relationship L i n e a r p r o g r a m m i n g i s a w i d e l y u s e d m at he ma ti ca l m od el i ng te chn iq ue to de t erm in e t h e o p t i m u m a l l o c a t i o n o f s c a r c e r e s o u r c e s a m o n g competing demands. Resources typi cal ly incl ude ra w ma te ri al s , manpower , machinery , time , money and space . adDownload to read ad-free Essentials of Linear Programming Model  L i m i t e d r e s o u r c e s l i m i t e d n u m b e r o f l a b o u r, material equipment and finance  Objective refers to the aim to optimize (maximize the profits or minimize the costs)  Linearity i n c r e a s e i n l a b o u r i n p u t w i l l h a v e a proportionate increase in output  Homogeneity th e pr od uc ts, w ork er s' ef fic ie nc y, and machines are assumed to be identical  Divisibility i t i s a s s u m e d t h a t r e s o u r c e s a n d products can be divided into fractions adDownload to read ad-free Properties of Linear Programming Model Relationship among decision variables must be linear in nature.2. A model m ust have an objective function.3. Resource constraints are essential.4. A model m ust have a non-negativity constraint. adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free Share this document Share on Facebook, opens a new window Share on LinkedIn, opens a new window Share with Email, opens mail client Copy link Millions of documents at your fingertips, ad-free Subscribe with a free trial You might also like LINEAR-PROGRAMMING-Example Problems With Answer 70% (10) LINEAR-PROGRAMMING-Example Problems With Answer 7 pages General Inorganic Chemistry - Lecture Notes 67% (3) General Inorganic Chemistry - Lecture Notes 24 pages Stoichiometry 75% (4) Stoichiometry 109 pages Cooperative Law Notes 50% (2) Cooperative Law Notes 20 pages Choosing the Best Test Automation Framework No ratings yet Choosing the Best Test Automation Framework 3 pages LP SensitivityAnalysis No ratings yet LP SensitivityAnalysis 47 pages Simplex (Minimization Prob) 100% (1) Simplex (Minimization Prob) 10 pages Ex of Goal Programming No ratings yet Ex of Goal Programming 13 pages PAREDES, Micaela-BSA-2-2-Linear-Programming-Answers-2.1-Munchies-Cereal-Company 100% (1) PAREDES, Micaela-BSA-2-2-Linear-Programming-Answers-2.1-Munchies-Cereal-Company 3 pages CH 9 Simplex Method No ratings yet CH 9 Simplex Method 132 pages Big M Method No ratings yet Big M Method 12 pages CHAPTER 2-2 Graphical Method of Solving Linear Programming Problems No ratings yet CHAPTER 2-2 Graphical Method of Solving Linear Programming Problems 14 pages OR Module 2 SESSION 2 No ratings yet OR Module 2 SESSION 2 52 pages Simplex Method 100% (4) Simplex Method 16 pages Operations Research: Big M Method 67% (3) Operations Research: Big M Method 6 pages Decision Science For Management-1 63% (8) Decision Science For Management-1 34 pages Linear Programming-Graphic Method: Chapter-04 No ratings yet Linear Programming-Graphic Method: Chapter-04 36 pages Simplex Method 75% (4) Simplex Method 24 pages Big M Method 100% (1) Big M Method 13 pages Linear Programming Problems - Formulation 100% (2) Linear Programming Problems - Formulation 55 pages Chapter 02 Linear Programming Graphical Method No ratings yet Chapter 02 Linear Programming Graphical Method 13 pages Linear Programming Overview 100% (3) Linear Programming Overview 92 pages Transportation Problem-1 85% (13) Transportation Problem-1 58 pages Linear Programming 75% (4) Linear Programming 38 pages Linear Programming Problems No ratings yet Linear Programming Problems 24 pages 06 - 2 - Transportation Problems and Solution Methods 100% (1) 06 - 2 - Transportation Problems and Solution Methods 60 pages Linear Programming Problems and Solutions - Superprof No ratings yet Linear Programming Problems and Solutions - Superprof 9 pages Simplex Method for Linear Programming No ratings yet Simplex Method for Linear Programming 12 pages Module 4 - Decision Theory PDF 100% (1) Module 4 - Decision Theory PDF 41 pages Simplex Method for Linear Programming No ratings yet Simplex Method for Linear Programming 14 pages Operations Research Assignment Guide No ratings yet Operations Research Assignment Guide 37 pages Linear Programming 100% (3) Linear Programming 34 pages Simplex Method No ratings yet Simplex Method 36 pages Linear Programming for Students No ratings yet Linear Programming for Students 14 pages Product Mix Problems 100% (2) Product Mix Problems 11 pages Simplex Method for Linear Programming No ratings yet Simplex Method for Linear Programming 22 pages Graphical Method in LP No ratings yet Graphical Method in LP 15 pages Linear Programming Graphical Method 83% (6) Linear Programming Graphical Method 28 pages Decision Theory: Mcgraw-Hill/Irwin 100% (2) Decision Theory: Mcgraw-Hill/Irwin 21 pages Managerial Economics Quiz No ratings yet Managerial Economics Quiz 101 pages Linear Programing (Problem Formulation) No ratings yet Linear Programing (Problem Formulation) 13 pages Linear Programming Problem: Basic Requirements No ratings yet Linear Programming Problem: Basic Requirements 5 pages Linear-Programming Graphical Method 100% (4) Linear-Programming Graphical Method 52 pages Lesson - 04 No ratings yet Lesson - 04 24 pages Maths CH-3 MGMT No ratings yet Maths CH-3 MGMT 62 pages MSC in Operational Research Operational Techniques 1 Operational Research No ratings yet MSC in Operational Research Operational Techniques 1 Operational Research 10 pages Lecture2 O.R No ratings yet Lecture2 O.R 26 pages Chapter 1 No ratings yet Chapter 1 42 pages LP Quantitative Techniques No ratings yet LP Quantitative Techniques 29 pages Linear Programming No ratings yet Linear Programming 5 pages Linear Programmin G: by Rajeev Ranjan No ratings yet Linear Programmin G: by Rajeev Ranjan 29 pages Linear Programming No ratings yet Linear Programming 44 pages Linear Programming Guide No ratings yet Linear Programming Guide 72 pages ادارة صناعية 3 كهرباء رابع No ratings yet ادارة صناعية 3 كهرباء رابع 68 pages Cha-2 OR-1 No ratings yet Cha-2 OR-1 25 pages Operations Research Notes No ratings yet Operations Research Notes 52 pages Operational Research No ratings yet Operational Research 5 pages Linear Programming No ratings yet Linear Programming 29 pages Module 3: Linear Programming: Graphical Method: Learning Outcomes No ratings yet Module 3: Linear Programming: Graphical Method: Learning Outcomes 9 pages Linear Programming for Analysts No ratings yet Linear Programming for Analysts 37 pages Chap 1 Linear Programming Problem No ratings yet Chap 1 Linear Programming Problem 298 pages Topic 2 No ratings yet Topic 2 7 pages Graphical Solution No ratings yet Graphical Solution 38 pages 2 Introduction To Linear Programming No ratings yet 2 Introduction To Linear Programming 37 pages Physical Chem2 Exercise No ratings yet Physical Chem2 Exercise 3 pages Separation - Leaching - Exercise No ratings yet Separation - Leaching - Exercise 2 pages CVCSAR Biochem Lecture Notes No ratings yet CVCSAR Biochem Lecture Notes 25 pages Cagayan River Basin Challenges No ratings yet Cagayan River Basin Challenges 11 pages Cagayan Riverine Zone Development Framework Plan 2005 2030 No ratings yet Cagayan Riverine Zone Development Framework Plan 2005 2030 57 pages Guya, Bianca Ysabelle L. BSA-5 No ratings yet Guya, Bianca Ysabelle L. BSA-5 4 pages Saint Peter'S Basilica No ratings yet Saint Peter'S Basilica 3 pages Simulation and Optimization of No ratings yet Simulation and Optimization of 5 pages Student Architects' Tree Planting No ratings yet Student Architects' Tree Planting 2 pages Bill of Rights: Guya, Bianca Ysabelle L. FN-QUIZ-2 Political Science No ratings yet Bill of Rights: Guya, Bianca Ysabelle L. FN-QUIZ-2 Political Science 3 pages Fog Computing: Survey of Trends, Architectures, Requirements, and Research Directions No ratings yet Fog Computing: Survey of Trends, Architectures, Requirements, and Research Directions 31 pages BSC 6000 No ratings yet BSC 6000 54 pages Geogia Hotel Ghana LTD Vrs Silver Star Auto LTD (J4 34 of 2012) 2012 GHASC 54 (4 December 2012) No ratings yet Geogia Hotel Ghana LTD Vrs Silver Star Auto LTD (J4 34 of 2012) 2012 GHASC 54 (4 December 2012) 26 pages SFRA6 US Web No ratings yet SFRA6 US Web 2 pages Financial Derivatives Guide No ratings yet Financial Derivatives Guide 20 pages Cotton Association of India No ratings yet Cotton Association of India 5 pages Export Import and Countertrade No ratings yet Export Import and Countertrade 32 pages Engineer Onboarding Form No ratings yet Engineer Onboarding Form 12 pages CaseStudy Ch8 (3) Eng No ratings yet CaseStudy Ch8 (3) Eng 2 pages 2009 Chamber Membership List 100% (1) 2009 Chamber Membership List 2 pages Dietetics As A Profession No ratings yet Dietetics As A Profession 11 pages Improving Population Health Using Electronic Health Records: Methods For Data Management and Epidemiological Analysis 1st Edition Goldstein No ratings yet Improving Population Health Using Electronic Health Records: Methods For Data Management and Epidemiological Analysis 1st Edition Goldstein 60 pages Solution Practice 6 Consolidations 3 No ratings yet Solution Practice 6 Consolidations 3 8 pages Flexitallic Flexpro Brochure 11-30-2017 No ratings yet Flexitallic Flexpro Brochure 11-30-2017 8 pages LP-3 (Information & Cyber Security) Lab Manual 2021-22 No ratings yet LP-3 (Information & Cyber Security) Lab Manual 2021-22 37 pages Libreoffiice Basic: Libreoffic E Referen E Card No ratings yet Libreoffiice Basic: Libreoffic E Referen E Card 2 pages Beam Telecom PVT LTD.: 8-2-610/A, Road No.10, Banjara Hills, Hyderabad-500034 Tel: +91-40-66272727 No ratings yet Beam Telecom PVT LTD.: 8-2-610/A, Road No.10, Banjara Hills, Hyderabad-500034 Tel: +91-40-66272727 2 pages Principles of Marketing: Developing New Products and Managing The Product Life Cycle No ratings yet Principles of Marketing: Developing New Products and Managing The Product Life Cycle 35 pages Brand Ambassador Playbook Roster No ratings yet Brand Ambassador Playbook Roster 27 pages Bulkheads No ratings yet Bulkheads 16 pages Classic Cars Script No ratings yet Classic Cars Script 4 pages Cineplex Loyalty Program Strategy No ratings yet Cineplex Loyalty Program Strategy 10 pages University of Cambridge International Examinations International General Certificate of Secondary Education 0% (1) University of Cambridge International Examinations International General Certificate of Secondary Education 109 pages Question 1) Briefly Explain Capital Allocation Process With The Help of Diagram? No ratings yet Question 1) Briefly Explain Capital Allocation Process With The Help of Diagram? 7 pages 18 September 2024 Updation REWA SINCE 2007 No ratings yet 18 September 2024 Updation REWA SINCE 2007 61 pages GE2 - Exercise 2.1 Juvine Ramos No ratings yet GE2 - Exercise 2.1 Juvine Ramos 4 pages RONSAIRO No ratings yet RONSAIRO 3 pages TN206 No ratings yet TN206 37 pages adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free adDownload to read ad-free ad Footer menu Back to top About About Scribd, Inc. Everand: Ebooks & Audiobooks Slideshare Join our team! Contact us Support Help / FAQ Accessibility Purchase help AdChoices Legal Terms Privacy Copyright Cookie Preferences Do not sell or share my personal information Social Instagram Instagram Facebook Facebook Pinterest Pinterest Get our free apps About About Scribd, Inc. Everand: Ebooks & Audiobooks Slideshare Join our team! Contact us Legal Terms Privacy Copyright Cookie Preferences Do not sell or share my personal information Support Help / FAQ Accessibility Purchase help AdChoices Social Instagram Instagram Facebook Facebook Pinterest Pinterest Get our free apps Documents Language: English Copyright © 2025 Scribd Inc. We take content rights seriously. Learn more in our FAQs or report infringement here. We take content rights seriously. Learn more in our FAQs or report infringement here. Language: English Copyright © 2025 Scribd Inc. 576648e32a3d8b82ca71961b7a986505 scribd.scribd.scribd.scribd.scribd.scribd.scribd.scribd.
14949
https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.fdtri.html
scipy.special.fdtri — SciPy v1.16.2 Manual Skip to main content Back to top Ctrl+K SciPy Installing User Guide API reference Building from source Development Release notes 1.16.2 (stable) development1.16.2 (stable)1.16.11.16.01.15.31.15.21.15.11.15.01.14.11.14.01.13.11.13.01.12.01.11.41.11.31.11.21.11.11.11.01.10.11.10.01.9.31.9.21.9.11.9.01.8.11.8.01.7.11.7.01.6.31.6.21.6.11.6.01.5.41.5.31.5.21.5.11.5.01.4.11.4.01.3.31.3.21.3.11.3.01.2.31.2.11.2.01.1.01.0.00.19.00.18.10.18.00.17.10.17.00.16.10.16.00.15.10.15.00.14.10.14.00.13.00.12.00.11.00.10.10.10.00.9.00.80.7 GitHub Scientific Python Forum Installing User Guide API reference Building from source Development Release notes 1.16.2 (stable) development1.16.2 (stable)1.16.11.16.01.15.31.15.21.15.11.15.01.14.11.14.01.13.11.13.01.12.01.11.41.11.31.11.21.11.11.11.01.10.11.10.01.9.31.9.21.9.11.9.01.8.11.8.01.7.11.7.01.6.31.6.21.6.11.6.01.5.41.5.31.5.21.5.11.5.01.4.11.4.01.3.31.3.21.3.11.3.01.2.31.2.11.2.01.1.01.0.00.19.00.18.10.18.00.17.10.17.00.16.10.16.00.15.10.15.00.14.10.14.00.13.00.12.00.11.00.10.10.10.00.9.00.80.7 GitHub Scientific Python Forum Search Ctrl+K Section Navigation scipy scipy.cluster scipy.constants scipy.datasets scipy.differentiate scipy.fft scipy.fftpack scipy.integrate scipy.interpolate scipy.io scipy.linalg scipy.ndimage scipy.odr scipy.optimize scipy.signal scipy.sparse scipy.spatial scipy.special scipy.stats SciPy API Special functions (scipy.special) scipy.special.fdtri scipy.special.fdtri# scipy.special.fdtri(dfn, dfd, p, out=None)=# The p-th quantile of the F-distribution. This function is the inverse of the F-distribution CDF, fdtr, returning the x such that fdtr(dfn, dfd, x) = p. Parameters:dfnarray_like First parameter (positive float). dfdarray_like Second parameter (positive float). parray_like Cumulative probability, in [0, 1]. outndarray, optional Optional output array for the function values Returns:xscalar or ndarray The quantile corresponding to p. See also fdtr F distribution cumulative distribution function fdtrc F distribution survival function scipy.stats.f F distribution Notes The computation is carried out using the relation to the inverse regularized beta function, I x−1(a,b). Let z=I p−1(d d/2,d n/2). Then, x=d d(1−z)d n z. If p is such that x<0.5, the following relation is used instead for improved stability: let z′=I 1−p−1(d n/2,d d/2). Then, x=d d z′d n(1−z′). Wrapper for the Cephes routine fdtri. The F distribution is also available as scipy.stats.f. Calling fdtri directly can improve performance compared to the ppf method of scipy.stats.f (see last example below). References Cephes Mathematical Functions Library, Examples Try it in your browser! fdtri represents the inverse of the F distribution CDF which is available as fdtr. Here, we calculate the CDF for df1=1, df2=2 at x=3. fdtri then returns 3 given the same values for df1, df2 and the computed CDF value. import numpy as np from scipy.special import fdtri, fdtr df1, df2 = 1, 2 x = 3 cdf_value = fdtr(df1, df2, x) fdtri(df1, df2, cdf_value) 3.000000000000006 Calculate the function at several points by providing a NumPy array for x. x = np.array([0.1, 0.4, 0.7]) fdtri(1, 2, x) array([0.02020202, 0.38095238, 1.92156863]) Plot the function for several parameter sets. import matplotlib.pyplot as plt dfn_parameters = [50, 10, 1, 50] dfd_parameters = [0.5, 1, 1, 5] linestyles = ['solid', 'dashed', 'dotted', 'dashdot'] parameters_list = list(zip(dfn_parameters, dfd_parameters, ... linestyles)) x = np.linspace(0, 1, 1000) fig, ax = plt.subplots() for parameter_set in parameters_list: ... dfn, dfd, style = parameter_set ... fdtri_vals = fdtri(dfn, dfd, x) ... ax.plot(x, fdtri_vals, label=rf"$d_n={dfn},\, d_d={dfd}$", ... ls=style) ax.legend() ax.set_xlabel("$x$") title = "F distribution inverse cumulative distribution function" ax.set_title(title) ax.set_ylim(0, 30) plt.show() The F distribution is also available as scipy.stats.f. Using fdtri directly can be much faster than calling the ppf method of scipy.stats.f, especially for small arrays or individual values. To get the same results one must use the following parametrization: stats.f(dfn, dfd).ppf(x)=fdtri(dfn, dfd, x). from scipy.stats import f dfn, dfd = 1, 2 x = 0.7 fdtri_res = fdtri(dfn, dfd, x) # this will often be faster than below f_dist_res = f(dfn, dfd).ppf(x) f_dist_res == fdtri_res # test that results are equal True Go Back Open In Tab previous scipy.special.fdtrcnext scipy.special.fdtridfd On this page fdtri © Copyright 2008, The SciPy community. Created using Sphinx 8.1.3. Built with the PyData Sphinx Theme 0.16.1.
14950
https://www.youtube.com/watch?v=iaWuYQVqUck
Using Derivatives to Minimize Time… Rowing Across a Lake [Real World Calculus!] the AllAroundMathGuy 10800 subscribers 15 likes Description 883 views Posted: 30 Mar 2014 Looking for Calculus in the real world? Here derivatives are used to minimize the total time for a trip that involves both rowing and walking. (Classic optimization problem in Calculus!) My instructional approach emphasizes conceptual understanding of and connections between concepts and ideas, rather than just pure memorization. My instructional approach emphasizes conceptual understanding of and connections between concepts and ideas, rather than just pure memorization. I have lots of my Calculus resources available at SituationalMathVideos #APCalculusbytheAAMG #AllAroundMathGuy 4 comments Transcript: you hey I'm here in a rowboat on Cowichan Lake of Vancouver Island and I am headed across the lake from where I'm staying here to a place about four kilometers down the lake right here the lakes about a kilometre across and what I'm trying to do is I'm trying to minimize the time it takes me to get to my destination now I ask that because there's a lot of different things I could do I can go straight across a lake park my boat walk down the lake or I can go all the way straight to my destination or I can had anywhere in between there and they're all going to take different times now that the the thing to know is that it takes me longer I'm slower rowing the boat that I am at walking I can row with a boat only about three kilometers an hour but I can walk say five kilometers an hour so even though going straight to my destination is the shortest distance I'm going slow because I'm in the boat the whole time whereas if I go straight across the lake I cut down on how much I'm in the boat but then I gotta walk a long distance down the shore now somewhere in between there there's an optimum point so if i go out a bit of an angle here some are aimed for somewhere there and then walk the rest of the way that's gonna be the minimum time the way i'm gonna figure that is we're gonna use calculus right now using the numbers that I've told you here so let's do that right now alright so let's look at the situation here we have the shoreline we have me in the boat right here and we have my destination down the shore here the distance straight to the shore is one kilometer and the distance down the shore that my destination is is four kilometers now I can do a variety of things here I can go straight to the shore I can go straight to my destination I can go anywhere in between what I want to do again is minimize the time it takes if I go straight to the shore and along the shore it's slow i'm doing a lot of walking if i go straight to the destination I row slower than I walk so it's going to take time somewhere in between here is an optimum place to land that's going to minimize what I have to do the combination of the rowing and the walking so how I'm going to approach this is I'm going to use some variables here i'm going to say that since what i'm looking for is the distance down the shore between directly to the shore and where i'm going to land the boat i'm going to call that X and then my leftover distance that I have to walk here is 4 minus X and I'm going to use that and set up something for time I'm going to write a function for time I'm use capital T for the total time it takes now how I'm going to figure out the total time is in two legs of the journey here there's that the rowing part and then there's the the walking part over here so for the rowing part i'm going to call that time one so my function here is going to be time one for the rowing part plus time due to find out what each of those things are what my time one is i don't know but i can figure it out if I know my distance and my speed that I go during that distance my distance I can write in terms of X because this thing's a right triangle here I know that this is one I have X so then this thing is going to be square root of x squared plus 1 squared or just x squared square root of x squared plus 1 using the Pythagorean relationship so my distance there is square root of x squared plus 1 and my speed there which which I know from before is 3 km/h so that's for that first leg for the walking part that I have in blue there we're going to call that time number two we don't know what that is but we do know that distance number two is four minus X and we know that I can walk at five kilometers an hour so we can use this information to write my function for total time here so my total time is going to be time number one is distance one over speed one right because time is distance divided by speed or velocity plus distance to over velocity to or in other words here if I put my expressions in for each of the things that I have square root of x squared plus 1 over 3 plus 4 minus x over 5 all right there's my time function i'm going to minimize its time as a function of x which is the distance down the shore that i have to land the boat all right now how we're going to find that minimum value for that is algebraically using the derivative whatever this function looks like I don't know what that function looks like graphically but whatever it is there's going to be some kind of a minimum point and we're going to find that by looking at where the derivative is 0 because this function is defined for all values of x it's never undefined so the only way you can have a minimum is if that derivative is 0 and we're going to find that right now so I'm going to slightly rewrite this function before I find the derivative just to make it a little bit easier to see what's happening instead of square root of X square plus 1 over 3 i'm going to call it one-third square root of x squared plus 1 and instead of this expression here instead of 4 minus x over 5 i'm going to call it four-fifths minus 1 5th x all right because that's equivalent the reason being then when I write the derivative here the derivative of this first part one third times that square root function is going to be one-third times 1 over 2 square root of that function because that's just a square root of a function but we need to include times the derivative the inside times 2x using our chain rule the derivative of the second term here is a constant I'm going to put plus zero because that's what the derivative of a constant is and the derivative of that last term there minus 150 x is just minus 1 5th so simplifying that a little bit before we do anything with it we can simplify the first term a little bit because we have those twos that cancel nicely and we have x over 3 square root x squared plus 1 we don't need the 0ther and we can just put our minus 1 5th there that's our derivative now we want to look for where that derivative is 0 so we're going to use it and solve this equation here now to solve that equation algebraically we're going to move that one fifth over to this side and then we're going to clear out those fractions if we multiply both sides by the lowest common denominator 15 root x squared plus 1 we're going to end up with three times the root x squared plus 1 and on this side 5x and we have our equation that we need to solve that to solve that we need to look at getting rid of that square root clearing of the square root and then starting to isolate X so if we square both sides we are going to have nine we're not going to have the square root anymore we're just going to have x squared plus one we're going to 25 x squared and then we start to isolate this let's distribute that nine let's move those X terms together on the same side we're going to have 16x squared if we divide both sides by 16 we're going to have x squared is 9 16 and if we isolate that x place taking the square root of both sides we're going to plus or minus three quarters now before we look back at what that means in this situation we have to reject one of these answers we have to reject the negative one because that is an extraneous route that occurred when we squared both sides without what's in red there negative three quarters is not a solution to that equation so we're going to reject the negative one and we're just going to keep x equals three quarters or point seven five now looking back in our situation we go back up here remember that X was the distance down the shore that we were going to go all right so we can minimize that time if we land Oh point seven five or three quarters of a kilometer down the shore and then walk the rest of the way all right that's the solution that's the optimum solution we're going to look at this on a graph now just to verify that that value makes some sense we could substitute in some values here to just check we could put in point seven five here that would tell us the total time it would take we could substitute a few values close 2.75 on either side just to verify but it's a lot quicker to do this with a graph alright so we'll get our graphing calculator out here now we're going to enter this function in see how fast I can enter it in here we'll go in super fast mode here alright there's our function it's important to remember that you're graphing the original function and looking for that minimum point you're not graphing the derivative so using the calculator we're not actually going to use any calculus once we've set up that function we're just going to look for the minimum point on there to check our work now set up the window that makes some sense here now that X is distance so it's not going to make much sense to go more than about 4 because 4 is the farthest down the shore you can do just because of the nature of this calculator i'm going to go 4.7 so it works out four pixels on the screen and then these values here are not going to be a whole bunch because these are this is time in hours and you'd have to play around with a little bit here i'm not going to go into the negatives and i don't think it's going to take you know two is probably going to be even slightly too much but we'll go with that on here we'll look at what our graph looks like so there's the part of that function that makes some sense in this situation now you notice there it goes slightly down here and then back up again so right in there is where that minimum point is that looks like about right here but we're going to just double-check just by finding the minimum on here now on the sophistication of this calculator you can't trust it past about four or five decimal places but if you look at that point 7 4 9 9 9 8 7 3 that is the value that we found right point 7 5 and then this is telling us that the time the Y value here right the t value in our function is just over an hour that it's going to take all right so that confirms graphically what we did algebraically we found that value that gives us the least amount of time all right now I realize that in reality you're not going to go through all this just to save yourself a few minutes because what did we spend here figuring this out that's a little more time than you're going to spend by landing somewhere else along the shore here right but just conceptually it's important to understand it you can use calculus to find that minimum value all right and just even from the situation that the time is going to vary depending on where you end up on the shore there all right that's it hope you learned something
14951
https://math.stackexchange.com/questions/3496180/pigeon-hole-principle-for-continuous-spaces
Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Pigeon hole principle for continuous spaces Ask Question Asked Modified 5 years, 8 months ago Viewed 746 times 0 $\begingroup$ Given a line segment of length L that contains n + 1 points, let D be the length of the shortest segment between consecutive points. What is the maximum value of D over all possible configurations of points? Note: It is a solved example from Brilliant.org pegion hole principle text can't seem to understand that explanation. Link to the page: In "Pigeonhole Principle on Continuous Spaces" section. Solution By Brilliant: First, consider a trivial configuration of the points. Let all points be evenly spaced with one point at each end of the segment. In this case, the points divide up the line into 'n' segments, each of length (L/n). Using the pigeonhole principle, we can approach the problem as follows: Consider each of the n evenly spaced segments as a "box" and each of the n + 1 points as an item to be placed into the boxes. The pigeonhole principle implies that at least one box (or segment) must have two items (or points), which guarantees that no two consecutive points can be farther apart than L/n.(HOW????) pigeonhole-principle Share edited Jan 3, 2020 at 17:34 9he0nix9he0nix asked Jan 3, 2020 at 15:40 9he0nix9he0nix 322 bronze badges $\endgroup$ 2 $\begingroup$ It would be good to identify what, in particular, you're confused about on the Brilliant.org solution (as well as to link to that solution). $\endgroup$ Michael Burr – Michael Burr 2020-01-03 16:04:00 +00:00 Commented Jan 3, 2020 at 16:04 $\begingroup$ I have added the details of the problem as per your suggestion, thanks for your help. $\endgroup$ 9he0nix – 9he0nix 2020-01-03 17:35:50 +00:00 Commented Jan 3, 2020 at 17:35 Add a comment | 1 Answer 1 Reset to default 0 $\begingroup$ The answer is $\frac{L}{n}$. Let's assume that the line segment is on the number line, starts at $0$, and ends at $L$. Let's assume that the $n+1$ points are $x_0\leq x_1\leq\dots\leq x_n$. Step 1 (Showing that $\frac{L}{n}$ is possible, i.e., $D\geq\frac{L}{n}$): Place the point $x_i$ at $\frac{iL}{n}$. Observe that this choice of $x_i$'s satisfies the inequalities between consecutive points. The distance between consecutive points is $|x_{i+1}-x_i|=\frac{L(i+1)}{n}-\frac{Li}{n}=\frac{L}{n}$. Therefore, it is possible to place the points so that $\frac{L}{n}$ is the minimum distance between points). Step 2 (Showing that $\frac{L}{n}$ is the best possible, i.e., that $D\leq\frac{L}{n}$): Sketch (without pigeonhole principle): With the order on the $x_i$'s, $|x_{i+1}-x_i|\geq D$ since $D$ is the shortest length between consecutive points. Moreover, observe that $|x_n-x_0|\leq L$ since all points lie on the line. Since the points are ordered on the line, $$ |x_n-x_0|=|x_n-x_{n-1}|+|x_{n-1}-x_{n-2}|+\dots+|x_2-x_1|+|x_1-x_0|. $$ Putting this all together, we have $$ L\geq|x_n-x_0|=|x_n-x_{n-1}|+|x_{n-1}-x_{n-2}|+\dots+|x_2-x_1|+|x_1-x_0|\geq nD. $$ Therefore, $D\leq \frac{L}{n}$. + Sketch (with pigeonhole principle): Let $L_1,\dots,L_n$ be a subdivision of $L$ into intervals where $L_i=\left[\frac{L(i-1)}{n},\frac{Li}{n}\right)$ for $i\neq n$ and $L_n=\left[\frac{L(n-1)}{n},L\right].$ Consider the map $f:{0,\dots,n}$ to ${1,\dots,n}$ where $f(i)$ is the index of the interval containing $x_i$. This is a map from $n+1$ numbers to $n$ numbers, so, by the pigeonhole principle, this map is not injective. In other words, there is some pair $i\neq j$ such that $f(i)=f(j)$. In other words, $x_i$ and $x_j$ are in the same interval, say this interval is $L_k$. Then, the distance $|x_i-x_j|$ is at most the length of $L_k$. Since the length of $L_k$ is $\frac{Lk}{n}-\frac{L(k-1)}{n}=\frac{L}{n}$, the minimum distance between points is at most $\frac{L}{n}$, i.e., $D\leq\frac{L}{n}$. Share edited Jan 4, 2020 at 5:49 answered Jan 3, 2020 at 15:47 Michael BurrMichael Burr 34k22 gold badges5252 silver badges8181 bronze badges $\endgroup$ 0 Add a comment | You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions pigeonhole-principle See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Related 0 Pigeonhole Principle 4 How to estimate pigeonhole principle? Pigeonhole principle and finite sequences 2 Not quite understanding parts of Pigeon Hole Principle Generalization Examples of the Pigeonhole Principle 0 Seven people sit at a round table with 10 chairs. Show that there are three consecutive chairs that are occupied. Use the pigeonhole principle on a triangle 1 Show that two points must be there with distance less than 5 Hot Network Questions Change default Firefox open file directory Bypassing C64's PETSCII to screen code mapping How to rsync a large file by comparing earlier versions on the sending end? Is direct sum of finite spectra cancellative? How do you emphasize the verb "to be" with do/does? On the Subject of Switches Are there any world leaders who are/were good at chess? Identifying a thriller where a man is trapped in a telephone box by a sniper Copy command with cs names Why is the fiber product in the definition of a Segal spaces a homotopy fiber product? Find non-trivial improvement after submitting How to design a circuit that outputs the binary position of the 3rd set bit from the right in an 8-bit input? I'm having a hard time intuiting throttle position to engine rpm consistency between gears -- why do cars behave in this observed way? How to start explorer with C: drive selected and shown in folder list? Is my new stem too tight for my carbon fork? Does the curvature engine's wake really last forever? What "real mistakes" exist in the Messier catalog? How do trees drop their leaves? Matthew 24:5 Many will come in my name! Quantizing EM field by imposing canonical commutation relations Fix integral lower bound kerning in textstyle or smaller with unicode-math What meal can come next? Identifying a movie where a man relives the same day Can Monks use their Dex modifier to determine jump distance? more hot questions Question feed
14952
https://teacher.desmos.com/activitybuilder/custom/609305dbf23dfc3699890bab?collections=5d84acdd2cb68474fde67fee%2C601ee0561930564c250a4612
Area of Triangle (slanted) Practice • Activity by Amplify Classroom Cookie Consent We use cookies and other tracking technologies to offer you a better experience, personalize content and ads, and to analyze our performance and site traffic. By continuing to use our site, you understand your information may be disclosed to our third-party partners. You can learn more about how we use your data by reviewing our Website Privacy Policy. OK Open Keyboard Shortcuts (CTRL + ALT + /) Skip to Main Content Area of Triangle (slanted) Practice 👀 See something you want to try? Log in Sign up for free Page contents Close [Copy of] IM 8.1.15 Adding the Angles in a Triangle [Copy of] IM 8.8.1 The Areas of Squares and their side lengths [Copy of] IM 8.8.2 Side lengths and Areas 6-1.11 Formula for the Area of a Triangle Home Makeover Rowe Composite Area Parallelograms Area of Rectangles/Squares Exploring Quadrilateral Area with Geoboards Intro to Area of Rectangles Area of Triangles 6-1.15(A) Area of Triangle with Ruler Practice 1 Exploring Area with Geoboards: Triangles Intro to Area of Triangles Area of Triangle (slanted) Practice Lesson 3 Find Area of Triangle with Ruler 6-1.13 Bases and Altitudes of Triangles (Part 1) 6-1.14 Bases and Altitudes of Triangles (Part 2) 6-1.15 Bases and Altitudes of Triangles (Part 3) Area of Parallelograms Intro to Parallelograms Lesson #2 Parallelograms Lesson #3 Area of Parallelograms Intro to Bases and Heights of Parallelograms Area of Basic Shapes Basic Shapes for Making Activities SP#1 Area of Basic Shapes SP#2 Area of Basic Shapes SP#3 Area of Basic Shapes SP#4 Area of Basic Shapes SP#5 Quiz Area of Basic Shapes Composite Area SP#1 Area of Composite Shapes SP#2 Area of Composite Shapes SP#3 Area of Composite Shapes SP#4 Area of Composite Shapes SP#5 Various Areas of Composite Shapes SP#6 Various Areas of Composite Shapes SP#7 Various Areas of Composite Shapes Composite Shapes Reasoning to Find Area Intro to Composite Area Intro #2 Composite Area Finding Area by Decomposing and Rearranging Other Stuff Area Puzzles Using Geoboard Make a Park Project QUADRILATERALS TOOLKIT 6-1.16 Changing Area 6-1.0 Pre-test Area of Triangle (slanted) Practice Edited by Todd Steinhauer Based on work from Jennifer Ford Mobile Tablet Laptop Your Assignments Assign Create a free accountorsign in to assign this activity to your classes. Screens Teacher Moves View or print Teacher Moves for your facilitation Preview Preview student screens and view teacher notes What would you like me to know today? 1 Checking In 2 Find the area of this triangle: Attempt #1 What can you learn from this attempt? What new strategy can you try on your next attempt? 3 Reflection 4 Find the area of this triangle: Attempt #2 What can you learn from this attempt? What new strategy can you try on your next attempt? 5 Reflection 6 Find the area of this triangle: Attempt #3 Finding the area of a triangle that has no horizontal or vertical sides is the most challenging type of triangle. What strategy/strategies did you learn today that you want to remember to use in the future? 7 Reflection 8 Here's another triangle! Drag the point to show how confident you are in your response(s). If you'd like, say more below. 9 Confidence Check Need help? Contact our support team © 2025 Amplify Education, Inc. Resources Help CenterAccessibilityNewsletter Company Customer Privacy PolicyFree and Commercial Use GuidelinesWebsite Terms of UseAcceptable Use PolicyWebsite Privacy Policy Loading...
14953
https://www.pleacher.com/mp/puzzles/tricks/base2.html
Base 2 Trick Base 2 Trick Home Meet Mr. P What's New Humor Lesson Plans Quotes Links Puzzles & Games Math Puzzles Math Tricks Number Tricks In this trick, you will give a student six cards of numbers and then have her pick a number from 1 to 63. Instruct her to tell you on which cards her number appears, and you will tell her the number she picked! Here are the six cards: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 2 3 6 7 10 11 14 15 18 19 22 23 26 27 30 31 34 35 38 39 42 43 46 47 50 51 54 55 58 59 62 63 4 5 6 7 12 13 14 15 20 21 22 23 28 29 30 31 36 37 38 39 44 45 46 47 52 53 54 55 60 61 62 63 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31 40 41 42 43 44 45 46 47 56 57 58 59 60 61 62 63 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 When having your student tell you the cards, have her identify the cards by the number in the top left corner. This will make it easier for you to tell her the number that she picked. All you do is mentally add up the numbers that she gives you! Why does this work? The cards are based on binary arithmetic (base 2), so I used this trick whenever I taught number bases (like binary and hexadecimal and even our decimal system). I use this as a teachable moment -- the students always want to know if I really memorized all the cards! Recall the decimal system (base 10): Look at the columns, going from right to left. The first column is the units column. The second column is the 10s column. The third column is the 100s column. Each succeeding column is multiplied by 10. In base 2, the columns are multiplied by 2. The first column is the units column, the next is the 2s column, the next is the 4s column, the next is the 8s, the next is the 16s, the next is the 32s, etc. Did you notice that the first digit on the six cards corresponds to the first six column headings in base 2? They are 1, 2, 4, 8, 16, and 32. Now think how a number is represented in base 2. Take the number 13. In base 2, only 1s and 0s are permitted, so the number 13 is made up of one 8, one 4, zero 2s, and one 1. It is written 1101. Now look at the cards: the number 13 appears on the cards beginning with 1, 4, and 8! I like to think of the cards as just the column headings for base 2. I visualize them as a table. Here is a table with several examples: | Number Picked | Light Blue Card 32 | Light Purple Card 16 | Magenta Card 8 | Orange Card 4 | Yellow Card 2 | Light Green Card 1 | --- --- --- | 3 = | 0 | 0 | 0 | 0 | 1 | 1 | | 13 = | 0 | 0 | 1 | 1 | 0 | 1 | | 16 = | 0 | 1 | 0 | 0 | 0 | 0 | | 32 = | 1 | 0 | 0 | 0 | 0 | 0 | | 42 = | 1 | 0 | 1 | 0 | 1 | 0 | | 63 = | 1 | 1 | 1 | 1 | 1 | 1 | P.S. I found these six cards in a box of Trix Cereal! Send comments to: David Pleacher
14954
https://www.thinkinitalian.com/as-far-as-possible-italian
How to say "as far as Possible": Italian Grammar Lesson 137 Skip to content BACK-TO-SAVINGS – 70% OFF – SIGN UP BY SEP 30 ★★★★★ Rated 4.9/5 based on 170+ reviews 7-day free trial 30-day money-back guarantee 4,000+ happy students Online Italian courses Audio lessons Readings AI tutor Pricing Free resources Download the App Blog with my best tips Italian grammar lessons Free checklist “Overcome your barriers” Ebook “How to learn languages fast” Online Italian test Italian word of the day About Author How we compare Success stories Contact Online Italian courses Audio lessons Readings AI tutor Pricing Free resources Download the App Blog with my best tips Italian grammar lessons Free checklist “Overcome your barriers” Ebook “How to learn languages fast” Online Italian test Italian word of the day About Author How we compare Success stories Contact Close Open Grammar explained Fun lessons, real-life examples, and interactive quizzes. Grammar with examples Italian sentences to start speaking right away. Short stories Interesting readings to listen and repeat Study tips Expert advice to learn faster Conversations Get through real-life situations step by step Close Open Community Engaging discussions and Q&A with native tutors 1 1 Login Free courses How to say “as far as Possible”: Italian Grammar Lesson An interactive lesson guiding you from key takeaways to expert insights. Comes with Q&A, useful vocabulary, interactive audio, quizzes and games. History Authors Lesson Published Jan 1, 2021 Updated May 11, 2025 Reviewed by Stefano Italian language tutor, course author. MEng, MBA. Member of the International Association of Hyperpolyglots (HYPIA). After learning 12 languages, I can tell you that we all master languages by listening and mimicking. I couldn’t find an app to recommend to my students, so I made my own one. With my method, you’ll be speaking Italian from Lesson 1. Written by Martina A linguist specializing in psycholinguistics and Italian language education. I hold a Research Master’s in Linguistics and teach Italian, passionately connecting research with practical teaching. Lesson 137 (B1) Intermediate Making comparisons Italian grammar lessons Time TakeawaysFactsImagesArticleConceptsQuizMatchMemoryCrosswordFAQs “As… as Possible” in Italian Il più vs il Meno As Much as Possible As Least as Possible 30 Free Courses Speak faster without translating in your mind​ Create free account Key Takeaways The phrase il più... possibile is used to express maximizing qualities or actions, meaning "as... as possible." When using il meno... possibile, it conveys minimizing qualities or actions, translating to "as little as possible." Both expressions can be used with adjectives and adverbs, enhancing their meanings without needing comparative forms. The standalone phrases il più possibile and il meno possibile emphasize maximum and minimum amounts, respectively. Examples include Cerca di essere il più gentile possibile for kindness and Parla il meno possibile for speaking less. Stefano's Insights Your browser does not support the audio element. Play to see captions... Show Transcript Ah, la meravigliosa flessibilità della lingua italiana! 'Il più possibile' è il nostro modo di dire 'as ... as possible'. È una costruzione versatile, che si adatta a verbi, aggettivi e sostantivi. Un trucco? Inserire un avverbio tra 'il più' e 'possibile'. E non dimentichiamo la versione negativa: 'il meno possibile'. Mi ricorda di quando cercavo di dormire il meno possibile durante le lezioni di matematica al liceo... senza troppo successo! È interessante come possiamo giocare con le parole per esprimere concetti simili a quelli inglesi, ma con quel tocco italiano che rende tutto più melodico. Provate a usarlo nelle vostre conversazioni e vedrete come arricchirà il vostro italiano. Ah, the wonderful flexibility of the Italian language! 'Il più possibile' is our way of saying 'as ... as possible'. It's a versatile construction, adaptable to verbs, adjectives, and nouns. A tip? Insert an adverb between 'il più' and 'possibile'. And let's not forget the negative version: 'il meno possibile'. It reminds me of when I tried to sleep as little as possible during high school math classes... without much success! It's interesting how we can play with words to express concepts similar to English, but with that Italian touch that makes everything more melodic. Try using it in your conversations and see how it enriches your Italian. Quick facts How is "il più possibile" used in Italian? "Il più possibile" translates to "as ... as possible" and is used similarly in Italian. What should be placed between "il più" and "possibile"? An adverb is usually placed between "il più" and "possibile" to specify the extent. How does "il meno possibile" differ from "il più possibile"? "Il meno possibile" uses "meno" (less) instead of "più" (more) and indicates the negative form. Can "il più possibile" be used with adjectives? Yes, adjectives can be placed between "il più" and "possibile," similar to adverbs. How should "il più possibile" be adapted for countable nouns? For countable nouns, drop the article "il" and use a plural noun (e.g., "più libri possibile" - as many books as possible). What is an example of using "il meno possibile" with an adjective? "Cerca di renderlo il meno noioso possibile" means "Try to make it as little boring as possible" (as fun as possible). How is "il più possibile" structured with uncountable nouns? For uncountable nouns, drop the article and use the singular form (e.g., "più caffè possibile" - as much coffee as possible). In which context would you use "il più tardi possibile"? "Il più tardi possibile" translates to "as late as possible," useful for scheduling and deadlines. What does "il meno sgarbatamente possibile" mean? "Il meno sgarbatamente possibile" translates to "as politely as possible," focusing on reducing rudeness. How can you express "as quickly as possible" in Italian? "As quickly as possible" in Italian is "il più rapidamente possibile," emphasizing speed. Audio images 🔊 🔊 🔊 Learn on the go Install the FREE Think In Italian app for faster loading, offline mode, and quick access anytime. No registration needed. Works on all devices. Ready in 1 click and 3 seconds. INSTALL APP NOW Audio lesson with 30 sentences to listen and repeat Italian grammar video lesson Main Article “As… as Possible” in Italian In Italian, the expression “as… as possible” is commonly translated as il più… possibile or il meno… possibile, depending on whether you want to maximize or minimize something. In both languages, this construction is used to express the idea of reaching the maximum or minimum degree of an action or characteristic. It can be used with both adjectives and adverbs, as expressed by the following structure: il più/meno (aggettivo/avverbio) possibile. Il più vs il Meno As Much as Possible The expression il più… possibile is used to maximize. It is literally translated as “as (adjective/adverb) as possible”, as il più means “the most” and acts as an intensifier for the adjective or adverb that follows. When paired with adjectives, it suggests a quality at its highest level: Cerca di essere il più gentile possibile. Try to be as kind as possible. Notice that the adjective does not take any comparative or superlative form because più already carries the meaning of “more”, fulfilling the comparative function. When paired with adverbs, it highlights the highest degree of an action: Corri il più velocemente possibile. Run as quickly as possible. You might also need the expression il più possibile alone, which means “as much/many as possible”. It acts as a phrase to emphasize the maximum amount of something without directly referencing an adjective or adverb. It can be used with nouns, verbs, or in a broader sense to suggest effort, quantity, or frequency. The main difference is that it is not used to modify a specific adjective or adverb but it rather implies a general sense of doing the most. For example: Lavoriamoil più possibile. Let’s work as much as possible. Cerco di leggere il più possibile. I try to read as much as possible. As Least as Possible The expression il meno… possibile is used to minimize. It can either be translated as “as less (adjective/adverb) as possible”, or as “as (adjective/adverb) as possible”, depending on the meaning that you want to convey. In fact, il meno means “the least” and acts as a diminisher for the adjective or adverb that follows. When paired with adjectives, it suggests a quality at its lowest level: Cerca di essere il meno rumoroso possibile. Try to be as quiet as possible. Notice that the adjective does not take any comparative or superlative form because meno already carries the meaning of “less”, fulfilling the comparative function. When paired with adverbs, it highlights the lowest degree of an action: Mangia il meno velocemente possibile. Eat as slowly as possible. You might also need the expression il meno possibile alone, which means “as little as possible”. It acts as a phrase to emphasize the minimum amount of something without directly referencing an adjective or adverb. It can be used with nouns,verbs, or in a broader sense to suggest effort, quantity, or frequency. The main difference is that it is not used to modify a specific adjective or adverb but rather implies a general sense of doing the least. For example: Parlail meno possibile. Speak as little as possible. Mangiail meno possibile. Eat as little as possible. Key Terms and Concepts Il più... possibile A phrase used to express maximizing a quality or action. It translates to 'as... as possible' when paired with adjectives or adverbs, such as 'as kind as possible.' Il meno... possibile This expression is used to minimize a quality or action. It translates to 'as... as possible,' but with the meaning of 'least' when paired with adjectives or adverbs. Adjective Modification Using il più or il meno with adjectives emphasizes a trait's maximum or minimum degree, without needing comparative forms. Adverb Modification Pairing il più or il meno with adverbs highlights the maximum or minimum degree of an action, like 'as quickly as possible.' Generic Use of Il più/meno possibile Used alone to suggest maximizing or minimizing effort, quantity, or frequency without modifying an adjective or adverb, e.g., 'work as much as possible.' Test your knowledge in 10 quick questions Next Words Phrases Sentences Words più more meno less possibile possible veloce quick elegante elegant libro book adverbio adverb aggettivo adjective sostantivo noun conto countable Phrases il più velocemente possibile as quickly as possible il più lentamente possibile as slowly as possible il meno possibile as little as possible il più chiaro possibile as clear as possible più tempo possibile as much time as possible il più presto possibile as soon as possible più soldi possibile as much money as possible il più attento possibile as attentive as possible più informazioni possibile as much information as possible il meno rumoroso possibile as quiet as possible Sentences Vorrei finire il progetto il più rapidamente possibile. I would like to finish the project as quickly as possible. Cerca di essere il più cortese possibile quando parli con i clienti. Try to be as polite as possible when speaking with customers. Dobbiamo ridurre i costi il più possibile. We need to reduce costs as much as possible. Mangia il meno zucchero possibile per mantenerti in salute. Eat as little sugar as possible to stay healthy. Prova a raccogliere più informazioni possibile prima di decidere. Try to gather as much information as possible before deciding. Match Memory Crossword Match the Phrases Start Over Memory game Flip the cards to find matching pairs! Restart Game Crossword Across Down Answers FAQs How do you say as soon as in Italian? If you're looking for ways to convey the idea of doing something as quickly as possible, there are two common phrases in Italian: "Al più presto" and "Il prima possibile." What does "il più presto possibile" mean? The phrase "il più presto possibile" translates to as soon as possible in Italian. Questions? The comments section has moved to the Think In Italian Reddit community. Join today! Italian word of the day piove Piove! Esci senza ombrello? It’s raining! Are you going out without an umbrella? See more examples Latest Videos from my Channels Grammar Stories Chats Tips Subscribe to grammar audio lessons channel Subscribe to short stories channel Subscribe to conversation channel Subscribe to Stefano's tips channel Follow me to fluency​ Create a free lifetime account to get access to all the free coursesand other resources. Speak faster without translating in your mind Immerse yourself in real-life stories Practice speaking, stress-free CREATE FREE ACCOUNT Rave Reviews "I've tried other apps like Babbel and Memrise. None made me fluent or made me feel like I was making much meaningful progress in learning a language." Ecem Topcu Aug 7, 2025 "While other courses rely heavily on translation, grammar exercises, or memorization, Think in Italian makes you comfortable speaking Italian like an Italian." Deborah Hause Jul 11, 2025 "While other courses rely heavily on translation, grammar exercises, or memorization, Think in Italian makes you comfortable speaking Italian like an Italian." Dom Scott Jun 21, 2025 "Absolutely marvelous course. I have been using other learning apps, good enough, but I was getting fed up of the monotony and lack of stimuli. I found this course by accident, good accidents do happen." Bernard Evans Jun 2, 2025 "This course is excellent. It's well organized and teaches Italian sentence structure and vocabulary in a logical progression. I've made good progress with Think In Italian." George Dielemans May 27, 2025 "Think in Italian is brilliant. It is the basis of my Italian leaning. I use it everyday. I have researched and tried many other learning methods, but THIS ONE IS THE BEST most integrated, complete and truly current." Mark Kohr May 3, 2025 ★★★★★ Rated 4.9/5 based on 170+ reviews Read more about Italian grammar lessons FacebookYoutubeInstagramTiktokPinterest-pGoogleReddit Martina Ciao! I'm Martina, you AI tutor. I'm ready to answer your questions about my courses, the articles on this website, your account, etc. Send Terms and conditions Refund policy Editorial Guidelines Privacy statement Affiliate program Terms and conditions Refund policy Editorial Guidelines Privacy statement Affiliate program Think Languages LLC 1309 Coffeen Ave, Suite 1200, Sheridan WY, USA, 82801 +1 917 9937880 We use AI to edit content written by our human experts.
14955
https://www.math.utah.edu/~gustafso/s2019/2270/background/ch6/fundTheoremLinearAlgebra.pdf
Fundamental Theorem of Linear Algebra • Orthogonal Vectors • Orthogonal and Orthonormal Set • Orthogonal Complement of a Subspace W • Column Space, Row Space and Null Space of a Matrix A • The Fundamental Theorem of Linear Algebra Orthogonality Definition 1 (Orthogonal Vectors) Two vectors ⃗ u, ⃗ v are said to be orthogonal provided their dot product is zero: ⃗ u · ⃗ v = 0. If both vectors are nonzero (not required in the definition), then the angle θ between the two vectors is determined by cos θ = ⃗ u · ⃗ v ∥⃗ u∥∥⃗ v∥= 0, which implies θ = 90◦. In short, orthogonal vectors form a right angle. Orthogonal and Orthonormal Set Definition 2 (Orthogonal Set of Vectors) A given set of nonzero vectors ⃗ u1, ..., ⃗ uk that satisfies the orthogonality condition ⃗ ui · ⃗ uj = 0, i ̸= j, is called an orthogonal set. Definition 3 (Orthonormal Set of Vectors) A given set of unit vectors ⃗ u1, . . ., ⃗ uk that satisfies the orthogonality condition is called an orthonormal set. Orthogonal Complement W ⊥of a Subspace W Definition. Let W be a subspace of an inner product space V , inner product ⟨⃗ u,⃗ v⟩. The orthogonal complement of W , denoted W ⊥, is the set of all vectors ⃗ v in V such that ⟨⃗ u,⃗ v⟩= 0 for all ⃗ u in W . In set notation: W ⊥= {⃗ v : ⟨⃗ u,⃗ v⟩= 0 for all ⃗ u in W } Example. If V = R3 and W = span{⃗ u1,⃗ u2}, then W ⊥is the span of the calcu-lus/physics cross product ⃗ u1 × ⃗ u2. The equation dim(W ) + dim(W ⊥) = 3 holds (in general dim(W ) + dim(W ⊥) = dim(V )). Theorem. If W is the span of the columns ⃗ u1, . . . ,⃗ un of m × n matrix A (the column space of A), then W ⊥= nullspace(AT) = span{Strang’s Special Solutions for AT⃗ u = ⃗ 0}. Proof. Given W = span{⃗ u1, . . . ,⃗ un}, then W ⊥= {⃗ v : ⃗ v · ⃗ w = 0, all ⃗ w ∈W } = {⃗ v : ⃗ uj · ⃗ v = 0, j = 1, . . . , n} = {⃗ v : AT⃗ v = ⃗ 0}. Strang’s Special solutions are a basis for the homogeneous problem AT⃗ u = ⃗ 0. Therefore, W ⊥= nullspace(AT) = span{Strang’s Special Solutions for AT⃗ u = ⃗ 0}. Column Space, Row Space and Null Space of a Matrix A The column space, row space and null space of an m×n matrix A are sets in Rn or Rm, defined to be the span of a certain set of vectors. The span theorem implies that each of these three sets are subspaces. Definition. The Column Space of a matrix A is the span of the columns of A, a subspace of Rm. The Pivot Theorem implies that colspace(A) = span{pivot columns of A}. Definition. The Row Space of a matrix A is the span of the rows of A, a subspace of Rn. The definition implies two possible bases for this subspace, just one selected in an application: rowspace(A) = span{Nonzero rows of rref(A)} = span{pivot columns of AT}. Definition. The Null Space of a matrix A is the set of all solutions ⃗ x to the homogeneous problem A⃗ x = ⃗ 0, a subspace of Rn. Because solution ⃗ x of A⃗ x = ⃗ 0 is a linear combination of Strang’s special solutions, then nullspace(A) = span{Strang’s Special Solutions for A⃗ x = ⃗ 0}. The Row space is orthogonal to the Null Space Theorem. Each row vector⃗ r in matrix A satisfies⃗ r · ⃗ x = 0, where ⃗ x is a solution of the homogeneous equation A⃗ x = ⃗ 0. Therefore rowspace(A) ⊥nullspace(A). The theorem is remembered from this diagram:   · · · · · · · · ·  ⃗ x =   0 0 0   is equivalent to   row 1 · ⃗ x row 2 · ⃗ x row 3 · ⃗ x  =   0 0 0   which says that the rows of A are orthogonal to solutions ⃗ x of A⃗ x = ⃗ 0. Computing the Orthogonal Complement of a subspace W Theorem. In case W is the subspace of R3 spanned by two independent vectors ⃗ u1,⃗ u2, then the orthogonal complement of W is the line through the origin generated by the cross product vector ⃗ u1 × ⃗ u2: W ⊥= span{⃗ u1,⃗ u2}⊥= span{⃗ u1 × ⃗ u2} Theorem. In case W is a subspace of Rm spanned by all column vectors ⃗ u1, . . . ,⃗ un of an m × n matrix A, then the orthogonal complement of W is the subspace W ⊥= span{⃗ u1, . . . ,⃗ un}⊥ = { ⃗ y : ⃗ y · ⃗ ui = 0 for all i = 1, . . . , n} = nullspace(AT) = span{Strang’s Special Solutions for AT⃗ u = ⃗ 0} Method. To compute a basis for W ⊥, find Strang’s special Solutions for the homogeneous problem AT⃗ u = ⃗ 0. The basis size is k = number of free variables in AT⃗ u = ⃗ 0. Applications may add an additional step to replace this basis by the Gram-Schmidt orthog-onal basis ⃗ y1, . . . ,⃗ yk. Then W ⊥= span{⃗ y1, . . . ,⃗ yk}. Fundamental Theorem of Linear Algebra Definition. The four fundamental subspaces are rowspace(A), colspace(A), nullspace(A) and nullspace(AT). The Fundamental Theorem of Linear Algebra has two parts: (1) Dimension of the Four Fundamental Subspaces. Assume matrix A is m × n with r pivots. Then dim(rowspace(A)) = r, dim(colspace(A)) = r, dim(nullspace(A)) = n −r, dim(nullspace(AT)) = m −r (2) Orthogonality of the Four Fundamental Subspaces. rowspace(A) ⊥nullspace(A) colspace(A) ⊥nullspace(AT) Gilbert Strang’s textbook Linear Algebra has a cover illustration for the fundamental theo-rem of linear algebra. The original article is The Fundamental Theorem of Linear Algebra, The free 1993 jstor PDF is available via the Marriott library. Requires UofU 2-factor login.
14956
https://rosettacode.org/wiki/Hamming_numbers
Jump to content [dismiss] | | | Join our Discord to discuss Rosetta Code! | Hamming numbers From Rosetta Code Hamming numbers You are encouraged to solve this task according to the task description, using any language you may know. Hamming numbers are numbers of the form ``` H = 2i × 3j × 5k where i, j, k ≥ 0 ``` Hamming numbers are also known as ugly numbers and also 5-smooth numbers (numbers whose prime divisors are less or equal to 5). Task Generate the sequence of Hamming numbers, in increasing order. In particular: Show the first twenty Hamming numbers. Show the 1691st Hamming number (the last one below 231). Show the one millionth Hamming number (if the language – or a convenient library – supports arbitrary-precision integers). Related tasks Humble numbers N-smooth numbers References Wikipedia entry: Hamming numbers (this link is re-directed to Regular number). Wikipedia entry: Smooth number OEIS entry: A051037 5-smooth or Hamming numbers Hamming problem from Dr. Dobb's CodeTalk (dead link as of Sep 2011; parts of the thread here and here). 11l Translation of: Python ``` F hamming(limit) V h = limit V (x2, x3, x5) = (2, 3, 5) V i = 0 V j = 0 V k = 0 L(n) 1 .< limit h[n] = min(x2, x3, x5) I x2 == h[n] i++ x2 = 2 h[i] I x3 == h[n] j++ x3 = 3 h[j] I x5 == h[n] k++ x5 = 5 h[k] R h.last print((1..20).map(i -> hamming(i))) print(hamming(1691)) ``` Output: ``` [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 2125764000 ``` 360 Assembly Hamming numbers 12/03/2017 HAM CSECT USING HAM,R13 base register B 72(R15) skip savearea DC 17F'0' savearea STM R14,R12,12(R13) save previous context ST R13,4(R15) link backward ST R15,8(R13) link forward LR R13,R15 set addressability LA R6,1 ii=1 DO WHILE=(C,R6,LE,=F'20') do ii=1 to 20 BAL R14,PRTHAM call prtham LA R6,1(R6) ii++ ENDDO , enddo ii LA R6,1691 ii=1691 BAL R14,PRTHAM call prtham L R13,4(0,R13) restore previous savearea pointer LM R14,R12,12(R13) restore previous context XR R15,R15 rc=0 BR R14 exit PRTHAM EQU ---- prtham ST R14,R14PRT save return addr LR R1,R6 ii XDECO R1,XDEC edit MVC PG+2(4),XDEC+8 output ii LR R1,R6 ii BAL R14,HAMMING call hamming(ii) XDECO R0,XDEC edit MVC PG+8(10),XDEC+2 output hamming(ii) XPRNT PG,L'PG print buffer L R14,R14PRT restore return addr BR R14 ---- return HAMMING EQU ---- hamming(ll) ST R14,R14HAM save return addr ST R1,LL ll MVC HH,=F'1' h(1)=1 SR R0,R0 0 ST R0,I i=0 ST R0,J j=0 ST R0,K k=0 MVC X2,=F'2' x2=2 MVC X3,=F'3' x3=3 MVC X5,=F'5' x5=5 LA R7,1 n=1 L R2,LL ll BCTR R2,0 -1 ST R2,LLM1 ll-1 DO WHILE=(C,R7,LE,LLM1) do n=1 to ll-1 L R4,X2 m=x2 IF C,R4,GT,X3 THEN if m>x3 then L R4,X3 m=x3 ENDIF , endif IF C,R4,GT,X5 THEN if m>x5 then L R4,X5 m=x5 ENDIF , endif LR R1,R7 n SLA R1,2 4 ST R4,HH(R1) h(n+1)=m IF C,R4,EQ,X2 THEN if m=x2 then L R1,I i LA R1,1(R1) i+1 ST R1,I i=i+1 SLA R1,2 4 L R2,HH(R1) h(i+1) MH R2,=H'2' 2 ST R2,X2 x2=2h(i+1) ENDIF , endif IF C,R4,EQ,X3 THEN if m=x3 then L R1,J j LA R1,1(R1) j+1 ST R1,J j=j+1 SLA R1,2 4 L R2,HH(R1) h(j+1) MH R2,=H'3' 3 ST R2,X3 x3=3h(j+1) ENDIF , endif IF C,R4,EQ,X5 THEN if m=x5 then L R1,K k LA R1,1(R1) k+1 ST R1,K k=k+1 SLA R1,2 4 L R2,HH(R1) h(k+1) MH R2,=H'5' 5 ST R2,X5 x5=5h(k+1) ENDIF , endif LA R7,1(R7) n++ ENDDO , enddo n L R1,LL ll SLA R1,2 4 L R0,HH-4(R1) return h(ll) L R14,R14HAM restore return addr BR R14 ---- return R14HAM DS A return addr of hamming R14PRT DS A return addr of print LL DS F ll LLM1 DS F ll-1 I DS F i J DS F j K DS F k X2 DS F x2 X3 DS F x3 X5 DS F x5 PG DC CL80'H(xxxx)=xxxxxxxxxx' XDEC DS CL12 temp LTORG positioning literal pool HH DS 1691F array h(1691) YREGS END HAM Output: ``` H( 1)= 1 H( 2)= 2 H( 3)= 3 H( 4)= 4 H( 5)= 5 H( 6)= 6 H( 7)= 8 H( 8)= 9 H( 9)= 10 H( 10)= 12 H( 11)= 15 H( 12)= 16 H( 13)= 18 H( 14)= 20 H( 15)= 24 H( 16)= 25 H( 17)= 27 H( 18)= 30 H( 19)= 32 H( 20)= 36 H(1691)=2125764000 ``` Ada Works with: GNAT GNAT provides the datatypes Integer, Long_Integer and Long_Long_Integer, which are not large enough to store hamming numbers. In this program, we represent them as the factors for each of the prime numbers 2, 3 and 5, and only convert them to a base-10 numbers for display. We use the gmp library binding part of GNATCOLL, though a simple 'pragma import' would be enough. This version is very fast (20ms for the million-th hamming number), thanks to a good algorithm. We also do not manipulate large numbers directly (gmp lib), but only the factors of the prime. It will fail to compute the billion'th number because we use an array of the stack to store all numbers. It is possible to get rid of this array though it will make the code slightly less readable. ``` with Ada.Numerics.Generic_Elementary_Functions; with Ada.Text_IO; use Ada.Text_IO; with GNATCOLL.GMP.Integers; with GNATCOLL.GMP.Lib; procedure Hamming is type Log_Type is new Long_Long_Float; package Funcs is new Ada.Numerics.Generic_Elementary_Functions (Log_Type); type Factors_Array is array (Positive range <>) of Positive; generic Factors : Factors_Array := (2, 3, 5); -- The factors for smooth numbers. Hamming numbers are 5-smooth. package Smooth_Numbers is type Number is private; function Compute (Nth : Positive) return Number; function Image (N : Number) return String; private type Exponent_Type is new Natural; type Exponents_Array is array (Factors'Range) of Exponent_Type; -- Numbers are stored as the exponents of the prime factors. type Number is record Exponents : Exponents_Array; Log : Log_Type; -- The log of the value, used to ease sorting. end record; function "=" (N1, N2 : Number) return Boolean is (for all F in Factors'Range => N1.Exponents (F) = N2.Exponents (F)); end Smooth_Numbers; package body Smooth_Numbers is One : constant Number := (Exponents => (others => 0), Log => 0.0); Factors_Log : array (Factors'Range) of Log_Type; function Image (N : Number) return String is use GNATCOLL.GMP.Integers, GNATCOLL.GMP.Lib; R, Tmp : Big_Integer; begin Set (R, "1"); for F in Factors'Range loop Set (Tmp, Factors (F)'Image); Raise_To_N (Tmp, GNATCOLL.GMP.Unsigned_Long (N.Exponents (F))); Multiply (R, Tmp); end loop; return Image (R); end Image; function Compute (Nth : Positive) return Number is Candidates : array (Factors'Range) of Number; Values : array (1 .. Nth) of Number; -- Will result in Storage_Error for very large values of Nth Indices : array (Factors'Range) of Natural := (others => Values'First); Current : Number; Tmp : Number; begin for F in Factors'Range loop Factors_Log (F) := Funcs.Log (Log_Type (Factors (F))); Candidates (F) := One; Candidates (F).Exponents (F) := 1; Candidates (F).Log := Factors_Log (F); end loop; Values (1) := One; for Count in 2 .. Nth loop -- Find next value (the lowest of the candidates) Current := Candidates (Factors'First); for F in Factors'First + 1 .. Factors'Last loop if Candidates (F).Log < Current.Log then Current := Candidates (F); end if; end loop; Values (Count) := Current; -- Update the candidates. There might be several candidates with -- the same value for F in Factors'Range loop if Candidates (F) = Current then Indices (F) := Indices (F) + 1; Tmp := Values (Indices (F)); Tmp.Exponents (F) := Tmp.Exponents (F) + 1; Tmp.Log := Tmp.Log + Factors_Log (F); Candidates (F) := Tmp; end if; end loop; end loop; return Values (Nth); end Compute; end Smooth_Numbers; package Hamming is new Smooth_Numbers ((2, 3, 5)); begin for N in 1 .. 20 loop Put (" " & Hamming.Image (Hamming.Compute (N))); end loop; New_Line; Put_Line (Hamming.Image (Hamming.Compute (1691))); Put_Line (Hamming.Image (Hamming.Compute (1_000_000))); end Hamming; ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` ALGOL 68 Works with: Algol 68 Genie 1.19.0 Hamming numbers are generated in a trivial iterative way as in the Python version below. This program keeps the series needed to generate the numbers as short as possible using flexible rows; on the downside, it spends considerable time on garbage collection. ``` PR precision=100 PR MODE SERIES = FLEX [1 : 0] UNT, # Initially, no elements # UNT = LONG LONG INT; # A 100-digit unsigned integer # PROC hamming number = (INT n) UNT: # The n-th Hamming number # CASE n IN 1, 2, 3, 4, 5, 6, 8, 9, 10, 12 # First 10 in a table # OUT # Additional operators # OP MIN = (INT i, j) INT: (i < j | i | j), MIN = (UNT i, j) UNT: (i < j | i | j); PRIO MIN = 9; OP LAST = (SERIES h) UNT: h[UPB h]; # Last element of a series # OP +:= = (REF SERIES s, UNT elem) VOID: # Extend a series by one element, only keep the elements you need # (INT lwb = (i MIN j) MIN k, upb = UPB s; REF SERIES new s = HEAP FLEX [lwb : upb + 1] UNT; (new s[lwb : upb] := s[lwb : upb], new s[upb + 1] := elem); s := new s ); # Determine the n-th hamming number iteratively # SERIES h := 1, # Series, initially one element # UNT m2 := 2, m3 := 3, m5 := 5, # Multipliers # INT i := 1, j := 1, k := 1; # Counters # TO n - 1 DO h +:= (m2 MIN m3) MIN m5; (LAST h = m2 | m2 := 2 h[i +:= 1]); (LAST h = m3 | m3 := 3 h[j +:= 1]); (LAST h = m5 | m5 := 5 h[k +:= 1]) OD; LAST h ESAC; FOR k TO 20 DO print ((whole (hamming number (k), 0), blank)) OD; print ((newline, whole (hamming number (1 691), 0))); print ((newline, whole (hamming number (1 000 000), 0))) ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` ALGOL W Algol W only has 32 bit integers, so we just show the first 20 Hamming Numbers and Hamming number 1691. This uses the algorithm from Dr Dobbs (as in the Python version). The Coffee Script solution has some notes on how it works. begin % returns the minimum of a and b % integer procedure min ( integer value a, b ) ; if a < b then a else b; % find and print Hamming Numbers % % Algol W only supports 32-bit integers so we just find % % the 1691 32-bit Hamming Numbers % integer MAX_HAMMING; MAX_HAMMING := 1691; begin integer array H( 1 :: MAX_HAMMING ); integer p2, p3, p5, last2, last3, last5; H( 1 ) := 1; last2 := last3 := last5 := 1; p2 := 2; p3 := 3; p5 := 5; for hPos := 2 until MAX_HAMMING do begin integer m; % the next Hamming number is the lowest of the next multiple of 2, 3, and 5 % m := min( min( p2, p3 ), p5 ); H( hPos ) := m; if m = p2 then begin last2 := last2 + 1; p2 := 2 H( last2 ) end if_used_power_of_2 ; if m = p3 then begin last3 := last3 + 1; p3 := 3 H( last3 ) end if_used_power_of_3 ; if m = p5 then begin last5 := last5 + 1; p5 := 5 H( last5 ) end if_used_power_of_5 ; end for_hPos ; i_w := 1; s_w := 1; write( H( 1 ) ); for i := 2 until 20 do writeon( H( i ) ); write( H( MAX_HAMMING ) ) end end. Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 ``` Arturo ``` hamming: function [limit][ if limit=1 -> return 1 h: map 0..limit-1 'z -> 1 x2: 2, x3: 3, x5: 5 i: 0, j: 0, k: 0 loop 1..limit-1 'n [ set h n min @[x2 x3 x5] if x2 = h\[n] [ i: i + 1 x2: 2 h\[i] ] if x3 = h\[n] [ j: j + 1 x3: 3 h\[j] ] if x5 = h\[n] [ k: k + 1 x5: 5 h\[k] ] ] last h ] print map 1..20 => hamming print hamming 1691 print hamming 1000000 ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ATS ``` // // How to compile: // patscc -DATS_MEMALLOC_LIBC -o hamming hamming.dats // include "share/atspre_staload.hats" fun min3 ( A: arrayref(int, 3) ) : natLt(3) = i where { var x: int = A var i: natLt(3) = 0 val () = if A < x then (x := A; i := 1) val () = if A < x then (x := A; i := 2) } ( end of [min3] ) fun hamming {n:pos} ( n: int(n) ) : int = let // var A = @int val A = $UNSAFE.cast{arrayref(int, 3)}(addr@A) var I = @int val I = $UNSAFE.cast{arrayref(int, 3)}(addr@I) val H = arrayref_make_elt (i2sz(succ(n)), 0) val () = H := 1 // fun loop{k:pos} (k: int(k)) : void = ( // if k < n then let val i = min3(A) val k = ( if A[i] > H[k-1] then (H[k] := A[i]; k+1) else k ) : intBtwe(k, k+1) val ii = I[i] val () = I[i] := ii+1 val ii = $UNSAFE.cast{natLte(n)}(ii) val () = if i = 0 then A[i] := 2H[ii] val () = if i = 1 then A[i] := 3H[ii] val () = if i = 2 then A[i] := 5H[ii] in loop(k) end // end of [then] else () // end of [else] // ) ( end of [loop] ) // in loop (1); H[n-1] end ( end of [hamming] ) implement main0 () = { val () = loop(1) where { fun loop {n:pos} ( n: int(n) ) : void = if n <= 20 then let val () = println! ("hamming(",n,") = ", hamming(n)) in loop(n+1) end // end of [then] // end of [if] } ( end of [val] ) val n = 1691 val () = println! ("hamming(",n,") = ", hamming(n)) // } ( end of [main0] ) ``` Output: ``` hamming(1) = 1 hamming(2) = 2 hamming(3) = 3 hamming(4) = 4 hamming(5) = 5 hamming(6) = 6 hamming(7) = 8 hamming(8) = 9 hamming(9) = 10 hamming(10) = 12 hamming(11) = 15 hamming(12) = 16 hamming(13) = 18 hamming(14) = 20 hamming(15) = 24 hamming(16) = 25 hamming(17) = 27 hamming(18) = 30 hamming(19) = 32 hamming(20) = 36 hamming(1691) = 2125764000 ``` AutoHotkey ``` SetBatchLines, -1 Msgbox % hamming(1,20) Msgbox % hamming(1690) return hamming(first,last=0) { if (first < 1) ans=ERROR if (last = 0) last := first i:=0, j:=0, k:=0 num1 := ceil((last 20)(1/3)) num2 := ceil(num1 ln(2)/ln(3)) num3 := ceil(num1 ln(2)/ln(5)) loop { H := (2i) (3j) (5k) if (H > 0) ans = %H%`n%ans% i++ if (i > num1) { i=0 j++ if (j > num2) { j=0 k++ } } if (k > num3) break } Sort ans, N Loop, parse, ans, `n, `r { if (A_index > last) break if (A_index < first) continue Output = %Output%`n%A_LoopField% } return Output } ``` AWK ``` syntax: gawk -M -f hamming_numbers.awk BEGIN { for (i=1; i<=20; i++) { printf("%d ",hamming(i)) } printf("\n1691: %d\n",hamming(1691)) printf("\n1000000: %d\n",hamming(1000000)) exit(0) } function hamming(limit, h,i,j,k,n,x2,x3,x5) { h = 1 x2 = 2 x3 = 3 x5 = 5 for (n=1; n<=limit; n++) { h[n] = min(x2,min(x3,x5)) if (h[n] == x2) { x2 = 2 h[++i] } if (h[n] == x3) { x3 = 3 h[++j] } if (h[n] == x5) { x5 = 5 h[++k] } } return(h[limit-1]) } function min(x,y) { return((x < y) ? x : y) } ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 1691: 2125764000 1000000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` BASIC256 Translation of: FreeBASIC ``` print "The first 20 Hamming numbers are :" for i = 1 to 20 print Hamming(i);" "; next i print print "H( 1691) = "; Hamming(1691) end function min(a, b) if a < b then return a else return b end function function Hamming(limit) dim h(1000000) h = 1 x2 = 2 : x3 = 3 : x5 = 5 i = 0 : j = 0 : k = 0 for n = 1 to limit h[n] = min(x2, min(x3, x5)) if x2 = h[n] then i += 1: x2 = 2 h[i] if x3 = h[n] then j += 1: x3 = 3 h[j] if x5 = h[n] then k += 1: x5 = 5 h[k] next n return h[limit -1] end function ``` BBC BASIC ``` @% = &1010 FOR h% = 1 TO 20 PRINT "H("; h% ") = "; FNhamming(h%) NEXT PRINT "H(1691) = "; FNhamming(1691) END DEF FNhamming(l%) LOCAL i%, j%, k%, n%, m, x2, x3, x5, h%() DIM h%(l%) : h%(0) = 1 x2 = 2 : x3 = 3 : x5 = 5 FOR n% = 1 TO l%-1 m = x2 IF m > x3 m = x3 IF m > x5 m = x5 h%(n%) = m IF m = x2 i% += 1 : x2 = 2 h%(i%) IF m = x3 j% += 1 : x3 = 3 h%(j%) IF m = x5 k% += 1 : x5 = 5 h%(k%) NEXT = h%(l%-1) ``` Output: ``` H(1) = 1 H(2) = 2 H(3) = 3 H(4) = 4 H(5) = 5 H(6) = 6 H(7) = 8 H(8) = 9 H(9) = 10 H(10) = 12 H(11) = 15 H(12) = 16 H(13) = 18 H(14) = 20 H(15) = 24 H(16) = 25 H(17) = 27 H(18) = 30 H(19) = 32 H(20) = 36 H(1691) = 2125764000 ``` Bc ``` cat hamming_numbers.bc define min(x,y) { if (x < y) { return x } else { return y } } define hamming(limit) { i = 0 j = 0 k = 0 h = 1 x2 = 2 x3 = 3 x5 = 5 for (n=1; n<=limit; n++) { h[n] = min(x2,min(x3,x5)) if (h[n] == x2) { x2 = 2 h[++i] } if (h[n] == x3) { x3 = 3 h[++j] } if (h[n] == x5) { x5 = 5 h[++k] } } return (h[limit-1]) } for (lab=1; lab<=20; lab++) { hamming(lab) } hamming(1691) hamming(1000000) quit ``` Output: `` $ bc hamming_numbers.bc bc 1.06.95 Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details typewarranty'. 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 51931278044838873608958984375000000000000000000000000000000000000000\ 0000000000000000 ``` Bracmat Translation of: D ( ( hamming = x2 x3 x5 n i j k min . tbl$(h,!arg) { This creates an array. Arrays are always global in Bracmat. } & 1:?(0$h) & 2:?x2 & 3:?x3 & 5:?x5 & 0:?n:?i:?j:?k & whl ' ( !n+1:<!arg:?n & !x2:?min & (!x3:<!min:?min|) & (!x5:<!min:?min|) & !min:?(!n$h) { !n is index into array h } & ( !x2:!min & 2!((1+!i:?i)$h):?x2 | ) & ( !x3:!min & 3!((1+!j:?j)$h):?x3 | ) & ( !x5:!min & 5!((1+!k:?k)$h):?x5 | ) ) & !((!arg+-1)$h) (tbl$(h,0)&) { We delete the array by setting its size to 0 } ) & 0:?I & whl'(!I+1:~>20:?I&put$(hamming$!I " ")) & out$ & out$(hamming$1691) & out$(hamming$1000000) ); Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Bruijn Translation of: Haskell n1000000 takes a very long time but eventually reduces to the correct result. ``` :import std/Combinator . :import std/Number . :import std/List . merge y ]])]]] go 3 <? 1 (3 : (6 2 4)) (1 : (6 5 0)) classic version while avoiding duplicate generation hammings-classic (+1) : (foldr u empty ((+2) : ((+3) : {}(+5)))) u ] :test ((hammings-classic !! (+42)) =? (+162)) () enumeration by a chain of folded merges (faster) hammings-folded ([(0 ∘ a) ∘ (0 ∘ b)] (foldr merge1 empty)) $ c merge1 ]] a iterate (map (mul (+5))) b iterate (map (mul (+3))) c iterate (mul (+2)) (+1) :test ((hammings-folded !! (+42)) =? (+162)) () --- output --- main [first-twenty : (n1691 : {}n1000000)] first-twenty take (+20) hammings-folded n1691 hammings-folded !! (+1690) n1000000 hammings-folded !! (+999999) ``` C Using a min-heap to keep track of numbers. Does not handle big integers. ``` include include typedef unsigned long long ham; size_t alloc = 0, n = 1; ham q = 0; void qpush(ham h) { int i, j; if (alloc <= n) { alloc = alloc ? alloc 2 : 16; q = realloc(q, sizeof(ham) alloc); } for (i = n++; (j = i/2) && q[j] > h; q[i] = q[j], i = j); q[i] = h; } ham qpop() { int i, j; ham r, t; / outer loop for skipping duplicates / for (r = q; n > 1 && r == q; q[i] = t) { / inner loop is the normal down heap routine / for (i = 1, t = q[--n]; (j = i 2) < n;) { if (j + 1 < n && q[j] > q[j+1]) j++; if (t <= q[j]) break; q[i] = q[j], i = j; } } return r; } int main() { int i; ham h; for (qpush(i = 1); i <= 1691; i++) { / takes smallest value, and queue its multiples / h = qpop(); qpush(h 2); qpush(h 3); qpush(h 5); if (i <= 20 || i == 1691) printf("%6d: %llu\n", i, h); } / free(q); / return 0; } ``` Alternative Standard algorithm. Numbers are stored as exponents of factors instead of big integers, while GMP is only used for display. It's much more efficient this way. ``` include include include include include / number of factors. best be mutually prime -- duh. / define NK 3 define MAX_HAM (1 << 24) define MAX_POW 1024 int n_hams = 0, idx[NK] = {0}, fac[] = { 2, 3, 5, 7, 11}; / k-smooth numbers are stored as their exponents of each factor; v is the log of the number, for convenience. / typedef struct { int e[NK]; double v; } ham_t, ham; ham_t hams, values[NK] = {{{0}, 0}}; double inc[NK][MAX_POW]; / most of the time v can be just incremented, but eventually floating point precision will bite us, so better recalculate / inline void _setv(ham x) { int i; for (x->v = 0, i = 0; i < NK; i++) x->v += inc[i][x->e[i]]; } inline int _eq(ham a, ham b) { int i; for (i = 0; i < NK && a->e[i] == b->e[i]; i++); return i == NK; } ham get_ham(int n) { int i, ni; ham h; n--; while (n_hams < n) { for (ni = 0, i = 1; i < NK; i++) if (values[i].v < values[ni].v) ni = i; (h = hams + ++n_hams) = values[ni]; for (ni = 0; ni < NK; ni++) { if (! _eq(values + ni, h)) continue; values[ni] = hams[++idx[ni]]; values[ni].e[ni]++; _setv(values + ni); } } return hams + n; } void show_ham(ham h) { static mpz_t das_ham, tmp; int i; mpz_init_set_ui(das_ham, 1); mpz_init_set_ui(tmp, 1); for (i = 0; i < NK; i++) { mpz_ui_pow_ui(tmp, fac[i], h->e[i]); mpz_mul(das_ham, das_ham, tmp); } gmp_printf("%Zu\n", das_ham); } int main() { int i, j; hams = malloc(sizeof(ham_t) MAX_HAM); for (i = 0; i < NK; i++) { values[i].e[i] = 1; inc[i] = log(fac[i]); _setv(values + i); for (j = 2; j < MAX_POW; j++) inc[i][j] = j inc[i]; } printf(" 1,691: "); show_ham(get_ham(1691)); printf(" 1,000,000: "); show_ham(get_ham(1e6)); printf("10,000,000: "); show_ham(get_ham(1e7)); return 0; } ``` Output: 1,691: 2125764000 1,000,000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 10,000,000: 16244105063830431823239 ..<a gadzillion digits>.. 000000000000000000000 C# Translation of: D ``` using System; using System.Numerics; using System.Linq; namespace Hamming { class MainClass { public static BigInteger Hamming(int n) { BigInteger two = 2, three = 3, five = 5; var h = new BigInteger[n]; h = 1; BigInteger x2 = 2, x3 = 3, x5 = 5; int i = 0, j = 0, k = 0; for (int index = 1; index < n; index++) { h[index] = BigInteger.Min(x2, BigInteger.Min(x3, x5)); if (h[index] == x2) x2 = two h[++i]; if (h[index] == x3) x3 = three h[++j]; if (h[index] == x5) x5 = five h[++k]; } return h[n - 1]; } public static void Main(string[] args) { Console.WriteLine(string.Join(" ", Enumerable.Range(1, 20).ToList().Select(x => Hamming(x)))); Console.WriteLine(Hamming(1691)); Console.WriteLine(Hamming(1000000)); } } } ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Generic version for any set of numbers The algorithm is similar to the one above. ``` using System; using System.Numerics; using System.Linq; namespace Hamming { class MainClass { public static BigInteger[] Hamming(int n, int[] a) { var primes = a.Select(x => (BigInteger)x).ToArray(); var values = a.Select(x => (BigInteger)x).ToArray(); var indexes = new int[a.Length]; var results = new BigInteger[n]; results = 1; for (int iter = 1; iter < n; iter++) { results[iter] = values; for (int p = 1; p < primes.Length; p++) if (results[iter] > values[p]) results[iter] = values[p]; for (int p = 0; p < primes.Length; p++) if (results[iter] == values[p]) values[p] = primes[p] results[++indexes[p]]; } return results; } public static void Main(string[] args) { foreach (int[] primes in new int[][] { new int[] {2,3,5}, new int[] {2,3,5,7} }) { Console.WriteLine("{0}-Smooth:", primes.Last()); Console.WriteLine(string.Join(" ", Hamming(20, primes))); Console.WriteLine(Hamming(1691, primes).Last()); Console.WriteLine(Hamming(1000000, primes).Last()); Console.WriteLine(); } } } } ``` Output: ``` 5-Smooth: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 7-Smooth: 1 2 3 4 5 6 7 8 9 10 12 14 15 16 18 20 21 24 25 27 3317760 4157409948433216829957008507500000000 ``` Fast version Like some of the other implementations on this page, this version represents each number as a list of exponents which would be applied to each prime number. So the number 60 would be represented as int { 2, 1, 1 } which is interpreted as 2^2 3^1 5^1. As often happens, optimizing for speed caused a marked increase in code size and complexity. Clearly the versions I wrote above are easier to read & understand. They were also much quicker to write. But the generic version above runs in 3+ seconds for the 1000000th 5-smooth number whereas this version does it in 0.35 seconds, 8-10 times faster. I've tried to comment it as best I could, without bloating the code too much. --Mike Lorenz ``` using System; using System.Linq; using System.Numerics; namespace HammingFast { class MainClass { private static int[] _primes = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29 }; public static BigInteger Big(int[] exponents) { BigInteger val = 1; for (int i = 0; i < exponents.Length; i++) for (int e = 0; e < exponents[i]; e++) val = val _primes[i]; return val; } public static int[] Hamming(int n, int nprimes) { var hammings = new int[n, nprimes]; // array of hamming #s we generate var hammlogs = new double[n]; // log values for above var primelogs = new double[nprimes]; // pre-calculated prime log values var indexes = new int[nprimes]; // intermediate hamming values as indexes into hammings var listheads = new int[nprimes, nprimes]; // intermediate hamming list heads var listlogs = new double[nprimes]; // log values of list heads for (int p = 0; p < nprimes; p++) { listheads[p, p] = 1; // init list heads to prime values primelogs[p] = Math.Log(_primes[p]); // pre-calc prime log values listlogs[p] = Math.Log(_primes[p]); // init list head log values } for (int iter = 1; iter < n; iter++) { int min = 0; // find index of min item in list heads for (int p = 1; p < nprimes; p++) if (listlogs[p] < listlogs[min]) min = p; hammlogs[iter] = listlogs[min]; // that's the next hamming number for (int i = 0; i < nprimes; i++) hammings[iter, i] = listheads[min, i]; for (int p = 0; p < nprimes; p++) { // update each list head if it matches new value bool equal = true; // test each exponent to see if number matches for (int i = 0; i < nprimes; i++) { if (hammings[iter, i] != listheads[p, i]) { equal = false; break; } } if (equal) { // if it matches... int x = ++indexes[p]; // set index to next hamming number for (int i = 0; i < nprimes; i++) // copy each hamming exponent listheads[p, i] = hammings[x, i]; listheads[p, p] += 1; // increment exponent = mult by prime listlogs[p] = hammlogs[x] + primelogs[p]; // add log(prime) to log(value) = mult by prime } } } var result = new int[nprimes]; for (int i = 0; i < nprimes; i++) result[i] = hammings[n - 1, i]; return result; } public static void Main(string[] args) { foreach (int np in new int[] { 3, 4, 5 }) { Console.WriteLine("{0}-Smooth:", _primes[np - 1]); Console.WriteLine(string.Join(" ", Enumerable.Range(1, 20).Select(x => Big(Hamming(x, np))))); Console.WriteLine(Big(Hamming(1691, np))); Console.WriteLine(Big(Hamming(1000000, np))); Console.WriteLine(); } } } } ``` Output: ``` 5-Smooth: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 7-Smooth: 1 2 3 4 5 6 7 8 9 10 12 14 15 16 18 20 21 24 25 27 3317760 4157409948433216829957008507500000000 11-Smooth: 1 2 3 4 5 6 7 8 9 10 11 12 14 15 16 18 20 21 22 24 296352 561912530929780078125000 ``` C# Enumerator Version I wanted to fix the enumerator (old) version, as it wasn't working. It became a bit of an obsession... after a few iterations I came up with the following, which is the fastest C# version on my computer - your mileage may vary. It combines the speed of the Log method; Log(2)+Log(3)=Log(23) to help determine which is the next one to use. Then I have added some logic (using the series property) to ensure that exponent sets are never duplicated - which speeds the calculations up a bit.... Adding this trick to the Fast Version will probably result in the fastest version, but I'll leave that to someone else to implement. Finally it's all enumerated through a crazy one-way-linked-list-type-structure that only exists as long as the enumerator and is left up to the garbage collector to remove the bits no longer needed... I hope it's commented enough... follow it if you dare! ``` using System; using System.Collections.Generic; using System.Linq; using System.Numerics; namespace HammingTest { class HammingNode { public double log; public int[] exponents; public HammingNode next; public int series; } class HammingListEnumerator : IEnumerable<BigInteger> { private int[] primes; private double[] primelogs; private HammingNode next; private HammingNode[] values; private HammingNode[] indexes; public HammingListEnumerator(IEnumerable<int> seeds) { // Ensure our seeds are properly ordered, and generate their log values primes = seeds.OrderBy(x => x).ToArray(); primelogs = primes.Select(x => Math.Log10(x)).ToArray(); // Start at 1 (log(1)=0, exponents are all 0, series = none) next = new HammingNode { log = 0, exponents = new int[primes.Length], series = primes.Length }; // Set all exponent sequences to the start, and calculate the first value for each exponent indexes = new HammingNode[primes.Length]; values = new HammingNode[primes.Length]; for(int i = 0; i < primes.Length; ++i) { indexes[i] = next; values[i] = AddExponent(next, i); } } // Make a copy of a node, and increment the specified exponent value private HammingNode AddExponent(HammingNode node, int i) { HammingNode ret = new HammingNode { log = node.log + primelogs[i], exponents = (int[])node.exponents.Clone(), series = i }; ++ret.exponents[i]; return ret; } private void GetNext() { // Find which exponent value is the lowest int min = 0; for(int i = 1; i < values.Length; ++i) if(values[i].log < values[min].log) min = i; // Add it to the end of the 'list', and move to it next.next = values[min]; next = values[min]; // Find the next node in an allowed sequence (skip those that would be duplicates) HammingNode val = indexes[min].next; while(val.series < min) val = val.next; // Keep the current index, and calculate the next value in the series for that exponent indexes[min] = val; values[min] = AddExponent(val, min); } // Skip values without having to calculate the BigInteger value from the exponents public HammingListEnumerator Skip(int count) { for(int i = count; i > 0; --i) GetNext(); return this; } // Calculate the BigInteger value from the exponents internal BigInteger ValueOf(HammingNode n) { BigInteger val = 1; for(int i = 0; i < n.exponents.Length; ++i) for(int e = 0; e < n.exponents[i]; e++) val = val primes[i]; return val; } public IEnumerator<BigInteger> GetEnumerator() { while(true) { yield return ValueOf(next); GetNext(); } } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } class Program { static void Main(string[] args) { foreach(int[] primes in new int[][] { new int[] { 2, 3, 5 }, new int[] { 2, 3, 5, 7 }, new int[] { 2, 3, 5, 7, 9}}) { HammingListEnumerator hammings = new HammingListEnumerator(primes); System.Diagnostics.Debug.WriteLine("{0}-Smooth:", primes.Last()); System.Diagnostics.Debug.WriteLine(String.Join(" ", hammings.Take(20).ToArray())); System.Diagnostics.Debug.WriteLine(hammings.Skip(1691 - 20).First()); System.Diagnostics.Debug.WriteLine(hammings.Skip(1000000 - 1691).First()); System.Diagnostics.Debug.WriteLine(""); } } } } ``` Output: ``` 5-Smooth: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 7-Smooth: 1 2 3 4 5 6 7 8 9 10 12 14 15 16 18 20 21 24 25 27 3317760 4157409948433216829957008507500000000 11-Smooth: 1 2 3 4 5 6 7 8 9 10 11 12 14 15 16 18 20 21 22 24 296352 561912530929780078125000 ``` Alternate Generic Enumerating version YMMV, but unlike the author of the above code, I found the above version to be much slower on my machine than the "Generic version". The following version is actually just a little slower than the Generic version but uses much less memory due to avoiding duplicates and only keeping in memory those "lazy list" streams necessary for calculation from 1/5 of the current range to 1/2 (for Smooth-5 numbers), and not successive values in those ranges but only the values the are the multiples of previous ranges. Like the Haskell code from which it is translated, the head of the streams is not retained so can be garbage collected when no longer necessary and it is recommended that the primes be processed in reverse order so that the least dense streams are processed first for slightly less memory use and operations. It also shows that one can use somewhat functional programming techniques in C#. The class implements its own partial version of a lazy list using the Lazy class and uses lambda closures for the recursive use of the successive streams to avoid stack use. It uses Aggregate to implement the Haskell "foldl" function. The code demonstrates that even though C# is primarily imperative in paradigm, with its ability to implement closures using delegates/lambdas, it can express some algorithms such as this mostly functionally. It isn't nearly as fast as a Haskell, Scala or even Clojure and Scheme (GambitC) versions of this algorithm, being about five times slower is primarily due to its use of many small heap based instances of classes, both for the LazyList's and for closures (implemented using at least one class to hold the captured free variables) and the inefficiency of DotNet's allocation and garbage collection of many small instance objects (although about twice as fast as F#'s implementation, whose closures must require even more small object instances); it seems Haskell and the (Java) JVM are much more efficient at doing these allocations/garbage collections for many small objects. The slower speed to a relatively minor extent is also due to less efficient BigInteger operations: Translation of: Haskell ``` using System; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Numerics; namespace Hamming { class Hammings : IEnumerable { private class LazyList { public T v; public Lazy> cont; public LazyList(T v, Lazy> cont) { this.v = v; this.cont = cont; } } private uint[] primes; private Hammings() { } // must have an argument!!! public Hammings(uint[] prms) { this.primes = prms; } private LazyList merge(LazyList xs, LazyList ys) { if (xs == null) return ys; else { var x = xs.v; var y = ys.v; if (BigInteger.Compare(x, y) < 0) { var cont = new Lazy>(() => merge(xs.cont.Value, ys)); return new LazyList(x, cont); } else { var cont = new Lazy>(() => merge(xs, ys.cont.Value)); return new LazyList(y, cont); } } } private LazyList llmult(uint mltplr, LazyList ll) { return new LazyList(mltplr ll.v, new Lazy>(() => llmult(mltplr, ll.cont.Value))); } public IEnumerator GetEnumerator() { Func,uint,LazyList\> u = (acc, p) => { LazyList r = null; var cont = new Lazy>(() => r); r = new LazyList(1, cont); r = this.merge(acc, llmult(p, r)); return r; }; yield return 1; for (var stt = primes.Aggregate(null, u); ; stt = stt.cont.Value) yield return stt.v; } IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } class Program { static void Main(string[] args) { Console.WriteLine("Calculates the Hamming sequence of numbers.\r\n"); var primes = new uint[] { 5, 3, 2 }; Console.WriteLine(String.Join(" ", (new Hammings(primes)).Take(20).ToArray())); Console.WriteLine((new Hammings(primes)).ElementAt(1691 - 1)); var n = 1000000; var elpsd = -DateTime.Now.Ticks; var num = (new Hammings(primes)).ElementAt(n - 1); elpsd += DateTime.Now.Ticks; Console.WriteLine(num); Console.WriteLine("The {0}th hamming number took {1} milliseconds", n, elpsd / 10000); Console.Write("\r\nPress any key to exit:"); Console.ReadKey(true); Console.WriteLine(); } } } ``` Fast enumerating logarithmic version The so-called "fast" generic version above isn't really all that fast due to all the extra array accesses required by the generic implementation and that it doesn't avoid duplicates as the above functional code does avoid. It also uses a lot of memory as it has arrays that are the size of the range for which the Hamming numbers are calculated. The following code eliminates or reduces all of those problems by being non-generic, eliminating duplicate calculations, saving memory by "draining" the growable List's used in blocks as back pointer indexes are used (thus using memory at the same rate as the functional version), thus avoiding excessive allocations/garbage collections; it also is enumerates through the Hamming numbers although that comes at a slight cost in overhead function calls: Translation of: Nim ``` using System; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Numerics; class HammingsLogArr : IEnumerable> { public static BigInteger trival(Tuple tpl) { BigInteger rslt = 1; for (var i = 0; i < tpl.Item1; ++i) rslt = 2; for (var i = 0; i < tpl.Item2; ++i) rslt = 3; for (var i = 0; i < tpl.Item3; ++i) rslt = 5; return rslt; } private const double lb3 = 1.5849625007211561814537389439478; // Math.Log(3) / Math.Log(2); private const double lb5 = 2.3219280948873623478703194294894; // Math.Log(5) / Math.Log(2); private struct logrep { public double lg; public uint x2, x3, x5; public logrep(double lg, uint x, uint y, uint z) { this.lg = lg; this.x2 = x; this.x3 = y; this.x5 = z; } public logrep mul2() { return new logrep (this.lg + 1.0, this.x2 + 1, this.x3, this.x5); } public logrep mul3() { return new logrep(this.lg + lb3, this.x2, this.x3 + 1, this.x5); } public logrep mul5() { return new logrep(this.lg + lb5, this.x2, this.x3, this.x5 + 1); } } public IEnumerator> GetEnumerator() { var one = new logrep(); var s2 = new List(); var s3 = new List(); s2.Add(one); s3.Add(one.mul3()); var s5 = one.mul5(); var mrg = one.mul3(); var s2hdi = 0; var s3hdi = 0; while (true) { if (s2hdi >= s2.Count) { s2.RemoveRange(0, s2hdi); s2hdi = 0; } // assume capacity stays the same... var v = s2[s2hdi]; if ( v.lg < mrg.lg) { s2.Add(v.mul2()); s2hdi++; } else { if (s3hdi >= s3.Count) { s3.RemoveRange(0, s3hdi); s3hdi = 0; } v = mrg; s2.Add(v.mul2()); s3.Add(v.mul3()); s3hdi++; var chkv = s3[s3hdi]; if (chkv.lg < s5.lg) { mrg = chkv; } else { mrg = s5; s5 = s5.mul5(); s3hdi--; } } yield return Tuple.Create(v.x2, v.x3, v.x5); } } IEnumerator IEnumerable.GetEnumerator() { return this.GetEnumerator(); } } class Program { static void Main(string[] args) { Console.WriteLine(String.Join(" ", (new HammingsLogArr()).Take(20) .Select(t => HammingsLogArr.trival(t)) .ToArray())); Console.WriteLine(HammingsLogArr.trival((new HammingsLogArr()).ElementAt((int)1691 - 1))); var n = 1000000UL; var elpsd = -DateTime.Now.Ticks; var rslt = (new HammingsLogArr()).ElementAt((int)n - 1); elpsd += DateTime.Now.Ticks; Console.WriteLine("2^{0} times 3^{1} times 5^{2}", rslt.Item1, rslt.Item2, rslt.Item3); var lgrthm = Math.Log10(2.0) ((double)rslt.Item1 + ((double)rslt.Item2 Math.Log(3.0) + (double)rslt.Item3 Math.Log(5.0)) / Math.Log(2.0)); var pwr = Math.Floor(lgrthm); var mntsa = Math.Pow(10.0, lgrthm - pwr); Console.WriteLine("Approximately: {0}E+{1}", mntsa, pwr); var s = HammingsLogArr.trival(rslt).ToString(); var lngth = s.Length; Console.WriteLine("Decimal digits: {0}", lngth); if (lngth <= 10000) { var i = 0; for (; i < lngth - 100; i += 100) Console.WriteLine(s.Substring(i, 100)); Console.WriteLine(s.Substring(i)); } Console.WriteLine("The {0}th hamming number took {1} milliseconds", n, elpsd / 10000); Console.Write("\r\nPress any key to exit:"); Console.ReadKey(true); Console.WriteLine(); } } ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 2^55 times 3^47 times 5^64 Approximately: 5.19312780448414E+83 Decimal digits: 84 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 The 1000000th hamming number took 55 milliseconds The above code is about 30 times faster than the functional code due to both eliminating the lambda closures that were the main problem with that code as well as eliminating the BigInteger operations. It has about O(n) empirical performance and can find the billionth Hamming number in about 60 seconds. Extremely fast non-enumerating version calculating the error band The above code is about as fast as one can go generating sequences; however, if one is willing to forego sequences and just calculate the nth Hamming number (again), then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: regular number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows: Translation of: Nim ``` using System; using System.Collections; using System.Collections.Generic; using System.Linq; using System.Numerics; static class NthHamming { public static BigInteger trival(Tuple tpl) { BigInteger rslt = 1; for (var i = 0; i < tpl.Item1; ++i) rslt = 2; for (var i = 0; i < tpl.Item2; ++i) rslt = 3; for (var i = 0; i < tpl.Item3; ++i) rslt = 5; return rslt; } private struct logrep { public uint x2, x3, x5; public double lg; public logrep(uint x, uint y, uint z, double lg) { this.x2 = x; this.x3 = y; this.x5 = z; this.lg = lg; } } private const double lb3 = 1.5849625007211561814537389439478; // Math.Log(3) / Math.Log(2); private const double lb5 = 2.3219280948873623478703194294894; // Math.Log(5) / Math.Log(2); private const double fctr = 6.0 lb3 lb5; private const double crctn = 2.4534452978042592646620291867186; // Math.Log(Math.sqrt(30.0)) / Math.Log(2.0) public static Tuple findNth(UInt64 n) { if (n < 1) throw new Exception("NthHamming.findNth: argument must be > 0!"); if (n < 2) return Tuple.Create(0u, 0u, 0u); // trivial case for argument of one var lgest = Math.Pow(fctr (double)n, 1.0/3.0) - crctn; // from WP formula var frctn = (n < 1000000000) ? 0.509 : 0.105; var lghi = Math.Pow(fctr ((double)n + frctn lgest), 1.0/3.0) - crctn; var lglo = 2.0 lgest - lghi; // upper and lower bound of upper "band" var count = 0UL; // need 64 bit precision in case... var bnd = new List(); for (uint k = 0, klmt = (uint)(lghi / lb5) + 1; k < klmt; ++k) { var p = (double)k lb5; for (uint j = 0, jlmt = (uint)((lghi - p) / lb3) + 1; j < jlmt; ++j) { var q = p + (double)j lb3; var ir = lghi - q; var lg = q + Math.Floor(ir); // current log2 value (estimated) count += (ulong)ir + 1; if (lg >= lglo) bnd.Add(new logrep((UInt32)ir, j, k, lg)); } } if (n > count) throw new Exception("NthHamming.findNth: band high estimate is too low!"); var ndx = (int)(count - n); if (ndx >= bnd.Count) throw new Exception("NthHamming.findNth: band low estimate is too high!"); bnd.Sort((a, b) => (b.lg < a.lg) ? -1 : 1); // sort in decending order var rslt = bnd[ndx]; return Tuple.Create(rslt.x2, rslt.x3, rslt.x5); } } class Program { static void Main(string[] args) { Console.WriteLine(String.Join(" ", Enumerable.Range(1,20).Select(i => NthHamming.trival(NthHamming.findNth((ulong)i))).ToArray())); Console.WriteLine(NthHamming.trival((new HammingsLogArr()).ElementAt(1691 - 1))); var n = 1000000000000UL; var elpsd = -DateTime.Now.Ticks; var rslt = NthHamming.findNth(n); elpsd += DateTime.Now.Ticks; Console.WriteLine("2^{0} times 3^{1} times 5^{2}", rslt.Item1, rslt.Item2, rslt.Item3); var lgrthm = Math.Log10(2.0) ((double)rslt.Item1 + ((double)rslt.Item2 Math.Log(3.0) + (double)rslt.Item3 Math.Log(5.0)) / Math.Log(2.0)); var pwr = Math.Floor(lgrthm); var mntsa = Math.Pow(10.0, lgrthm - pwr); Console.WriteLine("Approximately: {0}E+{1}", mntsa, pwr); var s = HammingsLogArr.trival(rslt).ToString(); var lngth = s.Length; Console.WriteLine("Decimal digits: {0}", lngth); if (lngth <= 10000) { var i = 0; for (; i < lngth - 100; i += 100) Console.WriteLine(s.Substring(i, 100)); Console.WriteLine(s.Substring(i)); } Console.WriteLine("The {0}th hamming number took {1} milliseconds", n, elpsd / 10000); Console.Write("\r\nPress any key to exit:"); Console.ReadKey(true); Console.WriteLine(); } } ``` The output is the same as above except that the time is too small to be measured. The billionth number in the sequence can be calculated in just about 10 milliseconds, the trillionth in about one second, the thousand trillionth in about a hundred seconds, and it should be possible to calculate the 10^19th value in less than a day (untested) on common personal computers. The (2^64 - 1)th value (18446744073709551615) cannot be calculated due to a slight overflow problem as it approaches that limit. C++ C++11 For Each Generator ``` include include // Hamming like sequences Generator // // Nigel Galloway. August 13th., 2012 // class Ham { private: std::vector _H, _hp, _hv, _x; public: bool operator!=(const Ham& other) const {return true;} Ham begin() const {return this;} Ham end() const {return this;} unsigned int operator() const {return _x.back();} Ham(const std::vector &pfs):_H(pfs),_hp(pfs.size(),0),_hv({pfs}),_x({1}){} const Ham& operator++() { for (int i=0; i<_H.size(); i++) for (;_hv[i]<=_x.back();_hv[i]=_x[++_hp[i]]_H[i]); _x.push_back(_hv); for (int i=1; i<_H.size(); i++) if (_hv[i]<_x.back()) _x.back()=_hv[i]; return this; } }; ``` 5-Smooth ``` int main() { int count = 1; for (unsigned int i : Ham({2,3,5})) { if (count <= 62) std::cout << i << ' '; if (count++ == 1691) { std::cout << "\nThe one thousand six hundred and ninety first Hamming Number is " << i << std::endl; break; } } return 0; } ``` Produces: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 40 45 48 50 54 60 64 72 75 80 81 90 96 100 108 120 125 128 135 144 150 160 162 180 192 200 216 225 240 243 250 256 270 288 300 320 324 360 375 384 400 405 The one thousand six hundred and ninety first Hamming Number is 2125764000 ``` 7-Smooth ``` int main() { int count = 1; for (unsigned int i : Ham({2,3,5,7})) { std::cout << i << ' '; if (count++ == 64) break; } std::cout << std::endl; return 0; } ``` Produces: ``` 1 2 3 4 5 6 7 8 9 10 12 14 15 16 18 20 21 24 25 27 28 30 32 35 36 40 42 45 48 49 50 54 56 60 63 64 70 72 75 80 81 84 90 96 98 100 105 108 112 120 125 126 128 135 140 144 147 150 160 162 168 175 180 189 ``` Avoiding Duplicates with Functional Code If converted to use multi-precision integers (via GMP, as in this code), the above code is slower than it needs to be due to several reasons: 1) It uses the an adaptation of the original Dijkstra's algorithm, which is somewhat slower due to repeated calculations (2 time 3, 3 times 2, etc.), 2) the generator is written generally to handle any series of multiples, and 3) for finding the nth Hamming number, the code as for (auto hmg : Ham({5, 3, 2}) means that there is a copy of the sometimes very large multi-precision number which can consume more time than the actual computation. The following code isn't particularly fast due to other reasons that will be discussed, but avoids duplication of calculations by a modification of the algorithm; it is written functionally as a LazyList (which could easily also have iteration abilities added, with the a basic LazyList type defined here since there is no library available: Translation of: Haskell Works with: C++11 ``` include include include include include template class Lazy { public: T _v; private: std::function _f; public: explicit Lazy(std::function thnk) : _v(T()), _f(thnk) {}; T value() { // not thread safe! if (this->_f != nullptr) { this->_v = this->_f(); this->_f = nullptr; } return this->_v; } }; template class LazyList { public: T head; std::shared_ptr>> tail; LazyList(): head(T()) {} // only used in initializing Lazy... LazyList(T head, std::function()> thnk) : head(head), tail(std::make_shared>>(thnk)) {} // default Copy/Move constructors and assignment operators seem to work well enough bool isEmpty() { return this->tail == nullptr; } }; typedef std::shared_ptr PBI; typedef LazyList LL; typedef std::function FLL2LL; LL merge(LL a, LL b) { auto ha = a.head; auto hb = b.head; if (ha < hb) { return LL(ha, = { return merge(a.tail->value(), b); }); } else { return LL(hb, = { return merge(a, b.tail->value()); }); } } LL smult(int m, LL s) { const auto im = mpz_class(m); const auto psmlt = std::make_shared( { return ss; }); psmlt = = { return LL(std::make_shared(ss.head im), = { return (psmlt)(ss.tail->value()); }); }; return (psmlt)(s); // worker wrapper pattern with recursive closure as worker... } LL u(LL s, int n) { const auto r = std::make_shared(LL()); // interior mutable... r = smult(n, LL(std::make_shared(1), = { return r; })); if (!s.isEmpty()) { r = merge(s, r); } return r; } LL hammings() { auto r = LL(); for (auto pn : std::vector({5, 3, 2})) { r = u(r, pn); } return LL(std::make_shared(1), = { return r; }); } int main() { auto hmgs = hammings(); for (auto i = 0; i < 20; ++i) { std::cout << hmgs.head << " "; hmgs = hmgs.tail->value(); } std::cout << "\n"; hmgs = hammings(); for (auto i = 1; i < 1691; ++i) hmgs = hmgs.tail->value(); std::cout << hmgs.head << "\n"; auto start = std::chrono::steady_clock::now(); hmgs = hammings(); for (auto i = 1; i < 1000000; ++i) hmgs = hmgs.tail->value(); auto stop = std::chrono::steady_clock::now(); auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start); std::cout << hmgs.head << " in " << ms.count() << "milliseconds.\n"; } ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 in 1079 milliseconds. Note that the repeat loop to increment to the desired value is written so as not to oopy unnecessary Hamming values, unlike the use of the first Generator class. This shows that one can program functionally in C++, but the performance is many times slower than a language more suitable for functional paradigms such as Haskell or even Kotlin; this is likely due to the cost of memory allocation with (multi-threading-safe) reference counting and that the memory system isn't tuned to many small allocations/de-allocations such as are generally necessary with functional programming. One can easily see how to adapt this algorithm to make it work for the general case by just having an argument which is an vector of the required multipliers used in the hammings function. There is another problem in using languages such as this that do not have cyclic reference breaking capbilities: the code will leak memory due to the cyclic memory reference cycles and it is perhaps impossible to change the function algorithm to manually free, even though the code uses "shared"/reference counting facilities. Avoiding Duplicates with Imperative Code To show that it is the execution time for the functional LazyList that is taking the time, here is the same algorithm implemented imperatively using vectors, also avoiding duplicate calculations; it is not written as a general function for any set of multipliers as the extra vector addressing takes some extra time; again, it minimizes copying of values: Translation of: Rust Works with: C++11 ``` include include include include class Hammings { private: const mpz_class _two = 2, _three = 3, _five = 5; std::vector _m = {}, _h = {1}; mpz_class _x5 = 5, _x53 = 9, _mrg = 3, _x532 = 2; int _i = 1, _j = 0; public: Hammings() {_m.reserve(65536); _h.reserve(65536); }; bool operator!=(const Hammings& other) const { return true; } Hammings begin() const { return this; } Hammings end() const { return this; } mpz_class operator() { return _h.back(); } const Hammings& operator++() { if (_i > _h.capacity() / 2) { _h.erase(_h.begin(), _h.begin() + _i); _i = 0; } if (_x532 < _mrg) { _h.push_back(_x532); _x532 = _h[_i++] _two; } else { _h.push_back(_mrg); if (_x53 < _x5) { _mrg = _x53; _x53 = _m[_j++] _three; } else { _mrg = _x5; _x5 = _x5 _five; } if (_j > _m.capacity() / 2) { _m.erase(_m.begin(), _m.begin() + _j); _j = 0; } _m.push_back(_mrg); } return this; } }; int main() { auto cnt = 1; for (auto hmg : Hammings()) { if (cnt <= 20) std::cout << hmg << " "; if (cnt == 20) std::cout << "\n"; if (cnt++ >= 1691) { std::cout << hmg << "\n"; break; } } auto start = std::chrono::steady_clock::now(); hmgs = hammings(); auto&& hmgitr = Hammings(); for (auto i = 1; i < 1000000; ++i) ++hmgitr; auto stop = std::chrono::steady_clock::now(); auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start); std::cout << hmgitr << " in " << ms.count() << "milliseconds.\n"; } ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 in 79 milliseconds. This code takes about the same amount of time as Haskell for the functional algorithm calculating multi-precision values (also uses GMP; not including the list processing time), but greatly reduces the C++ processing time compared to the functional version due to the use of imperative code and vectors. Chapel Chapel is by no means a functional language although it has some Higher Order Functional (HOF) concepts such as zippering, scanning, and reducing of iterations, it lacks closures (functions that can capture variable bindings from the enclosing scope(s)) even though it has first class functions that can be passed as values and lambdas (anonymous functions), nor is tail-call-optimization of recursive functions and iterators guarantied. However, now that Chapel supports class fields that can be of any type including references to other classes of any storage type, we can emulate closures using shared classes (shared classes are automatically de-allocated when they have no more references, currently using reference counting). The following code does that for the non-duplicating version of the sequence of Hamming numbers: Translation of: Haskell Hamming_numbers#Avoiding_generation_of_duplicates Works with: 1.24.1 version or greater, maybe lesser ``` use BigInteger; use Time; // Chapel doesn't have closure functions that can capture variables from // outside scope, so we use a class to emulate them for this special case; // the member fields mult, mrglst, and mltlst, emulate "captured" variables // that would normally be captured by the next continuation closure... class HammingsList { const head: bigint; const mult: uint(8); var mrglst: shared HammingsList?; var mltlst: shared HammingsList?; var tail: shared HammingsList? = nil; proc init(hd: bigint, mlt: uint(8), mrgl: shared HammingsList?, mltl: shared HammingsList?) { head = hd; mult = mlt; mrglst = mrgl; mltlst = mltl; } proc next(): shared HammingsList { if tail != nil then return tail: shared HammingsList; const nhd: bigint = mltlst!.head mult; if mrglst == nil then { tail = new shared HammingsList(nhd, mult, nil: shared HammingsList?, nil: shared HammingsList?); mltlst = mltlst!.next(); tail!.mltlst <=> mltlst; } else { if mrglst!.head < nhd then { tail = new shared HammingsList(mrglst!.head, mult, nil: shared HammingsList?, nil: shared HammingsList?); mrglst = mrglst!.next(); mrglst <=> tail!.mrglst; mltlst <=> tail!.mltlst; } else { tail = new shared HammingsList(nhd, mult, nil: shared HammingsList?, nil: shared HammingsList?); mltlst = mltlst!.next(); mltlst <=> tail!.mltlst; mrglst <=> tail!.mrglst; } } return tail: shared HammingsList; } } proc u(n: uint(8), s: shared HammingsList?): shared HammingsList { var r = new shared HammingsList(1: bigint, n, s, nil: shared HammingsList?); r.mltlst = r; // lazy recursion! return r.next(); } iter hammings(): bigint { var nxt: shared HammingsList? = nil: shared HammingsList?; const mlts: [ 0 .. 2 ] int = [ 5, 3, 2 ]; for m in mlts do nxt = u(m: uint(8), nxt); yield 1 : bigint; while true { yield nxt!.head; nxt = nxt!.next(); } } write("The first 20 Hamming numbers are: "); var cnt: int = 0; for h in hammings() { write(" ", h); cnt += 1; if cnt >= 20 then break; } write(".\nThe 1691st Hamming number is "); cnt = 0; for h in hammings() { cnt += 1; if cnt < 1691 then continue; write(h); break; } writeln(".\nThe millionth Hamming number is "); var timer: Timer; timer.start(); cnt = 0; for h in hammings() { cnt += 1; if cnt < 1000000 then continue; write(h); break; } timer.stop(); writeln(".\nThis last took ", timer.elapsed(TimeUnits.milliseconds), " milliseconds."); ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36. The 1691st Hamming number is 2125764000. The millionth Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 224.652 milliseconds. The above time is as run on an Intel Skylake i5-6500 at 3.6 GHz (turbo, single-threaded). It isn't as fast as the following versions due to the many memory allocations and de-allocations as for typically functional forms of code, but it is in the order of speed of many actual functional languages and faster than many, depending on how well their memory management is adapted for this programming paradigm and also because the "bigint" implementation isn't as fast as the "gmp" package used by many languages for multi-precision integer caluclations. This shows that the functional forms of most algorithms can be translated into Chapel, although some concessions need to be made for the functional facilities that Chapel doesn't have. However, there is one major problem in using languages such as this for functional algorithms when such languages do not have cyclic reference breaking capabilities: the code will leak memory due to the cyclic memory reference cycles and it is perhaps impossible to change the functional algorithm to manually free, even though the code uses "shared"/reference counting facilities. Alternate Imperative Version Using "Growable" Arrays In general, we can convert functional algorithms into imperative style algorithms using Array's to emulate memoizing lazy lists and simple mutable variables to express the recursion within a while loop, as in the following codes (as also used when necessary in the above code). Rather than implement the true Dijkstra merge algorithm which is slower and uses more memory, the following codes implement the better non-duplicating algorithm: Translation of: Nim ``` use BigInteger; use Time; iter nodupsHamming(): bigint { var s2dom = { 0 .. 1023 }; var s2: [s2dom] bigint; // init so can double! var s3dom = { 0 .. 1023 }; var s3: [s3dom] bigint; // init so can double! s2 = 1: bigint; s3 = 3: bigint; var x5 = 5: bigint; var mrg = 3: bigint; var s2hdi, s2tli, s3hdi, s3tli: int; while true { s2tli += 1; if s2hdi + s2hdi >= s2tli { // move in place to avoid allocation! s2[0 .. s2tli - s2hdi - 1] = s2[s2hdi .. s2tli - 1]; s2tli -= s2hdi; s2hdi = 0; } const s2sz = s2.size; if s2tli >= s2sz then s2dom = { 0 .. s2sz + s2sz - 1 }; var rslt: bigint; const s2hd = s2[s2hdi]; if s2hd < mrg { rslt = s2hd; s2hdi += 1; } else { s3tli += 1; if s3hdi + s3hdi >= s2tli { // move in place to avoid allocation! s3[0 .. s3tli - s3hdi - 1] = s3[s3hdi .. s3tli - 1]; s3tli -= s3hdi; s3hdi = 0; } const s3sz = s3.size; if s3tli >= s3sz then s3dom = { 0 .. s3sz + s3sz - 1 }; rslt = mrg; s3[s3tli] = rslt 3; s3hdi += 1; const s3hd = s3[s3hdi]; if s3hd < x5 { mrg = s3hd; } else { mrg = x5; x5 = x5 5; s3hdi -= 1; } } s2[s2tli] = rslt 2; yield rslt; } } // test it... write("The first 20 hamming numbers are: "); var cnt = 0: uint(64); for h in nodupsHamming() { if cnt >= 20 then break; cnt += 1; write(" ", h); } write("\nThe 1691st hamming number is "); cnt = 1; for h in nodupsHamming() { if cnt >= 1691 { writeln(h); break; } cnt += 1; } write("The millionth hamming number is "); var timer: Timer; cnt = 1; timer.start(); var rslt: bigint; for h in nodupsHamming() { if cnt >= 1000000 { rslt = h; break; } cnt += 1; } timer.stop(); write(rslt); writeln(".\nThis last took ", timer.elapsed(TimeUnits.milliseconds), " milliseconds."); ``` Output: The first 20 hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st hamming number is 2125764000 The millionth hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 114.867 milliseconds. The above time is as run on an Intel Skylake i5-6500 at 3.6 GHz (turbo, single-threaded). As you can see, this algorithm is quite fast, as it minimizes the memory allocations/de-allocations, but it still takes considerable time for the many multi-precision (BigInteger) calculations even though the GMP library is being used under-the-covers. Alternate version using logarithm approximations for sorting order To greatly reduce the time used for BigInteger calculations, the following algorithm uses logarithmic approximations (real(64)) for internal calculations for sorting and only outputs the final answer(s) as BigInteger(s): Translation of: Nim ``` use BigInteger; use Math; use Time; config const nth: uint(64) = 1000000; const lb2 = 1: real(64); // log base 2 of 2! const lb3 = log2(3: real(64)); const lb5 = log2(5: real(64)); record LogRep { var lg: real(64); var x2: uint(32); var x3: uint(32); var x5: uint(32); inline proc mul2(): LogRep { return new LogRep(this.lg + lb2, this.x2 + 1, this.x3, this.x5); } inline proc mul3(): LogRep { return new LogRep(this.lg + lb3, this.x2, this.x3 + 1, this.x5); } inline proc mul5(): LogRep { return new LogRep(this.lg + lb5, this.x2, this.x3, this.x5 + 1); } proc lr2bigint(): bigint { proc xpnd(bs: uint, v: uint(32)): bigint { var rslt = 1: bigint; var bsm = bs: bigint; var vm = v: uint; while vm > 0 { if vm & 1 then rslt = bsm; bsm = bsm; vm >>= 1; } return rslt; } return xpnd(2: uint, this.x2) xpnd(3: uint, this.x3) xpnd(5: uint, this.x5); } proc writeThis(lr) throws { lr <~> this.lr2bigint(); } } operator <(const ref a: LogRep, const ref b: LogRep): bool { return a.lg < b.lg; } const one = new LogRep(0, 0, 0, 0); iter nodupsHammingLog(): LogRep { var s2dom = { 0 .. 1023 }; var s2: [s2dom] LogRep; // init so can double! var s3dom = { 0 .. 1023 }; var s3: [s3dom] LogRep; // init so can double! s2 = one; s3 = one.mul3(); var x5 = one.mul5(); var mrg = one.mul3(); var s2hdi, s2tli, s3hdi, s3tli: int; while true { s2tli += 1; if s2hdi + s2hdi >= s2tli { // move in place to avoid allocation! s2[0 .. s2tli - s2hdi - 1] = s2[s2hdi .. s2tli - 1]; s2tli -= s2hdi; s2hdi = 0; } const s2sz = s2.size; if s2tli >= s2sz then s2dom = { 0 .. s2sz + s2sz - 1 }; var rslt: LogRep; const s2hd = s2[s2hdi]; if s2hd.lg < mrg.lg { rslt = s2hd; s2hdi += 1; } else { s3tli += 1; if s3hdi + s3hdi >= s2tli { // move in place to avoid allocation! s3[0 .. s3tli - s3hdi - 1] = s3[s3hdi .. s3tli - 1]; s3tli -= s3hdi; s3hdi = 0; } const s3sz = s3.size; if s3tli >= s3sz then s3dom = { 0 .. s3sz + s3sz - 1 }; rslt = mrg; s3[s3tli] = mrg.mul3(); s3hdi += 1; const s3hd = s3[s3hdi]; if s3hd.lg < x5.lg { mrg = s3hd; } else { mrg = x5; x5 = x5.mul5(); s3hdi -= 1; } } s2[s2tli] = rslt.mul2(); yield rslt; } } // test it... write("The first 20 hamming numbers are: "); var cnt = 0: uint(64); for h in nodupsHammingLog() { if cnt >= 20 then break; cnt += 1; write(" ", h); } write("\nThe 1691st hamming number is "); cnt = 1; for h in nodupsHammingLog() { if cnt >= 1691 { writeln(h); break; } cnt += 1; } write("The ", nth, "th hamming number is "); var timer: Timer; cnt = 1; timer.start(); var rslt: LogRep; for h in nodupsHammingLog() { if cnt >= nth { rslt = h; break; } cnt += 1; } timer.stop(); write(rslt); writeln(".\nThis last took ", timer.elapsed(TimeUnits.milliseconds), " milliseconds."); ``` Output: The first 20 hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st hamming number is 2125764000 The 1000000th hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 6.372 milliseconds. The above time is as run on an Intel Skylake i5-6500 at 3.6 GHz (turbo, single-threaded). As you can see, the time expended for the required task is almost too fast to measure, meaning that much of the time expended in previous versions was just the time doing multi-precision arithmetic; the program takes about 8.1 seconds to find the billionth Hamming number. Very Fast Algorithm Using a Sorted Error Band The above code is about as fast as one can go generating sequences; however, if one is willing to forego sequences and just calculate the nth Hamming number (repeatedly), then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: Regular Number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows: Translation of: Nim Works with: 1.22 version for zero based tuple indices ``` use BigInteger; use Math; use Sort; use Time; config const nth = 1000000: uint(64); type TriVal = 3uint(32); proc trival2bigint(x: TriVal): bigint { proc xpnd(bs: uint, v: uint(32)): bigint { var rslt = 1: bigint; var bsm = bs: bigint; var vm = v: uint; while vm > 0 { if vm & 1 then rslt = bsm; bsm = bsm; vm >>= 1; } return rslt; } const (x2, x3, x5) = x; return xpnd(2: uint, x2) xpnd(3: uint, x3) xpnd(5: uint, x5); } proc nthHamming(n: uint(64)): TriVal { if n < 1 { writeln("nthHamming - argument must be at least one!"); exit(1); } if n < 2 then return (0: uint(32), 0: uint(32), 0: uint(32)); // TriVal for 1 type LogRep = (real(64), uint(32), uint(32), uint(32)); record Comparator {} // used for sorting in reverse order! proc Comparator.compare(a: LogRep, b: LogRep): real(64) { return b - a; } var logrepComp: Comparator; const lb3 = log2(3.0: real(64)); const lb5 = log2(5.0: real(64)); const fctr = 6.0: real(64) lb3 lb5; const crctn = log2(sqrt(30.0: real(64))); // log base 2 of sqrt 30 // from Wikipedia Regular Numbers formula... const lgest = (fctr n: real(64))(1.0: real(64) / 3.0: real(64)) - crctn; const frctn = if n < 1000000000 then 0.509: real(64) else 0.105: real(64); const lghi = (fctr (n: real(64) + frctn lgest)) (1.0: real(64) / 3.0: real(64)) - crctn; const lglo = 2.0: real(64) lgest - lghi; // lower limit of the upper "band" var count = 0: uint(64); // need to use extended precision, might go over var bndi = 0; var dombnd = { 0 .. bndi }; // one value so doubling size works! var bnd: [dombnd] LogRep; const klmt = (lghi / lb5): uint(32); for k in 0 .. klmt { // i, j, k values can be just uint(32) values! const p = k: real(64) lb5; const jlmt = ((lghi - p) / lb3): uint(32); for j in 0 .. jlmt { const q = p + j: real(64) lb3; const ir = lghi - q; const lg = q + floor(ir); // current log value (est) count += ir: uint(64) + 1; if lg >= lglo { const sz = dombnd.size; if bndi >= sz then dombnd = { 0..sz + sz - 1 }; bnd[bndi] = (lg, ir: uint(32), j, k); bndi += 1; } } } if n > count { writeln("nth_hamming: band high estimate is too low!"); exit(1); } dombnd = { 0 .. bndi - 1 }; const ndx = (count - n): int; if ndx >= dombnd.size { writeln("nth_hamming: band low estimate is too high!"); exit(1); } sort(bnd, comparator = logrepComp); // descending order leaves zeros at end! const rslt = bnd[ndx]; return (rslt, rslt, rslt); } // test it... write("The first 20 Hamming numbers are: "); for i in 1 .. 20 do write(" ", trival2bigint(nthHamming(i: uint(64)))); writeln("\nThe 1691st hamming number is ", trival2bigint(nthHamming(1691: uint(64)))); var timer: Timer; timer.start(); const answr = nthHamming(nth); timer.stop(); write("The ", nth, "th Hamming number is 2", answr, " 3", answr, " 5", answr); const lgrslt = (answr: real(64) + answr: real(64) log2(3: real(64)) + answr: real(64) log2(5: real(64))) log10(2: real(64)); const whl = lgrslt: uint(64); const frac = lgrslt - whl: real(64); write(",\nwhich is approximately ", 10: real(64)frac, "E+", whl); const bganswr = trival2bigint(answr); const answrstr = bganswr: string; const asz = answrstr.size; writeln(" and has ", asz, " digits."); if asz <= 2000 then write("Can be printed as: ", answrstr); else write("It's too long to print"); writeln("!\nThis last took ", timer.elapsed(TimeUnits.milliseconds), " milliseconds."); ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st hamming number is 2125764000 The 1000000th Hamming number is 255 347 564, which is approximately 5.19313E+83 and has 84 digits. Can be printed as: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000! This last took 0.0 milliseconds. As you can see, the execution time is much too small to be measured. The billionth number in the sequence can be calculated in about 15 milliseconds and the trillionth in about 0.359 seconds. The (2^64 - 1)th value (18446744073709551615) cannot be calculated due to a slight overflow problem as it approaches that limit. However, this version gives inaccurate results much about the 1e13th Hamming number due to the log base two (double) approximate representation not having enough precision to accurately sort the values put into the error band array. Alternate version with a greatly increased range without error To solve the problem of inadequate precision in the double log base two representation, the following code uses a BigInt representation of the log value with about twice the significant bits, which is then sufficient to extend the usable range well beyond any reasonable requirement: Translation of: Nim Works with: 1.22 version for zero based tuple indices ``` use BigInteger; use Math; use Sort; use Time; config const nth = 1000000: uint(64); type TriVal = 3uint(32); proc trival2bigint(x: TriVal): bigint { proc xpnd(bs: uint, v: uint(32)): bigint { var rslt = 1: bigint; var bsm = bs: bigint; var vm = v: uint; while vm > 0 { if vm & 1 then rslt = bsm; bsm = bsm; vm >>= 1; } return rslt; } const (x2, x3, x5) = x; return xpnd(2: uint, x2) xpnd(3: uint, x3) xpnd(5: uint, x5); } proc nthHamming(n: uint(64)): TriVal { if n < 1 { writeln("nthHamming - argument must be at least one!"); exit(1); } if n < 2 then return (0: uint(32), 0: uint(32), 0: uint(32)); // TriVal for 1 type LogRep = (bigint, uint(32), uint(32), uint(32)); record Comparator {} // used for sorting in reverse order! proc Comparator.compare(a: LogRep, b: LogRep): int { return (b - a): int; } var logrepComp: Comparator; const lb3 = log2(3.0: real(64)); const lb5 = log2(5.0: real(64)); const bglb2 = "1267650600228229401496703205376": bigint; const bglb3 = "2009178665378409109047848542368": bigint; const bglb5 = "2943393543170754072109742145491": bigint; const fctr = 6.0: real(64) lb3 lb5; const crctn = log2(sqrt(30.0: real(64))); // log base 2 of sqrt 30 // from Wikipedia Regular Numbers formula... const lgest = (fctr n: real(64))(1.0: real(64) / 3.0: real(64)) - crctn; const frctn = if n < 1000000000 then 0.509: real(64) else 0.105: real(64); const lghi = (fctr (n: real(64) + frctn lgest)) (1.0: real(64) / 3.0: real(64)) - crctn; const lglo = 2.0: real(64) lgest - lghi; // lower limit of the upper "band" var count = 0: uint(64); // need to use extended precision, might go over var bndi = 0; var dombnd = { 0 .. bndi }; // one value so doubling size works! var bnd: [dombnd] LogRep; const klmt = (lghi / lb5): uint(32); for k in 0 .. klmt { // i, j, k values can be just uint(32) values! const p = k: real(64) lb5; const jlmt = ((lghi - p) / lb3): uint(32); for j in 0 .. jlmt { const q = p + j: real(64) lb3; const ir = lghi - q; const lg = q + floor(ir); // current log value (est) count += ir: uint(64) + 1; if lg >= lglo { const sz = dombnd.size; if bndi >= sz then dombnd = { 0..sz + sz - 1 }; const bglg = bglb2 ir: int(64) + bglb3 j: int(64) + bglb5 k: int(64); bnd[bndi] = (bglg, ir: uint(32), j, k); bndi += 1; } } } if n > count { writeln("nth_hamming: band high estimate is too low!"); exit(1); } dombnd = { 0 .. bndi - 1 }; const ndx = (count - n): int; if ndx >= dombnd.size { writeln("nth_hamming: band low estimate is too high!"); exit(1); } sort(bnd, comparator = logrepComp); // descending order leaves zeros at end! const rslt = bnd[ndx]; return (rslt, rslt, rslt); } // test it... write("The first 20 Hamming numbers are: "); for i in 1 .. 20 do write(" ", trival2bigint(nthHamming(i: uint(64)))); writeln("\nThe 1691st hamming number is ", trival2bigint(nthHamming(1691: uint(64)))); var timer: Timer; timer.start(); const answr = nthHamming(nth); timer.stop(); write("The ", nth, "th Hamming number is 2", answr, " 3", answr, " 5", answr); const lgrslt = (answr: real(64) + answr: real(64) log2(3: real(64)) + answr: real(64) log2(5: real(64))) log10(2: real(64)); const whl = lgrslt: uint(64); const frac = lgrslt - whl: real(64); write(",\nwhich is approximately ", 10: real(64)frac, "E+", whl); const bganswr = trival2bigint(answr); const answrstr = bganswr: string; const asz = answrstr.size; writeln(" and has ", asz, " digits."); if asz <= 2000 then write("Can be printed as: ", answrstr); else write("It's too long to print"); writeln("!\nThis last took ", timer.elapsed(TimeUnits.milliseconds), " milliseconds."); ``` The above code has the same output as before and doesn't take an appreciably different amount time to execute; it can produce the billionth Hamming number in about 31 milliseconds, the trillionth in about 0.546 seconds and the thousand trillionth (which is now possible without error) in about 39.36 seconds. Thus, it successfully extends the usable range of the algorithm to near the maximum expressible 64 bit number in a few hours of execution time on a modern desktop computer although the (2^64 - 1)th Hamming number can't be found due to the restrictions of the expressible range limit in sizing of the required error band. That said, if one actually needed a sequence of Hamming numbers for fairly large ranges, one would likely be better off to make this last adjustment to the final logarithmic sequence version above as although this error-band version is extremely fast for single values, the accumulative cost for repeating use will be more than the incremental cost of the sequence version at some range limit. Clojure This version implements Dijkstra's merge solution, so is closely related to the Haskell version. ``` (defn smerge [xs ys] (lazy-seq (let [x (first xs), y (first ys), [z xs ys] (cond (< x y) [x (rest xs) ys] (> x y) [y xs (rest ys)] :else [x (rest xs) (rest ys)])] (cons z (smerge xs ys))))) (def hamming (lazy-seq (->> (map #(' 5 %) hamming) (smerge (map #(' 3 %) hamming)) (smerge (map #(' 2 %) hamming)) (cons 1)))) ``` Note that the above version uses a lot of space and time after calculating a few hundred thousand elements of the sequence. This is no doubt due to not avoiding the generation of duplicates in the sequences as well as its "holding on to the head": it maintains the entire generated sequences in memory. Avoiding duplicates and reducing memory use In order to fix the problems with the above program as to memory use and extra time expended, the following code implements the Haskell idea as a function so that it does not retain the pointers to the streams used so that they can be garbage collected from the beginning as they are consumed. it avoids duplicate number generation by using intermediate streams for each of the multiples and building each on the results of the last; also, it orders the streams from least dense to most so that the intermediate streams retained are as short as possible, with the "s5" stream only from one fifth to a third of the current value, the "s35" stream only between a third and a half of the current output value, and the s235 stream only between a half and the current output - as the sequence is not very dense with increasing range, mot many values need be retained: Translation of: Haskell ``` (defn hamming "Computes the unbounded sequence of Hamming 235 numbers." [] (letfn [(merge [xs ys] (if (nil? xs) ys (let [xv (first xs), yv (first ys)] (if (< xv yv) (cons xv (lazy-seq (merge (next xs) ys))) (cons yv (lazy-seq (merge xs (next ys)))))))), (smult [m s] ;; equiv to map ( m) s -- faster (cons (' m (first s)) (lazy-seq (smult m (next s))))), (u [s n] (let [r (atom nil)] (reset! r (merge s (smult n (cons 1 (lazy-seq @r)))))))] (cons 1 (lazy-seq (reduce u nil (list 5 3 2)))))) ``` Much of the time expended for larger ranges (say 10 million or more) is due to the time doing extended precision arithmetic, with also a significant percentage spent in garbage collection. Following is the output from the REPL after compiling the program: After compiling code in REPL: Output: ``` (take 20 (hamming)) (1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36) (->> (hamming) (drop 1690) (first) (time)) "Elapsed time: 1.105582 msecs" 2125764000 (->> (hamming) (drop 999999) (first) (time)) "Elapsed time: 447.561128 msecs" 519312780448388736089589843750000000000000000000000000000000000000000000000000000000N ``` So that generated '.class' files in a folder or a generated '.jar' file (possibly standalone, containing the run time library) run at about the same speed as inside the IDE (after compilation), the Leiningen "project.clj" file needs to be modified to contain the following line so as to eliminate JVM options that slow the performance: :jvm-opts ^:replace [] CoffeeScript ``` Generate hamming numbers in order. Hamming numbers have the property that they don't evenly divide any prime numbers outside a given set, such as [2, 3, 5]. generate_hamming_sequence = (primes, max_n) -> # We use a lazy algorithm, only ever keeping N candidates # in play, one for each of our seed primes. Let's say # primes is [2,3,5]. Our virtual streams are these: # # hammings: 1,2,3,4,5,6,8,10,12,15,16,18,20,... # hammings2: 2,4,6,9.10,12,16,20,24,30,32,36,40... # hammings3: 3,6,9,12,15,18,24,30,36,45,... # hammings5: 5,10,15,20,25,30,40,50,... # # After encountering 40 for the last time, our candidates # will be # 50 = 2 25 # 45 = 3 15 # 50 = 5 10 # Then, after 45 # 50 = 2 25 # 48 = 3 16 <= new # 50 = 5 10 hamming_numbers = candidates = ([p, p, 1] for p in primes) last_number = 1 while hamming_numbers.length < max_n # Get the next candidate Hamming Number tuple. i = min_idx(candidates) candidate = candidates[i] [n, p, seq_idx] = candidate # Add to sequence unless it's a duplicate. if n > last_number hamming_numbers.push n last_number = n # Replace the candidate with its successor (based on # p = 2, 3, or 5). # # This is the heart of the algorithm. Let's say, over the # primes [2,3,5], we encounter the hamming number 32 based on it being # 2 16, where 16 is the 12th number in the sequence. # We'll be passed in [32, 2, 12] as candidate, and # hamming_numbers will be [1,2,3,4,5,6,8,9,10,12,16,18,...] # by now. The next candidate we need to enqueue is # [36, 2, 13], where the numbers mean this: # # 36 - next multiple of 2 of a Hamming number # 2 - prime number # 13 - 1-based index of 18 in the sequence # # When we encounter [36, 2, 13], we will then enqueue # [40, 2, 14], based on 20 being the 14th hamming number. q = hamming_numbers[seq_idx] candidates[i] = [pq, p, seq_idx+1] hamming_numbers min_idx = (arr) -> # Don't waste your time reading this--it just returns # the index of the smallest tuple in an array, respecting that # the tuples may contain integers. (CS compiles to JS, which is # kind of stupid about sorting. There are libraries to work around # the limitation, but I wanted this code to be standalone.) less_than = (tup1, tup2) -> i = 0 while i < tup2.length return true if tup1[i] <= tup2[i] return false if tup1[i] > tup2[i] i += 1 min_i = 0 for i in [1...arr.length] if less_than arr[i], arr[min_i] min_i = i return min_i primes = [2, 3, 5] numbers = generate_hamming_sequence(primes, 10000) console.log numbers console.log numbers ``` Common Lisp Maintaining three queues, popping the smallest value every time. ``` (defun next-hamm (factors seqs) (let ((x (apply #'min (map 'list #'first seqs)))) (loop for s in seqs for f in factors for i from 0 with add = t do (if (= x (first s)) (pop s)) ;; prevent a value from being added to multiple lists (when add (setf (elt seqs i) (nconc s (list ( x f)))) (if (zerop (mod x f)) (setf add nil))) finally (return x)))) (loop with factors = '(2 3 5) with seqs = (loop for i in factors collect '(1)) for n from 1 to 1000001 do (let ((x (next-hamm factors seqs))) (if (or (< n 21) (= n 1691) (= n 1000000)) (format t "~d: ~d~%" n x)))) ``` A much faster method: ``` (defun hamming (n) (let ((fac '(2 3 5)) (idx (make-array 3 :initial-element 0)) (h (make-array (1+ n) :initial-element 1 :element-type 'integer))) (loop for i from 1 to n with e with x = '(1 1 1) do (setf e (setf (aref h i) (apply #'min x)) x (loop for y in x for f in fac for j from 0 collect (if (= e y) ( f (aref h (incf (aref idx j)))) y)))) (aref h n))) (loop for i from 1 to 20 do (format t "~2d: ~d~%" i (hamming i))) (loop for i in '(1691 1000000) do (format t "~d: ~d~%" i (hamming i))) ``` Output: 1: 1 2: 2 3: 3 4: 4 5: 5 6: 6 7: 8 8: 9 9: 10 10: 12 11: 15 12: 16 13: 18 14: 20 15: 24 16: 25 17: 27 18: 30 19: 32 20: 36 1691: 2125764000 1000000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Crystal Translation of: Bc ``` require "big" def hamming(limit) h = Array.new(limit, 1.to_big_i) # h = Array.new(limit+1, 1.to_big_i) x2, x3, x5 = 2.to_big_i, 3.to_big_i, 5.to_big_i i, j, k = 0, 0, 0 (1...limit).each do |n| # (1..limit).each do |n| h[n] = Math.min(x2, Math.min(x3, x5)) x2 = 2 h[i += 1] if x2 == h[n] x3 = 3 h[j += 1] if x3 == h[n] x5 = 5 h[k += 1] if x5 == h[n] end h[limit - 1] end start = Time.monotonic print "Hamming Number (1..20): "; (1..20).each { |i| print "#{hamming(i)} " } puts puts "Hamming Number 1691: #{hamming 1691}" puts "Hamming Number 1,000,000: #{hamming 1_000_000}" puts "Elasped Time: #{(Time.monotonic - start).total_seconds} secs" ``` ``` System: I7-6700HQ, 3.5 GHz, Linux Kernel 5.6.17, Crystal 0.35 Run as: $ crystal run hammingnumbers.cr --release ``` Output: Hamming Number (1..20): 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming Number 1691: 2125764000 Hamming Number 1,000,000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Elasped Time: 0.21420532 secs Functional Non-Duplicates Version The above implementation is true to the original Dijkstra algorithm but it's one of the few times where Dijkstra's analysis wasn't complete; there has been developed a later algorithm that is at least twice as fast due to only processing non-duplicate Hamming numbers and keeping only the numbers as necessary for further extensions of the sequence (the tails of the lists). Although Crystal isn't really a functional language, it is capable of enough functional forms of code to be able to implement this new algorithm. The algorithm requires lazy lists, for which currently Crystal has no library module, but as Crystal does have full first class functions including the ability to capture environment variables as closures, the LazyList type is easy enough to implement, as in the following code: Translation of: Kotlin ``` require "big" Unlike some languages like Kotlin, Crystal doesn't have a Lazy module, but it has closures, so it is easy to implement a LazyList class; Memoizes the results of the thunk so only executed once... class LazyList(T) getter head @tail : LazyList(T)? = nil def initialize(@head : T, @thnk : Proc(LazyList(T))) end def initialize(@head : T, @thnk : Proc(Nil)) end def initialize(@head : T, @thnk : Nil) end def tail # not thread safe without a lock/mutex... if thnk = @thnk @tail = thnk.call; @thnk = nil end @tail end end class Hammings include Iterator(BigInt) private BASES = [ 5, 3, 2 ] of Int32 private EMPTY = nil.as(LazyList(BigInt)?) @ll : LazyList(BigInt) def initialize rst = uninitialized LazyList(BigInt) BASES.each.accumulate(EMPTY) { |u, n| Hammings.unify(u, n) } .skip(1).each { |ll| rst = ll.not_nil! } @ll = LazyList.new(BigInt.new(1), ->{ rst } ) end protected def self.unify(s : LazyList(BigInt)?, n : Int32) r = uninitialized LazyList(BigInt)? if ss = s r = merge(ss, mults(n, LazyList.new(BigInt.new(1), -> { r.not_nil! }))) else r = mults(n, LazyList.new(BigInt.new(1), -> { r.not_nil! })) end r end private def self.mults(m : Int32, lls : LazyList(BigInt)) mlts = uninitialized Proc(LazyList(BigInt), LazyList(BigInt)) mlts = -> (ill : LazyList(BigInt)) { LazyList.new(ill.head m, -> { mlts.call(ill.tail.not_nil!) }) } mlts.call(lls) end private def self.merge(x : LazyList(BigInt), y : LazyList(BigInt)) xhd = x.head; yhd = y.head if xhd < yhd LazyList.new(xhd, -> { merge(x.tail.not_nil!, y) }) else LazyList.new(yhd, -> { merge(x, y.tail.not_nil!) }) end end def next rslt = @ll.head; @ll = @ll.tail.not_nil!; rslt end end print "The first 20 Hamming numbers are: " Hammings.new.first(20).each { |h| print(" ", h) } print ".\r\nThe 1691st Hamming number is " Hammings.new.skip(1690).first(1).each { |h| print h } print ".\r\nThe millionth Hamming number is " start_time = Time.monotonic Hammings.new.skip(999_999).first(1).each { |h| print h } elpsd = (Time.monotonic - start_time).total_milliseconds printf(".\r\nThis last took %f milliseconds.\r\n", elpsd) ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36. The 1691st Hamming number is 2125764000. The millionth Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 162.713293 milliseconds. The time is as run on an Intel SkyLake i5-6500 CPU running at 3.6 GHz single threaded as here. The code is a little slower than the fastest functional languages, such as Haskell or Kotlin due to that the speed of the Boehm Garbage Collector used by Crystal isn't as tuned for the many small allocations as necessary for functional forms of code such as the LazyList as those other languages which use memory pools to reduce the allocation/deallocation time from many small blocks of memory; that said, many common languages are much slower than this for functional algorithms due to their memory allocators being even slower and less tuned for this use. About a quarter of the time is spent doing extended precision calculations (which time will increase disproportional to range as the numbers get larger) but over two thirds of the the is spent just handling memory allocations/deallocations. Functional Non-Duplicates Version Using Log Estimations In order to show the time expended in multi-precision integer calculations, the following code implements the same algorithm as above but uses logarithmic estimations rather than multi-precision integer arithmetic to compute each instance of the Hamming number sequence, only converting to BigInt for the results: ``` require "big" Unlike some languages like Kotlin, Crystal doesn't have a Lazy module, but it has closures, so it is easy to implement a LazyList class; Memoizes the results of the thunk so only executed once... class LazyList(T) getter head @tail : LazyList(T)? = nil def initialize(@head : T, @thnk : Proc(LazyList(T))) end def initialize(@head : T, @thnk : Proc(Nil)) end def initialize(@head : T, @thnk : Nil) end def tail # not thread safe without a lock/mutex... if thnk = @thnk @tail = thnk.call; @thnk = nil end @tail end end class LogRep private LOG2_2 = 1.0_f64 private LOG2_3 = Math.log2 3.0_f64 private LOG2_5 = Math.log2 5.0_f64 def initialize(@logrep : Float64, @x2 : Int32, @x3 : Int32, @x5 : Int32) end def self.mult2(x : LogRep) LogRep.new(x.@logrep + LOG2_2, x.@x2 + 1, x.@x3, x.@x5) end def self.mult3(x : LogRep) LogRep.new(x.@logrep + LOG2_3, x.@x2, x.@x3 + 1, x.@x5) end def self.mult5(x : LogRep) LogRep.new(x.@logrep + LOG2_5, x.@x2, x.@x3, x.@x5 + 1) end def <(other : LogRep) self.@logrep < other.@logrep end def toBigInt expnd = -> (x : Int32, mlt : Int32) do rslt = BigInt.new(1); m = BigInt.new(mlt) while x > 0 rslt = m if (x & 1) > 0; m = m; x >>= 1 end rslt end expnd.call(@x2, 2) expnd.call(@x3, 3) expnd.call(@x5, 5) end end class HammingsLogRep include Iterator(LogRep) private BASES = [ -> (x : LogRep) { LogRep.mult5 x }, -> (x : LogRep) { LogRep.mult3 x }, -> (x : LogRep) { LogRep.mult2 x } ] private EMPTY = nil.as(LazyList(LogRep)?) private ONE = LogRep.new(0.0, 0, 0, 0) @ll : LazyList(LogRep) def initialize rst = uninitialized LazyList(LogRep) BASES.each.accumulate(EMPTY) { |u, n| HammingsLogRep.unify(u, n) } .skip(1).each { |ll| rst = ll.not_nil! } @ll = LazyList.new(ONE, ->{ rst } ) end protected def self.unify(s : LazyList(LogRep)?, n : LogRep -> LogRep) r = uninitialized LazyList(LogRep)? if ss = s r = merge(ss, mults(n, LazyList.new(ONE, -> { r.not_nil! }))) else r = mults(n, LazyList.new(ONE, -> { r.not_nil! })) end r end private def self.mults(m : LogRep -> LogRep, lls : LazyList(LogRep)) mlts = uninitialized Proc(LazyList(LogRep), LazyList(LogRep)) mlts = -> (ill : LazyList(LogRep)) { LazyList.new(m.call(ill.head), -> { mlts.call(ill.tail.not_nil!) }) } mlts.call(lls) end private def self.merge(x : LazyList(LogRep), y : LazyList(LogRep)) xhd = x.head; yhd = y.head if xhd < yhd LazyList.new(xhd, -> { merge(x.tail.not_nil!, y) }) else LazyList.new(yhd, -> { merge(x, y.tail.not_nil!) }) end end def next rslt = @ll.head; @ll = @ll.tail.not_nil!; rslt end end print "The first 20 Hamming numbers are: " HammingsLogRep.new.first(20).each { |h| print(" ", h.toBigInt) } print ".\r\nThe 1691st Hamming number is " HammingsLogRep.new.skip(1690).first(1).each { |h| print h.toBigInt } print ".\r\nThe millionth Hamming number is " start_time = Time.monotonic HammingsLogRep.new.skip(999_999).first(1).each { |h| print h.toBigInt } elpsd = (Time.monotonic - start_time).total_milliseconds printf(".\r\nThis last took %f milliseconds.\r\n", elpsd) ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36. The 1691st Hamming number is 2125764000. The millionth Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 131.661941 milliseconds. As can be seen by comparing with the above results using the same Intel Skylake i5-6500 CPU, this is about 20 percent faster due to less time spent doing the increasingly long multi-precision BigInt's. Note that using a struct rather than a class would make this code about twice as slow due to the larger memory copies required in copying "value's" rather than "reference" pointers. Functional Non-Duplicates Version Using Log Estimations and Imperative Code To show that the majority of the time for the above implementations is used in memory allocations/deallocations for the functional lazy list form of code, the following code implements this imperatively by using home-grown "growable" arrays; these "growable" arrays were hand implemented using pointer allocations to avoid the automatic bounds checking done for conventional Array's; note that the LogRep is now a struct rather than a class as now there aren't many value copies and to save the quite large amount of time required to allocation/deallocate memory as if class's were used: Translation of: Nim ``` require "big" struct LogRep private LOG2_2 = 1.0_f64 private LOG2_3 = Math.log2 3.0_f64 private LOG2_5 = Math.log2 5.0_f64 def initialize(@logrep : Float64, @x2 : Int32, @x3 : Int32, @x5 : Int32) end def mult2 LogRep.new(@logrep + LOG2_2, @x2 + 1, @x3, @x5) end def mult3 LogRep.new(@logrep + LOG2_3, @x2, @x3 + 1, @x5) end def mult5 LogRep.new(@logrep + LOG2_5, @x2, @x3, @x5 + 1) end def <(other : LogRep) self.@logrep < other.@logrep end def toBigInt expnd = -> (x : Int32, mlt : Int32) do rslt = BigInt.new(1); m = BigInt.new(mlt) while x > 0 rslt = m if (x & 1) > 0; m = m; x >>= 1 end rslt end expnd.call(@x2, 2) expnd.call(@x3, 3) expnd.call(@x5, 5) end end class HammingsImpLogRep include Iterator(LogRep) private ONE = LogRep.new(0.0, 0, 0, 0) # use pointers to avoid bounds checking... @s2 = Pointer(LogRep).malloc 1024; @s3 = Pointer(LogRep).malloc 1024 @s5 : LogRep = ONE.mult5; @mrg : LogRep = ONE.mult3 @s2sz = 1024; @s3sz = 1024 @s2hdi = 0; @s2tli = 0; @s3hdi = 0; @s3tli = 0 def initialize @s2 = ONE; @s3 = ONE.mult3 end def next @s2tli += 1 if @s2hdi + @s2hdi >= @s2sz # unused is half of used @s2.move_from(@s2 + @s2hdi, @s2tli - @s2hdi) @s2tli -= @s2hdi; @s2hdi = 0 end if @s2tli >= @s2sz # grow array, copying former contents @s2sz += @s2sz; ns2 = Pointer(LogRep).malloc @s2sz ns2.move_from(@s2, @s2tli); @s2 = ns2 end rsltp = @s2 + @s2hdi; if rsltp.value < @mrg @s2[@s2tli] = rsltp.value.mult2; @s2hdi += 1 else @s3tli += 1 if @s3hdi + @s3hdi >= @s3sz # unused is half of used @s3.move_from(@s3 + @s3hdi, @s3tli - @s3hdi) @s3tli -= @s3hdi; @s3hdi = 0 end if @s3tli >= @s3sz # grow array, copying former contents @s3sz += @s3sz; ns3 = Pointer(LogRep).malloc @s3sz ns3.move_from(@s3, @s3tli); @s3 = ns3 end @s2[@s2tli] = @mrg.mult2; @s3[@s3tli] = @mrg.mult3 @s3hdi += 1; ns3hdp = @s3 + @s3hdi rslt = @mrg; rsltp = pointerof(rslt) if ns3hdp.value < @s5 @mrg = ns3hdp.value else @mrg = @s5; @s5 = @s5.mult5; @s3hdi -= 1 end end rsltp.value end end print "The first 20 Hamming numbers are: " HammingsImpLogRep.new.first(20).each { |h| print(" ", h.toBigInt) } print ".\r\nThe 1691st Hamming number is " HammingsImpLogRep.new.skip(1690).first(1).each { |h| print h.toBigInt } print ".\r\nThe millionth Hamming number is " start_time = Time.monotonic HammingsImpLogRep.new.skip(999_999).first(1).each { |h| print h.toBigInt } elpsd = (Time.monotonic - start_time).total_milliseconds printf(".\r\nThis last took %f milliseconds.\r\n", elpsd) ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36. The 1691st Hamming number is 2125764000. The millionth Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 7.330211 milliseconds. As can be seen by comparing with the above results using the same Intel Skylake i5-6500 CPU, this is about eighteen times faster than the functional version also using logarithmic representations due to less time spent doing memory allocations/deallocations by using the imperative form of code. This version can find the billionth Hamming number in about 7.6 seconds on this machine. D Basic Version This version keeps all numbers in memory, computing all the Hamming numbers up to the needed one. Performs constant number of operations per Hamming number produced. ``` import std.stdio, std.bigint, std.algorithm, std.range, core.memory; auto hamming(in uint n) pure nothrow /@safe/ { immutable BigInt two = 2, three = 3, five = 5; auto h = new BigInt[n]; h = 1; BigInt x2 = 2, x3 = 3, x5 = 5; size_t i, j, k; foreach (ref el; h.dropOne) { el = min(x2, x3, x5); if (el == x2) x2 = two h[++i]; if (el == x3) x3 = three h[++j]; if (el == x5) x5 = five h[++k]; } return h.back; } void main() { GC.disable; iota(1, 21).map!hamming.writeln; 1_691.hamming.writeln; 1_000_000.hamming.writeln; } ``` Output: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Runtime is about 1.6 seconds with LDC2. Alternative Version 1 This keeps numbers in memory, but over-computes a sequence by a factor of about , calculating extra multiples past that as well. Incurs an extra factor of operations per each number produced (reinserting its multiples into a tree). Doesn't stop when the target number is reached, instead continuing until it is no longer needed: Translation of: Java ``` import std.stdio, std.bigint, std.container, std.algorithm, std.range, core.memory; BigInt hamming(in int n) in { assert(n > 0); } body { auto frontier = redBlackTree(2.BigInt, 3.BigInt, 5.BigInt); auto lowest = 1.BigInt; foreach (immutable _; 1 .. n) { lowest = frontier.front; frontier.removeFront; frontier.insert(lowest 2); frontier.insert(lowest 3); frontier.insert(lowest 5); } return lowest; } void main() { GC.disable; writeln("First 20 Hamming numbers: ", iota(1, 21).map!hamming); writeln("hamming(1691) = ", 1691.hamming); writeln("hamming(1_000_000) = ", 1_000_000.hamming); } ``` Output: First 20 Hamming numbers: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] hamming(1691) = 2125764000 hamming(1_000_000) = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 About 3.2 seconds run time with LDC2. Alternative Version 2 Does exactly what the first version does, creating an array and filling it with Hamming numbers, keeping the three back pointers into the sequence for next multiples calculations, except that it represents the numbers as their coefficients triples and their logarithm values (for comparisons), thus saving on BigInt calculations. Translation of: C ``` import std.stdio: writefln; import std.bigint: BigInt; import std.conv: text; import std.numeric: gcd; import std.algorithm: copy, map; import std.array: array; import core.stdc.stdlib: calloc; import std.math: log; // ^^ // Number of factors. enum NK = 3; enum MAX_HAM = 10_000_000; static assert(gcd(NK, MAX_HAM) == 1); enum int[NK] factors = [2, 3, 5]; /// K-smooth numbers (stored as their exponents of each factor). struct Hamming { double v; // Log of the number, for convenience. ushort[NK] e; // Exponents of each factor. public static __gshared immutable double[factors.length] inc = factors[].map!log.array; bool opEquals(in ref Hamming y) const pure nothrow @nogc { //return this.e == y.e; // Too much slow. foreach (immutable i; 0 .. this.e.length) if (this.e[i] != y.e[i]) return false; return true; } void update() pure nothrow @nogc { //this.v = dotProduct(inc, this.e); // Too much slow. this.v = 0.0; foreach (immutable i; 0 .. this.e.length) this.v += inc[i] this.e[i]; } string toString() const { BigInt result = 1; foreach (immutable i, immutable f; factors) result = f.BigInt ^^ this.e[i]; return result.text; } } // Global variables. __gshared Hamming[] hams; __gshared Hamming[NK] values; nothrow @nogc static this() { // Slower than calloc if you don't use all the MAX_HAM items. //hams = new Hamming[MAX_HAM]; auto ptr = cast(Hamming)calloc(MAX_HAM, Hamming.sizeof); static const err = new Error("Not enough memory."); if (!ptr) throw err; hams = ptr[0 .. MAX_HAM]; foreach (immutable i, ref v; values) { v.e[i] = 1; v.v = Hamming.inc[i]; } } ref Hamming getHam(in size_t n) nothrow @nogc in { assert(n <= MAX_HAM); } body { // Most of the time v can be just incremented, but eventually // floating point precision will bite us, so better recalculate. __gshared static size_t[NK] idx; __gshared static int n_hams; for (; n_hams < n; n_hams++) { { // Find the index of the minimum v. size_t ni = 0; foreach (immutable i; 1 .. NK) if (values[i].v < values[ni].v) ni = i; hams[n_hams] = values[ni]; hams[n_hams].update; } foreach (immutable i; 0 .. NK) if (values[i] == hams[n_hams]) { values[i] = hams[idx[i]]; idx[i]++; values[i].e[i]++; values[i].update; } } return hams[n - 2]; } void main() { foreach (immutable n; [1691, 10 ^^ 6, MAX_HAM]) writefln("%8d: %s", n, n.getHam); } ``` The output is similar to the second C version. Runtime is about 0.11 seconds if MAX_HAM = 1_000_000 (as the task requires), and 0.90 seconds if MAX_HAM = 10_000_000. Alternative Version 3 This version is similar to the precedent, but frees unused values. It's a little slower than the precedent version, but it uses much less RAM, so it allows to compute the result for larger n. ``` import std.stdio: writefln; import std.bigint: BigInt; import std.conv: text; import std.algorithm: map; import std.array: array; import core.stdc.stdlib: malloc, calloc, free; import std.math: log; // ^^ // Number of factors. enum NK = 3; __gshared immutable int[NK] primes = [2, 3, 5]; __gshared immutable double[NK] lnPrimes = primes[].map!log.array; /// K-smooth numbers (stored as their exponents of each factor). struct Hamming { double ln; // Log of the number. ushort[NK] e; // Exponents of each factor. Hamming next; size_t n; // Recompute the logarithm from the exponents. void recalculate() pure nothrow @safe @nogc { this.ln = 0.0; foreach (immutable i, immutable ei; this.e) this.ln += lnPrimes[i] ei; } string toString() const { BigInt result = 1; foreach (immutable i, immutable f; primes) result = f.BigInt ^^ this.e[i]; return result.text; } } Hamming getHam(in size_t n) nothrow @nogc in { assert(n && n != size_t.max); } body { static struct Candidate { typeof(Hamming.ln) ln; typeof(Hamming.e) e; void increment(in size_t n) pure nothrow @safe @nogc { e[n] += 1; ln += lnPrimes[n]; } bool opEquals(T)(in ref T y) const pure nothrow @safe @nogc { // return this.e == y.e; // Slow. return !((this.e ^ y.e) | (this.e ^ y.e) | (this.e ^ y.e)); } int opCmp(T)(in ref T y) const pure nothrow @safe @nogc { return (ln > y.ln) ? 1 : (ln < y.ln ? -1 : 0); } } static struct HammingIterator { // Not a Range. Candidate cand; Hamming base; size_t primeIdx; this(in size_t i, Hamming b) pure nothrow @safe @nogc { primeIdx = i; base = b; cand.e = base.e; cand.ln = base.ln; cand.increment(primeIdx); } void next() pure nothrow @safe @nogc { base = base.next; cand.e = base.e; cand.ln = base.ln; cand.increment(primeIdx); } } HammingIterator[NK] its; Hamming head = cast(Hamming)calloc(Hamming.sizeof, 1); Hamming freeList, cur = head; Candidate next; foreach (immutable i, ref it; its) it = HammingIterator(i, cur); for (size_t i = cur.n = 1; i < n; ) { auto leastReferenced = size_t.max; next.ln = double.max; foreach (ref it; its) { if (it.cand == cur) it.next; if (it.base.n < leastReferenced) leastReferenced = it.base.n; if (it.cand < next) next = it.cand; } // Collect unferenced numbers. while (head.n < leastReferenced) { auto tmp = head; head = head.next; tmp.next = freeList; freeList = tmp; } if (!freeList) { cur.next = cast(Hamming)malloc(Hamming.sizeof); } else { cur.next = freeList; freeList = freeList.next; } cur = cur.next; version (fastmath) { cur.ln = next.ln; cur.e = next.e; } else { cur.e = next.e; cur.recalculate; // Prevent FP error accumulation. } cur.n = i++; cur.next = null; } auto result = cur; version (leak) {} else { while (head) { auto tmp = head; head = head.next; tmp.free; } while (freeList) { auto tmp = freeList; freeList = freeList.next; tmp.free; } } return result; } void main() { foreach (immutable n; [1691, 10 ^^ 6, 10_000_000]) writefln("%8d: %s", n, n.getHam); } ``` The output is the same as the second alternative version. Dart In order to produce reasonable ranges of Hamming numbers, one needs the BigInt type, but processing of many BigInt's in generating a sequence slows the code; for that reason the following code records the determined values as a combination of an approximation of the log base two value and the triple of the powers of two, three and five, only generating the final output values as BigInt's as required: ``` import 'dart:math'; final lb2of2 = 1.0; final lb2of3 = log(3.0) / log(2.0); final lb2of5 = log(5.0) / log(2.0); class Trival { final double log2; final int twos; final int threes; final int fives; Trival mul2() { return Trival(this.log2 + lb2of2, this.twos + 1, this.threes, this.fives); } Trival mul3() { return Trival(this.log2 + lb2of3, this.twos, this.threes + 1, this.fives); } Trival mul5() { return Trival(this.log2 + lb2of5, this.twos, this.threes, this.fives + 1); } @override String toString() { return this.log2.toString() + " " + this.twos.toString() + " " + this.threes.toString() + " " + this.fives.toString(); } const Trival(this.log2, this.twos, this.threes, this.fives); } Iterable makeHammings() sync { var one = Trival(0.0, 0, 0, 0); yield(one); var s532 = one.mul2(); var mrg = one.mul3(); var s53 = one.mul3().mul3(); // equivalent to 9 for advance step var s5 = one.mul5(); var i = -1; var j = -1; List h = []; List m = []; Trival rslt; while (true) { if (s532.log2 < mrg.log2) { rslt = s532; h.add(s532); ++i; s532 = h[i].mul2(); } else { rslt = mrg; h.add(mrg); if (s53.log2 < s5.log2) { mrg = s53; m.add(s53); ++j; s53 = m[j].mul3(); } else { mrg = s5; m.add(s5); s5 = s5.mul5(); } if (j > (m.length >> 1)) {m.removeRange(0, j); j = 0; } } if (i > (h.length >> 1)) {h.removeRange(0, i); i = 0; } yield(rslt); } } BigInt trival2Int(Trival tv) { return BigInt.from(2).pow(tv.twos) BigInt.from(3).pow(tv.threes) BigInt.from(5).pow(tv.fives); } void main() { final numhams = 1000000000000; var hamseqstr = "The first 20 Hamming numbers are: ( "; makeHammings().take(20) .forEach((h) => hamseqstr += trival2BigInt(h).toString() + " "); print(hamseqstr + ")"); var nthhamseqstr = "The first 20 Hamming numbers are: ( "; for (var i = 1; i <= 20; ++i) { nthhamseqstr += trival2BigInt(nthHamming(i)).toString() + " "; } print(nthhamseqstr + ")"); final strt = DateTime.now().millisecondsSinceEpoch; final answr = makeHammings().skip(999999).first; final elpsd = DateTime.now().millisecondsSinceEpoch - strt; print("The ${numhams}th Hamming number is: $answr"); print("in full as: ${trival2BigInt(answr)}"); print("This test took $elpsd milliseconds."); } ``` Output: The first 20 Hamming numbers are: ( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 ) The 1000000th Hamming number is: 278.096635606686 55 47 64 in full as: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This test took 311 milliseconds. Due to using a mutable extendable List (Array) and mutation, the above generator is reasonably fast, and as well has the feature that List memory is recovered as it is no longer required, with a considerable saving in both execution speed and memory requirement. Alternate extremely fast version using an "error band" Although not a Hamming sequence generator, the following code uses the known characteristics of the distribution of Hamming numbers to just scan through to find all possibilities in a relatively narrow "error band" which then can be sorted based on the log base two approximation and the nth element determined inside that band; it has a huge advantage that memory requirements drop to O(n^(1/3)) and asymptotic execution complexity drops from O(n) to O(n^(2/3)) for an extremely fast execution speed (thanks to WillNess for the start of this algorithm as referenced in the Haskell section): Template:Translated from ``` import 'dart:math'; final lb2of2 = 1.0; final lb2of3 = log(3.0) / log(2.0); final lb2of5 = log(5.0) / log(2.0); class Trival { final double log2; final int twos; final int threes; final int fives; Trival mul2() { return Trival(this.log2 + lb2of2, this.twos + 1, this.threes, this.fives); } Trival mul3() { return Trival(this.log2 + lb2of3, this.twos, this.threes + 1, this.fives); } Trival mul5() { return Trival(this.log2 + lb2of5, this.twos, this.threes, this.fives + 1); } @override String toString() { return this.log2.toString() + " " + this.twos.toString() + " " + this.threes.toString() + " " + this.fives.toString(); } const Trival(this.log2, this.twos, this.threes, this.fives); } BigInt trival2BigInt(Trival tv) { return BigInt.from(2).pow(tv.twos) BigInt.from(3).pow(tv.threes) BigInt.from(5).pow(tv.fives); } Trival nthHamming(int n) { if (n < 1) throw Exception("nthHamming: argument must be higher than 0!!!"); if (n < 7) { if (n & (n - 1) == 0) { final bts = n.bitLength - 1; return Trival(bts.toDouble(), bts, 0, 0); } switch (n) { case 3: return Trival(lb2of3, 0, 1, 0); case 5: return Trival(lb2of5, 0, 0, 1); case 6: return Trival(lb2of2 + lb2of3, 1, 1, 0); } } final fctr = 6.0 lb2of3 lb2of5; final crctn = log(sqrt(30.0)) / log(2.0); final lb2est = pow(fctr n.toDouble(), 1.0/3.0) - crctn; final lb2rng = 2.0/lb2est; final lb2hi = lb2est + 1.0/lb2est; List ebnd = []; var cnt = 0; for (var k = 0; k < (lb2hi / lb2of5).ceil(); ++k) { final lb2p = lb2hi - k lb2of5; for (var j = 0; j < (lb2p / lb2of3).ceil(); ++j) { final lb2q = lb2p - j lb2of3; final i = lb2q.floor(); final lb2frac = lb2q - i; cnt += i + 1; if (lb2frac <= lb2rng) { final lb2v = i lb2of2 + j lb2of3 + k lb2of5; ebnd.add(Trival(lb2v, i, j, k)); } } } ebnd.sort((a, b) => b.log2.compareTo(a.log2)); // descending order final ndx = cnt - n; if (ndx < 0) throw Exception("nthHamming: not enough triples generated!!!"); if (ndx >= ebnd.length) throw Exception("nthHamming: error band is too narrow!!!"); return ebnd[ndx]; } void main() { final numhams = 1000000; var nthhamseqstr = "The first 20 Hamming numbers are: ( "; for (var i = 1; i <= 20; ++i) { nthhamseqstr += trival2BigInt(nthHamming(i)).toString() + " "; } print(nthhamseqstr + ")"); final strt = DateTime.now().millisecondsSinceEpoch; final answr = nthHamming(numhams); final elpsd = DateTime.now().millisecondsSinceEpoch - strt0; print("The ${numhams}th Hamming number is: $answr"); print("in full as: ${trival2BigInt(answr)}"); print("This test took $elpsd milliseconds."); } ``` The output from the above code is the same as the above version but it is so fast that the time to find the millionth Hamming number is too small to be measured other than the Dart VM JIT time. It can find the billionth prime in a fraction of a second and the trillionth prime in seconds. Increasing the range above 1e13 by using a BigInt log base two representation For arguments higher than about 1e13, the precision of the Double log base two approximations used above is not adequate to do an accurate sort, but the algorithm continues to work (although perhaps slightly slower) by changing the code to use BigInt log base two representations as follows: ``` import 'dart:math'; final biglb2of2 = BigInt.from(1) << 100; // 100 bit representations... final biglb2of3 = (BigInt.from(1784509131911002) << 50) + BigInt.from(134114660393120); final biglb2of5 = (BigInt.from(2614258625728952) << 50) + BigInt.from(773584997695443); class BigTrival { final BigInt log2; final int twos; final int threes; final int fives; @override String toString() { return this.log2.toString() + " " + this.twos.toString() + " " + this.threes.toString() + " " + this.fives.toString(); } const BigTrival(this.log2, this.twos, this.threes, this.fives); } BigInt bigtrival2BigInt(BigTrival tv) { return BigInt.from(2).pow(tv.twos) BigInt.from(3).pow(tv.threes) BigInt.from(5).pow(tv.fives); } BigTrival nthHamming(int n) { if (n < 1) throw Exception("nthHamming: argument must be higher than 0!!!"); if (n < 7) { if (n & (n - 1) == 0) { final bts = n.bitLength - 1; return BigTrival(BigInt.from(bts) << 100, bts, 0, 0); } switch (n) { case 3: return BigTrival(biglb2of3, 0, 1, 0); case 5: return BigTrival(biglb2of5, 0, 0, 1); case 6: return BigTrival(biglb2of2 + biglb2of3, 1, 1, 0); } } final fctr = lb2of3 lb2of5 6; final crctn = log(sqrt(30.0)) / log(2.0); final lb2est = pow(fctr n.toDouble(), 1.0/3.0) - crctn; final lb2rng = 2.0/lb2est; final lb2hi = lb2est + 1.0/lb2est; List ebnd = []; var cnt = 0; for (var k = 0; k < (lb2hi / lb2of5).ceil(); ++k) { final lb2p = lb2hi - k lb2of5; for (var j = 0; j < (lb2p / lb2of3).ceil(); ++j) { final lb2q = lb2p - j lb2of3; final i = lb2q.floor(); final lb2frac = lb2q - i; cnt += i + 1; if (lb2frac <= lb2rng) { // final lb2v = i lb2of2 + j lb2of3 + k lb2of5; // ebnd.add(Trival(lb2v, i, j, k)); final lb2v = BigInt.from(i) biglb2of2 + BigInt.from(j) biglb2of3 + BigInt.from(k) biglb2of5; ebnd.add(BigTrival(lb2v, i, j, k)); } } } ebnd.sort((a, b) => b.log2.compareTo(a.log2)); // descending order final ndx = cnt - n; if (ndx < 0) throw Exception("nthHamming: not enough triples generated!!!"); if (ndx >= ebnd.length) throw Exception("nthHamming: error band is too narrow!!!"); return ebnd[ndx]; } void main() { final numhams = 1000000000; var nthhamseqstr = "The first 20 Hamming numbers are: ( "; for (var i = 1; i <= 20; ++i) { nthhamseqstr += bigtrival2BigInt(nthHamming(i)).toString() + " "; } print(nthhamseqstr + ")"); final strt = DateTime.now().millisecondsSinceEpoch; final answr = nthHamming(numhams); final elpsd = DateTime.now().millisecondsSinceEpoch - strt; print("The ${numhams}th Hamming number is: $answr"); print("in full as: ${bigtrival2BigInt(answr)}"); print("This test took $elpsd milliseconds."); } ``` With these changes, the algorithm can find the 1e19'th prime in the order af days depending on the CPU used. DCL ``` $ limit = p1 $ $ n = 0 $ h_'n = 1 $ x2 = 2 $ x3 = 3 $ x5 = 5 $ i = 0 $ j = 0 $ k = 0 $ $ n = 1 $ loop: $ x = x2 $ if x3 .lt. x then $ x = x3 $ if x5 .lt. x then $ x = x5 $ h_'n = x $ if x2 .eq. h_'n $ then $ i = i + 1 $ x2 = 2 h_'i $ endif $ if x3 .eq. h_'n $ then $ j = j + 1 $ x3 = 3 h_'j $ endif $ if x5 .eq. h_'n $ then $ k = k + 1 $ x5 = 5 h_'k $ endif $ n = n + 1 $ if n .le. limit then $ goto loop $ $ i = 0 $ loop2: $ write sys$output h_'i $ i = i + 1 $ if i .lt. 20 then $ goto loop2 $ $ n = limit - 1 $ write sys$output h_'n ``` Output: ``` Here's the output; $ @hamming 1691 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 ``` Delphi See Pascal. EasyLang Translation of: 11l ``` func hamming lim . len h[] lim h = 1 x2 = 2 ; x3 = 3 ; x5 = 5 i = 1 ; j = 1 ; k = 1 for n = 2 to lim h[n] = lower x2 lower x3 x5 if x2 = h[n] i += 1 x2 = 2 h[i] . if x3 = h[n] j += 1 x3 = 3 h[j] . if x5 = h[n] k += 1 x5 = 5 h[k] . . return h[lim] . for nr = 1 to 20 write hamming nr & " " . print "" print hamming 1691 ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 ``` Eiffel `` note description : "Initial part, in order, of the sequence of Hamming numbers" math : "[ Hamming numbers, also known as regular numbers and 5-smooth numbers, are natural integers that have 2, 3 and 5 as their only prime factors. ]" computer_arithmetic : "[ This version avoids integer overflow and stops at the last representable number in the sequence. ]" output : "[ Per requirements of the RosettaCode example, execution will produce items of indexes 1 to 20 and 1691. The algorithm (procedurehamming') is more general and will produce the first n' Hamming numbers for anyn'. ]" source : "This problem was posed in Edsger W. Dijkstra, A Discipline of Programming, Prentice Hall, 1978" date : "8 August 2012" authors : "Bertrand Meyer", "Emmanuel Stapf" revision : "1.0" libraries : "Relies on SORTED_TWO_WAY_LIST from EiffelBase" implementation : "[ Using SORTED_TWO_WAY_LIST provides an elegant illustration of how to implement a lazy scheme in Eiffel through the use of object-oriented data structures. ]" warning : "[ The formatting () specifications for Eiffel in RosettaCode are slightly obsolete: `note' and other newer keywords not supported, red color for manifest strings. This should be fixed soon. ]" class APPLICATION create make feature {NONE} -- Initialization make -- Print first 20 Hamming numbers, in order, and the 1691-st one. local Hammings: like hamming -- List of Hamming numbers, up to 1691-st one. do Hammings := hamming (1691) across 1 |..| 20 as i loop io.put_natural (Hammings.i_th (i.item)); io.put_string (" ") end io.put_new_line; io.put_natural (Hammings.i_th (1691)); io.put_new_line end feature -- Basic operations hamming (n: INTEGER): ARRAYED_LIST [NATURAL] -- First `n' elements (in order) of the Hamming sequence, -- or as many of them as will not produce overflow. local sl: SORTED_TWO_WAY_LIST [NATURAL] overflow: BOOLEAN first, next: NATURAL do create Result.make (n); create sl.make sl.extend (1); sl.start across 1 |..| n as i invariant -- "The numbers output so far are the first `i' - 1 Hamming numbers, in order". -- "Result.first is the `i'-th Hamming number." until sl.is_empty loop first := sl.first; sl.start Result.extend (first); sl.remove across << 2, 3, 5 >> as multiplier loop next := multiplier.item first overflow := overflow or next <= first if not overflow and then not sl.has (next) then sl.extend (next) end end end end end ``` Output: 1 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 Elixir ``` defmodule Hamming do def generater do queues = [{2, queue}, {3, queue}, {5, queue}] Stream.unfold({1, queues}, fn {n, q} -> next(n, q) end) end defp next(n, queues) do queues = Enum.map(queues, fn {m, queue} -> {m, push(queue, mn)} end) min = Enum.map(queues, fn {_, queue} -> top(queue) end) |> Enum.min queues = Enum.map(queues, fn {m, queue} -> {m, (if min==top(queue), do: erase_top(queue), else: queue)} end) {n, {min, queues}} end defp queue, do: {[], []} defp push({input, output}, term), do: {[term | input], output} defp top({input, []}), do: List.last(input) defp top({, [h|]}), do: h defp erase_top({input, []}), do: erase_top({[], Enum.reverse(input)}) defp erase_top({input, [_|t]}), do: {input, t} end IO.puts "first twenty Hamming numbers:" IO.inspect Hamming.generater |> Enum.take(20) IO.puts "1691st Hamming number:" IO.puts Hamming.generater |> Enum.take(1691) |> List.last IO.puts "one millionth Hamming number:" IO.puts Hamming.generater |> Enum.take(1_000_000) |> List.last ``` Output: ``` first twenty Hamming numbers: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 1691st Hamming number: 2125764000 one millionth Hamming number: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Elm The Elm language has many restrictions that make the implementation of the Hamming Number sequence algorithms difficult, as the classic Edsger Dijkstra algorithm as written in Haskell Hamming_numbers#The_classic_version cannot be written in Elm as current Elm forbids cyclic value references (the value "hamming" is back referenced three times), and the implementation wouldn't be efficient even if it could as the current Elm version 0.19.x has removed the "Lazy" package the would defer the memoization of the result of a computation as necessary in implementing Haskell's lazy lists. Thus, one has to implement memoization using a different data structure than a lazy list; however, all current Elm data structures are persistent/forbid mutation and can only implement some sort of Copy On Write (COW), thus there is no implementation of a linear array and the "Array" module is a tree based structure (with some concessions to data blocks for slightly better performance) that will have a logarithmic execution complexity when the size increases above a minimum. In fact, all Elm data structures that could be used for this also have a logarithmic response (Dict, Set, Array). The implementation of List is not lazy so new elements can't be added to the "tail" but need to be added to the "head" for efficiency, which means if one wants to add higher elements to a list in increasing order, one needs to (COW) reverse the List (twice) in order to do it! The solution here uses a pure functional implementation of a Min Heap (Binary Heap) Priority Queue so that the minimum element can be viewed in O(1) time although inserting new elements/replacing elements still takes O(log n) time where "n" is the number of elements in the queue. As written, no queue needs to be maintained for the multiples of five, but two queues are maintained, one for the merge of the multiples of five and three, and the larger one for the merge of all the multiples of five, three, and two. In order to minimize redundant computation time, the implementation maintains the "next" comparison values as part of the recursive function loop states that can change with every loop. To express the sequence, a Co-Inductive Stream (CIS) is used as a deferred execution (lazy) stream; it does not memoize computations (as discussed above) but that isn't necessary for this application where the sequence is only traversed once and consumed as being traversed. In addition, in order to reduce the "BigInt" computation time, the calculations are done on the basis of a "Float" logarithmic approximation while maintaining "Trival" triple representation of the number of powers of two, three, and five, are multiplied in order to obtain the current value represented by the logarithmic approximation. The working code is as follows: ``` module Main exposing ( main ) import Bitwise exposing (..) import BigInt import Task exposing ( Task, succeed, perform, andThen ) import Html exposing ( div, text ) import Browser exposing ( element ) import Time exposing ( now, posixToMillis ) cLIMIT : Int cLIMIT = 1000000 -- an infinite non-empty non-memoizing Co-Inductive Stream (CIS)... type CIS a = CIS a (() -> CIS a) takeCIS2List : Int -> CIS a -> List a takeCIS2List n cis = let loop i (CIS hd tl) lst = if i < 1 then List.reverse lst else loop (i - 1) (tl()) (hd :: lst) in loop n cis [] nthCIS : Int -> CIS a -> a nthCIS n (CIS hd tl) = if n <= 1 then hd else nthCIS (n - 1) (tl()) type PriorityQ comparable v = Mt | Br comparable v (PriorityQ comparable v) (PriorityQ comparable v) emptyPQ : PriorityQ comparable v emptyPQ = Mt peekMinPQ : PriorityQ comparable v -> Maybe (comparable, v) peekMinPQ pq = case pq of (Br k v _ _) -> Just (k, v) Mt -> Nothing pushPQ : comparable -> v -> PriorityQ comparable v -> PriorityQ comparable v pushPQ wk wv pq = case pq of Mt -> Br wk wv Mt Mt (Br vk vv pl pr) -> if wk <= vk then Br wk wv (pushPQ vk vv pr) pl else Br vk vv (pushPQ wk wv pr) pl siftdown : comparable -> v -> PriorityQ comparable v -> PriorityQ comparable v -> PriorityQ comparable v siftdown wk wv pql pqr = case pql of Mt -> Br wk wv Mt Mt (Br vkl vvl pll prl) -> case pqr of Mt -> if wk <= vkl then Br wk wv pql Mt else Br vkl vvl (Br wk wv Mt Mt) Mt (Br vkr vvr plr prr) -> if wk <= vkl && wk <= vkr then Br wk wv pql pqr else if vkl <= vkr then Br vkl vvl (siftdown wk wv pll prl) pqr else Br vkr vvr pql (siftdown wk wv plr prr) replaceMinPQ : comparable -> v -> PriorityQ comparable v -> PriorityQ comparable v replaceMinPQ wk wv pq = case pq of Mt -> Mt (Br _ _ pl pr) -> siftdown wk wv pl pr type alias Trival = (Int, Int, Int) showTrival : Trival -> String showTrival tv = let (x2, x3, x5) = tv xpnd x m r = if x <= 0 then r else xpnd (shiftRightBy 1 x) (BigInt.mul m m) (if (and 1 x) /= 0 then BigInt.mul m r else r) in BigInt.fromInt 1 |> xpnd x2 (BigInt.fromInt 2) |> xpnd x3 (BigInt.fromInt 3) |> xpnd x5 (BigInt.fromInt 5) |> BigInt.toString type alias LogRep = { lr: Float, trv: Trival } ltLogRep : LogRep -> LogRep -> Bool ltLogRep lra lrb = lra.lr < lrb.lr oneLogRep : LogRep oneLogRep = { lr = 0.0, trv = (0, 0, 0) } lg2_2 : Float lg2_2 = 1.0 -- log base two of two lg2_3 : Float lg2_3 = logBase 2.0 3.0 lg2_5 : Float lg2_5 = logBase 2.0 5.0 multLR2 : LogRep -> LogRep multLR2 lr = let (x2, x3, x5) = lr.trv in LogRep (lr.lr + lg2_2) (x2 + 1, x3, x5) multLR3 : LogRep -> LogRep multLR3 lr = let (x2, x3, x5) = lr.trv in LogRep (lr.lr + lg2_3) (x2, x3 + 1, x5) multLR5 : LogRep -> LogRep multLR5 lr = let (x2, x3, x5) = lr.trv in LogRep (lr.lr + lg2_5) (x2, x3, x5 + 1) hammingsLog : () -> CIS Trival hammingsLog() = let im235 = multLR2 oneLogRep im35 = multLR3 oneLogRep imrg = im35 im5 = multLR5 oneLogRep next bpq mpq m235 mrg m35 m5 = if ltLogRep m235 mrg then let omin = case peekMinPQ bpq of Just (lr, trv) -> LogRep lr trv Nothing -> m235 -- at the beginning! nm235 = multLR2 omin nbpq = replaceMinPQ m235.lr m235.trv bpq in CIS m235.trv <| \ () -> next nbpq mpq nm235 mrg m35 m5 else if ltLogRep mrg m5 then let omin = case peekMinPQ mpq of Just (lr, trv) -> LogRep lr trv Nothing -> mrg -- at the beginning! nm35 = multLR3 omin nmrg = if ltLogRep nm35 m5 then nm35 else m5 nmpq = replaceMinPQ mrg.lr mrg.trv mpq nbpq = pushPQ mrg.lr mrg.trv bpq in CIS mrg.trv <| \ () -> next nbpq nmpq m235 nmrg nm35 m5 else let nm5 = multLR5 m5 nmrg = if ltLogRep m35 nm5 then m35 else nm5 nmpq = pushPQ m5.lr m5.trv mpq nbpq = pushPQ m5.lr m5.trv bpq in CIS m5.trv <| \ () -> next nbpq nmpq m235 nmrg m35 nm5 in CIS (0, 0, 0) <| \ () -> next emptyPQ emptyPQ im235 imrg im35 im5 timemillis : () -> Task Never Int -- a side effect function timemillis() = now |> andThen (\ t -> succeed (posixToMillis t)) test : Int -> Cmd Msg -- side effect function chain (includes "perform")... test lmt = let msg1 = "The first 20 Hamming numbers are: " ++ (hammingsLog() |> takeCIS2List 20 |> List.map showTrival |> String.join ", ") ++ "." msg2 = "The 1691st Hamming number is " ++ (hammingsLog() |> nthCIS 1691 |> showTrival) ++ "." msg3 = "The " ++ String.fromInt cLIMIT ++ "th Hamming number is:" in timemillis() |> andThen (\ strt -> let rsltstr = hammingsLog() |> nthCIS lmt |> showTrival in timemillis() |> andThen (\ stop -> succeed [msg1, msg2, msg3, rsltstr ++ " in " ++ String.fromInt (stop - strt) ++ " milliseconds."])) |> perform Done -- following code has to do with outputting to a web page using MUV/TEA... type alias Model = List String type Msg = Done Model main : Program () Model Msg main = -- starts with empty list of strings; views model of filled list... element { init = \ _ -> ( [], test cLIMIT ) , update = \ (Done mdl) _ -> ( mdl , Cmd.none ) , subscriptions = \ _ -> Sub.none , view = div [] << List.map (div [] << List.singleton << text) } ``` Output: The first 20 Hamming numbers are: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36. The 1691st Hamming number is 2125764000. The 1000000th Hamming number is: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 in 767 milliseconds. Do note that, due to the logarithmic response of the Min Heap Priority Queue, the execution time is logarithmic with number of elements evaluation and not linear as it would otherwise be, so if it takes 0.7 seconds to find the millionth Hamming number, it takes something about 10 seconds to find the ten millionth value instead of about 7 seconds. Considering that the generated "native" code is just JavaScript, it is reasonably fast and somewhat competitive with easier implementations in other languages such as F#. Erlang For relatively small values of n we can use an elegant code: ``` list(N) -> array:to_list(element(1, array(N, [2, 3, 5]))). nth(N) -> array:get(N-1, element(1, array(N, [2, 3, 5]))). array(N, Primes) -> array(array:new(), N, 1, [{P, 1, P} || P <- Primes]). array(Array, Max, Max, Candidates) -> {Array, Candidates}; array(Array, Max, I, Candidates) -> Smallest = smallest(Candidates), N_array = array:set(I, Smallest, Array), array(N_array, Max, I+1, update(Smallest, N_array, Candidates)). update(Val, Array, Candidates) -> [update_(Val, C, Array) || C <- Candidates]. update_(Val, {Val, Ind, Mul}, Array) -> {Mularray:get(Ind, Array), Ind+1, Mul}; update_(, X, ) -> X. smallest(L) -> lists:min([element(1, V) || V <- L]). ``` However, when n become large (let say above 5e7) the memory needed grew very large as I store all the values. Fortunately, the algorithm uses only a small fraction of the end of the array. So I can drop the beginning of the array when it is no longer needed. ``` nth(N, Batch) -> array:get(N-1, element(1, compact_array(N, Batch, [2, 3, 5]))). compact_array(Goal, Lim, Primes) -> {Array, Candidates} = array(Lim, Primes), compact_array(Goal, Lim, Lim, Array, Candidates). compact_array(Goal, _, Index, Array, Candidates) when Index > Goal -> {Array, Candidates}; compact_array(Goal, Lim, Index, Array, Candidates) -> {N_array, N_candidates} = array(compact(Array, Candidates), Index + Lim, Index, Candidates), compact_array(Goal, Lim, Index+Lim, N_array, N_candidates). compact(Array, L) -> Index = lists:min([element(2, V) || V <- L]), Keep = [E || E <- array:sparse_to_orddict(Array), element(1, E) >= Index], array:from_orddict(Keep). ``` With this approach memory is no longer an issue: Output: ``` timer:tc(task_hamming_numbers, nth, [100_000_000, 1_000_000]). {232894309, 18140143309611363532953342430693354584669635033709097929462505366714035156593135818380467866054222964635144914854949550271375442721368122191972041094311075107507067573147191502194201568268202614781694681859513649083616294200541611489469967999559505365172812095568020073934100699850397033005903158113691518456912149989919601385875227049401605594538145621585911726469930727034807205200195312500} ``` So a bit less than 4 minutes to get the 100 000 000th regular number. The complexity is slightly worse than linear which is not a surprise given than all the regular numbers are computed. ERRE For bigger numbers, you have to use an external program, like MULPREC.R ``` PROGRAM HAMMING !$DOUBLE DIM H PROCEDURE HAMMING(L%->RES) LOCAL I%,J%,K%,N%,M,X2,X3,X5 H=1 X2=2 X3=3 X5=5 FOR N%=1 TO L%-1 DO M=X2 IF M>X3 THEN M=X3 END IF IF M>X5 THEN M=X5 END IF H[N%]=M IF M=X2 THEN I%+=1 X2=2H[I%] END IF IF M=X3 THEN J%+=1 X3=3H[J%] END IF IF M=X5 THEN K%+=1 X5=5H[K%] END IF END FOR RES=H[L%-1] END PROCEDURE BEGIN FOR H%=1 TO 20 DO HAMMING(H%->RES) PRINT("H(";H%;")=";RES) END FOR HAMMING(1691->RES) PRINT("H(1691)=";RES) END PROGRAM ``` Output: ``` H( 1 )= 1 H( 2 )= 2 H( 3 )= 3 H( 4 )= 4 H( 5 )= 5 H( 6 )= 6 H( 7 )= 8 H( 8 )= 9 H( 9 )= 10 H( 10 )= 12 H( 11 )= 15 H( 12 )= 16 H( 13 )= 18 H( 14 )= 20 H( 15 )= 24 H( 16 )= 25 H( 17 )= 27 H( 18 )= 30 H( 19 )= 32 H( 20 )= 36 H(1691)= 2125764000 ``` F# This version implements Dijkstra's merge solution, so is closely related to the Haskell classic version. ``` type LazyList<'a> = Cons of 'a Lazy<LazyList<'a>> let rec hammings() = let rec (-|-) (Cons(x, nxf) as xs) (Cons(y, nyf) as ys) = if x < y then Cons(x, lazy(nxf.Value -|- ys)) elif x > y then Cons(y, lazy(xs -|- nyf.Value)) else Cons(x, lazy(nxf.Value -|- nyf.Value)) let rec inf_map f (Cons(x, nxf)) = Cons(f x, lazy(inf_map f nxf.Value)) Cons(1I, lazy(let x = inf_map (() 2I) hamming let y = inf_map (() 3I) hamming let z = inf_map (() 5I) hamming x -|- y -|- z)) // testing... [] let main args = let rec iterLazyListFor f n (Cons(v, rf)) = if n > 0 then f v; iterLazyListFor f (n - 1) rf.Value let rec nthLazyList n ((Cons(v, rf)) as ll) = if n <= 1 then v else nthLazyList (n - 1) rf.Value printf "( "; iterLazyListFor (printf "%A ") 20 (hammings()); printfn ")" printfn "%A" (hammings() |> nthLazyList 1691) printfn "%A" (hammings() |> nthLazyList 1000000) 0 ``` The above code memory residency is quite high as it holds the entire lazy sequence in memory due to the reference preventing garbage collection as the sequence is consumed, The following code reduces that high memory residency by making the routine a function and using internal local stream references for the intermediate streams so that they can be collected as the stream is consumed as long as no reference is held to the main results stream (which is not in the sample test functions); it also avoids duplication of factors by successively building up streams and further reduces memory use by ordering of the streams so that the least dense are determined first: Translation of: Haskell ``` let cNUMVALS = 1000000 type LazyList<'a> = Cons of 'a Lazy<LazyList<'a>> let hammings() = let rec merge (Cons(x, f) as xs) (Cons(y, g) as ys) = if x < y then Cons(x, lazy(merge (f.Force()) ys)) else Cons(y, lazy(merge xs (g.Force()))) let rec smult m (Cons(x, rxs)) = Cons(m x, lazy(smult m (rxs.Force()))) let rec first = smult 5I (Cons(1I, lazy first)) let u s n = let rec r = merge s (smult n (Cons(1I, lazy r))) in r Seq.unfold (fun (Cons(hd, rst)) -> Some (hd, rst.Value)) (Cons(1I, lazy(Seq.fold u first [| 3I; 2I |]))) [] let main argv = printf "( "; hammings() |> Seq.take 20 |> Seq.iter (printf "%A "); printfn ")" printfn "%A" (hammings() |> Seq.item (1691 - 1)) let strt = System.DateTime.Now.Ticks let rslt = (hammings()) |> Seq.item (cNUMVALS - 1) let stop = System.DateTime.Now.Ticks printfn "%A" rslt printfn "Found this last up to %d in %d milliseconds." cNUMVALS ((stop - strt) / 10000L) 0 // return an integer exit code ``` Both codes output the same results as follows but the second is over three times faster: Output: ( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 ) 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Found this last up to 1000000 in 1302 milliseconds. Both codes are over 10 times slower as compared to Haskell (or Kotlin or Scala or Clojure) when all are written in exactly the same style, perhaps due in some small degree to the BigInteger implementation being much slower for these operations than GMP and the JVM's implementation of BigInteger. Much of this is due to that the DotNet runtime does not allocate from a memory pool as the Haskell and JVM runtime's do, which is much slower when allocating for these functional algorithms where many small allocations/de-allocations are necessary. Fast somewhat imperative sequence version using logarithms Since the above pure functional approach isn't very efficient, a more imperative approach using "growable" arrays which are "drained" of unnecessary older values in blocks once the back pointer indices are advanced is used in the following code. The code also implements an algorithm to avoid duplicate calculations and thus does the same number of operations as the above code but faster due to using integer and floating point operations rather an BigInteger ones. Due to the "draining" the memory use is the same as the above by a constant factor. Note that the implementation of IEnumerable using sequences in F# is also not very efficient and a "roll-your-own" IEnumerable implementation would likely be somewhat faster: F# has a particularly slow enumeration ability in the use of the Seq type (although easy to use) so in order to be able to bypass that, the following code still uses the imperative ResizeArray's but outputs a closure "next" function that can be used directly to avoid the generation of a Seq sequence where maximum speed is desired: Template:Tran ``` let cCOUNT = 1000000 type LogRep = struct val lr: double; val x2: uint32; val x3: uint32; val x5: uint32 new(lr, x2, x3, x5) = {lr = lr; x2 = x2; x3 = x3; x5 = x5 } end let one: LogRep = LogRep(0.0, 0u, 0u, 0u) let lg2_2: double = 1.0 let lg3_2: double = log 3.0 / log 2.0 let lg5_2: double = log 5.0 / log 2.0 let inline mul2 (lr: LogRep): LogRep = LogRep(lr.lr + lg2_2, lr.x2 + 1u, lr.x3, lr.x5) let inline mul3 (lr: LogRep): LogRep = LogRep(lr.lr + lg3_2, lr.x2, lr.x3 + 1u, lr.x5) let inline mul5 (lr: LogRep): LogRep = LogRep(lr.lr + lg5_2, lr.x2, lr.x3, lr.x5 + 1u) let hammingsLog() = // imperative arrays, eliminates the BigInteger operations... let s2 = ResizeArray<>() in let s3 = ResizeArray<>() s2.Add(one); s3.Add(mul3 one) let mutable s5 = mul5 one in let mutable mrg = mul3 one let mutable s2hdi = 0 in let mutable s3hdi = 0 let next() = // imperative next function to advance value if s2hdi + s2hdi >= s2.Count then s2.RemoveRange(0, s2hdi); s2hdi <- 0 let mutable rslt: LogRep = s2.[s2hdi] if rslt.lr < mrg.lr then s2.Add(mul2 rslt); s2hdi <- s2hdi + 1 else if s3hdi + s3hdi >= s3.Count then s3.RemoveRange(0, s3hdi); s3hdi <- 0 rslt <- mrg; s2.Add(mul2 rslt); s3.Add(mul3 rslt); s3hdi <- s3hdi + 1 let chkv: LogRep = s3.[s3hdi] if chkv.lr < s5.lr then mrg <- chkv else mrg <- s5; s5 <- mul5 s5; s3hdi <- s3hdi - 1 rslt next let hl2Seq f = Seq.unfold (fun v -> Some(v, f())) (f()) let nthLogHamming n f = let rec nxt i = if i >= n then f() else f() |> ignore; nxt (i + 1) in nxt 0 let lr2BigInt (lr: LogRep) = // convert trival to BigInteger let rec xpnd n mlt rslt = if n <= 0u then rslt else xpnd (n - 1u) mlt (mlt rslt) xpnd lr.x2 2I 1I |> xpnd lr.x3 3I |> xpnd lr.x5 5I [] let main argv = printf "( "; hammingsLog() |> hl2Seq |> Seq.take 20 |> Seq.iter (printf "%A " << lr2BigInt); printfn ")" printfn "%A" (hammingsLog() |> hl2Seq |> Seq.item (1691 - 1) |> lr2BigInt) let strt = System.DateTime.Now.Ticks // slow way using Seq: // let rslt = (hammingsLog()) |> hl2Seq |> Seq.item (1000000 - 1) // fast way using closure directly: let rslt = (hammingsLog()) |> nthLogHamming (1000000 - 1) let stop = System.DateTime.Now.Ticks printfn "%A" (rslt |> lr2BigInt) printfn "Found this last up to %d in %d milliseconds." cCOUNT ((stop - strt) / 10000L) printfn "" 0 // return an integer exit code ``` Output: ( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 ) 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Found this last up to 1000000 in 57 milliseconds. The above code can find the billionth Hamming number in about 60 seconds on the same Intel i5-6500 at 3.6 GHz (single threaded boosted). If the "fast way" is commented out and the commenting out removed from the "slow way", the code is about twice as slow. Extremely fast non-enumerating version sorting values in error band If one is willing to forego sequences and just calculate the nth Hamming number, then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: regular number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows: Translation of: Haskell ``` let nthHamming n = if n < 1UL then failwith "nthHamming; argument must be > 0!" if n < 2UL then 0u, 0u, 0u else // trivial case for first value of one let lb3 = 1.5849625007211561814537389439478 // Math.Log(3) / Math.Log(2); let lb5 = 2.3219280948873623478703194294894 // Math.Log(5) / Math.Log(2); let fctr = 6.0 lb3 lb5 let crctn = 2.4534452978042592646620291867186 // Math.Log(Math.sqrt(30.0)) / Math.Log(2.0) let lbest = (fctr double n) (1.0/3.0) - crctn // from WP formula let lbhi = lbest + 1.0 / lbest let lblo = 2.0 lbest - lbhi // upper and lower bound of upper "band" let klmt = uint32 (lbhi / lb5) let rec loopk k kcnt kbnd = if k > klmt then kcnt, kbnd else let p = lbhi - double k lb5 let jlmt = uint32 (p / lb3) let rec loopj j jcnt jbnd = if j > jlmt then loopk (k + 1u) jcnt jbnd else let q = p - double j lb3 let i = uint32 q let lg = lbhi - q + double i // current log 2 value (estimated) let nbnd = if lg >= lblo then (lg, (uint32 i, j, k)) :: jbnd else jbnd loopj (j + 1u) (jcnt + uint64 i + 1UL) nbnd in loopj 0u kcnt kbnd let count, bnd = loopk 0u 0UL [] // 64-bit value so doesn't overflow if n > count then failwith "nthHamming: band high estimate is too low!" let ndx = int (count - n) if ndx >= bnd.Length then failwith "NthHamming.findNth: band low estimate is too high!" let sbnd = bnd |> List.sortBy (fun (lg, ) -> -lg) // sort in decending order let , rslt = sbnd.[ndx] rslt [] let main argv = let topNum = 1000000UL printf "( "; {1..20} |> Seq.iter (printf "%A " << trival << nthHamming << uint64); printfn ")" printfn "%A" (nthHamming 1691UL |> trival) let rslt = nthHammingx topNum let strt = System.DateTime.Now.Ticks let rslt = nthHamming topNum let stop = System.DateTime.Now.Ticks let x2, x3, x5 = rslt printfn "2%A times 3%A times 5%A" x2 x3 x5 let lgrthm = log10 2.0 (double x2 + (double x3 log 3.0 + double x5 log 5.0) / log 2.0) let exp = floor lgrthm |> int let mntsa = 10.0 (lgrthm - double exp) printfn "Approximately %AE+%A" mntsa exp let s = trival rslt |> string let lngth = s.Length printfn "Digits: %A" lngth if lngth <= 10000 then {0..100..lngth-1} |> Seq.iter (fun i -> printfn "%s" (s.Substring(i, if i + 100 < lngth then 100 else lngth - i))) printfn "\r\nFound this last up to %A in %A milliseconds." topNum ((stop - strt) / 10000L) printf "\r\nPress any key to exit:" System.Console.ReadKey(true) |> ignore printfn "" 0 // return an integer exit code ``` Output: ``` ( 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 ) 2125764000 255u times 347u times 564u Approximately 5.193127804E+83 Digits: 84 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Found this last up to 1000000UL in 0L milliseconds. ``` Even though the above code is implemented in a completely functional style using immutable bindings and (non-lazy) lists (without closures), it is about as fast as implementations in the fastest of languages. It is faster than the Haskell version due to that version using lazy lists with the overhead of creating the requisite "thunks". It takes too short a time to be measured to calculate the millionth Hamming number, the billionth number in the sequence can be calculated in just about 15 milliseconds, the trillionth in about one second, the thousand trillionth in about a hundred seconds, and it should be possible to calculate the 10^19th value in less than a day (untested) on common personal computers. The (2^64 - 1)th value (18446744073709551615) cannot be calculated due to a slight overflow problem as it approaches that limit. Enhancement to by able to find Hamming numbers beyond the ten trillionth one Due to the limited 53-bit mantissa of 64-bit double floating piint numbers, the above code can't properly sort the error band for input arguments somewhere above 1013; the following code makes the sort accurate by using a multi-precision logarithm representation of sufficient precision so that the sort is accurate for arguments well beyond the uint64 input argument range, at about a doubling in cost in execution speed: Translation of: Haskell ``` let nthHamming n = if n < 1UL then failwith "nthHamming: argument must be > 0!" if n < 2UL then 0u, 0u, 0u else // trivial case for first value of one let lb3 = 1.5849625007211561814537389439478 // Math.Log(3) / Math.Log(2); let lb5 = 2.3219280948873623478703194294894 // Math.Log(5) / Math.Log(2); let fctr = 6.0 lb3 lb5 let crctn = 2.4534452978042592646620291867186 // Math.Log(Math.sqrt(30.0)) / Math.Log(2.0) let lbest = (fctr double n) (1.0/3.0) - crctn // from WP formula let lbhi = lbest + 1.0/lbest let lblo = 2.0 lbest - lbhi // upper and lower bound of upper "band" let bglb2 = 1267650600228229401496703205376I let bglb3 = 2009178665378409109047848542368I let bglb5 = 2943393543170754072109742145491I let klmt = uint32 (lbhi / lb5) let rec loopk k kcnt kbnd = if k > klmt then kcnt, kbnd else let p = lbhi - double k lb5 let jlmt = uint32 (p / lb3) let rec loopj j jcnt jbnd = if j > jlmt then loopk (k + 1u) jcnt jbnd else let q = p - double j lb3 let i = uint32 q let lg = lbhi - q + double i // current log 2 value (estimated) let nbnd = if lg < lblo then jbnd else let bglg = bglb2 bigint i + bglb3 bigint j + bglb5 bigint k in (bglg, (uint32 i, j, k)) :: jbnd loopj (j + 1u) (jcnt + uint64 i + 1UL) nbnd in loopj 0u kcnt kbnd let count, bnd = loopk 0u 0UL [] // 64-bit value so doesn't overflow if n > count then failwith "nthHamming: band high estimate is too low!" let ndx = int (count - n) if ndx >= bnd.Length then failwith "NthHamming.findNth: band low estimate is too high!" let sbnd = bnd |> List.sortBy (fun (lg, ) -> -lg) // sort in decending order let , rslt = sbnd.[ndx] rslt ``` Factor Translation of: Scala ``` USING: accessors deques dlists fry kernel make math math.order ; IN: rosetta.hamming TUPLE: hamming-iterator 2s 3s 5s ; : ( -- hamming-iterator ) hamming-iterator new 1 1dlist >>2s 1 1dlist >>3s 1 1dlist >>5s ; : enqueue ( n hamming-iterator -- ) [ [ 2 ] [ 2s>> ] bi push-back ] [ [ 3 ] [ 3s>> ] bi push-back ] [ [ 5 ] [ 5s>> ] bi push-back ] 2tri ; : next ( hamming-iterator -- n ) dup [ 2s>> ] [ 3s>> ] [ 5s>> ] tri 3dup [ peek-front ] tri@ min min [ '[ dup peek-front _ = [ pop-front ] [ drop ] if ] tri@ ] [ swap enqueue ] [ ] tri ; : next-n ( hamming-iterator n -- seq ) swap '[ _ [ _ next , ] times ] { } make ; : nth-from-now ( hamming-iterator n -- m ) 1 - over '[ _ next drop ] times next ; ``` ``` 20 next-n . 1691 nth-from-now . 1000000 nth-from-now . ``` Translation of: Haskell Lazy lists are quite slow in Factor, but still. ``` USING: combinators fry kernel lists lists.lazy locals math ; IN: rosetta.hamming-lazy :: sort-merge ( xs ys -- result ) xs car :> x ys car :> y { { [ x y < ] [ [ x ] [ xs cdr ys sort-merge ] lazy-cons ] } { [ x y > ] [ [ y ] [ ys cdr xs sort-merge ] lazy-cons ] } [ [ x ] [ xs cdr ys cdr sort-merge ] lazy-cons ] } cond ; :: hamming ( -- hamming ) f :> h! [ 1 ] [ h 2 3 5 [ '[ _ ] lazy-map ] tri-curry@ tri sort-merge sort-merge ] lazy-cons h! h ; ``` ``` 20 hamming ltake list>array . 1690 hamming lnth . 999999 hamming lnth . ``` Forth Works with: Gforth version 0.7.0 This version uses a compact representation of Hamming numbers: each 64-bit cell represents a number 2^l3^m5^n, where l, n, and m are bitfields in the cell (20 bits for now). It also uses a fixed-point logarithm to compare the Hamming numbers and prints them in factored form. This code has been tested up to the 10^9th Hamming number. ``` \ manipulating and computing with Hamming numbers: : extract2 ( h -- l ) 40 rshift ; : extract3 ( h -- m ) 20 rshift $fffff and ; : extract5 ( h -- n ) $fffff and ; ' + alias h ( h1 h2 -- h ) : h. { h -- } ." 2^" h extract2 0 .r ." 3^" h extract3 0 .r ." 5^" h extract5 . ; \ the following numbers have been produced with bc -l as follows 1 62 lshift constant ldscale2 7309349404307464679 constant ldscale3 \ 2^62l(3)/l(2) (rounded up) 10708003330985790206 constant ldscale5 \ 2^62l(5)/l(2) (rounded down) : hld { h -- ud } \ ud is a scaled fixed-point representation of the logarithm dualis of h h extract2 ldscale2 um h extract3 ldscale3 um d+ h extract5 ldscale5 um d+ ; : h<= ( h1 h2 -- f ) 2dup = if 2drop true exit then hld rot hld assert( 2over 2over d<> ) du>= ; : hmin ( h1 h2 -- h ) 2dup h<= if drop else nip then ; \ actual algorithm 0 value seq variable seqlast 0 seqlast ! : lastseq ( -- u ) \ last stored number in the sequence seq seqlast @ th @ ; : genseq ( h1 "name" -- ) \ h1 is the factor for the sequence create , 0 , \ factor and index of element used for last return does> ( -- u2 ) \ u2 is the next number resulting from multiplying h1 with numbers \ in the sequence that is larger than the last number in the \ sequence dup @ lastseq { h1 l } cell+ dup @ begin ( index-addr index ) seq over th @ h1 h dup l h<= while drop 1+ repeat >r swap ! r> ; $10000000000 genseq s2 $00000100000 genseq s3 $00000000001 genseq s5 : nextseq ( -- ) s2 s3 hmin s5 hmin , 1 seqlast +! ; : nthseq ( u1 -- h ) \ the u1 th element in the sequence dup seqlast @ u+do nextseq loop 1- 0 max cells seq + @ ; : .nseq ( u1 -- ) dup seqlast @ u+do nextseq loop 0 u+do seq i th @ h. loop ; here to seq 0 , \ that's 1 20 .nseq cr 1691 nthseq h. cr 1000000 nthseq h. ``` Output: ``` 2^03^05^0 2^13^05^0 2^03^15^0 2^23^05^0 2^03^05^1 2^13^15^0 2^33^05^0 2^03^25^0 2^13^05^1 2^23^15^0 2^03^15^1 2^43^05^0 2^13^25^0 2^23^05^1 2^33^15^0 2^03^05^2 2^03^35^0 2^13^15^1 2^53^05^0 2^23^25^0 2^53^125^3 2^553^475^64 ``` A smaller, less capable solution is presented here. It solves two out of three requirements and is ANS-Forth compliant. ``` 2000 cells constant /hamming create hamming /hamming allot ( n1 n2 n3 n4 n5 n6 n7 -- n3 n4 n5 n6 n1 n2 n8) : min? >r dup r> min >r 2rot r> ; : hit? ( n1 n2 n3 n4 n5 n6 n7 n8 -- n3 n4 n9 n10 n1 n2 n7) r 2dup = \ compare number with found minimum if -rot drop 1+ hamming over cells + @ r@ rot then r> drop >r 2rot r> ; \ if so, increment and rotate : hamming# ( n1 -- n2) 1 hamming ! >r \ set first cell and initialize parms 0 5 over 3 over 2 r@ 1 ?do \ determine minimum and set cell dup min? min? min? dup hamming i cells + ! 2 hit? 5 hit? 3 hit? drop loop \ find if minimum equals value 2drop 2drop 2drop hamming r> 1- cells + @ ; \ clean up stack and fetch hamming number : test cr 21 1 ?do i . i hamming# . cr loop 1691 hamming# . cr ; ``` Fortran Works with: Fortran version 90 and later Using big_integer_module from here ``` program Hamming_Test use big_integer_module implicit none call Hamming(1,20) write(,) call Hamming(1691) write(,) call Hamming(1000000) contains subroutine Hamming(first, last) integer, intent(in) :: first integer, intent(in), optional :: last integer :: i, n, i2, i3, i5, lim type(big_integer), allocatable :: hnums(:) if(present(last)) then lim = last else lim = first end if if(first < 1 .or. lim > 2500000 ) then write(,) "Invalid input" return end if allocate(hnums(lim)) i2 = 1 ; i3 = 1 ; i5 = 1 hnums(1) = 1 n = 1 do while(n < lim) n = n + 1 hnums(n) = mini(2hnums(i2), 3hnums(i3), 5hnums(i5)) if(2hnums(i2) == hnums(n)) i2 = i2 + 1 if(3hnums(i3) == hnums(n)) i3 = i3 + 1 if(5hnums(i5) == hnums(n)) i5 = i5 + 1 end do if(present(last)) then do i = first, last call print_big(hnums(i)) write(, "(a)", advance="no") " " end do else call print_big(hnums(first)) end if deallocate(hnums) end subroutine function mini(a, b, c) type(big_integer) :: mini type(big_integer), intent(in) :: a, b, c if(a < b ) then if(a < c) then mini = a else mini = c end if else if(b < c) then mini = b else mini = c end if end function mini end program ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 FreeBASIC ``` ' FB 1.05.0 Win64 ' The biggest integer which FB supports natively is 8 bytes so unable ' to calculate 1 millionth Hamming number without using an external ' "bigint" library such as GMP Function min(x As Integer, y As Integer) As Integer Return IIf(x < y, x, y) End Function Function hamming(n As Integer) As Integer Dim h(1 To n) As Integer h(1) = 1 Dim As Integer i = 1, j = 1, k = 1 Dim As Integer x2 = 2, x3 = 3, x5 = 5 For m As Integer = 2 To n h(m) = min(x2, min(x3, x5)) If h(m) = x2 Then i += 1 x2 = 2 h(i) End If If h(m) = x3 Then j += 1 x3 = 3 h(j) End if If h(m) = x5 Then k += 1 x5 = 5 h(k) End If Next Return h(n) End Function Print "The first 20 Hamming numbers are :" For i As Integer = 1 To 20 Print hamming(i); " "; Next Print : Print Print "The 1691st hamming number is :" Print hamming(1691) Print Print "Press any key to quit" Sleep ``` Output: ``` The first 20 Hamming numbers are : 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st Hamming number is : 2125764000 ``` FutureBasic FB 7.0.24 macOS 14.7.2 Sonoma (FutureBasic does not yet have support for arbitrary pecision integers) ``` include "NSLog.incl" NSUInteger local fn Minimum( a as NSUInteger, b as NSUInteger ) return (a < b) ? a : b end fn = 0 UInt64 local fn HammingNumberAtPosition( position as NSUInteger ) if (position == 0) then return 0 CFMutableArrayRef hammingNumbers = fn MutableArrayWithCapacity( position ) MutableArrayAddObject( hammingNumbers, @1 ) NSUInteger i2 = 0, i3 = 0, i5 = 0 for NSUInteger i = 1 to position - 1 NSUInteger nxt2 = fn NumberUnsignedLongLongValue( hammingNumbers[i2] ) 2 NSUInteger nxt3 = fn NumberUnsignedLongLongValue( hammingNumbers[i3] ) 3 NSUInteger nxt5 = fn NumberUnsignedLongLongValue( hammingNumbers[i5] ) 5 NSUInteger nxt = fn Minimum( nxt2, fn Minimum( nxt3, nxt5 ) ) NSUInteger nextHamming = fn Minimum( nxt2, fn Minimum( nxt3, nxt5 ) ) MutableArrayAddObject( hammingNumbers, @(nextHamming) ) if (nxt == nxt2) then i2++ if (nxt == nxt3) then i3++ if (nxt == nxt5) then i5++ next return fn NumberUnsignedLongLongValue( hammingNumbers[position - 1] ) end fn = 0 local fn RunHammingNumberTests CFMutableArrayRef mutArr = fn MutableArrayNew for NSUInteger i = 1 to 20 MutableArrayAddObject( mutArr, @(fn HammingNumberAtPosition(i)) ) next NSLog( @"First 20 Hamming Numbers: %@", fn ArrayComponentsJoinedByString( mutArr, @" " ) ) NSLog( @" 1691st Hamming Number: %llu", fn HammingNumberAtPosition( 1691 ) ) end fn fn RunHammingNumberTests HandleEvents ``` Output: ``` First 20 Hamming Numbers: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 1691st Hamming Number: 2125764000 ``` FunL Translation of: Scala ``` native scala.collection.mutable.Queue val hamming = q2 = Queue() q3 = Queue() q5 = Queue() def enqueue( n ) = q2.enqueue( n2 ) q3.enqueue( n3 ) q5.enqueue( n5 ) def stream = val n = min( min(q2.head(), q3.head()), q5.head() ) if q2.head() == n then q2.dequeue() if q3.head() == n then q3.dequeue() if q5.head() == n then q5.dequeue() enqueue( n ) n # stream() for q <- [q2, q3, q5] do q.enqueue( 1 ) stream() ``` Translation of: Haskell ``` val hamming = 1 # merge( map((2), hamming), merge(map((3), hamming), map((5), hamming)) ) def merge( inx@x:, iny@y: ) | x < y = x # merge( inx.tail(), iny ) | x > y = y # merge( inx, iny.tail() ) | otherwise = merge( inx, iny.tail() ) println( hamming.take(20) ) println( hamming(1690) ) println( hamming(2000) ) ``` Output: ``` [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 2125764000 8100000000 ``` Fōrmulæ Fōrmulæ programs are not textual, visualization/edition of programs is done showing/manipulating structures but not text. Moreover, there can be multiple visual representations of the same program. Even though it is possible to have textual representation —i.e. XML, JSON— they are intended for storage and transfer purposes more than visualization and edition. Programs in Fōrmulæ are created/edited online in its website. In this page you can see and run the program(s) related to this task and their results. You can also change either the programs or the parameters they are called with, for experimentation, but remember that these programs were created with the main purpose of showing a clear solution of the task, and they generally lack any kind of validation. Solution Case 1. First twenty Hamming numbers Case 2. 1691-st Hamming number Case 3. One million-th Hamming number Go Concise version using dynamic-programming ``` package main import ( "fmt" "math/big" ) func min(a, b big.Int) big.Int { if a.Cmp(b) < 0 { return a } return b } func hamming(n int) []big.Int { h := make([]big.Int, n) h = big.NewInt(1) two, three, five := big.NewInt(2), big.NewInt(3), big.NewInt(5) next2, next3, next5 := big.NewInt(2), big.NewInt(3), big.NewInt(5) i, j, k := 0, 0, 0 for m := 1; m < len(h); m++ { h[m] = new(big.Int).Set(min(next2, min(next3, next5))) if h[m].Cmp(next2) == 0 { i++; next2.Mul( two, h[i]) } if h[m].Cmp(next3) == 0 { j++; next3.Mul(three, h[j]) } if h[m].Cmp(next5) == 0 { k++; next5.Mul( five, h[k]) } } return h } func main() { h := hamming(1e6) fmt.Println(h[:20]) fmt.Println(h[1691-1]) fmt.Println(h[len(h)-1]) } ``` Output: ``` [1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Longer version using dynamic-programming and logarithms More than 10 times faster. ``` package main import ( "flag" "fmt" "log" "math" "math/big" "os" ) var ( // print the whole sequence or just one element? seqMode = flag.Bool("s", false, "sequence mode") // precomputed base-2 logarithms for 3 and 5 lg3, lg5 float64 = math.Log2(3), math.Log2(5) // state of the three multiplied sequences front = cursor{ {0, 0, 1}, // 2 {1, 0, lg3}, // 3 {2, 0, lg5}, // 5 } // table for dynamic-programming stored results table []int16 ) type cursor struct { f int // index (0, 1, 2) corresponding to factor (2, 3, 5) i int // index into table for the entry being multiplied lg float64 // base-2 logarithm of the multiple (for ordering) } func (c cursor) val() int16 { x := table[c.i] x[c.f]++ // multiply by incrementing the exponent return x } func (c cursor) advance() { c.i++ // skip entries that would produce duplicates for (c.f < 2 && table[c.i] > 0) || (c.f < 1 && table[c.i] > 0) { c.i++ } x := c.val() c.lg = float64(x) + lg3float64(x) + lg5float64(x) } func step() { table = append(table, front.val()) front.advance() // re-establish sorted order if front.lg > front.lg { front, front = front, front if front.lg > front.lg { front, front = front, front } } } func show(elem int16) { z := big.NewInt(1) for i, base := range []int64{2, 3, 5} { b := big.NewInt(base) x := big.NewInt(int64(elem[i])) z.Mul(z, b.Exp(b, x, nil)) } fmt.Println(z) } func main() { log.SetPrefix(os.Args + ": ") log.SetOutput(os.Stderr) flag.Parse() if flag.NArg() != 1 { log.Fatalln("need one positive integer argument") } var ordinal int // ordinal of last sequence element to compute _, err := fmt.Sscan(flag.Arg(0), &ordinal) if err != nil || ordinal <= 0 { log.Fatalln("argument must be a positive integer") } table = make([]int16, 1, ordinal) for i, n := 1, ordinal; i < n; i++ { if seqMode { show(table[i-1]) } step() } show(table[ordinal-1]) } ``` Output: ``` $ ./hamming -s 20 | xargs 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 $ time ./hamming 1000000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 real 0m0.110s user 0m0.090s sys 0m0.020s $ uname -a Linux lance 3.0-ARCH #1 SMP PREEMPT Sat Aug 6 16:18:35 CEST 2011 x86_64 Intel(R) Core(TM)2 Duo CPU P8400 @ 2.26GHz GenuineIntel GNU/Linux ``` Low Memory Use Enumerating Version Eliminating Duplicates While the above code is fast due to avoiding big.Int operations and being tuned to avoid duplication of work, it has two problems: It uses memory at about six times the value of "n", the nth value and has a practical upper range where the logarithm estimate used to compare currently processed value round off error will become too big so that values will become not in proper order for some values for large ranges. This latter problem could be fixed by using double precision of two 64-bit uint's for the accumulated estimate, but the algorithm would still consume quite a lot of memory. The following algorithm implements a continuously increasing enumeration of the Hamming numbers at about the same speed as the first solution by eliminating duplicate calculations, by organizing the streams/lazylists so that the least dense ones are processed first, and by using local variables that don't retain the heads of the streams/lazylists so that they can be garbage collected as consumed. In this way, the billionth value can be calculated using only about a billion bytes of memory (one sixth of the above), with most of that used for storage of the necessary big.Int's. If a tweaked logarithm algorithm were used, it would reduce the memory use almost to zero and would speed it up, although not to the same extent as the immediately above code as much of the remaining time would be spent in allocation of new stream/lazylist values and garbage collection. The program implements the memoized streams/lazylists with a "roll-your-own" implementation and only the necessary methods as required by this algorithm as Go does not have a library to supply such, and uses a function closure to implement a simple form of enumeration of the Hamming values. It used "llmult to perform the function of the "map" function used in the Haskell code, which is to produce a new stream which has each value of the input stream multiplied by a constant. Instead of the Haskell "foldl" function, this program uses a simple Go "for" comprehension of the input primes array. Translation of: Haskell ``` // Hamming project main.go package main import ( "fmt" "math/big" "time" ) type lazyList struct { head big.Int tail lazyList contf func() lazyList } func (oll lazyList) next() lazyList { if oll.contf != nil { // not thread-safe oll.tail = oll.contf() oll.contf = nil } return oll.tail } func merge(a lazyList, b lazyList) lazyList { rslt := new(lazyList) x := a.head y := b.head if x.Cmp(y) < 0 { rslt.head = x rslt.contf = func() lazyList { return merge(a.next(), b) } } else { rslt.head = y rslt.contf = func() lazyList { return merge(a, b.next()) } } return rslt } func llmult(m big.Int, ll lazyList) lazyList { rslt := new(lazyList) rslt.head = new(big.Int).Set(big.NewInt(0)).Mul(m, ll.head) rslt.contf = func() lazyList { return llmult(m, ll.next()) } return rslt } func u(s lazyList, n big.Int) lazyList { rslt := new(lazyList) cr := new(lazyList) cr.head = big.NewInt(1) cr.contf = func() lazyList { return rslt } if s == nil { rslt = llmult(n, cr) } else { rslt = merge(s, llmult(n, cr)) } return rslt } func Hamming() func() big.Int { prms := []int64{5, 3, 2} curr := new(lazyList) curr.head = big.NewInt(1) curr.contf = func() lazyList { var r lazyList = nil for _, v := range prms { r = u(r, big.NewInt(v)) } return r } return func() big.Int { temp := curr curr = curr.next() return temp.head } } func main() { n := 1000000 hamiter := Hamming() rarr := make([]big.Int, 20) for i, _ := range rarr { rarr[i] = hamiter() } fmt.Println(rarr) hamiter = Hamming() for i := 1; i < 1691; i++ { hamiter() } fmt.Println(hamiter()) strt := time.Now() hamiter = Hamming() for i := 1; i < n; i++ { hamiter() } rslt := hamiter() end := time.Now() fmt.Printf("Found the %vth Hamming number as %v in %v.\r\n", n, rslt.String(), end.Sub(strt)) } ``` The outputs are about the same as the above versions. In order to perform this algorithm, one can see how much more verbose Go is than more functional languages such as Haskell or F# for this primarily functional algorithm. Fast imperative version avoiding duplicates, reducing memory, and using logarithmic representation While the above version can calculate to larger ranges due to somewhat reduced memory use, it is still somewhat limited as to range by memory limits due to the increasing size of the big integers used, limited in speed due to those big integer calculations, and also limited in speed due to Go's slow memory allocations and de-allocations. The following code uses combined techniques to overcome all three limitations: 1) as for other solutions on this page, it uses a representation using integer exponents of 2, 3, and 5 and a scaled integer logarithm only for value comparisons (scaled such that round-off errors aren't a factor over the applicable range); thus memory use per element is constant rather than growing with range for big integers, and operations are simple integer comparisons and additions and are thus very fast. 2) memory reductions are by draining the used arrays by batches (rather than one by one as above) in place to reduce the time required for constant allocations and de-allocations. The code is as follows: Translation of: Rust ``` package main import ( "fmt" "math/big" "time" ) // constants as expanded integers to minimize round-off errors, and // reduce execution time using integer operations not float... const cLAA2 uint64 = 35184372088832 // 2.0f64.ln() 2.0f64.powi(45)).round() as u64; const cLBA2 uint64 = 55765910372219 // 3.0f64.ln() / 2.0f64.ln() 2.0f64.powi(45)).round() as u64; const cLCA2 uint64 = 81695582054030 // 5.0f64.ln() / 2.0f64.ln() 2.0f64.powi(45)).round() as u64; type logelm struct { // log representation of an element with only allowable powers exp2 uint16 exp3 uint16 exp5 uint16 logr uint64 // log representation used for comparison only - not exact } func (self logelm) lte(othr logelm) bool { if self.logr <= othr.logr { return true } else { return false } } func (self logelm) mul2() logelm { return logelm{ exp2: self.exp2 + 1, exp3: self.exp3, exp5: self.exp5, logr: self.logr + cLAA2, } } func (self logelm) mul3() logelm { return logelm{ exp2: self.exp2, exp3: self.exp3 + 1, exp5: self.exp5, logr: self.logr + cLBA2, } } func (self logelm) mul5() logelm { return logelm{ exp2: self.exp2, exp3: self.exp3, exp5: self.exp5 + 1, logr: self.logr + cLCA2, } } func log_nodups_hamming(n uint) big.Int { if n < 1 { panic("log_nodups_hamming: argument < 1!") } if n < 2 { // trivial case of first in sequence return big.NewInt(1) } if n > 1.2e15 { panic("log_nodups_hamming: argument too large!") } one := logelm{} next5, merge := one.mul5(), one.mul3() next53, next532 := merge.mul3(), one.mul2() g := make([]logelm, 1, 65536) g = one // never used, just so append works h := make([]logelm, 1, 65536) h = one // never used, just so append works i, j := 1, 1 for m := uint(1); m < n; m++ { cph := cap(h) if i >= cph/2 { nm := copy(h[0:i], h[i:]) h = h[0:nm:cph] i = 0 } if next532.lte(&merge) { h = append(h, next532) next532 = h[i].mul2() i++ } else { h = append(h, merge) if next53.lte(&next5) { merge = next53 next53 = g[j].mul3() j++ } else { merge = next5 next5 = next5.mul5() } cpg := cap(g) if j >= cpg/2 { nm := copy(g[0:j], g[j:]) g = g[0:nm:cpg] j = 0 } g = append(g, merge) } } two, three, five := big.NewInt(2), big.NewInt(3), big.NewInt(5) o := h[len(h)-1] // convert last element to big integer... ob := big.NewInt(1) for i := uint16(0); i < o.exp2; i++ { ob.Mul(two, ob) } for i := uint16(0); i < o.exp3; i++ { ob.Mul(three, ob) } for i := uint16(0); i < o.exp5; i++ { ob.Mul(five, ob) } return ob } func main() { n := uint(1e6) rarr := make([]big.Int, 20) for i, _ := range rarr { rarr[i] = log_nodumps_hamming(i) } fmt.Println(rarr) fmt.Println(log_nodups_hamming(1691)) strt := time.Now() rslt := log_nodups_hamming(n) end := time.Now() rs := rslt.String() lrs := len(rs) fmt.Printf("%v digits:\r\n", lrs) ndx := 0 for ; ndx < lrs-100; ndx += 100 { fmt.Println(rs[ndx : ndx+100]) } fmt.Println(rs[ndx:]) fmt.Printf("This last found the %vth hamming number in %v.\r\n", n, end.Sub(strt)) } ``` Output: [1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36] 2125764000 84 digits: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last found the 1000000th hamming number in 10.0006ms. The above code can produce the billionth hamming number (844 digits) in about 14 seconds and given a machine with over 9 Gigabytes, it can calculate to the limit of 1.2e13 (about 19,335 digits) in about a day or so. Functional enumerating versions as the immediately precedent code, even if adapted to the logarithm algorithm, will take longer because of the time required for enumeration, but much worse is the time required for allocations/de-allocations (garbage collection) of individual elements rather than as here in batches and for the majority of times done in place not requiring allocation/de-allocation at all. Extremely fast version inserting logarithms into the top error band The above code is not as fast as one can go as it is limited by the need to calculate all Hamming numbers in the sequence up to the required one: some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: regular number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows: Translation of: Nim ``` package main import ( "fmt" "math" "math/big" "sort" "time" ) type logrep struct { lg float64 x2, x3, x5 uint32 } type logreps []logrep func (s logreps) Len() int { // necessary methods for sorting return len(s) } func (s logreps) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s logreps) Less(i, j int) bool { return s[j].lg < s[i].lg // sort in decreasing order (reverse order compare) } func nthHamming(n uint64) (uint32, uint32, uint32) { if n < 2 { if n < 1 { panic("nthHamming: argument is zero!") } return 0, 0, 0 } const lb3 = 1.5849625007211561814537389439478 // math.Log2(3.0) const lb5 = 2.3219280948873623478703194294894 // math.Log2(5.0) fctr := 6.0 lb3 lb5 crctn := math.Log2(math.Sqrt(30.0)) // from WP formula lgest := math.Pow(fctrfloat64(n), 1.0/3.0) - crctn var frctn float64 if n < 1000000000 { frctn = 0.509 } else { frctn = 0.106 } lghi := math.Pow(fctr(float64(n)+frctnlgest), 1.0/3.0) - crctn lglo := 2.0lgest - lghi // and a lower limit of the upper "band" var count uint64 = 0 bnd := make(logreps, 0) // give it one value so doubling size works klmt := uint32(lghi/lb5) + 1 for k := uint32(0); k < klmt; k++ { p := float64(k) lb5 jlmt := uint32((lghi-p)/lb3) + 1 for j := uint32(0); j < jlmt; j++ { q := p + float64(j)lb3 ir := lghi - q lg := q + math.Floor(ir) // current log value estimated count += uint64(ir) + 1 if lg >= lglo { bnd = append(bnd, logrep{lg, uint32(ir), j, k}) } } } if n > count { panic("nthHamming: band high estimate is too low!") } ndx := int(count - n) if ndx >= bnd.Len() { panic("nthHamming: band low estimate is too high!") } sort.Sort(bnd) // sort decreasing order due definition of Less above rslt := bnd[ndx] return rslt.x2, rslt.x3, rslt.x5 } func convertTpl2BigInt(x2, x3, x5 uint32) big.Int { result := big.NewInt(1) two := big.NewInt(2) three := big.NewInt(3) five := big.NewInt(5) for i := uint32(0); i < x2; i++ { result.Mul(result, two) } for i := uint32(0); i < x3; i++ { result.Mul(result, three) } for i := uint32(0); i < x5; i++ { result.Mul(result, five) } return result } func main() { for i := 1; i <= 20; i++ { fmt.Printf("%v ", convertTpl2BigInt(nthHamming(uint64(i)))) } fmt.Println() fmt.Println(convertTpl2BigInt(nthHamming(1691))) strt := time.Now() x2, x3, x5 := nthHamming(uint64(1e6)) end := time.Now() fmt.Printf("2^%v times 3^%v times 5^%v\r\n", x2, x3, x5) lrslt := convertTpl2BigInt(x2, x3, x5) lgrslt := (float64(x2) + math.Log2(3.0)float64(x3) + math.Log2(5.0)float64(x5)) math.Log10(2.0) exp := math.Floor(lgrslt) mant := math.Pow(10.0, lgrslt-exp) fmt.Printf("Approximately: %vE+%v\r\n", mant, exp) rs := lrslt.String() lrs := len(rs) fmt.Printf("%v digits:\r\n", lrs) if lrs <= 10000 { ndx := 0 for ; ndx < lrs-100; ndx += 100 { fmt.Println(rs[ndx : ndx+100]) } fmt.Println(rs[ndx:]) } fmt.Printf("This last found the %vth hamming number in %v.\r\n", uint64(1e6), end.Sub(strt)) } ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 2^55 times 3^47 times 5^64 Approximately: 5.193127804483804E+83 84 digits: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last found the 1000000th hamming number in 0s. As can be seen above, the time to calculate the millionth Hamming number is now too small to be measured. The billionth number in the sequence can be calculated in just about 15 milliseconds, the trillionth in about 1.5 seconds, the thousand trillionth in about 150 seconds, and it should be possible to calculate the 10^19th value in less than a day (untested) on common personal computers. The (2^64 - 1)th value (18446744073709551615th value) cannot be calculated due to a slight overflow problem as it approaches that limit. Groovy ``` class Hamming { static final ONE = BigInteger.ONE static final THREE = BigInteger.valueOf(3) static final FIVE = BigInteger.valueOf(5) static void main(args) { print 'Hamming(1 .. 20) =' (1..20).each { print " ${hamming it}" } println "\nHamming(1691) = ${hamming 1691}" println "Hamming(1000000) = ${hamming 1000000}" } static hamming(n) { def priorityQueue = new PriorityQueue<BigInteger>() priorityQueue.add ONE def lowest n.times { lowest = priorityQueue.poll() while (priorityQueue.peek() == lowest) { priorityQueue.poll() } updateQueue(priorityQueue, lowest) } lowest } static updateQueue(priorityQueue, lowest) { priorityQueue.add(lowest.shiftLeft 1) priorityQueue.add(lowest.multiply THREE) priorityQueue.add(lowest.multiply FIVE) } } ``` Haskell The classic version `` hamming = 1 : map (2) hammingunionmap (3) hammingunion` map (5) hamming union a@(x:xs) b@(y:ys) = case compare x y of LT -> x : union xs b EQ -> x : union xs ys GT -> y : union a ys main = do print $ take 20 hamming print (hamming !! (1691-1), hamming !! (1692-1)) print $ hamming !! (1000000-1) -- Output: -- [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36] -- (2125764000,2147483648) -- 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Runs in about a second on Ideone.com. The nested unions' effect is to produce the minimal value at each step, with duplicates removed. As Haskell evaluation model is on-demand, the three map expressions are in effect iterators, maintaining hidden pointers back into the shared named storage with which they were each created (a name is a pointer/handle in Haskell; to name is to point at, to refer to, to take a handle on). The amount of operations is constant for each number produced, so the time complexity should be . Empirically, it is slightly above that and worsening, suggestive of extra cost of bignum arithmetics. Using triples representation with logarithm values for comparisons amends this problem, but runs ~ 1.2x slower for the 1,000,000. This is what that DDJ blog post's "pseudo-C" code was transcribing, mentioned at the Python entry that started this task ( curiously, it is in almost word-for-word correspondence with Edsger Dijkstra's code from his book A Discipline of Programming, p. 132 ). D, Go, PARI/GP, Prolog all implement the same idea of back-pointers into shared storage. A Haskell run-time system can actually free up the storage automatically at the start of the shared list and only keep the needed portion of it, from the (5) back-pointer, – which is about in length – behind the scenes, as long as there's no re-use evident in the code. Avoiding generation of duplicates The classic version can be sped up quite a bit (about twice, with roughly the same empirical orders of growth) by avoiding generation of duplicate values in the first place: ``` hammings :: () -> [Integer] hammings() = 1 : foldr u [] [2,3,5] where u n s = -- fix (merge s . map (n) . (1:)) r where r = merge s (map (n) (1:r)) merge [] b = b merge a@(x:xs) b@(y:ys) | x < y = x : merge xs b | otherwise = y : merge a ys main :: IO () main = do print $ take 20 (hammings ()) print $ (hammings ()) !! 1690 print $ (hammings ()) !! (1000000-1) ``` Explicit multiples reinserting This is a common approach which explicitly maintains an internal buffer of elements, removing the numbers from its front and reinserting their 2- 3- and 5-multiples in order. It overproduces the sequence, stopping when the n-th number is no longer needed instead of when it's first found. Also overworks by maintaining this buffer in total order where just heap would be sufficient. Worse, this particular version uses a sequential list for its buffer. That means operations for each number, instead of of the above version (and thus overall). Translation of Java (which does use priority queue though, so should have O‍ ‍(n‍ ‍logn) operations overall). Uses union from the "classic" version above: ``` hammFrom n = drop n $ iterate ((_ , (a:t)) -> (a, union t [2a,3a,5a])) (0, ) ``` Output: ``` take 20 $ map fst $ hammFrom 1 [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36] take 2 $ map fst $ hammFrom 1691 [2125764000,2147483648] mapM_ print $ take 10 $ hammFrom 1 (1,[2,3,5]) (2,[3,4,5,6,10]) (3,[4,5,6,9,10,15]) (4,[5,6,8,9,10,12,15,20]) (5,[6,8,9,10,12,15,20,25]) (6,[8,9,10,12,15,18,20,25,30]) (8,[9,10,12,15,16,18,20,24,25,30,40]) (9,[10,12,15,16,18,20,24,25,27,30,40,45]) (10,[12,15,16,18,20,24,25,27,30,40,45,50]) (12,[15,16,18,20,24,25,27,30,36,40,45,50,60]) map (length . snd . head . hammFrom) [2000,4000,8000,16000] [402,638,1007,1596] map (logBase 2) $ zipWith (/) =<< tail $ [402,638,1007,1596] [0.67,0.66,0.66] ``` Runs too slowly to reach 1,000,000, with empirical orders of growth above ~‍ ‍(n‍ ‍1.7‍ ‍) and worsening. Last two lines show the internal buffer's length for several sample n‍ ‍s, and its empirical orders of growth which strongly support the claim. Enumeration by a chain of folded merges ``` hamm = foldr merge1 [] . iterate (map (5)) . foldr merge1 [] . iterate (map (3)) $ iterate (2) 1 where merge1 (x:xs) ys = x : merge xs ys {- 1, 2, 4, 8, 16, 32, ... 3, 6, 12, 24, 48, 96, ... 9, 18, 36, 72, 144, 288, ... 27, ... -} ``` Uses merge, as there's no need for duplicates-removing union because each number is produced only once here, too. The merges are arranged in a chain of folds. Might be suitable for parallel execution, because of their large number. Twice slower than the classic version at producing 1,000,000th Hamming number, and worsening, running at ~n1.14..1.16 empirically (vs. the classic version's linear operations). This is surprisingly efficient considering the large number of merges going on (about 300 for the 1Mth number, and ~3n1/3 in general). Can be significantly improved, both in time complexity and absolute run time, by replacing the linear fold with the tree-shaped mergeAll from the Data.List.Ordered module of data-ordlist package. Direct calculation through triples enumeration It is also possible to more or less directly calculate the n-th Hamming number by enumerating (and counting) all the (i,j,k) triples below its estimated value – with ordering according to their exponents, iln2 + jln3 + kln5 – while storing only the "band" of topmost triples close enough to the target value (more in the original post on DDJ). The savings come from enumerating only pairs of indices, and finding the corresponding third index by a direct calculation, thus achieving the O(n^(2/3)) time complexity. Space complexity, with constant empirical estimation correction, is ~n^(2/3); but is further tweaked to ~n^(1/3) (following the idea from the entry below). The total count of thus produced triples is then the band's topmost value's index in the Hamming sequence, 1-based. The nth number in the sequence is then found by a simple lookup in the sorted band, provided it was wide enough. This produces the 1,000,000-th value instantaneously. Following the 2017-10 IDEOne update to a faster 64-bit system, the 1 trillionth number is found in 0.7s on Ideone.com: ``` -- directly find n-th Hamming number, in ~ O(n^{2/3}) time. -- based on "top band" idea by Louis Klauder, from the DDJ discussion. -- by Will Ness, original post: drdobbs.com/blogs/architecture-and-design/228700538 import Data.List (sortBy, foldl') -- ' import Data.Function (on) main = let (r,t) = nthHam 1000000 in print t >> print (trival t) trival (i,j,k) = 2^i 3^j 5^k nthHam :: Int -> (Double, (Int, Int, Int)) -- ( 64bit: use Int!!! NB! ) nthHam n -- n: 1-based: 1,2,3... | n <= 0 = error $ "n is 1--based: must be n > 0: " ++ show n | n < 2 = ( 0.0, (0, 0, 0) ) -- trivial case so estimation works for rest | w >= 1 = error $ "Breach of contract: (w < 1): " ++ show w | m < 0 = error $ "Not enough triples generated: " ++ show ((c,n) :: (Int, Int)) | m >= nb = error $ "Generated band is too narrow: " ++ show (m,nb) | otherwise = sortBy (flip compare on fst) b !! m -- m-th from top in sorted band where lb3 = logBase 2 3; lb5 = logBase 2 5; lb30_2 = logBase 2 30 / 2 v = (6lb3lb5 fromIntegral n)(1/3) - lb30_2 -- estimated logval, base 2 estval n = (v + (1/v), 2/v) -- the space tweak! (thx, GBG!) (hi,w) = estval n -- hi > logval > hi-w m = fromIntegral (c - n) -- target index, from top nb = length (b :: [(Double, (Int, Int, Int))]) -- length of the band (c,b) = foldl_ ((c,b) (i,t)-> let c2=c+i in c2 seq -- ( total count, the band ) case t of []-> (c2,b);[v]->(c2,v:b) ) (0,[]) -- ( =~= mconcat ) [ ( fromIntegral i+1, -- total triples w/ this (j,k) [ (r,(i,j,k)) | frac < w ] ) -- store it, if inside band | k <- [ 0 .. floor ( hi /lb5) ], let p = fromIntegral klb5, j <- [ 0 .. floor ((hi-p)/lb3) ], let q = fromIntegral jlb3 + p, let (i,frac) = pr (hi-q) ; r = hi - frac -- r = i + q ] where pr = properFraction -- pr 1.24 => (1,0.24) foldl_ = foldl' ``` Output: -- time: 0.00s memory: 4.2MB (55,47,64) 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Using loops for a faster code, and a narrower band to save space The DDJ blog post by Will Ness doesn't use the fact mentioned by the Wikipedia article that the error term in the estimation of the log of the resulting value for the nth Hamming number is directly proportional to this same log result. Using this fact, we are able to reduce the span of the "band" to only a constant fraction of the estimated log result for large n, and thus reduce memory space requirements to O(n^(1/3)) from O(n^(2/3)) for a considerable space saving for larger ranges. As well, although it isn't quite as elegant in a Haskell style sense, one can get an additional constant factor in execution time by replacing the "loops" based on list comprehensions to tail-recursive function call "loops", as in the following code: ``` {-# OPTIONS_GHC -O3 -XStrict #-} import Data.Word import Data.List (sortBy) import Data.Function (on) nthHam :: Word64 -> (Int, Int, Int) nthHam n -- n: 1-based 1,2,3... | n < 2 = case n of 0 -> error "nthHam: Argument is zero!" _ -> (0, 0, 0) -- trivial case for 1 | m < 0 = error $ "Not enough triples generated: " ++ show (c,n) | m >= nb = error $ "Generated band is too narrow: " ++ show (m,nb) | otherwise = case res of (_, tv) -> tv -- 2^i 3^j 5^k where lb3 = logBase 2 3; lb5 = logBase 2 5.0 lbrt30 = logBase 2 $ sqrt 30 :: Double -- estimate adjustment as per WP lg2est = (6 lb3 lb5 fromIntegral n)(1/3) - lbrt30 -- estimated logval, base 2 (hi,lo) = (lg2est + 1/lg2est, 2 lg2est - hi) -- hi > log2est > lo (c, b) = let klmt = floor (hi / lb5) loopk k ck bndk = if k > klmt then (ck, bndk) else let p = hi - fromIntegral k lb5; jlmt = floor (p / lb3) loopj j cj bndj = if j > jlmt then loopk (k + 1) cj bndj else let q = p - fromIntegral j lb3 (i, frac) = properFraction q nj = j + 1; ncj = cj + fromIntegral i + 1 r = hi - frac nbndj = i seq bndj seq if r < lo then bndj else case (r, (i, j, k)) of nhd -> nhd seq nhd : bndj in ncj seq nbndj seq loopj nj ncj nbndj in loopj 0 ck bndk in loopk 0 0 [] (m,nb) = ( fromIntegral $ c - n, length b ) -- m 0-based from top, |band| (s,res) = ( sortBy (flip compare on fst) b, s!!m ) -- sorted decreasing, result< main = putStrLn $ show $ nthHam 1000000000000 ``` This implementation can likely calculate the 10^19th Hamming number in less than a day and can't quite reach the (2^64-1)th (18446744073709551615th) Hamming due to a slight range overflow as it approaches this limit. Maximum memory used to these limits is less than a few hundred Megabytes, so possible on typical personal computers given the required day or two of computing time. On IdeOne (64-bit), this takes 0.03 seconds for the 10 billionth and 0.70 seconds for the trillionth number (October 2017 update to a faster 64-bit system). Using "roll-your-own" extended precision logarithm values in the error band to extend range All of these codes using algorithms can't do an accurate sort of the error band for arguments somewhere above 10^13 due to the limited precision of the Double logarithm values, but this is easily fixed by using "roll-your-own" Integer logarithm values as follows with very little cost in execution time as it only applies to the relatively very small error band: ``` {-# OPTIONS_GHC -O3 -XStrict #-} import Data.Word import Data.List (sortBy) import Data.Function (on) nthHam :: Word64 -> (Int, Int, Int) nthHam n -- n: 1-based 1,2,3... | n < 2 = case n of 0 -> error "nthHam: Argument is zero!" _ -> (0, 0, 0) -- trivial case for 1 | m < 0 = error $ "Not enough triples generated: " ++ show (c,n) | m >= nb = error $ "Generated band is too narrow: " ++ show (m,nb) | otherwise = case res of (_, tv) -> tv -- 2^i 3^j 5^k where lb3 = logBase 2 3; lb5 = logBase 2 5.0 lbrt30 = logBase 2 $ sqrt 30 :: Double -- estimate adjustment as per WP lg2est = (6 lb3 lb5 fromIntegral n)(1/3) - lbrt30 -- estimated logval, base 2 (hi,lo) = (lg2est + 1/lg2est, 2 lg2est - hi) -- hi > log2est > lo bglb2 = 1267650600228229401496703205376 :: Integer bglb3 = 2009178665378409109047848542368 :: Integer bglb5 = 2943393543170754072109742145491 :: Integer (c, b) = let klmt = floor (hi / lb5) loopk k ck bndk = if k > klmt then (ck, bndk) else let p = hi - fromIntegral k lb5; jlmt = floor (p / lb3) loopj j cj bndj = if j > jlmt then loopk (k + 1) cj bndj else let q = p - fromIntegral j lb3 (i, frac) = properFraction q nj = j + 1; ncj = cj + fromIntegral i + 1 r = hi - frac nbndj = i seq bndj seq if r < lo then bndj else let bglg = bglb2 fromIntegral i + bglb3 fromIntegral j + bglb5 fromIntegral k in bglg seq case (bglg, (i, j, k)) of nhd -> nhd seq nhd : bndj in ncj seq nbndj seq loopj nj ncj nbndj in loopj 0 ck bndk in loopk 0 0 [] (m,nb) = ( fromIntegral $ c - n, length b ) -- m 0-based from top, |band| -- (s,res) = (b, s!!m) (s,res) = ( sortBy (flip compare on fst) b, s!!m ) -- sorted decreasing, result< main = putStrLn $ show $ nthHam 1000000000000 ``` All of these codes run a constant factor faster using the forced "Strict" mode, which shows that it is very difficult to anticipate the Haskell strictness analyser, especially in the case of the first code using List comprehensions. Icon and Unicon This solution uses Unicon's object oriented extensions. An Icon only version has not been provided. Lazy evaluation is used to improve performance. ``` Lazily generate the three Hamming numbers that can be derived directly from a known Hamming number h class Triplet : Class (cv, ce) method nextVal() suspend cv := @ce end initially (baseNum) cv := 2baseNum ce := create (3|5)baseNum end Generate Hamming numbers, in order. Default is first 30 But an optional argument can be used to generate more (or less) e.g. hamming 5000 generates the first 5000. procedure main(args) limit := integer(args) | 30 every write("\t", generateHamming() \ limit) end Do the work. Start with known Hamming number 1 and maintain a set of triplet Hamming numbers as they get derived from that one. Most of the code here is to figure out which Hamming number is next in sequence (while removing duplicates) procedure generateHamming() triplers := set() insert(triplers, Triplet(1)) suspend 1 repeat { # Pick a Hamming triplet that may have the next smallest number t1 := !triplers # any will do to start every t1 ~=== (t2 := !triplers) do { if t1.cv > t2.cv then { # oops we were wrong, switch assumption t1 := t2 } else if t1.cv = t2.cv then { # t2's value is a duplicate, so # advance triplet t2, if none left in t2, remove it t2.nextVal() | delete(triplers, t2) } } # Ok, t1 has the next Hamming number, grab it suspend t1.cv insert(triplers, Triplet(t1.cv)) # Advance triplet t1, if none left in t1, remove it t1.nextVal() | delete(triplers, t1) } end ``` J Solution: A concise tacit expression using a (right) fold: ``` hamming=: {. (/:~@~.@] , 2 3 5 {)/@(1x ,~ i.@-) ``` Example usage: ``` hamming 20 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 {: hamming 1691 2125764000 ``` For the millionth (and billionth (1e9)) Hamming number see the nh verb from Hamming Number essay on the J wiki. Explanation: I'll explain this J-sentence by dividing it in three parts from left to right omitting the leftmost {.: sort and remove duplicates ``` /:~@~.@] ``` produce 3 elements by selection and multiplication (we have already produced smaller values, this will overproduce slightly larger values, but the extra values overlap, and we handle that by discarding duplicates): ``` 2 3 5 { ``` note that LHA (2 3 5 {) RHA is equivalent to ``` 2 3 5 LHA { RHA ``` the RH part forms an array of descending indices and the initial Hamming number 1 ``` (1x ,~ i.@-) ``` e.g. if we want the first 5 Hamming numbers, it produces the array: ``` 4 3 2 1 0 1 ``` in other words, we compute a sequence which begins with the desired hamming sequence and then take the first n elements (which will be our desired hamming sequence) ``` ({. (/:~@~.@] , 2 3 5 {)/@(1x ,~ i.@-)) 7 1 2 3 4 5 6 8 ``` This starts using a descending sequence with 1 appended: ``` (1x ,~ i.@-) 7 6 5 4 3 2 1 0 1 ``` and then the fold expression is inserted between these list elements and the result computed: ``` 6(/:~@~.@] , 2 3 5 {) 5(/:~@~.@] , 2 3 5 {) 4(/:~@~.@] , 2 3 5 {) 3(/:~@~.@] , 2 3 5 {) 2(/:~@~.@] , 2 3 5 {) 1(/:~@~.@] , 2 3 5 {) 0(/:~@~.@] , 2 3 5 {) 1 1 2 3 4 5 6 8 9 10 12 15 18 20 25 30 16 24 40 ``` (Note: A train of verbs in J is evaluated by supplying arguments to the every other verb (counting from the right) and the combining these results with the remaining verbs. Also: { has been implemented so that an index of 0 will select the only item from an array with no dimensions.) Java Works with: Java version 1.5+ Has a common shortcoming of overproducing the sequence by about entries, until the n-th number is no longer needed, instead of stopping as soon as it is reached. See Haskell for an illustration. Inserting the top number's three multiples deep into the priority queue as it does, incurs extra cost for each number produced. To not worsen the expected algorithm complexity, the priority queue should have (amortized) implementation for both insertion and deletion operations but it looks like it's in Java. ``` import java.math.BigInteger; import java.util.PriorityQueue; final class Hamming { private static BigInteger THREE = BigInteger.valueOf(3); private static BigInteger FIVE = BigInteger.valueOf(5); private static void updateFrontier(BigInteger x, PriorityQueue<BigInteger> pq) { pq.offer(x.shiftLeft(1)); pq.offer(x.multiply(THREE)); pq.offer(x.multiply(FIVE)); } public static BigInteger hamming(int n) { if (n <= 0) throw new IllegalArgumentException("Invalid parameter"); PriorityQueue<BigInteger> frontier = new PriorityQueue<BigInteger>(); updateFrontier(BigInteger.ONE, frontier); BigInteger lowest = BigInteger.ONE; for (int i = 1; i < n; i++) { lowest = frontier.poll(); while (frontier.peek().equals(lowest)) frontier.poll(); updateFrontier(lowest, frontier); } return lowest; } public static void main(String[] args) { System.out.print("Hamming(1 .. 20) ="); for (int i = 1; i < 21; i++) System.out.print(" " + hamming(i)); System.out.println("\nHamming(1691) = " + hamming(1691)); System.out.println("Hamming(1000000) = " + hamming(1000000)); } } ``` Output: Hamming(1 .. 20) = 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming(1691) = 2125764000 Hamming(1000000) = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Another possibility is to realize that Hamming numbers can be represented and stored as triples of nonnegative integers which are the powers of 2, 3 and 5, and do all operations needed by the algorithms directly on these triples without converting to , which saves tremendous memory and time. Also, the search frontier through this three-dimensional grid can be generated in an order that never reaches the same state twice, so we don't need to keep track which states have already been encountered, saving even more memory. The objects of encode Hamming numbers in three fields , and . Multiplying by 2, 3 and 5 can now be done just by incrementing that field. The order comparison of triples needed by the priority queue is implemented with simple logarithm formulas of multiplication and addition, resorting to conversion to only in the rare cases that the floating point arithmetic produces too close results. ``` import java.math.BigInteger; import java.util.; public class HammingTriple implements Comparable { // Precompute a couple of constants that we need all the time private static final BigInteger two = BigInteger.valueOf(2); private static final BigInteger three = BigInteger.valueOf(3); private static final BigInteger five = BigInteger.valueOf(5); private static final double logOf2 = Math.log(2); private static final double logOf3 = Math.log(3); private static final double logOf5 = Math.log(5); // The powers of this triple private int a, b, c; public HammingTriple(int a, int b, int c) { this.a = a; this.b = b; this.c = c; } public String toString() { return "[" + a + ", " + b + ", " + c + "]"; } public BigInteger getValue() { return two.pow(a).multiply(three.pow(b)).multiply(five.pow(c)); } public boolean equals(Object other) { if(other instanceof HammingTriple) { HammingTriple h = (HammingTriple) other; return this.a == h.a && this.b == h.b && this.c == h.c; } else { return false; } } // Return 0 if this == other, +1 if this > other, and -1 if this < other public int compareTo(HammingTriple other) { // equality if(this.a == other.a && this.b == other.b && this.c == other.c) { return 0; } // this dominates if(this.a >= other.a && this.b >= other.b && this.c >= other.c) { return +1; } // other dominates if(this.a <= other.a && this.b <= other.b && this.c <= other.c) { return -1; } // take the logarithms for comparison double log1 = this.a logOf2 + this.b logOf3 + this.c logOf5; double log2 = other.a logOf2 + other.b logOf3 + other.c logOf5; // are these different enough to be reliable? if(Math.abs(log1 - log2) > 0.0000001) { return (log1 < log2) ? -1: +1; } // oh well, looks like we have to do this the hard way return this.getValue().compareTo(other.getValue()); // (getting this far should be pretty rare, though) } public static BigInteger computeHamming(int n, boolean verbose) { if(verbose) { System.out.println("Hamming number #" + n); } long startTime = System.currentTimeMillis(); // The elements of the search frontier PriorityQueue<HammingTriple> frontierQ = new PriorityQueue<HammingTriple>(); int maxFrontierSize = 1; // Initialize the frontier frontierQ.offer(new HammingTriple(0, 0, 0)); // 1 while(true) { if(frontierQ.size() > maxFrontierSize) { maxFrontierSize = frontierQ.size(); } // Pop out the next Hamming number from the frontier HammingTriple curr = frontierQ.poll(); if(--n == 0) { if(verbose) { System.out.println("Time: " + (System.currentTimeMillis() - startTime) + " ms"); System.out.println("Frontier max size: " + maxFrontierSize); System.out.println("As powers: " + curr.toString()); System.out.println("As value: " + curr.getValue()); } return curr.getValue(); } // Current times five, if at origin in (a,b) plane if(curr.a == 0 && curr.b == 0) { frontierQ.offer(new HammingTriple(curr.a, curr.b, curr.c + 1)); } // Current times three, if at line a == 0 if(curr.a == 0) { frontierQ.offer(new HammingTriple(curr.a, curr.b + 1, curr.c)); } // Current times two, unconditionally curr.a++; frontierQ.offer(curr); // reuse the current HammingTriple object } } } ``` ``` Hamming number #1000000 Time: 650 ms Frontier max size: 10777 As powers: [55, 47, 64] As value: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Hamming number #1000000000 Time: 1763306 ms Frontier max size: 1070167 As powers: [1334, 335, 404] As value: 62160757555652448616308163328720720039470565190896527065916324096423370220027531418244175407 772567327803701726166152919355404186200255249167295000868314547113136940786355040041603128729517887 0364794838245609107270160079056207179759030665476588225699039176388785014115448224991592743918456282 8227449023750262318234797192076792208033475638322151983772515798004125909334741121595323950448656375 1044570269974247729669174417794061727369755885568000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000 ``` Alternative This uses memoized streams - similar in principle to the classic lazy-evaluated sequence, but with a java flavor. Hope you like this recipe! ``` import java.math.BigInteger; public class Hamming { public static void main(String args[]) { Stream hamming = makeHamming(); System.out.print("H[1..20] "); for (int i=0; i<20; i++) { System.out.print(hamming.value()); System.out.print(" "); hamming = hamming.advance(); } System.out.println(); System.out.print("H "); hamming = makeHamming(); for (int i=1; i<1691; i++) { hamming = hamming.advance(); } System.out.println(hamming.value()); hamming = makeHamming(); System.out.print("H[10^6] "); for (int i=1; i<1000000; i++) { hamming = hamming.advance(); } System.out.println(hamming.value()); } public interface Stream { BigInteger value(); Stream advance(); } public static class MultStream implements Stream { MultStream(int mult) { m_mult = BigInteger.valueOf(mult); } MultStream setBase(Stream s) { m_base = s; return this; } public BigInteger value() { return m_mult.multiply(m_base.value()); } public Stream advance() { return setBase(m_base.advance()); } private final BigInteger m_mult; private Stream m_base; } private final static class RegularStream implements Stream { RegularStream(Stream[] streams, BigInteger val) { m_streams = streams; m_val = val; } public BigInteger value() { return m_val; } public Stream advance() { // memoized value for the next stream instance. if (m_advance != null) { return m_advance; } int minidx = 0 ; BigInteger next = nextStreamValue(0); for (int i=1; i<m_streams.length; i++) { BigInteger v = nextStreamValue(i); if (v.compareTo(next) < 0) { next = v; minidx = i; } } RegularStream ret = new RegularStream(m_streams, next); // memoize the value! m_advance = ret; m_streams[minidx].advance(); return ret; } private BigInteger nextStreamValue(int streamidx) { // skip past duplicates in the streams we're merging. BigInteger ret = m_streams[streamidx].value(); while (ret.equals(m_val)) { m_streams[streamidx] = m_streams[streamidx].advance(); ret = m_streams[streamidx].value(); } return ret; } private final Stream[] m_streams; private final BigInteger m_val; private RegularStream m_advance = null; } private final static Stream makeHamming() { MultStream nums[] = new MultStream[] { new MultStream(2), new MultStream(3), new MultStream(5) }; Stream ret = new RegularStream(nums, BigInteger.ONE); for (int i=0; i<nums.length; i++) { nums[i].setBase(ret); } return ret; } } ``` ``` $ java Hamming H[1..20] 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 H 2125764000 H[10^6] 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 $ ``` JavaScript Works with: JavaScript version 1.7 Works with: Firefox version 2 Translation of: Ruby This does not calculate the 1,000,000th Hamming number. Note the use of for (x in obj) to iterate over the properties of an object, versus for each (y in obj) to iterate over the values of the properties of an object. ``` function hamming() { var queues = {2: [], 3: [], 5: []}; var base; var next_ham = 1; while (true) { yield next_ham; for (base in queues) {queues[base].push(next_ham base)} next_ham = [ queue for each (queue in queues) ].reduce(function(min, val) { return Math.min(min,val) }); for (base in queues) {if (queues[base] == next_ham) queues[base].shift()} } } var ham = hamming(); var first20=[], i=1; for (; i <= 20; i++) first20.push(ham.next()); print(first20.join(', ')); print('...'); for (; i <= 1690; i++) ham.next(); print(i + " => " + ham.next()); ``` Output: 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ... 1691 => 2125764000 Fast & complete version Translation of: C# A translation of my fast C# version. I was curious to see how much slower JavaScript is. The result: it runs about 5x times slower than C#, though YMMV. You can try it yourself here: --Mike Lorenz ``` var _primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]; function log(text) { $('#main').append(text + "\n"); } function big(exponents) { var i, e, val = bigInt.one; for (i = 0; i < exponents.length; i++) for (e = 0; e < exponents[i]; e++) val = val.times(_primes[i]); return val.toString(); } function hamming(n, nprimes) { var i, iter, p, q, min, equal, x; var hammings = new Array(n); // array of hamming #s we generate hammings = new Array(nprimes); for (p = 0; p < nprimes; p++) { hammings[p] = 0; } var hammlogs = new Array(n); // log values for above hammlogs = 0; var primelogs = new Array(nprimes); // pre-calculated prime log values var listlogs = new Array(nprimes); // log values of list heads for (p = 0; p < nprimes; p++) { primelogs[p] = listlogs[p] = Math.log(_primes[p]); } var indexes = new Array(nprimes); // intermediate hamming values as indexes into hammings for (p = 0; p < nprimes; p++) { indexes[p] = 0; } var listheads = new Array(nprimes); // intermediate hamming list heads for (p = 0; p < nprimes; p++) { listheads[p] = new Array(nprimes); for (q = 0; q < nprimes; q++) { listheads[p][q] = 0; } listheads[p][p] = 1; } for (iter = 1; iter < n; iter++) { min = 0; for (p = 1; p < nprimes; p++) if (listlogs[p] < listlogs[min]) min = p; hammlogs[iter] = listlogs[min]; // that's the next hamming number hammings[iter] = listheads[min].slice(); for (p = 0; p < nprimes; p++) { // update each list head if it matches new value equal = true; // test each exponent to see if number matches for (i = 0; i < nprimes; i++) { if (hammings[iter][i] != listheads[p][i]) { equal = false; break; } } if (equal) { // if it matches... x = ++indexes[p]; // set index to next hamming number listheads[p] = hammings[x].slice(); // copy hamming number listheads[p][p] += 1; // increment exponent = mult by prime listlogs[p] = hammlogs[x] + primelogs[p]; // add log(prime) to log(value) = mult by prime } } } return hammings[n - 1]; } $(document).ready(function() { var i, nprimes; var t = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,1691,1000000]; for (nprimes = 3; nprimes <= 4; nprimes++) { var start = new Date(); log('<h1>' + _primes[nprimes - 1] + '-Smooth:' + '</h1>'); log('<table>'); for (i = 0; i < t.length; i++) log('<tr>' + '<td>' + t[i] + ':' + '</td><td>' + big(hamming(t[i], nprimes)) + '</td>'); var end = new Date(); log('<tr>' + '<td>' + 'Elapsed time:' + '</td><td>' + (end-start)/1000 + ' seconds' + '</td>'); log('</table>'); } }); ``` Output: ``` 5-Smooth: 1: 1 2: 2 3: 3 4: 4 5: 5 6: 6 7: 8 8: 9 9: 10 10: 12 11: 15 12: 16 13: 18 14: 20 15: 24 16: 25 17: 27 18: 30 19: 32 20: 36 1691: 2125764000 1000000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Elapsed time: 1.73 seconds 7-Smooth: 1: 1 2: 2 3: 3 4: 4 5: 5 6: 6 7: 7 8: 8 9: 9 10: 10 11: 12 12: 14 13: 15 14: 16 15: 18 16: 20 17: 21 18: 24 19: 25 20: 27 1691: 3317760 1000000: 4157409948433216829957008507500000000 Elapsed time: 1.989 seconds ``` jq Works with: jq version 1.4 We take the primary challenge here to be to write a Hamming number generator that can generate a given number of Hamming numbers, or the n-th Hamming number, without storing previously generated numbers. To motivate a more complex version, in Part 1 of this section hamming(n) is defined as a generator of Hamming numbers, as numbers. This function uses an efficient algorithm and can run indefinitely, but it has one disadvantage: currently, jq converts large integers to floating point approximations, and thus precision is lost. For example, it reports the millionth Hamming number as 1.926511252902403e+44. In Part 2, the algorithm in the first part is modified to use the [p,q,r] representation of Hamming numbers, where p, q, and r are the relevant exponents respectively of 2, 3, and 5. The task description focuses on finding the n-th element of an infinite sequence and so it should be mentioned that using jq versions greater than 1.4, it would be possible to simply the generator so that is always unbounded, and then harness it with new builtins such as "limit" and "nth". Hamming number generator ``` Return the index in the input array of the min_by(f) value def index_min_by(f): . as $in | if length == 0 then null else . as $first | reduce range(0; length) as $i ([0, $first, ($first|f)]; # state: [ix; min; f|min] ($in[$i]|f) as $v | if $v < . then [ $i, $in[$i], $v ] else . end) | . end; Emit n Hamming numbers if n>0; the nth if n<0 def hamming(n): # input: [twos, threes, fives] of which at least one is assumed to be non-empty # output: the index of the array holding the min of the firsts def next: map( . ) | index_min_by(.); # input: [value, [twos, threes, fives] ....] # ix is the index in [twos, threes, fives] of the array to be popped # output: [popped, updated_arrays ...] def pop(ix): . as $triple | setpath(; $triple[ix]) | setpath([1,ix]; $triple[ix][1:]); # input: [x, [twos, threes, fives], count] # push value2 to twos, value3 to threes, value5 to fives and increment count def push(v): [., [. + [2v], . + [3v], . + [5v]], . + 1]; # _hamming is the workhorse # input: [previous, [twos, threes, fives], count] def _hamming: . as $previous | if (n > 0 and . == n) or (n<0 and . == -n) then $previous else (.|next) as $ix # $ix cannot be null | pop($ix) | . as $next | (if $next == $previous then empty elif n>=0 then $previous else empty end), (if $next == $previous then . else push($next) end | _hamming) end; [1, , 1] | _hamming; . as $n | hamming($n) ``` Examples: ``` First twenty: hamming(20) See elsewhere for output 1691st Hamming number: hamming(-1691) => 2125764000 Millionth: hamming(-1000000) => 1.926511252902403e+44 ``` Hamming numbers as triples In this section, Hamming numbers are represented as triples, [p,q,r], where p, q and r are the relevant powers of 2, 3, and 5 respectively. We therefore begin with some functions for managing Hamming numbers represented in this manner: ``` The log (base e) of a Hamming triple: def ln_hamming: if length != 3 then error("ln_hamming: (.)") else . end | (. (2|log)) + (. (3|log)) + (. (5|log)); The numeric value of a Hamming triple: def hamming_tof: ln_hamming | exp; def hamming_toi: def pow(n): . as $in | reduce range(0;n) as $i (1; . $in); . as $in | (2|pow($in)) (3|pow($in)) (5|pow($in)); Return the index in the input array of the min_by(f) value def index_min_by(f): . as $in | if length == 0 then null else . as $first | reduce range(0; length) as $i ([0, $first, ($first|f)]; # state: [ix; min; f|min] ($in[$i]|f) as $v | if $v < . then [ $i, $in[$i], $v ] else . end) | . end; Emit n Hamming numbers (as triples) if n>0; the nth if n<0; otherwise indefinitely. def hamming(n): # n must be 2, 3 or 5 def hamming_times(n): n as $n | if $n==2 then . += 1 elif $n==3 then . += 1 else . += 1 end; # input: [twos, threes, fives] of which at least one is assumed to be non-empty # output: the index of the array holding the min of the firsts def next: map( . ) | index_min_by( ln_hamming ); # input: [value, [twos, threes, fives] ....] # ix is the index in [twos, threes, fives] of the array to be popped # output: [popped, updated_arrays ...] def pop(ix): . as $triple | setpath(; $triple[ix]) | setpath([1,ix]; $triple[ix][1:]); # input: [x, [twos, threes, fives], count] # push value2 to twos, value3 to threes, value5 to fives and increment count def push(v): [., [. + [v|hamming_times(2)], . + [v|hamming_times(3)], . + [v|hamming_times(5)]], . + 1]; # _hamming is the workhorse # input: [previous, [twos, threes, fives], count] def _hamming: . as $previous | if (n > 0 and . == n) or (n<0 and . == -n) then $previous else (.|next) as $ix # $ix cannot be null | pop($ix) | . as $next | (if $next == $previous then empty elif n>=0 then $previous else empty end), (if $next == $previous then . else push($next) end | _hamming) end; ,, ], 1] | _hamming; ``` Examples ``` The first twenty Hamming numbers as integers: hamming(-20) | hamming_toi => (see elsewhere) 1691st as a Hamming triple: hamming(-1691) => [5,12,3] The millionth: hamming(-1000000) => [55,47,64] ``` Julia Simple brute force algorithm, derived from the discussion at ProgrammingPraxis.com. ``` function hammingsequence(N) if N < 1 throw("Hamming sequence exponent must be a positive integer") end ham = N > 4000 ? Vector{BigInt}() : Vector{Int}() base2, base3, base5 = (1, 1, 1) for i in 1:N-1 x = min(2ham[base2], 3ham[base3], 5ham[base5]) push!(ham, x) if 2ham[base2] <= x base2 += 1 end if 3ham[base3] <= x base3 += 1 end if 5ham[base5] <= x base5 += 1 end end ham end println(hammingsequence(20)) println(hammingsequence(1691)[end]) println(hammingsequence(1000000)[end]) ``` Output: ``` [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` The above code is terribly inefficient, just as said, but can be improved by about a factor of two by using intermediate variables (next2, next3, and next5) to avoid recalculating the long multi-precision integers for each comparison, as it seems that the Julia compiler (version 1.0.2) is not doing common sub expression elimination: ``` function hammingsequence(N::Int) if N < 1 throw("Hamming sequence index must be a positive integer") end ham = Vector{BigInt}() base2, base3, base5 = 1, 1, 1 next2, next3, next5 = BigInt(2), BigInt(3), BigInt(5) for _ in 1:N-1 x = min(next2, next3, next5) push!(ham, x) next2 <= x && (base2 += 1; next2 = 2ham[base2]) next3 <= x && (base3 += 1; next3 = 3ham[base3]) next5 <= x && (base5 += 1; next5 = 5ham[base5]) end ham end ``` Infinite generator, avoiding duplicates, using logarithms for faster processing The above code is slow for several reasons, partly because it is doing many multi-precision integer multiplications requiring much memory allocation and garbage collection for which Julia is quite slow, but also because there are many repeated calculations (3 times 2 equals 2 times three, etc.). The following code is about 60 times faster by using floating point logarithms for multiplication and comparison; it also is an infinite generator (an iterator), which means that memory consumption can be greatly reduced by eliminating values which are no longer of any use: Translation of: Nim ``` struct LogRep lg :: Float64 x2 :: UInt32 x3 :: UInt32 x5 :: UInt32 end const ONE = LogRep(0.0, 0, 0, 0) const LB2_2 = 1.0 const LB2_3 = log(2,3) const LB2_5 = log(2,5) function mult2(lr :: LogRep) # :: LogRep LogRep(lr.lg + LB2_2, lr.x2 + 1, lr.x3, lr.x5) end function mult3(lr :: LogRep) # :: LogRep LogRep(lr.lg + LB2_3, lr.x2, lr.x3 + 1, lr.x5) end function mult5(lr :: LogRep) # :: LogRep LogRep(lr.lg + LB2_5, lr.x2, lr.x3, lr.x5 + 1) end function lr2BigInt(lr :: LogRep) # :: BigInt BigInt(2)^lr.x2 BigInt(3)^lr.x3 BigInt(5)^lr.x5 end mutable struct HammingsLog s2 :: Vector{LogRep} s3 :: Vector{LogRep} s5 :: LogRep mrg :: LogRep s2hdi :: Int s3hdi :: Int HammingsLog() = new( [ONE], [mult3(ONE)], mult5(ONE), mult3(ONE), 1, 1 ) end Base.eltype(::Type{HammingsLog}) = LogRep function Base.iterate(HM::HammingsLog, st = HM) # :: Union{Nothing,Tuple{LogRep,HammingsLog}} s2sz = length(st.s2) if st.s2hdi + st.s2hdi - 2 >= s2sz ns2sz = s2sz - st.s2hdi + 1 copyto!(st.s2, 1, st.s2, st.s2hdi, ns2sz) resize!(st.s2, ns2sz); st.s2hdi = 1 end rslt = @inbounds(st.s2[st.s2hdi]) if rslt.lg < st.mrg.lg st.s2hdi += 1 else s3sz = length(st.s3) if st.s3hdi + st.s3hdi - 2 >= s3sz ns3sz = s3sz - st.s3hdi + 1 copyto!(st.s3, 1, st.s3, st.s3hdi, ns3sz) resize!(st.s3, ns3sz); st.s3hdi = 1 end rslt = st.mrg; push!(st.s3, mult3(rslt)) st.s3hdi += 1; chkv = @inbounds(st.s3[st.s3hdi]) if chkv.lg < st.s5.lg st.mrg = chkv else st.mrg = st.s5; st.s5 = mult5(st.s5); st.s3hdi -= 1 end end push!(st.s2, mult2(rslt)); rslt, st end function test(n :: Int) :: Tuple{LogRep, Float64} start = time(); rslt :: LogRep = ONE count = n; for t in HammingsLog() count <= 1 && (rslt = t; break); count -= 1 end elpsd = (time() - start) 1000 rslt, elpsd end foreach(x -> print(lr2BigInt(x)," "), (Iterators.take(HammingsLog(), 20))); println() let count = 1691; for t in HammingsLog() count <= 1 && (println(lr2BigInt(t)); break); count -= 1 end end rslt, elpsd = test(1000000) println(lr2BigInt(rslt)) println("This last took $elpsd milliseconds.") ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 16.8759822845459 milliseconds. The above execution time is as run on an Intel i5-6500 at 3.6 GHz (single threaded boost), and the program can find the billionth Hamming number in about 17 seconds. Determination of the nth Hamming number by processing of error band For some phenomenal speed in determining the nth Hamming/regular number, one doesn't need to find all the values up to that limit but rather only the values within an error band which is a factor of two either way from the correct value; this has the advantage that the number of processing loops are reduced from O(n^3) to O(n^(2/3)) for a considerable saving for larger ranges and has the further advantage that memory consumption is reduced to O(n^(1/3)) meaning that huge ranges can be computed on a common desktop computer. The folwingcode can compute the trillionth (10^12th) Hamming number is a couple of seconds: ``` function nthhamming(n :: UInt64) # :: Tuple{UInt32, UInt32, UInt32} # take care of trivial cases too small for band size estimation to work... n < 1 && throw("nthhamming: argument must be greater than zero!!!") n < 2 && return (0, 0, 0) n < 3 && return (1, 0, 0) # some constants... log2of2, log2of3, log2of5 = 1.0, log(2, 3), log(2, 5) fctr, crctn = 6.0 log2of3 log2of5, log(2, sqrt(30)) log2est = (fctr Float64(n))^(1.0 / 3.0) - crctn # log2 answer from WP formula log2hi = log2est + 1.0 / log2est; width = 2.0 / log2est # up to 2X higher/lower # loop to find the count of regular numbers and band of possible candidates... count :: UInt64 = 0; band = Vector{Tuple{Float64,Tuple{UInt32,UInt32,UInt32}}}() fiveslmt = UInt32(ceil(log2hi / log2of5)); fives :: UInt32 = 0 while fives < fiveslmt log2p = log2hi - fives log2of5 threeslmt = UInt32(ceil(log2p / log2of3)); threes :: UInt32 = 0 while threes < threeslmt log2q = log2p - threes log2of3 twos = UInt32(floor(log2q)); frac = log2q - twos; count += twos + 1 frac <= width && push!(band, (log2hi - frac, (twos, threes, fives))) threes += 1 end fives += 1 end # process the band found including checks for validity and range... n > count && throw("nthhamming: band high estimate is too low!!!") ndx = count - n + 1 ndx > length(band) && throw("nthhamming: band width estimate is too narrow!!!") sort!(band, by=(tpl -> let (lg,_) = tpl; -lg end)) # sort in decending order # get and return the answer... _, tri = band[ndx] tri end foreach(x-> print(trival(nthhamming(UInt(x))), " "), 1:20); println() println(trival(nthhamming(UInt64(1691)))) println(trival(nthhamming(UInt64(1000000)))) ``` Above about a range of 10^13, a Float64 logarithm doesn't have enough precision to be able to sort the error band properly, so a refinement of using a "roll-your-own" extended precision logarithm must be used, as follows: ``` function nthhamming(n :: UInt64) # :: Tuple{UInt32, UInt32, UInt32} # take care of trivial cases too small for band size estimation to work... n < 1 && throw("nthhamming: argument must be greater than zero!!!") n < 2 && return (0, 0, 0) n < 3 && return (1, 0, 0) # some constants... log2of2, log2of3, log2of5 = 1.0, log(2, 3), log(2, 5) fctr, crctn = 6.0 log2of3 log2of5, log(2, sqrt(30)) log2est = (fctr Float64(n))^(1.0 / 3.0) - crctn # log2 answer from WP formula log2hi = log2est + 1.0 / log2est; width = 2.0 / log2est # up to 2X higher/lower # some really really big constants representing the "roll-your-own" big logs... biglog2of2 = BigInt(1267650600228229401496703205376) biglog2of3 = BigInt(2009178665378409109047848542368) biglog2of5 = BigInt(2943393543170754072109742145491) # loop to find the count of regular numbers and band of possible candidates... count :: UInt64 = 0; band = Vector{Tuple{BigInt,Tuple{UInt32,UInt32,UInt32}}}() fiveslmt = UInt32(ceil(log2hi / log2of5)); fives :: UInt32 = 0 while fives < fiveslmt log2p = log2hi - fives log2of5 threeslmt = UInt32(ceil(log2p / log2of3)); threes :: UInt32 = 0 while threes < threeslmt log2q = log2p - threes log2of3 twos = UInt32(floor(log2q)); frac = log2q - twos; count += twos + 1 if frac <= width biglog = biglog2of2 twos + biglog2of3 threes + biglog2of5 fives push!(band, (biglog, (twos, threes, fives))) end threes += 1 end fives += 1 end # process the band found including checks for validity and range... n > count && throw("nthhamming: band high estimate is too low!!!") ndx = count - n + 1 ndx > length(band) && throw("nthhamming: band width estimate is too narrow!!!") sort!(band, by=(tpl -> let (lg,_) = tpl; -lg end)) # sort in decending order # get and return the answer... _, tri = band[ndx] tri end ``` The above code can find the trillionth Hamming number in about two seconds (very little slower) and the thousand trillionth value in about 192 seconds. This routine would be able to find the million trillionth Hamming number in about 20,000 seconds or about five and a half hours. Kotlin Translation of: Java ``` import java.math.BigInteger import java.util. val Three = BigInteger.valueOf(3)!! val Five = BigInteger.valueOf(5)!! fun updateFrontier(x : BigInteger, pq : PriorityQueue) { pq.add(x.shiftLeft(1)) pq.add(x.multiply(Three)) pq.add(x.multiply(Five)) } fun hamming(n : Int) : BigInteger { val frontier = PriorityQueue() updateFrontier(BigInteger.ONE, frontier) var lowest = BigInteger.ONE for (i in 1 .. n-1) { lowest = frontier.poll() ?: lowest while (frontier.peek() == lowest) frontier.poll() updateFrontier(lowest, frontier) } return lowest } fun main(args : Array) { System.out.print("Hamming(1 .. 20) =") for (i in 1 .. 20) System.out.print(" ${hamming(i)}") System.out.println("\nHamming(1691) = ${hamming(1691)}") System.out.println("Hamming(1000000) = ${hamming(1000000)}") } ``` Output: Hamming(1 .. 20) = 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming(1691) = 2125764000 Hamming(1000000) = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Overloaded function: ``` import java.math.BigInteger import java.util. val One = BigInteger.ONE!! val Three = BigInteger.valueOf(3)!! val Five = BigInteger.valueOf(5)!! fun PriorityQueue.update(x: BigInteger) : PriorityQueue { add(x.shiftLeft(1)) add(x.multiply(Three)) add(x.multiply(Five)) return this } fun hamming(n: Int): BigInteger { val frontier = PriorityQueue().update(One) var lowest = One repeat(n - 1) { lowest = frontier.poll() ?: lowest while (frontier.peek() == lowest) frontier.poll() frontier.update(lowest) } return lowest } fun hamming(i : Iterable) : Iterable = i.map { hamming(it) } fun main(args: Array) { val r = 1..20 println("Hamming($r) = " + hamming(r)) arrayOf(1691, 1000000).forEach { println("Hamming($it) = " + hamming(it)) } } ``` Recursive function: ``` import java.math.BigInteger import java.util. val One = BigInteger.ONE!! val Three = BigInteger.valueOf(3)!! val Five = BigInteger.valueOf(5)!! infix fun PriorityQueue.update(x: BigInteger) : PriorityQueue { add(x.shiftLeft(1)) add(x.multiply(Three)) add(x.multiply(Five)) return this } fun hamming(a: Any?): Any = when (a) { is Number -> { val pq = PriorityQueue() update One var lowest = One repeat(a.toInt() - 1) { lowest = pq.poll() ?: lowest while (pq.peek() == lowest) pq.poll() pq update lowest } lowest } is Iterable<> -> a.map { hamming(it) } else -> throw IllegalArgumentException("cannot parse argument") } fun main(args: Array) { arrayOf(1..20, 1691, 1000000).forEach { println("Hamming($it) = " + hamming(it)) } } ``` Output: Hamming(1..20) = [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] Hamming(1691) = 2125764000 Hamming(1000000) = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Functional Style Eliminating Duplicates, Optional Sequence Output The following code implements a functional version, with the only mutable state that required to implement a recursive binding as commented in the code. It is fast because it uses non-genereric functions so that much of the boxing/unboxing can be optimized away, and it takes very little memory because of the avoiding duplicates, the order that the primes are processed with least dense first, and because it is implemented in such a way so as to use only local bindings for the heads of the lazy lists so that they can be consumed as used and garbage collected away. Kotlin does not have a lazy list like Haskell or a memoized lazy Stream like Scala, so the code implements a basic version of LazyList to be used by the algorithm (Java 8 Streams are not memoized as required here): Translation of: scala ``` import java.math.BigInteger as BI data class LazyList(val head: T, val lztail: Lazy?>) { fun toSequence() = generateSequence(this) { it.lztail.value } .map { it.head } } fun hamming(): LazyList { fun merge(s1: LazyList, s2: LazyList): LazyList { val s1v = s1.head; val s2v = s2.head if (s1v < s2v) { return LazyList(s1v, lazy({->merge(s1.lztail.value!!, s2)})) } else { return LazyList(s2v, lazy({->merge(s1, s2.lztail.value!!)})) } } fun llmult(m: BI, s: LazyList): LazyList { fun llmlt(ss: LazyList): LazyList { return LazyList(m ss.head, lazy({->llmlt(ss.lztail.value!!)})) } return llmlt(s) } fun u(s: LazyList?, n: Long): LazyList { var r: LazyList? = null // mutable nullable so can do the below if (s == null) { // recursively referenced variables are ugly!!! r = llmult(BI.valueOf(n), LazyList(BI.valueOf(1), lazy{ -> r })) } else { // recursively referenced variables only work with lazy r = merge(s, llmult(BI.valueOf(n), // or a loop race limit LazyList(BI.valueOf(1), lazy{ -> r }))) } return r } val prms = arrayOf(5L, 3L, 2L) val thunk = {->prms.fold?>(null, {s, n -> u(s,n)})!!} return LazyList(BI.valueOf(1), lazy(thunk)) } fun main(args: Array) { tailrec fun nth(n: Int, h: LazyList): BI = if (n > 1) { nth(n - 1, h.lztail.value!!) } else { h.head } // non-generic faster: boxing optimized away println(hamming().toSequence().take(20).toList()) println(nth(1691, hamming())) val strt = System.currentTimeMillis() println(nth(1000000, hamming())) val stop = System.currentTimeMillis() println("Took ${stop - strt} milliseconds for the last.") } ``` Output: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Took 381 milliseconds for the last. Run on a AMD Bulldozer FX8120 3.1 GHz which is about half the speed as an equivalent Intel (but also half the price). Lambdatalk 1) recursive version ``` {def hamming {def hamming.loop {lambda {:h :a :i :b :j :c :k :m :n} {if {>= :n :m} then {A.last :h} else {let { {:h {A.set! :n {min :a :b :c} :h}} {:a :a} {:i :i} {:b :b} {:j :j} {:c :c} {:k :k} {:m :m} {:n :n} } {hamming.loop :h {if {= :a {A.get :n :h}} then { 2 {A.get {+ :i 1} :h}} {+ :i 1} else :a :i} {if {= :b {A.get :n :h}} then { 3 {A.get {+ :j 1} :h}} {+ :j 1} else :b :j} {if {= :c {A.get :n :h}} then { 5 {A.get {+ :k 1} :h}} {+ :k 1} else :c :k} :m {+ :n 1} } }}}} {lambda {:n} {hamming.loop {A.new {S.serie 1 :n}} 2 0 3 0 5 0 :n 1} }} -> hamming {S.map hamming {S.serie 1 20}} -> 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 {hamming 1691} -> 2125764000 // < 200ms Currently limited to javascript's integers and by stackoverflow on some computers. ``` 2) iterative version Build a table of 2^i•3^j•5^k from i,j,k = 0 to n and sort it. 2.1) compute ``` {def ham {lambda {:n} {S.sort < {S.map {{lambda {:n :i} {S.map {{lambda {:n :i :j} {S.map {{lambda {:i :j :k} { {pow 2 :i} {pow 3 :j} {pow 5 :k}}} :i :j} {S.serie 0 :n} } } :n :i} {S.serie 0 :n} } } :n} {S.serie 0 :n} } }}} -> ham {def H {ham 30}} -> H {S.slice 0 19 {H}} -> 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 {S.get 1690 {H}} -> 2125764000 // on my macbook pro ``` 2.2) display Display a hamming number as 2a•3b•5c ``` {def factor {def factor.r {lambda {:n :i} {if {> :i :n} then else {if {= {% :n :i} 0} then :i {factor.r {/ :n :i} :i} else {factor.r :n {+ :i 1}} }}}} {lambda {:n} :n is the product of 1 {factor.r :n 2} }} -> factor {def asproductofpowers {def asproductofpowers.loop {lambda {:a :b :c :n} {if {= {S.first :n} 1} then 2{sup :a}•3{sup :b}•5{sup :c} else {asproductofpowers.loop {if {= {S.first :n} 2} then {+ :a 1} else :a} {if {= {S.first :n} 3} then {+ :b 1} else :b} {if {= {S.first :n} 5} then {+ :c 1} else :c} {W.rest :n} } }}} {lambda {:n} {asproductofpowers.loop 0 0 0 {S.reverse :n}}}} -> asproductofpowers {factor 2125764000} -> 2125764000 is the product of 1 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 {asproductofpowers {factor 2125764000}} -> 2^5•3^12•5^3 {S.map {lambda {:i} {div}:i: {S.get :i {H}} = {asproductofpowers {factor {S.get :i {H}}}}} {S.serie 0 19}} -> 0: 1 = 2^0•3^0•5^0 1: 2 = 2^1•3^0•5^0 2: 3 = 2^0•3^1•5^0 3: 4 = 2^2•3^0•5^0 4: 5 = 2^0•3^0•5^1 5: 6 = 2^1•3^1•5^0 6: 8 = 2^3•3^0•5^0 7: 9 = 2^0•3^2•5^0 8: 10 = 2^1•3^0•5^1 9: 12 = 2^2•3^1•5^0 10: 15 = 2^0•3^1•5^1 11: 16 = 2^4•3^0•5^0 12: 18 = 2^1•3^2•5^0 13: 20 = 2^2•3^0•5^1 14: 24 = 2^3•3^1•5^0 15: 25 = 2^0•3^0•5^2 16: 27 = 2^0•3^3•5^0 17: 30 = 2^1•3^1•5^1 18: 32 = 2^5•3^0•5^0 19: 36 = 2^2•3^2•5^0 ``` See for a better display as 2a•3b•5c. Liberty BASIC LB has unlimited precision integers. ``` dim h( 1000000) for i =1 to 20 print hamming( i); " "; next i print print "H( 1691)", hamming( 1691) print "H( 1000000)", hamming( 1000000) end function hamming( limit) h( 0) =1 x2 =2: x3 =3: x5 =5 i =0: j =0: k =0 for n =1 to limit h( n) = min( x2, min( x3, x5)) if x2 = h( n) then i = i +1: x2 =2 h( i) if x3 = h( n) then j = j +1: x3 =3 h( j) if x5 = h( n) then k = k +1: x5 =5 h( k) next n hamming =h( limit -1) end function ``` ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 H( 1691) 2125764000 H( 1000000) 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Logo ``` to init.ham ; queues make "twos make "threes make "fives end to next.ham localmake "ham first :twos if less? first :threes :ham [make "ham first :threes] if less? first :fives :ham [make "ham first :fives] if equal? :ham first :twos [ignore dequeue "twos] if equal? :ham first :threes [ignore dequeue "threes] if equal? :ham first :fives [ignore dequeue "fives] queue "twos :ham 2 queue "threes :ham 3 queue "fives :ham 5 output :ham end init.ham repeat 20 [print next.ham] repeat 1690-20 [ignore next.ham] print next.ham ``` Lua ``` function hiter() hammings = {1} prev, vals = {1, 1, 1} index = 1 local function nextv() local n, v = 1, hammings[prev]2 if hammings[prev]3 < v then n, v = 2, hammings[prev]3 end if hammings[prev]5 < v then n, v = 3, hammings[prev]5 end prev[n] = prev[n] + 1 if hammings[index] == v then return nextv() end index = index + 1 hammings[index] = v return v end return nextv end j = hiter() for i = 1, 20 do print(j()) end n, l = 0, 0 while n < 2^31 do n, l = j(), n end print(l) ``` M2000 Interpreter For Long Only We have to exit loop (and function) before calculating new X2 or X3 or X4 and get overflow error Module hamming_long { function hamming(l as long, &h(),&last()) { l=if(l<1->1&, l) long oldlen=len(h()) if oldlen<l then dim h(l) else =h(l-1): exit def long i, j, k, n, m, x2, x3, x5, ll stock last(0) out x2,x3,x5,i,j,k n=oldlen : ll=l-1 { m=x2 if m>x3 then m=x3 if m>x5 then m=x5 h(n)=m if n>=1690 then =h(n):break if m=x2 then i++:x2=2&h(i) if m=x3 then j++:x3=3&h(j) if m=x5 then k++:x5=5&h(k) if n<ll then n++: loop } stock last(0) in x2,x3,x5,i,j,k =h(ll) } dim h(1)=1&, last() def long i const nl$={ } document doc$ last()=(2&,3&,5&,0&,0&,0&) for i=1 to 20 Doc$=format$("{0::-10} {1::-10}", i, hamming(i,&h(), &last()))+nl$ next i i=1691 Doc$=format$("{0::-10} {1::-10}", i, hamming(i,&h(), &last()))+nl$ print #-2,Doc$ clipboard Doc$ } hamming_long Output: ``` 1 1 2 2 3 3 4 4 5 5 6 6 7 8 8 9 9 10 10 12 11 15 12 16 13 18 14 20 15 24 16 25 17 27 18 30 19 32 20 36 1691 2125764000 ``` Using Decimal type Max hamming number is the 43208th We have to exit loop (and function) before calculating new X2 or X3 or X4 and get overflow error Module hamming { function hamming(l as long, &h(),&last()) { l=if(l<1->1&, l) oldlen=len(h()) if oldlen<l then dim h(l) else =h(l-1): exit def decimal i, j, k, m, x2, x3, x5 stock last(0) out x2,x3,x5,i,j,k n=oldlen : ll=l-1& { m=x2 if m>x3 then m=x3 if m>x5 then m=x5 h(n)=m if n>=43207& then =h(n):break if m=x2 then i++:x2=2@h(i) if m=x3 then j++:x3=3@h(j) if m=x5 then k++:x5=5@h(k) if n<ll then n++: loop } stock last(0) in x2,x3,x5,i,j,k =h(ll) } dim h(1)=1@, last() last()=(2@,3@,5@,0@,0@,0@) Document doc$ const nl$={ } for i=1 to 20 Doc$=format$("{0::-10} {1::-28}", i, hamming(i,&h(), &last()))+nl$ next i i=1691 Doc$=format$("{0::-10} {1::-28}", i, hamming(i,&h(), &last()))+nl$ i=9999 Doc$=format$("{0::-10} {1::-28}", i, hamming(i,&h(), &last()))+nl$ i=43208 Doc$=format$("{0::-10} {1::-28}", i, hamming(i,&h(), &last()))+nl$ print #-2, Doc$ clipboard Doc$ } hamming Output: ``` 1 1 2 2 3 3 4 4 5 5 6 6 7 8 8 9 9 10 10 12 11 15 12 16 13 18 14 20 15 24 16 25 17 27 18 30 19 32 20 36 1691 2125764000 9999 288230376151711744 43208 9164837199872000000000000000 ``` Mathematica / Wolfram Language HammingList[N_] := Module[{A, B, C}, {A, B, C} = (N^(1/3)){2.8054745679851933, 1.7700573778298891, 1.2082521307023026} - {1, 1, 1}; Take[ Sort@Flatten@Table[ 2^x 3^y 5^z , {x, 0, A}, {y, 0, (-B/A)x + B}, {z, 0, C - (C/A)x - (C/B)y}], N]]; HammingList -> {1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36} HammingList // Last -> 2125764000 HammingList // Last ->519312780448388736089589843750000000000000000000000000000000000000000000000000000000 MATLAB / Octave Translation of: Julia The n parameter was chosen by trial and error. You have to pick an n large enough that the powers of 2, 3 and 5 will all be greater than n at the 1691st Hamming number. ``` n = 40; powers_2 = 2.^[0:n-1]; powers_3 = 3.^[0:n-1]; powers_5 = 5.^[0:n-1]; matrix = powers_2' powers_3; powers_23 = sort(reshape(matrix,nn,1)); matrix = powers_23 powers_5; powers_235 = sort(reshape(matrix,nnn,1)); % % Remove the integer overflow values. % powers_235 = powers_235(powers_235 > 0); disp(powers_235(1:20)) disp(powers_235(1691)) ``` Mojo Since current Mojo (version 0.7) does not have many forms of recursive expression, the below is an imperative version of the First In Last Out (FILO) Queue version of the fastest iterative Nim version using logarithmic approximations for the comparison and final conversion of the power tuples to a big integer output. Since Mojo does not currently have a big integer library, enough of the required functionality of one (multiplication and conversion to string) is implemented in the following code: Translation of: Nim ``` from collections.vector import (DynamicVector, CollectionElement) from math import (log2, trunc, pow) from memory import memset_zero #, memcpy) from time import now alias cCOUNT: Int = 1_000_000 struct BigNat(Stringable): # enough just to support conversion and printing ''' Enough "infinite" precision to support as required here - multiply and divide by 10 conversion to string... ''' var contents: DynamicVector[UInt32] fn init(inout self): self.contents = DynamicVectorUInt32 fn init(inout self, val: UInt32): self.contents = DynamicVectorUInt32 self.contents.resize(1, val) fn copyinit(inout self, existing: Self): self.contents = existing.contents fn moveinit(inout self, owned existing: Self): self.contents = existing.contents^ fn str(self) -> String: var rslt: String = "" var v = self.contents while len(v) > 0: var t: UInt64 = 0 for i in range(len(v) - 1, -1, -1): t = ((t << 32) + v[i].to_int()) v[i] = (t // 10).to_int(); t -= v[i].to_int() 10 var sz = len(v) - 1 while sz >= 0 and v[sz] == 0: sz -= 1 v.resize(sz + 1, 0) rslt = str(t) + rslt return rslt fn mult(inout self, mltplr: Self): var rslt = DynamicVectorUInt32 rslt.resize(len(self.contents) + len(mltplr.contents), 0) for i in range(len(mltplr.contents)): var t: UInt64 = 0 for j in range(len(self.contents)): t += self.contents[j].to_int() mltplr.contents[i].to_int() + rslt[i + j].to_int() rslt[i + j] = (t & 0xFFFFFFFF).to_int(); t >>= 32 rslt[i + len(self.contents)] += t.to_int() var sz = len(rslt) - 1 while sz >= 0 and rslt[sz] == 0: sz -= 1 rslt.resize(sz + 1, 0); self.contents = rslt alias lb2: Float64 = 1.0 alias lb3: Float64 = log2DType.float64, 1 alias lb5: Float64 = log2DType.float64, 1 @value struct LogRep(CollectionElement, Stringable): var logrep: Float64 var x2: UInt32 var x3: UInt32 var x5: UInt32 fn del(owned self): return @always_inline fn mul2(self) -> Self: return LogRep(self.logrep + lb2, self.x2 + 1, self.x3, self.x5) @always_inline fn mul3(self) -> Self: return LogRep(self.logrep + lb3, self.x2, self.x3 + 1, self.x5) @always_inline fn mul5(self) -> Self: return LogRep(self.logrep + lb5, self.x2, self.x3, self.x5 + 1) fn str(self) -> String: var rslt = BigNat(1) fn expnd(inout rslt: BigNat, bs: UInt32, n: UInt32): var bsm = BigNat(bs); var nm = n while nm > 0: if (nm & 1) != 0: rslt.mult(bsm) bsm.mult(bsm); nm >>= 1 expnd(rslt, 2, self.x2); expnd(rslt, 3, self.x3); expnd(rslt, 5, self.x5) return str(rslt) alias oneLR: LogRep = LogRep(0.0, 0, 0, 0) alias LogRepThunk = fn() escaping -> LogRep fn hammingsLogImp() -> LogRepThunk: var s2 = DynamicVectorLogRep; var s3 = DynamicVectorLogRep; var s5 = oneLR; var mrg = oneLR s2.resize(512, oneLR); s2 = oneLR.mul2(); s3.resize(1, oneLR); s3 = oneLR.mul3() var s2p = s2.steal_data(); var s3p = s3.steal_data() var s2hdi = 0; var s2tli = -1; var s3hdi = 0; var s3tli = -1 @always_inline fn next() escaping -> LogRep: var rslt = s2[s2hdi] var s2len = len(s2) s2tli += 1; if s2tli >= s2len: s2tli = 0 if s2hdi == s2tli: if s2len < 1024: s2.resize(1024, oneLR) else: s2.resize(s2len + s2len, oneLR) # ; s2p = s2.steal_data() for i in range(s2hdi): s2[s2len + i] = s2[i] memcpyUInt8, 0 s2tli += s2len; s2len += s2len if rslt.logrep < mrg.logrep: s2hdi += 1 if s2hdi >= s2len: s2hdi = 0 else: rslt = mrg var s3len = len(s3) s3tli += 1; if s3tli >= s3len: s3tli = 0 if s3hdi == s3tli: if s3len < 1024: s3.resize(1024, oneLR) else: s3.resize(s3len + s3len, oneLR) # ; s3p = s3.steal_data() for i in range(s3hdi): s3[s3len + i] = s3[i] memcpyUInt8, 0 s3tli += s3len; s3len += s3len if mrg.logrep < s5.logrep: s3hdi += 1 if s3hdi >= s3len: s3hdi = 0 else: s5 = s5.mul5() s3[s3tli] = rslt.mul3(); let t = s3[s3hdi]; mrg = t if t.logrep < s5.logrep else s5 s2[s2tli] = rslt.mul2(); return rslt return next fn main(): print("The first 20 Hamming numbers are:") var f = hammingsLogImp(); for i in range(20): print_no_newline(f(), " ") print() f = hammingsLogImp(); var h: LogRep = oneLR for i in range(1691): h = f() print("The 1691st Hamming number is", h) let strt: Int = now() f = hammingsLogImp() for i in range(cCOUNT): h = f() let elpsd = (now() - strt) / 1000 print("The " + str(cCOUNT) + "th Hamming number is:") print("2" + str(h.x2) + " 3" + str(h.x3) + " 5" + str(h.x5)) let lg2 = lb2 Float64(h.x2.to_int()) + lb3 Float64(h.x3.to_int()) + lb5 Float64(h.x5.to_int()) let lg10 = lg2 / log2(Float64(10)) let expnt = trunc(lg10); let num = pow(Float64(10.0), lg10 - expnt) let apprxstr = str(num) + "E+" + str(expnt.to_int()) print("Approximately: ", apprxstr) let answrstr = str(h) print("The result has", len(answrstr), "digits.") print(answrstr) print("This took " + str(elpsd) + " microseconds.") ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st Hamming number is 2125764000 The 1000000th Hamming number is: 255 347 564 Approximately: 5.1931278110620553E+83 The result has 84 digits. 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This took 3626.192 microseconds. The above was as run on an AMD 7840HS CPU single-thread boosted to 5.1 GHz. It is about the same speed as the Nim version from which it was translated. MUMPS ``` Hamming(n) New count,ok,next,number,which For which=2,3,5 Set number=1 For count=1:1:n Do . Set ok=0 Set:count<21 ok=1 Set:count=1691 ok=1 Set:count=n ok=1 . Write:ok !,$Justify(count,5),": ",number . For which=2,3,5 Set next(numberwhich)=which . Set number=$Order(next("")) . Kill next(number) . Quit Quit Do Hamming(2000) 1: 1 2: 2 3: 3 4: 4 5: 5 6: 6 7: 8 8: 9 9: 10 10: 12 11: 15 12: 16 13: 18 14: 20 15: 24 16: 25 17: 27 18: 30 19: 32 20: 36 1691: 2125764000 2000: 8062156800 ``` Nim Library: bigints Classic Dijkstra algorithm ``` import bigints proc min(a: varargs[BigInt]): BigInt = result = a for i in 1..a.high: if a[i] < result: result = a[i] proc hamming(limit: int): BigInt = var h = newSeqBigInt x2 = initBigInt(2) x3 = initBigInt(3) x5 = initBigInt(5) i, j, k = 0 for i in 0..h.high: h[i] = initBigInt(1) for n in 1 ..< limit: h[n] = min(x2, x3, x5) if x2 == h[n]: inc i x2 = h[i] 2 if x3 == h[n]: inc j x3 = h[j] 3 if x5 == h[n]: inc k x5 = h[k] 5 result = h[h.high] for i in 1 .. 20: stdout.write hamming(i), " " echo "" echo hamming(1691) echo hamming(1_000_000) ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 The above takes over a second to produce the millionth Hamming number on many machines. Slightly more efficient version The following code improves on the above by reducing the number of computationally-time-expensive BigInt comparisons slightly: ``` import bigints, times proc hamming(limit: int): BigInt = doAssert limit > 0 var h = newSeqBigInt x2 = initBigInt(2) x3 = initBigInt(3) x5 = initBigInt(5) i, j, k = 0 h = initBigInt(1) # BigInt comparisons are expensive, reduce them... proc min3(x, y, z: BigInt): (int, BigInt) = let (cs, r1) = if y == z: (6, y) elif y < z: (2, y) else: (4, z) if x == r1: (cs or 1, x) elif x < r1: (1, x) else: (cs, r1) for n in 1 ..< limit: let (cs, e1) = min3(x2, x3, x5) h[n] = e1 if (cs and 1) != 0: i += 1; x2 = h[i] 2 if (cs and 2) != 0: j += 1; x3 = h[j] 3 if (cs and 4) != 0: k += 1; x5 = h[k] 5 h[h.high] for i in 1 .. 20: stdout.write hamming(i), " " echo "" echo hamming(1691) let strt = epochTime() let rslt = hamming(1_000_000) let stop = epochTime() echo rslt echo "This last took ", (stop - strt)1000, " milliseconds." ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 566.3743019104004 milliseconds. It can be shown that the above reduces the execution time by about 20 per cent. But note that compiling with --gc:arc allows to lower execution time to 380-390 ms. Functional iterator sequence, eliminating duplicate calculations and reducing memory use The above code still wastes quite a lot of time doing redundant BigInt calculations (ie. 2 times 3, 3 times 2, etc.) and as well consumes a huge amount of memory for larger Hamming number determination as it uses an array as large as the range. The below code eliminates duplicate calculations and reduces memory use by using a Nim version of a lazy list internally so that unused back calculated values can be eliminated by the garbage collector. Thus, execution time for BigInt calculations is reduced by a constant factor of about two and a half and memory use is reduced from O(n) to O(n^(2/3)) in the following code: Translation of: Haskell Works with: Nim 1.4.0 Note, the following code uses the "bigints" library that doesn't ship with the Nim compiler; install it with "nimble install bigints". ``` import bigints, times iterator func_hamming() : BigInt = type Thunk[T] = proc(): T {.closure.} type Lazy[T] = ref object of RootObj # tuple[val: T, thnk: Thunk[T]] val: T thnk: Thunk[T] proc forceT: T = # not thread-safe; needs lock on thunk if me.thnk != nil: me.val = me.thnk(); me.thnk = nil me.val type LazyList[T] = ref object of RootObj # tuple[hd: T, tl: Lazy[LazyList[T]]] hd: T tl: Lazy[LazyList[T]] type Mytype = LazyList[BigInt] proc merge(x, y: Mytype): Mytype = let xh = x.hd; let yh = y.hd if xh < yh: let mthnk = proc(): Mytype = merge x.tl.force, y let mlzy = LazyMytype Mytype(hd: xh, tl: mlzy) else: let mthnk = proc(): Mytype = merge x, y.tl.force let mlzy = LazyMytype Mytype(hd: yh, tl: mlzy) proc smult(m: int32, s: Mytype): Mytype = proc smults(ss: Mytype): Mytype = let mthnk = proc(): Mytype = ss.tl.force.smults let mlzy = LazyMytype Mytype(hd: ss.hd m, tl: mlzy) s.smults proc u(s: Mytype, n: int32): Mytype = var r: Mytype let mthnk = proc(): Mytype = r let mlzy = LazyMytype let frst = Mytype(hd: initBigInt 1, tl: mlzy) if s == nil: r = smult(n, frst) else: r = merge(s, smult(n, frst)) r var hmg: Mytype = nil for p in [5i32, 3i32, 2i32]: hmg = u(hmg, p) yield initBigInt 1 while true: # loop almost forever yield initBigInt hmg.hd hmg = hmg.tl.force var cnt = 1 for h in func_hamming(): if cnt > 20: break write stdout, h, " "; cnt += 1 echo "" cnt = 1 for h in func_hamming(): if cnt < 1691: cnt += 1; continue else: echo h; break let strt = epochTime() var rslt: BigInt cnt = 1 for h in func_hamming(): if cnt < 1000000: cnt += 1; continue else: rslt = h; break let stop = epochTime() echo rslt echo "This last took ", (stop - strt)1000, " milliseconds." ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 464.9641513824463 milliseconds. The above result was obtained by compiling with the default "mark-and-sweep" Garbage collector with -d:release -d:danger (all checking including bounds checks turned off); One should not use the new --gc:arc compilation argument (automatic reference counting) with this implementation as the lazy lists are cyclic but compiling with --gc:orc gives an execution time of about 80% of the execution time as compared to the conventional garbage collection, and is slower than the --gc:arc garbage collection by about half again the time (but correct as to not causing memory leaks) due to the extra time spent tracing cycles. The beauty of Nim inline iterators as used here is that they are zero overhead (tested) so there is no run time penalty for using them. Functional iterator sequence, eliminating duplicate calculations and using log approximations Much of the time for above algorithm is spent doing big integer calculations using the extended precision bit integer library; the following code eliminates most of the big integer calculations by using logarithmic aproximations and just converting to big integers for the display of the results: Works with: Nim 1.4.0 Note, the following code uses the "bigints" library that doesn't ship with the Nim compiler; install it with "nimble install bigints". ``` from times import inMilliseconds import std/monotimes, bigints from math import log2 type TriVal = (uint32, uint32, uint32) type LogRep = (float64, TriVal) type LogRepf = proc(x: LogRep): LogRep const one: LogRep = (0.0f64, (0'u32, 0'u32, 0'u32)) proc <(me: LogRep, othr: LogRep): bool = me < othr proc convertTrival2BigInt(tv: TriVal): BigInt = proc xpnd(bs: uint, v: uint32): BigInt = result = initBigInt 1; var bsm = initBigInt bs; var vm = v.uint while vm > 0: if (vm and 1) != 0: result = bsm bsm = bsm bsm # bsm = bsm crashes. vm = vm shr 1 result = (2.xpnd tv) (3.xpnd tv) (5.xpnd tv) const lb2 = 1.0'f64 const lb3 = 3.0'f64.log2 const lb5 = 5.0'f64.log2 proc mul2(me: LogRep): LogRep = let (lr, tpl) = me; let (x2, x3, x5) = tpl (lr + lb2, (x2 + 1, x3, x5)) proc mul3(me: LogRep): LogRep = let (lr, tpl) = me; let (x2, x3, x5) = tpl (lr + lb3, (x2, x3 + 1, x5)) proc mul5(me: LogRep): LogRep = let (lr, tpl) = me; let (x2, x3, x5) = tpl (lr + lb5, (x2, x3, x5 + 1)) type LazyList = ref object hd: LogRep tlf: proc(): LazyList {.closure.} tl: LazyList proc rest(ll: LazyList): LazyList = # not thread-safe; needs lock on thunk if ll.tlf != nil: ll.tl = ll.tlf(); ll.tlf = nil ll.tl iterator log_func_hammings(until: int): TriVal = proc merge(x, y: LazyList): LazyList = let xh = x.hd let yh = y.hd if xh < yh: LazyList(hd: xh, tlf: proc(): auto = merge x.rest, y) else: LazyList(hd: yh, tlf: proc(): auto = merge x, y.rest) proc smult(mltf: LogRepf; s: LazyList): LazyList = proc smults(ss: LazyList): LazyList = LazyList(hd: ss.hd.mltf, tlf: proc(): auto = ss.rest.smults) s.smults proc unnsm(s: LazyList, mltf: LogRepf): LazyList = var r: LazyList = nil let frst = LazyList(hd: one, tlf: proc(): LazyList = r) r = if s == nil: smult mltf, frst else: s.merge smult(mltf, frst) r yield one var hmpll: LazyList = ((nil.unnsm mul5).unnsm mul3).unnsm mul2 for _ in 2 .. until: yield hmpll.hd; hmpll = hmpll.rest # almost forever proc main = stdout.write "The first 20 hammings are: " for h in log_func_hammings(20): stdout.write h.convertTrival2BigInt, " " var lsth: TriVal for h in log_func_hammings(1691): lsth = h echo "\r\nThe 1691st Hamming number is: ", lsth.convertTriVal2BigInt let strt = getMonotime() for h in log_func_hammings(1000000): lsth = h let elpsd = (getMonotime() - strt).inMilliseconds echo "The millionth Hamming number is: ", lsth.convertTriVal2BigInt echo "This last took ", elpsd, " milliseconds." main() ``` Output: The first 20 hammings are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st Hamming number is: 2125764000 The millionth Hamming number is: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 157 milliseconds. As you can see, this new version is over twice as fast as the version using many big integer calculations, both due to much less computation and also due to not having to allocate and de-allocate the memory required for many big integer representations. Again, it is about 80% faster if the new --gc:orc memory management is used, which is slower than using the --gc:arc memory management that is yet another 25% faster but incorrect as it has a memory leak due to the cyclic lazy lists that it can't properly handle. Most of the remaining time is spent in the many allocations and de-allocations of small structures in heap memory as is typical of functional algorithms. Further speed could be gained for the same algorithm as above by making allocations and de-allocations (now all the same size) from an implemented memory pool, which is what Haskell actually does inside its memory management system. Imperative iterator implementation of the above functional version The following code uses imperative techniques to implement the same algorithm, using sequences for storage, indexes for back pointers to the results of previous calculations, and custom deleting unused values in chunks in place (using constantly growing capacity) so that the same size of sequence can be longer used and many less new memory allocations need be made: ``` import bigints, times iterator nodups_hamming(): BigInt = var m = newSeqBigInt # give it two values so doubling size works h = newSeqBigInt # reasonably size x5 = initBigInt 5 mrg = initBigInt 3 x53 = initBigInt 9 # already advanced one step x532 = initBigInt 2 ih, jm, i, j = 0 yield initBigInt 1 # trivial case of 1 while true: let cph = h.len # move in-place to avoid allocation if i >= cph div 2: # move in-place to avoid allocation var s = i; var d = 0 while s < ih: shallowCopy(h[d], h[s]); s += 1; d += 1 ih -= i; i = 0 if ih >= cph: h.setLen(2 cph) if x532 < mrg: h[ih] = x532; x532 = h[i] 2; i += 1 else: h[ih] = mrg let cpm = m.len if j >= cpm div 2: # move in-place to avoid allocation var s = j; var d = 0 while s < jm: shallowCopy(m[d], m[s]); s += 1; d += 1 jm -= j; j = 0 if jm >= cpm: m.setLen(2 cpm) if x53 < x5: mrg = x53; x53 = m[j] 3; j += 1 else: mrg = x5; x5 = x5 5 m[jm] = mrg jm += 1 ih += 1 yield h[ih - 1] var cnt = 1 for h in nodups_hamming(): if cnt > 20: break write stdout, h, " "; cnt += 1 echo "" cnt = 1 for h in nodups_hamming(): if cnt < 1691: cnt += 1; continue else: echo h; break let strt = epochTime() var rslt: BigInt cnt = 1 for h in nodups_hamming(): if cnt < 1000000: cnt += 1; continue else: rslt = h; break let stop = epochTime() echo rslt echo "This last took ", (stop - strt)1000, " milliseconds." ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 307.5404167175293 milliseconds. Compiling with --gc:arc gives an execution time of 220-230 ms. So, in both cases, the execution time is reduced which shows that a high percentage of the previous time was not used by BigInt calculations (as this code does exactly the same number of calculations) but rather by the memory allocatons/deallocations required for pure functional lazy algorithms. This may show that the current Nim version (1.4.2) is not so suitable for pure lazy functional algorithms, nor is it as terse as many modern functional languages (Haskell, OcaML, F#, Scala, etc.). Much faster iterating version using logarithmic calculations Still, much of the above time is used by BigInt calculations and still many heap allocations/deallocations, as BigInt's have an internal sequence to contain the infinite precision binary digits. The following code uses an internal logarithmic representation of the values rather than BigInt for the sorting comparisons and thus all mathematic operations required are just integer and floating point additions and comparison; as well, since these don't require heap space there is almost no allocation/deallocation at all for greatly increased speed: ``` HammingsLogImp.nim compile with: nim c -d:danger -t:-march=native -d:LTO --gc:arc HammingsLogImp import bigints, std/math from std/times import inMicroseconds from std/monotimes import getMonoTime, - type LogRep = (float64, uint32, uint32, uint32) let one: LogRep = (0.0, 0'u32, 0'u32, 0'u32) let lb2 = 1.0'f64; let lb3 = 3.0.log2; let lb5 = 5.0.log2 proc mul2(me: Logrep): Logrep {.inline.} = (me + lb2, me + 1, me, me) proc mul3(me: Logrep): Logrep {.inline.} = (me + lb3, me, me + 1, me) proc mul5(me: Logrep): Logrep {.inline.} = (me + lb5, me, me, me + 1) proc lr2BigInt(lr: Logrep): BigInt = proc xpnd(bs: uint, v: uint32): BigInt = result = initBigInt 1 var bsm = initBigInt bs; var vm = v.uint while vm > 0: if (vm and 1) != 0: result = bsm bsm = bsm; vm = vm shr 1 xpnd(2, lr) xpnd(3, lr) xpnd(5, lr) iterator hammingsLogImp(): LogRep = var s2 = newSeqLogrep # give it size one so doubling size works s3 = newSeqLogrep # reasonably sized s5 = one.mul5 # initBigInt 5 mrg = one.mul3 # initBigInt 3 s2hdi, s2tli, s3hdi, s3tli = 0 yield one s2 = one.mul2; s3 = one.mul3 while true: s2tli += 1 if s2hdi + s2hdi >= s2tli: # move in-place to avoid allocation copyMem(addr(s2), addr(s2[s2hdi]), sizeof(LogRep) (s2tli - s2hdi)) s2tli -= s2hdi; s2hdi = 0 let cps2 = s2.len # move in-place to avoid allocation if s2tli >= cps2: s2.setLen(cps2 + cps2) var rsltp = addr(s2[s2hdi]) if rsltp[] < mrg: s2[s2tli] = rsltp[].mul2; s2hdi += 1; yield rsltp[] else: s3tli += 1 if s3hdi + s3hdi >= s3tli: # move in-place to avoid allocation copyMem(addr(s3), addr(s3[s3hdi]), sizeof(LogRep) (s3tli - s3hdi)) s3tli -= s3hdi; s3hdi = 0 let cps3 = s3.len if s3tli >= cps3: s3.setLen(cps3 + cps3) s2[s2tli] = mrg.mul2; s3[s3tli] = mrg.mul3; s3hdi += 1 let arsltp = addr(s3[s3hdi]) let rslt = mrg if arsltp[] < s5: mrg = arsltp[] else: mrg = s5; s5 = s5.mul5; s3hdi -= 1 yield rslt var cnt = 0 for h in hammingsLogImp(): write stdout, h.lr2BigInt, " "; cnt += 1 if cnt >= 20: break echo "" cnt = 0 for h in hammingsLogImp(): cnt += 1 if cnt >= 1691: echo h.lr2BigInt; break let strt = getMonoTime() var rslt: LogRep cnt = 0 for h in hammingsLogImp(): cnt += 1 if cnt >= 1_000_000: rslt = h; break # """ let elpsd = (getMonoTime() - strt).inMicroseconds let (_, x2, x3, x5) = rslt writeLine stdout, "2^", x2, " + 3^", x3, " + 5^", x5 let lgrslt = (x2.float64 + x3.float64 3.0f64.log2 + x5.float64 5.0f64.log2) 2.0f64.log10 let (whl, frac) = lgrslt.splitDecimal echo "Approximately: ", 10.0f64.pow(frac), "E+", whl.uint64 let brslt = rslt.lr2BigInt() let s = brslt.to_string let ls = s.len echo "Number of digits: ", ls if ls <= 2000: for i in countup(0, ls - 1, 100): if i + 100 < ls: echo s[i .. i + 99] else: echo s[i .. ls - 1] echo "This last took ", elpsd, " microseconds." ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 2^55 + 3^47 + 5^64 Approximately: 5.193127804483804E+83 Number of digits: 84 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 6004 microseconds. The time as shown is for for compilation as in the second line of code; with these options, the billionth Hamming number can be calculated in about 7 seconds. Faster alternate to the above using a ring buffer As other language contributions refer to it, the above code is left in place; however, it seems that the amount of time spent "draining" the buffers by already-used values using copying as used in the above code can be eliminated by using the buffers as "ring buffers" by making the indices wrap around from the end of the buffers to the beginning and detecting when the buffer needs to be "grown" by when the next/last/tail index runs into the first/head index, and changing the "grow" logic a little so as to open up a hole between the next and first indexes by the size of the expansion once the buffer size has "grown". The code is as follows: ``` HammingsLogDQ.nim compile with: nim c -d:danger -t:-march=native -d:LTO --gc:arc HammingsImpLogQ import bigints, std/math from std/times import inMicroseconds from std/monotimes import getMonoTime, - type LogRep = (float64, uint32, uint32, uint32) let one: LogRep = (0.0, 0'u32, 0'u32, 0'u32) let lb2 = 1.0'f64; let lb3 = 3.0.log2; let lb5 = 5.0.log2 proc mul2(me: Logrep): Logrep {.inline.} = (me + lb2, me + 1, me, me) proc mul3(me: Logrep): Logrep {.inline.} = (me + lb3, me, me + 1, me) proc mul5(me: Logrep): Logrep {.inline.} = (me + lb5, me, me, me + 1) proc lr2BigInt(lr: Logrep): BigInt = proc xpnd(bs: uint, v: uint32): BigInt = result = initBigInt 1 var bsm = initBigInt bs; var vm = v.uint while vm > 0: if (vm and 1) != 0: result = bsm bsm = bsm; vm = vm shr 1 xpnd(2, lr) xpnd(3, lr) xpnd(5, lr) proc $(lr: LogRep): string {.inline.} = $lr2BigInt(lr) iterator hammingsLogQ(): LogRep = var s2msk, s3msk = 1024 var s2 = newSeq[LogRep] s2msk; var s3 = newSeq[LogRep] s3msk s2msk -= 1; s3msk -= 1; s2 = one; var s2nxti = 1 var s2hdi, s3hdi, s3nxti = 0 var s5 = one.mul5; var mrg = one.mul3 while true: let s2hdp = addr(s2[s2hdi]) if s2hdp[] < mrg: s2[s2nxti] = s2hdp[].mul2; s2hdi += 1; s2hdi = s2hdi and s2msk yield s2hdp[] else: s2[s2nxti] = mrg.mul2; s3[s3nxti] = mrg.mul3; yield mrg let s3hdp = addr(s3[s3hdi]) if s3hdp < s5: mrg = s3hdp[]; s3hdi += 1; s3hdi = s3hdi and s3msk else: mrg = s5; s5 = s5.mul5 s3nxti += 1; s3nxti = s3nxti and s3msk if s3nxti == s3hdi: # buffer full - expand... let sz = s3msk + 1; s3msk = sz + sz; s3.setLen(s3msk); s3msk -= 1 if s3hdi == 0: s3nxti = sz else: # put extra space between next and head... copyMem(addr(s3[s3hdi + sz]), addr(s3[s3hdi]), sizeof(LogRep) (sz - s3hdi)); s3hdi += sz s2nxti += 1; s2nxti = s2nxti and s2msk if s2nxti == s2hdi: # buffer full - expand... let sz = s2msk + 1; s2msk = sz + sz; s2.setLen s2msk; s2msk -= 1 if s2hdi == 0: s2nxti = sz # copy all in a single block... else: # make extra space between next and head... copyMem(addr(s2[s2hdi + sz]), addr(s2[s2hdi]), sizeof(LogRep) (sz - s2hdi)); s2hdi += sz testing it... var cnt = 0 for h in hammingsLogQ(): write stdout, h, " "; cnt += 1 if cnt >= 20: break echo "" cnt = 0 for h in hammingsLogQ(): cnt += 1 if cnt >= 1691: echo h; break let strt = getMonoTime() var rslt: LogRep cnt = 0 for h in hammingsLogQ(): cnt += 1 if cnt >= 1_000_000: rslt = h; break # """ let elpsd = (getMonoTime() - strt).inMicroseconds let (_, x2, x3, x5) = rslt writeLine stdout, "2^", x2, " + 3^", x3, " + 5^", x5 let lgrslt = (x2.float64 + x3.float64 3.0f64.log2 + x5.float64 5.0f64.log2) 2.0f64.log10 let (whl, frac) = lgrslt.splitDecimal echo "Approximately: ", 10.0f64.pow(frac), "E+", whl.uint64 let s = $rslt let ls = s.len echo "Number of digits: ", ls if ls <= 2000: for i in countup(0, ls - 1, 100): if i + 100 < ls: echo s[i .. i + 99] else: echo s[i .. ls - 1] echo "This last took ", elpsd, " microseconds." ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 2^55 + 3^47 + 5^64 Approximately: 5.193127804483804E+83 Number of digits: 84 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 5044 microseconds. As tested on an Intel i5-6500 (3.6 GHz single-threaded boosted), this is about a millisecond or about twenty percent faster than the version above, and can find the billionth Hamming number in about 4.5 seconds on this machine. The reason this is faster is mostly due to the elimination of the majority of the copy operations. Extremely fast version inserting logarithms into the top error band The above code is about as fast as one can go generating sequences; however, if one is willing to forego sequences and just calculate the nth Hamming number (repeatedly), then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: Regular Number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. The code is as follows: Translation of: Rust ``` import bigints, math, algorithm, times type TriVal = (uint32, uint32, uint32) proc convertTrival2BigInt(tv: TriVal): BigInt = proc xpnd(bs: uint, v: uint32): BigInt = result = initBigInt 1 var bsm = initBigInt bs var vm = v.uint while vm > 0: if (vm and 1) != 0: result = bsm bsm = bsm bsm # bsm = bsm causes a crash. vm = vm shr 1 result = (2.xpnd tv) (3.xpnd tv) (5.xpnd tv) proc nth_hamming(n: uint64): TriVal = doAssert n > 0u64 if n < 2: return (0'u32, 0'u32, 0'u32) # trivial case for 1 type LogRep = (float64, uint32, uint32, uint32) let lb3 = 3.0'f64.log2; let lb5 = 5.0'f64.log2; let fctr = 6.0'f64lb3lb5 let crctn = 30.0'f64.sqrt().log2 # log base 2 of sqrt 30 lgest = (fctr n.float64).pow(1.0'f64/3.0'f64) - crctn # from WP formula frctn = if n < 1000000000: 0.509'f64 else: 0.105'f64 lghi = (fctr (n.float64 + frctn lgest)).pow(1.0'f64/3.0'f64) - crctn lglo = 2.0'f64 lgest - lghi # and a lower limit of the upper "band" var count = 0'u64 # need to use extended precision, might go over var bnd = newSeqLogRep # give itone value so doubling size works let klmt = (lghi / lb5).uint32 + 1 for k in 0 ..< klmt: # i, j, k values can be just u32 values let p = k.float64 lb5; let jlmt = ((lghi - p) / lb3).uint32 + 1 for j in 0 ..< jlmt: let q = p + j.float64 lb3 let ir = lghi - q; let lg = q + ir.floor # current log value (estimated) count += ir.uint64 + 1; if lg >= lglo: bnd.add((lg, ir.uint32, j, k)) if n > count: raise newException(Exception, "nth_hamming: band high estimate is too low!") let ndx = (count - n).int if ndx >= bnd.len: raise newException(Exception, "nth_hamming: band low estimate is too high!") bnd.sort((proc (a, b: LogRep): int = a.cmp b), SortOrder.Descending) let rslt = bnd[ndx]; (rslt, rslt, rslt) for i in 1 .. 20: write stdout, nth_hamming(i.uint64).convertTrival2BigInt, " " echo "" echo nth_hamming(1691).convertTrival2BigInt let strt = epochTime() let rslt = nth_hamming(1_000_000'u64) let stop = epochTime() let (x2, x3, x5) = rslt writeLine stdout, "2^", x2, " + 3^", x3, " + 5^", x5 let lgrslt = (x2.float64 + x3.float64 3.0f64.log2 + x5.float64 5.0f64.log2) 2.0f64.log10 let (whl, frac) = lgrslt.splitDecimal echo "Approximately: ", 10.0f64.pow(frac), "E+", whl.uint64 let brslt = rslt.convertTrival2BigInt() let s = brslt.to_string let ls = s.len echo "Number of digits: ", ls if ls <= 2000: for i in countup(0, ls - 1, 100): if i + 100 < ls: echo s[i .. i + 99] else: echo s[i .. ls - 1] echo "This last took ", (stop - strt) 1000, " milliseconds." ``` The output is the same as above except that the execution time is much too small to be measured. The billionth number in the sequence can be calculated in under 5 milliseconds, the trillionth in about 0.38 seconds. The (2^64 - 1)th value (18446744073709551615) cannot be calculated due to a slight overflow problem as it approaches that limit. However, this version gives inaccurate results much about the 1e13th Hamming number due to the log base two (double) approximate representation not having enough precision to accurately sort the values put into the error band array. Alternate version with a greatly increased range without error To solve the problem of inadequate precision in the double log base two representation, the following code uses a BigInt representation of the log value with about twice the significant bits, which is then sufficient to extend the usable range well beyond any reasonable requirement: ``` import bigints, math, algorithm, times type TriVal = (uint32, uint32, uint32) proc convertTrival2BigInt(tv: TriVal): BigInt = proc xpnd(bs: uint, v: uint32): BigInt = result = initBigInt 1 var bsm = initBigInt bs var vm = v.uint while vm > 0: if (vm and 1) != 0: result = bsm bsm = bsm bsm # bsm = bsm causes a crash. vm = vm shr 1 result = (2.xpnd tv) (3.xpnd tv) (5.xpnd tv) proc nth_hamming(n: uint64): TriVal = doAssert n > 0u64 if n < 2: return (0'u32, 0'u32, 0'u32) # trivial case for 1 type LogRep = (BigInt, uint32, uint32, uint32) let lb3 = 3.0'f64.log2; let lb5 = 5.0'f64.log2; let fctr = 6.0'f64lb3lb5 let # manually produce the BigInt "limb's"! bglb2 = initBigInt @[0'u32, 0, 0, 16] # 1267650600228229401496703205376 # 2009178665378409109047848542368 bglb3 = initBigInt @[11608224'u32, 3177740794'u32, 1543611295, 25] # 2943393543170754072109742145491 bglb5 = initBigInt @[1258143699'u32, 1189265298, 647893747, 37] crctn = 30.0'f64.sqrt().log2 # log base 2 of sqrt 30 lgest = (fctr n.float64).pow(1.0'f64/3.0'f64) - crctn # from WP formula frctn = if n < 1000000000: 0.509'f64 else: 0.105'f64 lghi = (fctr (n.float64 + frctn lgest)).pow(1.0'f64/3.0'f64) - crctn lglo = 2.0'f64 lgest - lghi # and a lower limit of the upper "band" var count = 0'u64 # need to use extended precision, might go over var bnd = newSeqLogRep # give it one value so doubling size works let klmt = (lghi / lb5).uint32 + 1 for k in 0 ..< klmt: # i, j, k values can be just u32 values let p = k.float64 lb5; let jlmt = ((lghi - p) / lb3).uint32 + 1 for j in 0 ..< jlmt: let q = p + j.float64 lb3 let ir = lghi - q; let lg = q + ir.floor # current log value (estimated) count += ir.uint64 + 1; if lg >= lglo: let bglg = bglb2 ir.int32 + bglb3 j.int32 + bglb5 k.int32 bnd.add((bglg, ir.uint32, j, k)) if n > count: raise newException(Exception, "nth_hamming: band high estimate is too low!") let ndx = (count - n).int if ndx >= bnd.len: raise newException(Exception, "nth_hamming: band low estimate is too high!") bnd.sort((proc (a, b: LogRep): int = (a.cmp b).int), SortOrder.Descending) let rslt = bnd[ndx]; (rslt, rslt, rslt) for i in 1 .. 20: write stdout, nth_hamming(i.uint64).convertTrival2BigInt, " " echo "" echo nth_hamming(1691).convertTrival2BigInt let strt = epochTime() let rslt = nth_hamming(1_000_000'u64) let stop = epochTime() let (x2, x3, x5) = rslt writeLine stdout, "2^", x2, " + 3^", x3, " + 5^", x5 let lgrslt = (x2.float64 + x3.float64 3.0f64.log2 + x5.float64 5.0f64.log2) 2.0f64.log10 let (whl, frac) = lgrslt.splitDecimal echo "Approximately: ", 10.0f64.pow(frac), "E+", whl.uint64 let brslt = rslt.convertTrival2BigInt() let s = brslt.to_string let ls = s.len echo "Number of digits: ", ls if ls <= 2000: for i in countup(0, ls - 1, 100): if i + 100 < ls: echo s[i .. i + 99] else: echo s[i .. ls - 1] echo "This last took ", (stop - strt) 1000, " milliseconds." ``` The above code has the same output as before and doesn't take an appreciable amount time different to execute; it can produce the trillionth Hamming number in about 0.35 seconds and the thousand trillionth (which is now possible without error) in about 34.8 seconds. Thus, it successfully extends the usable range of the algorithm to near the maximum expressible 64 bit number in a few hours of execution time on a modern desktop computer although the (2^64 - 1)th Hamming number can't be found due to the restrictions of the expressible range limit in sizing of the required error band. OCaml A simple implementation using an integer Set as a priority queue. The semantics of the standard library Set provide a minimum element and prevent duplicate entries. min_elt and add are O(log N). ``` module ISet = Set.Make(struct type t = int let compare=compare end) let pq = ref (ISet.singleton 1) let next () = let m = ISet.min_elt !pq in pq := ISet.(remove m !pq |> add (2m) |> add (3m) |> add (5m)); m let () = print_string "The first 20 are: "; for i = 1 to 20 do Printf.printf "%d " (next ()) done; for i = 21 to 1690 do ignore (next ()) done; Printf.printf "\nThe 1691st is %d\n" (next ()); ``` Output: The first 20 are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1691st is 2125764000 Arbitrary precision An arbitrary precision version for the one millionth number. Compile with eg: ocamlopt -o hamming.exe nums.cmxa hamming.ml ``` open Big_int module APSet = Set.Make( struct type t = big_int let compare = compare_big_int end) let pq = ref (APSet.singleton (big_int_of_int 1)) let next () = let m = APSet.min_elt !pq in let ( ) = mult_int_big_int in pq := APSet.(remove m !pq |> add (2m) |> add (3m) |> add (5m)); m let () = let n = 1_000_000 in for i = 1 to (n-1) do ignore (next ()) done; Printf.printf "\nThe %dth is %s\n" n (string_of_big_int (next ())); ``` Output: The 1000000th is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Oz Lazy Version Translation of: Haskell ``` declare fun lazy {HammingFun} 1|{FoldL1 [{MultHamming 2} {MultHamming 3} {MultHamming 5}] LMerge} end Hamming = {HammingFun} fun {MultHamming N} {LMap Hamming fun {$ X} NX end} end fun lazy {LMap Xs F} case Xs of nil then nil [] X|Xr then {F X}|{LMap Xr F} end end fun lazy {LMerge Xs=X|Xr Ys=Y|Yr} if X < Y then X|{LMerge Xr Ys} elseif X > Y then Y|{LMerge Xs Yr} else X|{LMerge Xr Yr} end end fun {FoldL1 X|Xr F} {FoldL Xr F X} end in {ForAll {List.take Hamming 20} System.showInfo} {System.showInfo {Nth Hamming 1690}} {System.showInfo {Nth Hamming 1000000}} ``` Strict Version The strict version uses iterators and a priority queue. Note that it can calculate other variations of the hamming numbers too. By changing K, it will calculate the p(K)-smooth numbers. (E.g. K = 3, it will use the first three primes 2,3 and 5, thus resulting in the 5-smooth numbers, see ) ``` functor import Application System define class Multiplier attr lst factor current meth init(Factor Lst) lst := Lst factor := Factor {self next} end meth next local A AS in A|AS = @lst current := A@factor lst := AS end end meth peek(?X) X = @current end meth dump {System.showInfo "DUMP"} {System.showInfo "Factor: "#@factor} {System.showInfo "current: "#@current} end end % a priority queue of multipliers. The one which currently holds the smallest value is put on front class PriorityQueue attr mults current % for duplicate detection meth init(Mults) mults := Mults current := 0 end meth insert(Mult) local fun {Insert M Lst} local Av Mv in case Lst of nil then M|Lst [] A|AS then {A peek(Av)} {M peek(Mv)} if Av < Mv then A|{Insert M AS} else M|A|AS end end end end in mults := {Insert Mult @mults} end end meth next(Tail NextTail) local M Ms X Curr in M|Ms = @mults {M peek(X)} % gets value of lowest iterator Curr = @current if Curr == X then skip else Tail = X|NextTail % if we found a new value: append end {M next} mults := Ms {self insert(M)} if Curr == X then {self next(Tail NextTail)} else current := X end end end end local % Sieve of erasthothenes, adapted from fun {Sieve N} S = {Array.new 2 N true} M = {Float.toInt {Sqrt {Int.toFloat N}}} in for I in 2..M do if S.I then for J in II..N;I do S.J := false end end end S end fun {Primes N} S = {Sieve N} in for I in 2..N collect:C do if S.I then {C I} end end end % help method to extract args proc {GetNK ArgList N K} case ArgList of A|B|_ then N={StringToInt A} K={StringToInt B} end end proc {Generate N PriorQ Tail} local NewTail in if N == 0 then Tail = nil else {PriorQ next(Tail NewTail)} {Generate (N-1) PriorQ NewTail} end end end K = 3 PrimeFactors Lst Tail in ArgList = {Application.getArgs plain} Lst = 1|Tail PrimeFactors = {List.take {Primes KK} K} Mults = {List.map PrimeFactors fun {$ A} {New Multiplier init(A Lst) } end} PriorQ = {New PriorityQueue init(Mults)} {Generate 20 PriorQ Tail} {ForAll Lst System.showInfo} {Application.exit 0} end end ``` Strict version made by pietervdvn; do what you want with the code. PARI/GP This is a basic implementation; finding the millionth term requires 1 second and 54 MB. Much better algorithms exist. ``` Hupto(n)={ my(r=Vec(,n),v=primes(3),[v1,v2,v3]=v,i=1,j=1,k=1,t); for(m=2,n, r[m]=t=min(v1,min(v2,v3)); if(v1 == t, v1 = v r[i++]); if(v2 == t, v2 = v r[j++]); if(v3 == t, v3 = v r[k++]); ); r }; H(n)=Hupto(n)[n]; Hupto(20) H(1691) H(10^6) ``` Output: %1 = [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] %2 = 2125764000 %3 = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Pascal Simple brute force til 2^32-1.I was astonished by the speed.The inner loop is taken 2^32 -1 times.DIV by constant is optimized to Mul and shift. Using FPC_64 3.1.1, i4330 3.5 Ghz ``` program HammNumb; {$IFDEF FPC} {$MODE DELPHI} {$OPTIMIZATION ON} {$ELSE} {$APPTYPE CONSOLE} {$ENDIF} { type NativeUInt = longWord; } var pot : array[0..2] of NativeUInt; function NextHammNumb(n:NativeUInt):NativeUInt; var q,p,nr : NativeUInt; begin repeat nr := n+1; n := nr; p := 0; while NOT(ODD(nr)) do begin inc(p); nr := nr div 2; end; Pot:= p; p := 0; q := nr div 3; while q3=nr do Begin inc(P); nr := q; q := nr div 3; end; Pot := p; p := 0; q := nr div 5; while q5=nr do Begin inc(P); nr := q; q := nr div 5; end; Pot := p; until nr = 1; result:= n; end; procedure Check; var i,n: NativeUint; begin n := 1; for i := 1 to 20 do begin n := NextHammNumb(n); write(n,' '); end; writeln; writeln; n := 1; for i := 1 to 1690 do n := NextHammNumb(n); writeln('No ',i:4,' | ',n,' = 2^',Pot,' 3^',Pot,' 5^',Pot); end; Begin Check; End. ``` Output ``` 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 40 No 1690 | 2125764000 = 2^5 3^12 5^3 real 0m17.328s user 0m17.310s ``` Alternate Using Non-Duplicates Logarithmic Estimation Ordering The above is not a true sequence of Hamming numbers as it doesn't generate an iteration or enumeration of the numbers where each new value is generated from the accumulated state of all the generated numbers up to that point, but rather regenerates all the previous values very inefficiently for each new value, and thus does not have a linear execution complexity with number of generated values. Much more elegant solutions are those using functional programming paradigms, but as Pascal is by no means a functional language, lacking many of the requirements of functional programming such as closure functions to be functional and being difficult (although not impossible) to emulate those functions using classes/objects, the following code implements an imperative version of the non-duplicating Hamming sequence which also saves both time and space in not processing the duplicates (for instance, with two times three already accounted for, there is no need to process three times two); as well, since there is no standard "infinite" precision integer library for Pascal so that numbers larger than 64-bit can't easily be handled, the following code uses the "triplet" method and does the sorting based on a logarithmic estimation of the multiples: ``` {$OPTIMIZATION LEVEL4} program Hammings(output); {$mode objfpc} uses Math, SysUtils; const lb22 : Double = 1.0; ( log base 2 of 2 ) lb23 : Double = 1.58496250072115618147; ( log base 2 of 3 ) lb25 : Double = 2.32192809488736234781; ( log base 2 of 5 ) type TLogRep = record lr : Double; x2, x3, x5 : Word; end; const oneLogRep : TLogRep = (lr:0.0; x2:0; x3:0; x5:0); function LogRepMult2(lr : TLogRep) : TLogRep; begin Result := lr; Result.lr := lr.lr + lb22; Result.x2 := lr.x2 + 1 end; function LogRepMult3(lr : TLogRep) : TLogRep; begin Result := lr; Result.lr := lr.lr + lb23; Result.x3 := lr.x3 + 1 end; function LogRepMult5(lr : TLogRep) : TLogRep; begin Result := lr; Result.lr := lr.lr + lb25; Result.x5 := lr.x5 + 1 end; function LogRep2QWord(lr : TLogRep) : QWord; function xpnd(x : Word; m : QWord) : QWord; var mlt : QWord; begin mlt := m; Result := 1; while x > 0 do begin if x and 1 > 0 then Result := Result mlt; mlt := mlt mlt; x := x shr 1 end end; begin Result := xpnd(lr.x2, 2) xpnd(lr.x3, 3) xpnd(lr.x5, 5) end; function LogRep2String(lr : TLogRep) : AnsiString; type TBI = array of LongWord; TDigitStr = String; function mul2(bi : TBI) : TBI; var cry : QWord; i : Integer; begin cry := 0; for i := 0 to High(bi) do begin cry := (QWord(bi[i]) shl 1) + cry; bi[i] := cry; cry := cry shr 32 end; if cry <> 0 then begin SetLength(bi, Length(bi) + 1); bi[High(bi)] := cry end; Result := bi end; function add(bia : TBI; bib : TBI) : TBI; var cry : QWord; i : Integer; begin cry := 0; for i := 0 to High(bia) do begin cry := QWord(bia[i]) + QWord(bib[i]) + cry; bia[i] := cry; cry := cry shr 32 end; if cry <> 0 then begin SetLength(bia, Length(bia) + 1); bia[High(bia)] := cry end; Result := bia end; function div10(bi : TBI) : TDigitStr; var brw : QWord; i : Integer; begin brw := 0; for i := High(bi) downto 0 do begin brw := (brw shl 32) + QWord(bi[i]); bi[i] := brw div 10; brw := brw - QWord(bi[i]) 10 end; Result := IntToStr(brw) end; var v : Word; xpnd, xpndt : TBI; begin Result := ''; SetLength(xpnd, 1); xpnd := 1; for v := lr.x2 downto 1 do xpnd := mul2(xpnd); for v := lr.x3 downto 1 do begin xpndt := Copy(xpnd, 0, Length(xpnd)); xpnd := mul2(xpnd); xpnd := add(xpnd, xpndt) end; for v := lr.x5 downto 1 do begin xpndt := Copy(xpnd, 0, Length(xpnd)); xpnd := mul2(xpnd); xpnd := mul2(xpnd); xpnd := add(xpnd, xpndt) end; while Length(xpnd) > 0 do begin Result := div10(xpnd) + Result; if xpnd[High(xpnd)] <= 0 then SetLength(xpnd, Length(xpnd) - 1) end end; type TLogReps = array of TLogRep; THammings = class private FCurrent : TLogRep; FBA, FMA : TLogReps; Fnxt2, Fnxt3, Fnxt5, Fmrg35 : TLogRep; FBb, FBe, FMb, FMe : Integer; public constructor Create; function GetEnumerator : THammings; function MoveNext : Boolean; property Current : TLogRep read FCurrent; end; constructor THammings.Create; begin inherited Create; FCurrent := oneLogRep; FCurrent.lr := -1.0; SetLength(FBA, 4); SetLength(FMA, 4); Fnxt5 := LogRepMult5(oneLogRep); Fmrg35 := LogRepMult3(oneLogRep); Fnxt3 := LogRepMult3(Fmrg35); Fnxt2 := LogRepMult2(oneLogRep); FBb := 0; FBe := 0; FMb := 0; FMe := 0 end; function THammings.GetEnumerator : THammings; begin Result := Self end; function THammings.MoveNext : Boolean; var blen, mlen, i, j : Integer; begin if FCurrent.lr < 0.0 then FCurrent.lr := 0.0 else begin blen := Length(FBA); if FBb >= blen shr 1 then begin i := 0; for j := FBb to FBe - 1 do begin FBA[i] := FBA[j]; Inc(i) end; FBe := FBe - FBb; FBb := 0 end; if FBe >= blen then SetLength(FBA, blen shl 1); if Fnxt2.lr < Fmrg35.lr then begin FCurrent := Fnxt2; FBA[FBe] := FCurrent; Fnxt2 := LogRepMult2(FBA[FBb]); Inc(FBb) end else begin mlen := Length(FMA); if FMb >= mlen shr 1 then begin i := 0; for j := FMb to FMe - 1 do begin FMA[i] := FMA[j]; Inc(i) end; FMe := FMe - FMb; FMb := 0 end; if FMe >= mlen then SetLength(FMA, mlen shl 1); if Fmrg35.lr < Fnxt5.lr then begin FCurrent := Fmrg35; FMA[FMe] := FCurrent; Fnxt3 := LogRepMult3(FMA[FMb]); Inc(FMb) end else begin FCurrent := Fnxt5; FMA[FMe] := FCurrent; Fnxt5 := LogRepMult5(Fnxt5) end; if Fnxt3.lr < Fnxt5.lr then Fmrg35 := Fnxt3 else Fmrg35 := Fnxt5; FBA[FBe] := FCurrent; Inc(FMe) end; Inc(FBe) end; Result := True end; var elpsd : QWord; count : Integer; h : TLogRep; begin write('The first 20 Hamming numbers are: '); count := 0; for h in THammings.Create do begin Inc(count); if count > 20 then break; write(' ', LogRep2QWord(h)); end; writeln('.'); count := 1; for h in THammings.Create do begin Inc(count); if count > 1691 then break; end; writeln('The 1691st Hamming number is ', LogRep2QWord(h), '.'); elpsd := GetTickCount64; count := 1; for h in THammings.Create do begin Inc(count); if count > 1000000 then break; end; elpsd := GetTickCount64 - elpsd; writeln('The millionth Hamming number is approximately ', 2.0h.lr, '.'); write('The millionth Hamming triplet is '); writeln('2^', h.x2, ' 3^', h.x3, ' 5^', h.x5, '.'); writeln('The millionth Hamming number is ', LogRep2String(h), '.'); writeln('This last took ', elpsd, ' milliseconds.') end. ``` Output: The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36. The 1691st Hamming number is 2125764000. The millionth Hamming number is approximately 5.19312780448555124533E+0083. The millionth Hamming triplet is 2^55 3^47 5^64. The millionth Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000. This last took 13 milliseconds. The above was as run on a modern Intel CPU at 4 GHz. Note that as the millionth Hamming number has 84 decimal digits and the largest standard 64-bit value that is easily expressed in standard Pascal is only about 19 decimal digits, enough of an "infinite" precision integer library has been implemented to be able to convert the produced "triplet" into the resulting millionth value; this does not need to be of maximum efficiency as it is used only for the final answer. a fast alternative The first Pascal code is much slower. The following is easy to use for smooth-3 .. smooth-37. Big(O) is nearly linear to sub-linear . 1E7-> 0.028s => x10 =>1e8 ->0.273s => x1000 => 100'200'300'400 ~ 1e11 35.907s // estimated 270 s! This depends extreme on sorting speed. is head to head, but still faster for very big numbers >1e8 (10^8: 4 MB 0.27 sec) 100'200'300'400 calculates in 8.33 s For fpc 3.1.1_64 linux on 3.5 Ghz i4330, depends on 64-Bit by a factor of 4 slower on 32-Bit / For 12 primes "smooth-37" 1e8 it takes 02.807 s / I collect only the factors between p^n and p^(n+1), in a recursive way in different lists 5 is a list consisting only 5^? = 1 factor 3 is a sorted list 3^?..3^?+1 and inserted values of 5 2 is a sorted list 2^?..2^?+1 and inserted values of list 3 Changing sizeOf(tElem) to 32 {maxPrimFakCnt = 3+8} instead of 16 ( x2) {maxPrimFakCnt = 3} results in increasing the runtime by x4 ( 2^2 ) ``` program hammNumb; {$IFDEF FPC} {$MODE DELPHI} {$OPTIMIZATION ON,ALL} {$ALIGN 16} {$ELSE} {$APPTYPE CONSOLE} {$ENDIF} uses sysutils; const maxPrimFakCnt = 3;//3 or 3+8 if tNumber= double, else -1 for extended to keep data aligned minElemCnt = 10; type tPrimList = array of NativeUint; tnumber = double; tpNumber= ^tnumber; tElem = record n : tnumber;//ln(prime^Pots... Pots: array[0..maxPrimFakCnt] of word; end; tpElem = ^tElem; tElems = array of tElem; tElemArr = array [0..0] of tElem; tpElemArr = ^tElemArr; tpFaktorRec = ^tFaktorRec; tFaktorRec = record frElems : tElems; frInsElems: tElems; frAktIdx : NativeUint; frMaxIdx : NativeUint; frPotNo : NativeUint; frActPot : NativeUint; frNextFr : tpFaktorRec; frActNumb: tElem; frLnPrime: tnumber; end; tArrFR = array of tFaktorRec; var Pl : tPrimList; ActIndex : NativeUint; ArrInsert : tElems; procedure PlInit(n: integer); const cPl : array[0..11] of byte=(2,3,5,7,11,13,17,19,23,29,31,37); var i : integer; Begin IF n>High(cPl)+1 then n := High(cPl) else IF n < 0 then n := 1; setlength(Pl,n); dec(n); For i := 0 to n do Pl[i] := cPl[i]; end; procedure AusgabeElem(pElem: tElem); var i : integer; Begin with pElem do Begin IF n < 23 then begin write(round(exp(n)),' '); if n < ln(100)then EXIT; end else write('ln ',n:13:7); For i := 0 to maxPrimFakCnt-1 do write(' ',PL[i]:2,'^',Pots[i]); end; writeln end; //LoE == List of Elements function LoEGetNextNumber(pFR :tpFaktorRec):tElem;forward; procedure LoECreate(const Pl: tPrimList;var FA:tArrFR); var i : integer; Begin setlength(ArrInsert,100); setlength(FA,Length(PL)); For i := 0 to High(PL) do with FA[i] do Begin //automatic zeroing IF i < High(PL) then Begin setlength(frElems,minElemCnt); setlength(frInsElems,minElemCnt); frNextFr := @FA[i+1] end else Begin setlength(frElems,2); setlength(frInsElems,0); frNextFr := NIL; end; frPotNo := i; frLnPrime:= ln(PL[i]); frMaxIdx := 0; frAktIdx := 0; frActPot := 1; With frElems do Begin n := frLnPrime; Pots[i]:= 1; end; frActNumb := frElems; end; end; procedure LoEFree(var FA:tArrFR); var i : integer; Begin For i := High(FA) downto Low(FA) do setlength(FA[i].frElems,0); setLength(FA,0); end; function LoEGetActElem(pFr:tpFaktorRec):tElem; Begin with pFr^ do result := frElems[frAktIdx]; end; function LoEGetActLstNumber(pFr:tpFaktorRec):tpNumber; Begin with pFr^ do result := @frElems[frAktIdx].n; end; procedure LoEIncInsArr(var a:tElems); Begin setlength(a,Length(a)8 div 5); end; procedure LoEIncreaseElems(pFr:tpFaktorRec;minCnt:NativeUint); var newLen: NativeUint; Begin with pFR^ do begin newLen := Length(frElems); minCnt := minCnt+frMaxIdx; repeat newLen := newLen8 div 5 +1; until newLen > minCnt; setlength(frElems,newLen); end; end; procedure LoEInsertNext(pFr:tpFaktorRec;Limit:tnumber); var pNum : tpNumber; pElems : tpElemArr; cnt,i,u : NativeInt; begin with pFr^ do Begin //collect numbers of heigher primes cnt := 0; pNum := LoEGetActLstNumber(frNextFr); while Limit > pNum^ do Begin frInsElems[cnt] := LoEGetNextNumber(frNextFr); // writeln( 'Ins ',frInsElems[cnt].n:10:8,' < ',pNum^:10:8); inc(cnt); IF cnt > High(frInsElems) then LoEIncInsArr(frInsElems); pNum := LoEGetActLstNumber(frNextFr); end; if cnt = 0 then EXIT; i := frMaxIdx; u := frMaxIdx+cnt+1; IF u > High(frElems) then LoEIncreaseElems(pFr,cnt); IF frPotNo = 0 then inc(ActIndex,u); //Merge pElems := @frElems; dec(cnt); dec(u); frMaxIdx:= u; repeat // writeln(i:10,cnt:10,u:10); writeln( pElems^[i].n:10:8,' < ',frInsElems[cnt].n:10:8); IF pElems^[i].n < frInsElems[cnt].n then Begin pElems^[u] := frInsElems[cnt]; dec(cnt); end else Begin pElems^[u] := pElems^[i]; dec(i); end; dec(u); until (i<0) or (cnt<0); IF i < 0 then For u := cnt downto 0 do pElems^[u] := frInsElems[u]; end; end; procedure LoEAppendNext(pFr:tpFaktorRec;Limit:tnumber); var pNum : tpNumber; pElems : tpElemArr; i : NativeInt; begin with pFr^ do Begin i := frMaxIdx+1; pElems := @frElems; pNum := LoEGetActLstNumber(frNextFr); while Limit > pNum^ do Begin IF i > High(frElems) then Begin LoEIncreaseElems(pFr,10); pElems := @frElems; end; pElems^[i] := LoEGetNextNumber(frNextFr); inc(i); pNum := LoEGetActLstNumber(frNextFr); end; inc(ActIndex,i); frMaxIdx:= i-1; end; end; procedure LoENextList(pFr:tpFaktorRec); var pElems : tpElemArr; j : NativeUint; begin with pFR^ do Begin //increase Elements by factor pElems := @frElems; for j := frMaxIdx Downto 0 do with pElems^[j] do Begin n := n+frLnPrime; inc(Pots[frPotNo]); end; //x^j -> x^(j+1) j := frActPot+1; with frActNumb do begin n:= jfrLnPrime; Pots[frPotNo]:= j; end; frActPot := j; //if something follows IF frNextFr <> NIL then LoEInsertNext(pFR,frActNumb.n); frAktIdx := 0; end; end; function LoEGetNextNumber(pFR :tpFaktorRec):tElem; Begin with pFr^ do Begin result := frElems[frAktIdx]; inc(frAktIdx); IF frMaxIdx < frAktIdx then LoENextList(pFr); end; end; procedure LoEGetNumber(pFR :tpFaktorRec;no:NativeUint); Begin dec(no); while ActIndex < no do LoENextList(pFR); with pFr^ do frAktIdx := (no-(ActIndex-frMaxIdx)-1); end; var T1,T0: tDateTime; FA: tArrFR; i : integer; Begin PlInit(3);// 3 -> 2,3,5 LoECreate(Pl,FA); i := 1; i := 1; T0 := time; write('First 20 :'); For i := 1 to 20 do AusgabeElem(LoEGetNextNumber(@FA)); writeln; write(' 1691.th :'); LoEGetNumber(@FA,1691); AusgabeElem(LoEGetNextNumber(@FA)); LoEGetNumber(@FA,10001000); AusgabeElem(LoEGetNextNumber(@FA)); T1 := time; Writeln('Timed 1,000,000 in ',FormatDateTime('HH:NN:SS.ZZZ',T1-T0)); LoEGetNumber(@FA,100010001000); AusgabeElem(LoEGetNextNumber(@FA)); Writeln('Timed 1,000,000,000 in ',FormatDateTime('HH:NN:SS.ZZZ',time-T1)); Writeln('Actual Index ',ActIndex ); AusgabeElem(LoEGetNextNumber(@FA)); For i := 0 to High(FA) do writeln(pL[i]:2, ' elemcount ',FA[i].frMaxIdx+1:7,' out of',length(FA[i].frElems):7); LoEFree(FA); End. ``` @ TIO.RUN: ``` First 20 :2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 40 1691.th :2125764000 2^5 3^12 5^3 ln 192.7618989 2^55 3^47 5^64 Timed 1,000,000 in 00:00:00.003 ln 1942.9063722 2^1334 3^335 5^404 Timed 1,000,000,000 in 00:00:04.456 Actual Index 1001046828 ln 1942.9063727 2^761 3^572 5^489 2 elemcount 1069703 out of1426063 3 elemcount 1209 out of 1236 5 elemcount 1 out of 2 ... change zu use 12 primes [2..37] ( 32 bit ) -> 2.2x runtime over using 3 primes Begin PlInit(12) ln 40.8834947 2^14 3^0 5^6 7^4 11^2 13^1 17^0 19^1 23^0 29^0 31^1 37^0 Actual Index 100269652 Timed 100000000 in 00:00:02.807 2 elemcount 14322779 out of 14953361 3 elemcount 3387290 out of 3650722 5 elemcount 891236 out of 891289 7 elemcount 289599 out of 348159 11 elemcount 92240 out of 135999 13 elemcount 28272 out of 33202 17 elemcount 9394 out of 12969 19 elemcount 2639 out of 3165 23 elemcount 676 out of 772 29 elemcount 119 out of 188 31 elemcount 15 out of 17 37 elemcount 1 out of 2 @home: //tested til 1E12 with 4.4 Ghz 5600G Free Pascal Compiler version 3.2.2-[2022/11/22] for x86_64 Timed 1,000,000,000,000 in 57:53.015 ln 19444.3672890 2^1126 3^16930 5^40 -> see Haskell-Version [ Actual Index 1000075683108 ln 19444.3672890 2^8295 3^2426 5^6853 2 elemcount 106935365 out of 156797362 3 elemcount 12083 out of 12969 5 elemcount 1 out of 2 user 57m51.015s << sys 0m1.616s ``` PascalABC.NET ``` function Hamming(n: integer): BigInteger; begin var (two,three,five) := (2bi, 3bi, 5bi); var h := new BigInteger[n]; h := 1; var (x2,x3,x5) := (2bi, 3bi, 5bi); var (i,j,k) := (0, 0, 0); for var ind := 1 to n-1 do begin h[ind] := Min(x2, x3, x5); if h[ind] = x2 then begin i += 1; x2 := two h[i]; end; if h[ind] = x3 then begin j += 1; x3 := three h[j]; end; if h[ind] = x5 then begin k += 1; x5 := five h[k]; end; end; Result := h[n-1]; end; begin (1..20).Select(x -> Hamming(x)).Println; Hamming(1691).Println; Hamming(1000000).Println; end. ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Perl ``` use strict; use warnings; use List::Util 'min'; If you want the large output, uncomment either the one line marked (1) or the two lines marked (2) use Math::GMP qw/:constant/; # (1) uncomment this to use Math::GMP use Math::GMPz; # (2) uncomment this plus later line for Math::GMPz sub ham_gen { my @s = (, , ); my @m = (2, 3, 5); #@m = map { Math::GMPz->new($_) } @m; # (2) uncomment for Math::GMPz return sub { my $n = min($s, $s, $s); for (0 .. 2) { shift @{$s[$_]} if $s[$_] == $n; push @{$s[$_]}, $n $m[$_] } return $n } } my $h = ham_gen; my $i = 0; ++$i, print $h->(), " " until $i > 20; print "...\n"; ++$i, $h->() until $i == 1690; print ++$i, "-th: ", $h->(), "\n"; You will need to pick one of the bigint choices ++$i, $h->() until $i == 999999; print ++$i, "-th: ", $h->(), "\n"; ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 40 ... 1691-th: 2125764000 1000000-th: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 The core module bigint (Math::BigInt) is very slow, even with the GMP backend, and not supported here. Alternatives shown are Math::GMP and Math::GMPz (about 4x faster). Phix Translation of: AWK Library: Phix/mpfr standard and gmp versions ``` with javascript_semantics function hamming(integer N) sequence h = repeat(1,N) atom x2 = 2, x3 = 3, x5 = 5, hn integer i = 1, j = 1, k = 1 for n=2 to N do hn = min(x2,min(x3,x5)) h[n] = hn if hn==x2 then i += 1 x2 = 2h[i] end if if hn==x3 then j += 1 x3 = 3h[j] end if if hn==x5 then k += 1 x5 = 5h[k] end if end for return h[N] end function include builtins\mpfr.e function mpz_hamming(integer N) sequence h = mpz_inits(N,1) mpz x2 = mpz_init(2), x3 = mpz_init(3), x5 = mpz_init(5), hn = mpz_init() integer i = 1, j = 1, k = 1 for n=2 to N do mpz_set(hn,mpz_min({x2,x3,x5})) mpz_set(h[n],hn) if mpz_cmp(hn,x2)=0 then i += 1 mpz_mul_si(x2,h[i],2) end if if mpz_cmp(hn,x3)=0 then j += 1 mpz_mul_si(x3,h[j],3) end if if mpz_cmp(hn,x5)=0 then k += 1 mpz_mul_si(x5,h[k],5) end if end for return h[N] end function sequence s = {} for i=1 to 20 do s = append(s,hamming(i)) end for ?s printf(1,"%d\n",hamming(1691)) printf(1,"%d (wrong!)\n",hamming(1000000)) --(the hn==x2 etc fail, so multiplies are all wrong) printf(1,"%s\n",{mpz_get_str(mpz_hamming(1691))}) printf(1,"%s\n",{mpz_get_str(mpz_hamming(1000000))}) ``` Output: ``` {1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36} 2125764000 246192725545902804828662268200 (wrong!) 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` A much faster logarithmic version This proved much easier to implement than scanning the other entries suggested [not copied, they all frighten me]. At some point, comparing logs will no doubt get it wrong, but I have no idea when that might happen. ``` with javascript_semantics -- numbers kept as {log,{pow2,pow3,pow5}}, -- value is ~= exp(log), == (2^pow2)(3^pow3)(5^pow5) enum LOG, POWS enum POW2, POW3, POW5 function lnmin(sequence a, sequence b) return iff(a[LOG]<b[LOG]?a:b) end function constant ln1 = log(1), ln2 = log(2), ln3 = log(3), ln5 = log(5) function hamming(integer N) sequence h = repeat(0,N) sequence x2 = {ln2,{1,0,0}}, x3 = {ln3,{0,1,0}}, x5 = {ln5,{0,0,1}} integer i = 1, j = 1, k = 1 h = {ln1,{0,0,0}} for n=2 to N do h[n] = deep_copy(lnmin(x2,lnmin(x3,x5))) sequence p = h[n][POWS] if p=x2[POWS] then i += 1 x2 = deep_copy(h[i]) x2[LOG] += ln2 x2[POWS][POW2] += 1 end if if p=x3[POWS] then j += 1 x3 = deep_copy(h[j]) x3[LOG] += ln3 x3[POWS][POW3] += 1 end if if p=x5[POWS] then k += 1 x5 = deep_copy(h[k]) x5[LOG] += ln5 x5[POWS][POW5] += 1 end if end for return h[N] end function function hint(sequence hm) -- (obviously not accurate above 53 bits on a 32-bit system, or 64 bits on a 64 bit system) sequence p = hm[POWS] return power(2,p[POW2])power(3,p[POW3])power(5,p[POW5]) end function sequence s = {} for i=1 to 20 do s = append(s,hint(hamming(i))) end for printf(1,"hamming[1..20]: %v\n",{s}) ?hint(hamming(1691)) ?hint(hamming(1000000)) printf(1," %d (approx)\n",hint(hamming(1000000))) include builtins\mpfr.e function mpz_hint(sequence hm) -- (as accurate as you like) integer {p2,p3,p5} = hm[POWS] mpz {tmp2,tmp3,tmp5} = mpz_inits(3) mpz_ui_pow_ui(tmp2,2,p2) mpz_ui_pow_ui(tmp3,3,p3) mpz_ui_pow_ui(tmp5,5,p5) mpz_mul(tmp3,tmp3,tmp5) mpz_mul(tmp2,tmp2,tmp3) return mpz_get_str(tmp2) end function ?mpz_hint(hamming(1000000)) ``` Output: ``` hamming[1..20]: {1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36} 2125764000.0 5.193127804e+83 519312780448389068266824288284848486280402222226888608420684482660084484246042460000 (approx) "519312780448388736089589843750000000000000000000000000000000000000000000000000000000" ``` Under pwa/p2js, no real idea or any fretting over why, we instead get: ```  519312780448388740000000000000000000000000000000000000000000000000000000000000000000 (approx) ``` Picat ``` go => println([hamming(I) : I in 1..20]), time(println(hamming_1691=hamming(1691))), time(println(hamming_1000000=hamming(1000000))), nl. hamming(1) = 1. hamming(2) = 2. hamming(3) = 3. hamming(N) = Hamming => A = new_array(N), [Next2, Next3, Next5] = [2,3,5], A := Next2, A := Next3, A := Next5, I = 0, J = 0, K = 0, M = 1, while (M < N) A[M] := min([Next2,Next3,Next5]), if A[M] == Next2 then I := I+1, Next2 := 2A[I] end, if A[M] == Next3 then J := J+1, Next3 := 3A[J] end, if A[M] == Next5 then K := K+1, Next5 := 5A[K] end, M := M + 1 end, Hamming = A[N-1]. ``` Output: ``` [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36] hamming_1691 = 2125764000 CPU time 0.0 seconds. hamming_1000000 = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 CPU time 2.721 seconds. ``` PicoLisp ``` (de hamming (N) (let (L (1) H) (do N (for (X L X (cadr X)) # Find smallest result (setq H (car X)) ) (idx 'L H NIL) # Remove it (for I (2 3 5) # Generate next results (idx 'L ( I H) T) ) ) H ) ) (println (make (for N 20 (link (hamming N))))) (println (hamming 1691)) # very fast (println (hamming 1000000)) # runtime about 13 minutes on i5-3570S ``` Output: (1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36) 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 PL/I ``` (subscriptrange): Hamming: procedure options (main); / 14 November 2013 with fixes 2021 / declare (H(2000), p2, p3, p5, twoTo31, Hm, tenP(11)) decimal(12)fixed; declare (i, j, k, m, d, w) fixed binary; / Quicksorts in-place the array of integers H, from lb to ub / quicksortH: procedure( lb, ub ) recursive; declare ( lb, ub )binary(15)fixed; declare ( left, right )binary(15)fixed; declare ( pivot, swap )decimal(12)fixed; declare sorting bit(1); if ub > lb then do / more than one element, so must sort / left = lb; right = ub; / choosing the middle element of the array as the pivot / pivot = H( left + ( ( right + 1 ) - left ) / 2 ); sorting = '1'b; do while( sorting ); do while( left <= ub & H( left ) < pivot ); left = left + 1; end; do while( right >= lb & H( right ) > pivot ); right = right - 1; end; sorting = ( left <= right ); if sorting then do; swap = H( left ); H( left ) = H( right ); H( right ) = swap; left = left + 1; right = right - 1; end; end; call quicksortH( lb, right ); call quicksortH( left, ub ); end; end quicksortH ; / find 2^31 - the limit for Hamming numbers we need to find / twoTo31 = 2; do i = 2 to 31; twoTo31 = twoTo31 2; end; / calculate powers of 10 so we can check the number of digits / / the numbers will have / tenP( 1 ) = 10; do i = 2 to 11; tenP( i ) = 10 tenP( i - 1 ); end; / find the numbers / m = 0; p5 = 1; do k = 0 to 13; p3 = 1; do j = 0 to 19; Hm = 0; p2 = 1; do i = 0 to 31 while( Hm < twoTo31 ); / count the number of digits p2 p3 p5 will have / d = 0; do w = 1 to 11 while( tenP(w) < p2 ); d = d + 1; end; do w = 1 to 11 while( tenP(w) < p3 ); d = d + 1; end; do w = 1 to 11 while( tenP(w) < p5 ); d = d + 1; end; if d < 11 then do; / the product will be small enough / Hm = p2 p3 p5; if Hm < twoTo31 then do; m = m + 1; H(m) = Hm; end; end; p2 = p2 2; end; p3 = p3 3; end; p5 = p5 5; end; / sort the numbers / call quicksortH( 1, m ); put skip list( 'The first 20 Hamming numbers:' ); do i = 1 to 20; put skip list (H(i)); end; put skip list( 'Hamming number 1691:' ); put skip list (H(1691)); end Hamming; ``` Results: ``` The first 20 Hamming numbers: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming number 1691: 2125764000 ``` Prolog Generator idiom ``` %% collect N elements produced by a generator in a row take( 0, Next, Z-Z, Next). take( N, Next, [A|B]-Z, NZ):- N>0, !, next(Next,A,Next1), N1 is N-1, take(N1,Next1,B-Z,NZ). %% a generator provides specific {next} implementation next( hamm( A2,B,C3,D,E5,F,[H|G] ), H, hamm(X,U,Y,V,Z,W,G) ):- H is min(A2, min(C3,E5)), ( A2 =:= H -> B=[N2|U],X is N22 ; (X,U)=(A2,B) ), ( C3 =:= H -> D=[N3|V],Y is N33 ; (Y,V)=(C3,D) ), ( E5 =:= H -> F=[N5|W],Z is N55 ; (Z,W)=(E5,F) ). mkHamm( hamm(1,X,1,X,1,X,X) ). % Hamming numbers generator init state main(N) :- mkHamm(G),take(20,G,A-[],), write(A), nl, take(1691-1,G,,G2),take(2,G2,B-[],), write(B), nl, take( N -1,G,,G3),take(2,G3,[C1|]-,_), write(C1), nl. ``` SWI Prolog 6.2.6 produces (in about 7 ideone seconds): ``` ?- time( main(1000000) ). [1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36] [2125764000,2147483648] 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 % 10,017,142 inferences ``` Laziness flavor Works with SWI-Prolog. Laziness is simulate with freeze/2 and ground/2. Took inspiration from this code : (click on hamming.pl: Solves Hamming Problem). ``` hamming(N) :- % to stop cleanly nb_setval(go, 1), % display list ( N = 20 -> watch_20(20, L); watch(1,N,L)), % go L=[1|L235], multlist(L,2,L2), multlist(L,3,L3), multlist(L,5,L5), merge_(L2,L3,L23), merge_(L5,L23,L235). %% multlist(L,N,LN) %% multiply each element of list L with N, resulting in list LN %% here only do multiplication for 1st element, then use multlist recursively multlist([X|L],N,XLN) :- % the trick to stop nb_getval(go, 1) -> % laziness flavor when(ground(X), ( XN is XN, XLN=[XN|LN], multlist(L,N,LN))); true. merge_([X|In1],[Y|In2],XYOut) :- % the trick to stop nb_getval(go, 1) -> % laziness flavor ( X < Y -> XYOut = [X|Out], In11 = In1, In12 = [Y|In2] ; X = Y -> XYOut = [X|Out], In11 = In1, In12 = In2 ; XYOut = [Y|Out], In11 = [X | In1], In12 = In2), freeze(In11,freeze(In12, merge_(In11,In12,Out))); true. %% display nth element watch(Max, Max, [X|_]) :- % laziness flavor when(ground(X), (format('~w~n', [X]), % the trick to stop nb_linkval(go, 0))). watch(N, Max, [_X|L]):- N1 is N + 1, watch(N1, Max, L). %% display nth element watch_20(1, [X|_]) :- % laziness flavor when(ground(X), (format('~w~n', [X]), % the trick to stop nb_linkval(go, 0))). watch_20(N, [X|L]):- % laziness flavor when(ground(X), (format('~w ', [X]), N1 is N - 1, watch_20(N1, L))). ``` Output: ``` ?- hamming(20). 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 true . ?- hamming(1691). 2125764000 true . ?- hamming(1000000). 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 true . ``` PureBasic ``` X2 = 2 X3 = 3 X5 = 5 Macro Ham(w) PrintN("H("+Str(w)+") = "+Str(Hamming(w))) EndMacro Procedure.i Hamming(l.i) Define.i i,j,k,n,m,x=#X2,y=#X3,z=#X5 Dim h.i(l) : h(0)=1 For n=1 To l-1 m=x If m>y : m=y : EndIf If m>z : m=z : EndIf h(n)=m If m=x : i+1 : x=#X2h(i) : EndIf If m=y : j+1 : y=#X3h(j) : EndIf If m=z : k+1 : z=#X5h(k) : EndIf Next ProcedureReturn h(l-1) EndProcedure OpenConsole("Hamming numbers") For h.i=1 To 20 Ham(h) Next Ham(1691) Input() ``` Output: ``` H(1) = 1 H(2) = 2 H(3) = 3 H(4) = 4 H(5) = 5 H(6) = 6 H(7) = 8 H(8) = 9 H(9) = 10 H(10) = 12 H(11) = 15 H(12) = 16 H(13) = 18 H(14) = 20 H(15) = 24 H(16) = 25 H(17) = 27 H(18) = 30 H(19) = 32 H(20) = 36 H(1691) = 2125764000 ``` Python Version based on example from Dr. Dobb's CodeTalk ``` from itertools import islice def hamming2(): '''\ This version is based on a snippet from: /index.php?option=com_content&task=view&id=913&Itemid=85 Hamming problem Written by Will Ness December 07, 2008 When expressed in some imaginary pseudo-C with automatic unlimited storage allocation and BIGNUM arithmetics, it can be expressed as: hamming = h where array h; n=0; h=1; i=0; j=0; k=0; x2=2h[ i ]; x3=3h[j]; x5=5h[k]; repeat: h[++n] = min(x2,x3,x5); if (x2==h[n]) { x2=2h[++i]; } if (x3==h[n]) { x3=3h[++j]; } if (x5==h[n]) { x5=5h[++k]; } ''' h = 1 _h=[h] # memoized multipliers = (2, 3, 5) multindeces = [0 for i in multipliers] # index into _h for multipliers multvalues = [x _h[i] for x,i in zip(multipliers, multindeces)] yield h while True: h = min(multvalues) _h.append(h) for (n,(v,x,i)) in enumerate(zip(multvalues, multipliers, multindeces)): if v == h: i += 1 multindeces[n] = i multvalues[n] = x _h[i] # cap the memoization mini = min(multindeces) if mini >= 1000: del _h[:mini] multindeces = [i - mini for i in multindeces] # yield h ``` Output: ``` list(islice(hamming2(), 20)) [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] list(islice(hamming2(), 1690, 1691)) list(islice(hamming2(), 999999, 1000000)) ``` Another implementation of same approach This version uses a lot of memory, it doesn't try to limit memory usage. ``` import psyco def hamming(limit): h = limit x2, x3, x5 = 2, 3, 5 i = j = k = 0 for n in xrange(1, limit): h[n] = min(x2, x3, x5) if x2 == h[n]: i += 1 x2 = 2 h[i] if x3 == h[n]: j += 1 x3 = 3 h[j] if x5 == h[n]: k += 1 x5 = 5 h[k] return h[-1] psyco.bind(hamming) print [hamming(i) for i in xrange(1, 21)] print hamming(1691) print hamming(1000000) ``` Implementation based on priority queue This is inspired by the Picolisp implementation further down, but uses a priority queue instead of a search tree. Computes 3x more numbers than necessary, but discards them quickly so memory usage is not too bad. ``` from heapq import heappush, heappop from itertools import islice def h(): heap = while True: h = heappop(heap) while heap and h==heap: heappop(heap) for m in [2,3,5]: heappush(heap, mh) yield h print list(islice(h(), 20)) print list(islice(h(), 1690, 1691)) print list(islice(h(), 999999, 1000000)) # runtime 9.5 sec on i5-3570S ``` "Cyclical Iterators" The original author is Raymond Hettinger and the code was first published here under the MIT license. Uses iterators dubbed "cyclical" in a sense that they are referring back (explicitly, with p2, p3, p5 iterators) to the previously produced values, same as the above versions (through indices into shared storage) and the classic Haskell version (implicitly timed by lazy evaluation). Memory is efficiently maintained automatically by the tee function for each of the three generator expressions, i.e. only that much is maintained as needed to produce the next value (although, for Python versions older than 3.6 it looks like the storage is not shared so three copies are maintained implicitly there -- whereas for 3.6 and up the storage is shared between the returned iterators, so only a single underlying FIFO queue is maintained, according to the documentation). ``` from itertools import tee, chain, groupby, islice from heapq import merge def raymonds_hamming(): # Generate "5-smooth" numbers, also called "Hamming numbers" # or "Regular numbers". See: # Finds solutions to 2i 3j 5k for some integers i, j, and k. def deferred_output(): for i in output: yield i result, p2, p3, p5 = tee(deferred_output(), 4) m2 = (2x for x in p2) # multiples of 2 m3 = (3x for x in p3) # multiples of 3 m5 = (5x for x in p5) # multiples of 5 merged = merge(m2, m3, m5) combined = chain(, merged) # prepend a starting point output = (k for k,g in groupby(combined)) # eliminate duplicates return result print list(islice(raymonds_hamming(), 20)) print islice(raymonds_hamming(), 1689, 1690).next() print islice(raymonds_hamming(), 999999, 1000000).next() ``` Results are the same as before. Non-sharing recursive generator Another formulation along the same lines, but greatly simplified, found here. Lacks data sharing, i.e. calls self recursively thus creating a separate copy of the data stream fed to the tee() call, again and again, instead of using its own output. This gravely impacts the efficiency. Not to be used. ``` from heapq import merge from itertools import tee def hamming_numbers(): last = 1 yield last a,b,c = tee(hamming_numbers(), 3) for n in merge((2i for i in a), (3i for i in b), (5i for i in c)): if n != last: yield n last = n ``` Cyclic generator method #2. Cyclic generator method #2. Considerably faster due to early elimination (before merge) of duplicates. Currently the faster Python version. Direct copy of Haskell code. ``` from itertools import islice, chain, tee def merge(r, s): # This is faster than heapq.merge. rr = r.next() ss = s.next() while True: if rr < ss: yield rr rr = r.next() else: yield ss ss = s.next() def p(n): def gen(): x = n while True: yield x x = n return gen() def pp(n, s): def gen(): for x in (merge(s, chain([n], (n y for y in fb)))): yield x r, fb = tee(gen()) return r def hamming(a, b = None): if not b: b = a + 1 seq = (chain(, pp(5, pp(3, p(2))))) return list(islice(seq, a - 1, b - 1)) print hamming(1, 21) print hamming(1691) print hamming(1000000) ``` QBasic Works with: QBasic version 1.1 Works with: QuickBasic version 4.5 ``` FUNCTION min (a, b) IF a < b THEN min = a ELSE min = b END FUNCTION FUNCTION Hamming (limit) DIM h(limit) h(0) = 1 x2 = 2 x3 = 3 x5 = 5 i = 0 j = 0 k = 0 FOR n = 1 TO limit h(n) = min(x2, min(x3, x5)) IF x2 = h(n) THEN i = i + 1 x2 = 2 h(i) END IF IF x3 = h(n) THEN j = j + 1 x3 = 3 h(j) END IF IF x5 = h(n) THEN k = k + 1 x5 = 5 h(k) END IF NEXT n Hamming = h(limit - 1) END FUNCTION PRINT "The first 20 Hamming numbers are :" FOR i = 1 TO 20 PRINT Hamming(i); " "; NEXT i PRINT PRINT "H( 1691) = "; Hamming(1691) ``` Qi | | | This example is incomplete. Parts 2 & 3 of task missing. Please ensure that it meets all task requirements and remove this message. | Translation of: Clojure ``` (define smerge [X|Xs] [Y|Ys] -> [X | (freeze (smerge (thaw Xs) [Y|Ys]))] where (< X Y) [X|Xs] [Y|Ys] -> [Y | (freeze (smerge [X|Xs] (thaw Ys)))] where (> X Y) [X|Xs] [_|Ys] -> [X | (freeze (smerge (thaw Xs) (thaw Ys)))]) (define smerge3 Xs Ys Zs -> (smerge Xs (smerge Ys Zs))) (define smap F [S|Ss] -> [(F S)|(freeze (smap F (thaw Ss)))]) (set hamming [1 | (freeze (smerge3 (smap ( 2) (value hamming)) (smap ( 3) (value hamming)) (smap ( 5) (value hamming))))]) (define stake _ 0 -> [] [S|Ss] N -> [S|(stake (thaw Ss) (1- N))]) (stake (value hamming) 20) ``` Output: ``` [1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36] ``` Quackery Uses smoothwith from N-smooth numbers#Quackery. ' [ 2 3 5 ] smoothwith [ size 1000000 = ] dup 20 split drop echo cr dup 1690 peek echo cr -1 peek echo Output: [ 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 ] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 R Recursively find the Hamming numbers below . Shown are results for tasks 1 and 2. Arbitrary precision integers are not supported natively. hamming=function(hamms,limit) { tmp=hamms for(h in c(2,3,5)) { tmp=c(tmp,hhamms) } tmp=unique(tmp[tmp<=limit]) if(length(tmp)>length(hamms)) { hamms=hamming(tmp,limit) } hamms } h <- sort(hamming(1,limit=2^31-1)) print(h[1:20]) print(h[length(h)]) Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 ``` Alternate version The nextn R function provides the needed functionality: hamming <- function(n) { a <- numeric(n) a <- 1 for (i in 2:n) { a[i] <- nextn(a[i-1]+1) } a } Output hamming(20) 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Racket ``` lang racket (require racket/stream) (define first stream-first) (define rest stream-rest) (define (merge s1 s2) (define x1 (first s1)) (define x2 (first s2)) (cond [(= x1 x2) (merge s1 (rest s2))] [(< x1 x2) (stream-cons x1 (merge (rest s1) s2))] [else (stream-cons x2 (merge s1 (rest s2)))])) (define (mult k) (λ(x) ( x k))) (define hamming (stream-cons 1 (merge (stream-map (mult 2) hamming) (merge (stream-map (mult 3) hamming) (stream-map (mult 5) hamming))))) (for/list ([i 20] [x hamming]) x) (stream-ref hamming 1690) (stream-ref hamming 999999) ``` Output: '(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36) 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Translation of Haskell code avoiding duplicates The above version consumes quite a lot of memory as streams are retained since the head of the stream is a global defined binding "hamming". The following code implements (hamming) as a function and all heads of streams are locally defined so that they can be garbage collected as they are consumed; as well it is formulated so that no duplicate values are generated so as to simplify the calculation and minimize the number of values in the streams; to further the latter it also evaluates the least dense stream first. The following code is about three times faster than the above code: Translation of: Haskell ``` lang racket (require racket/stream) (define first stream-first) (define rest stream-rest) (define (hamming) (define (merge s1 s2) (let ([x1 (first s1)] [x2 (first s2)]) (if (< x1 x2) ; don't have to handle duplicate case (stream-cons x1 (merge (rest s1) s2)) (stream-cons x2 (merge s1 (rest s2)))))) (define (smult m s) ; faster than using map ( m) (define (smlt ss) (stream-cons ( m (first ss)) (smlt (rest ss)))) (smlt s)) (define (u n s) (if (stream-empty? s) ; checking here more efficient than in merge (letrec ([r (smult n (stream-cons 1 r)) ]) r) (letrec ([r (merge s (smult n (stream-cons 1 r)))]) r))) ;; (stream-cons 1 (u 2 (u 3 (u 5 empty-stream)))) (stream-cons 1 (foldr u empty-stream '(2 3 5)))) (for/list ([i 20] [x (hamming)]) x) (newline) (stream-ref (hamming) 1690) (newline) (stream-ref (hamming) 999999) (newline) ``` The output of the above code is the same as that of the earlier code. Raku (formerly Perl 6) Merge version Works with: rakudo version 2015-11-04 The limit scaling is not required, but it cuts down on a bunch of unnecessary calculation. ``` my $limit = 32; sub powers_of ($radix) { 1, |[] $radix xx } my @hammings = ( powers_of(2)[^ $limit ] X powers_of(3)[^($limit 2/3)] X powers_of(5)[^($limit 1/2)] ).sort; say @hammings[^20]; say @hammings; # zero indexed ``` Output: (1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36) 2125764000 Iterative version Works with: rakudo version 6.c This version uses a lazy list, storing a maximum of two extra value from the highest index requested ``` my \Hammings := gather { my %i = 2, 3, 5 Z=> (Hammings.iterator for ^3); my %n = 2, 3, 5 Z=> 1 xx 3; loop { take my $n := %n{2, 3, 5}.min; for 2, 3, 5 -> \k { %n{k} = %i{k}.pull-one k if %n{k} == $n; } } } say Hammings.[^20]; say Hammings.[1691 - 1]; say Hammings.[1000000 - 1]; ``` Output: (1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36) 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Raven Translation of: Liberty Basic ``` define hamming use $limit [ ] as $h 1 $h 0 set 2 as $x2 3 as $x3 5 as $x5 0 as $i 0 as $j 0 as $k 1 $limit 1 + 1 range each as $n $x3 $x5 min $x2 min $h $n set $h $n get $x2 = if $i 1 + as $i $h $i get 2 as $x2 $h $n get $x3 = if $j 1 + as $j $h $j get 3 as $x3 $h $n get $x5 = if $k 1 + as $k $h $k get 5 as $x5 $h $limit 1 - get 1 21 1 range each as $lim $lim hamming print " " print "\n" print "Hamming(1691) is: " print 1691 hamming print "\n" print Raven can't handle > 2^31 using integers "Hamming(1000000) is: " print 1000000 hamming print "\n" print ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming(1691) is: 2125764000 REXX Modules: How to use Modules: Source code This program separates calculation and presentation in different procedures, and is therefore more in line with other entries. The algorithm is still Dijkstra, without optimizations. Procedure Hammings is in module Sequences. ``` -- 16 Aug 2025 include Settings say 'HAMMING NUMBERS' say version say call Hammings 1000000 call ShowFirstN 20 call ShowNth 1691 call ShowNth 1000000 call Hammings 10000000 call ShowNth 10000000 say Time('e')/1 'seconds' exit ShowFirstN: procedure expose Hamm. arg xx xx = xx/1 say 'First' xx 'Hamming numbers are' do i = 1 to xx call Charout ,Right(Hamm.i,5) if i//10 = 0 then say end say return ShowNth: procedure expose Hamm. arg xx xx = xx/1 say xx'th Hamming number is' say Hamm.xx '('Length(Hamm.xx) 'digits)' say return include Math ``` Output: ``` HAMMING NUMBERS REXX-Regina_3.9.6(MT) 5.00 29 Apr 2024 First 20 Hamming numbers are 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 1691th Hamming number is 2125764000 (10 digits) 1000000th Hamming number is 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 (84 digits) 10000000th Hamming number is 16244105063830431823239215311759575035108538820596640863335672483325211601368209812790155410766601562500000000000000000000000000000000000000000000000000000000000000000000000000000000 (182 digits) 134.065 seconds ``` Ring ``` see "h(1) = 1" + nl for nr = 1 to 19 see "h(" + (nr+1) + ") = " + hamming(nr) + nl next see "h(1691) = " + hamming(1690) + nl see nl func hamming limit h = list(1690) h =1 x2 = 2 x3 = 3 x5 =5 i = 0 j = 0 k =0 for n =1 to limit h[n] = min(x2, min(x3, x5)) if x2 = h[n] i = i +1 x2 =2 h[i] ok if x3 = h[n] j = j +1 x3 =3 h[j] ok if x5 = h[n] k = k +1 x5 =5 h[k] ok next hamming = h[limit] return hamming ``` Output: ``` h(1) = 1 h(2) = 2 h(3) = 3 h(4) = 4 h(5) = 5 h(6) = 6 h(7) = 8 h(8) = 9 h(9) = 10 h(10) = 12 h(11) = 15 h(12) = 16 h(13) = 18 h(14) = 20 h(15) = 24 h(16) = 25 h(17) = 27 h(18) = 30 h(19) = 32 h(20) = 36 h(1691) = 2125764000 ``` RPL RPL does not provide any multi-precision capability, so only parts 1 and 2 of the task can be implemented. Using global variables In and Xn avoids stack acrobatics that would have made the code slower and unintelligible, despite the ugly 'var_name' STO syntax inherited from vintage HP calculators. ``` ≪ 1 ‘I2’ STO 1 ‘I3’ STO 1 ‘I5’ STO 2 ‘X2’ STO 3 ‘X3’ STO 5 ‘X5’ STO { 1 } 1 ROT 1 - FOR n X2 X3 MIN X5 MIN SWAP OVER + SWAP IF X2 OVER == THEN 1 ‘I2’ STO+ OVER I2 GET 2 ‘X2’ STO END IF X3 OVER == THEN 1 ‘I3’ STO+ OVER I3 GET 3 ‘X3’ STO END IF X5 == THEN 1 ‘I5’ STO+ DUP I5 GET 5 ‘X5’ STO END NEXT ≫ 'HAMM' STO ``` ``` 20 HAMM 1691 HAMM DUP SIZE GET ``` Output: ``` 2: { 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 } 1: 2125764000 ``` Ruby Translation of: Scala Works with: Ruby version 1.9.3 ``` hamming = Enumerator.new do |yielder| next_ham = 1 queues = , [3, []], [5, []] ] loop do yielder << next_ham # or: yielder.yield(next_ham) queues.each {|m,queue| queue << next_ham m} next_ham = queues.collect{|m,queue| queue.first}.min queues.each {|m,queue| queue.shift if queue.first==next_ham} end end ``` And the "main" part of the task ``` start = Time.now hamming.each.with_index(1) do |ham, idx| case idx when (1..20), 1691 puts "#{idx} => #{ham}" when 1_000_000 puts "#{idx} => #{ham}" break end end puts "elapsed: #{Time.now - start} seconds" ``` Output: ``` 1 => 1 2 => 2 3 => 3 4 => 4 5 => 5 6 => 6 7 => 8 8 => 9 9 => 10 10 => 12 11 => 15 12 => 16 13 => 18 14 => 20 15 => 24 16 => 25 17 => 27 18 => 30 19 => 32 20 => 36 1691 => 2125764000 1000000 => 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 elapsed: 6.522811 seconds ``` ``` System: I7-6700HQ, 3.5 GHz, Linux Kernel 5.6.17 Run as: $ ruby hammingnumbers.rb elapsed: 2.589248076 seconds # Ruby 2.7.1 elapsed: 2.067365 seconds # JRuby 9.2.11.1 elapsed: N/A - too long # Truffleruby 20.0.0 ``` Alternative version: Translation of: Crystal ``` def hamming(limit) h = Array.new(limit, 1) x2, x3, x5 = 2, 3, 5 i, j, k = 0, 0, 0 (1...limit).each do |n| # h[n] = [x2, [x3, x5].min].min # not as fast on all VMs h[n] = (x3 < x5 ? (x2 < x3 ? x2 : x3) : (x2 < x5 ? x2 : x5)) x2 = 2 h[i += 1] if x2 == h[n] x3 = 3 h[j += 1] if x3 == h[n] x5 = 5 h[k += 1] if x5 == h[n] end h[limit - 1] end start = Time.new print "Hamming Number (1..20): "; (1..20).each { |i| print "#{hamming(i)} " } puts puts "Hamming Number 1691: #{hamming 1691}" puts "Hamming Number 1,000,000: #{hamming 1_000_000}" puts "Elasped Time: #{Time.new - start} secs" ``` ``` System: I7-6700HQ, 3.5 GHz, Linux Kernel 5.6.17 Run as: $ ruby hammingnumbers.rb ``` Output: ``` Hamming Number (1..20): 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming Number 1691: 2125764000 Hamming Number 1,000,000: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Elasped Time: 1.566937062 secs # Ruby 2.7.1 Elasped Time: 1.3442580 secs # JRuby 9.2.11.1 Elasped Time: 1.627 secs # Truffleruby 20.1.0 ``` Run BASIC ``` dim h(1000000) for i =1 to 20 print hamming(i);" "; next i print print "Hamming List First(1691) =";chr$(9);hamming(1691) print "Hamming List Last(1000000) =";chr$(9);hamming(1000000) end function hamming(limit) h(0) =1 x2 = 2: x3 = 3: x5 =5 i = 0: j = 0: k =0 for n =1 to limit h(n) = min(x2, min(x3, x5)) if x2 = h(n) then i = i +1: x2 =2 h(i) if x3 = h(n) then j = j +1: x3 =3 h(j) if x5 = h(n) then k = k +1: x5 =5 h(k) next n hamming = h(limit -1) end function ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 Hamming List First(1691) = 2125764000 Hamming List Last(1000000) = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Rust Library: num Basic version Translation of: D Improved by minimizing the number of BigUint comparisons: ``` extern crate num; num::bigint::BigUint; use std::time::Instant; fn basic_hamming(n: usize) -> BigUint { let two = BigUint::from(2u8); let three = BigUint::from(3u8); let five = BigUint::from(5u8); let mut h = vec![BigUint::from(0u8); n]; h = BigUint::from(1u8); let mut x2 = BigUint::from(2u8); let mut x3 = BigUint::from(3u8); let mut x5 = BigUint::from(5u8); let mut i = 0usize; let mut j = 0usize; let mut k = 0usize; // BigUint comparisons are expensive, so do it only as necessary... fn min3(x: &BigUint, y: &BigUint, z: &BigUint) -> (usize, BigUint) { let (cs, r1) = if y == z { (0x6, y) } else if y < z { (2, y) } else { (4, z) }; if x == r1 { (cs | 1, x.clone()) } else if x < r1 { (1, x.clone()) } else { (cs, r1.clone()) } } let mut c = 1; while c < n { // satisfy borrow checker with extra blocks: { } let (cs, e1) = { min3(&x2, &x3, &x5) }; h[c] = e1; // vector now owns the generated value if (cs & 1) != 0 { i += 1; x2 = &two &h[i] } if (cs & 2) != 0 { j += 1; x3 = &three &h[j] } if (cs & 4) != 0 { k += 1; x5 = &five &h[k] } c += 1; } match h.pop() { Some(v) => v, _ => panic!("basic_hamming: arg is zero; no elements") } } fn main() { print!("["); for (i, h) in (1..21).map(basic_hamming).enumerate() { if i != 0 { print!(",") } print!(" {}", h) } println!(" ]"); println!("{}", basic_hamming(1691)); let strt = Instant::now(); let rslt = basic_hamming(1000000); let elpsd = strt.elapsed(); let secs = elpsd.as_secs(); let millis = (elpsd.subsec_nanos() / 1000000)as u64; let dur = secs 1000 + millis; let rs = rslt.to_str_radix(10); let mut s = rs.as_str(); println!("{} digits:", s.len()); while s.len() > 100 { let (f, r) = s.split_at(100); s = r; println!("{}", f); } println!("{}", s); println!("This last took {} milliseconds", dur); } ``` Output: [ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ] 2125764000 84 digits: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 677 milliseconds. Eliminating duplicate calculations Much of the time above is wasted doing big integer multiplications that are duplicated elsewhere as in 2 times 3 and 3 times 2, etc. The following code eliminates such duplicate multiplications and reduces the number of comparisons, as follows: ``` fn nodups_hamming(n: usize) -> BigUint { let two = BigUint::from(2u8); let three = BigUint::from(3u8); let five = BigUint::from(5u8); let mut m = vec![BigUint::from(0u8); 1]; m = BigUint::from(1u8); let mut h = vec![BigUint::from(0u8); n]; h = BigUint::from(1u8); if n > 1 { m.push(BigUint::from(3u8)); // for initial x53 advance h = BigUint::from(2u8); // for initial x532 advance } let mut x5 = BigUint::from(5u8); let mut x53 = BigUint::from(9u8); // 3 times 3 because already merged one step let mut mrg = BigUint::from(3u8); let mut x532 = BigUint::from(2u8); let mut i = 0usize; let mut j = 1usize; let mut c = 1usize; while c < n { // satisfy borrow checker with extra blocks: { } if &x532 < &mrg { h[c] = x532; i += 1; x532 = &two &h[i]; } else { h[c] = mrg; if &x53 < &x5 { mrg = x53; j += 1; x53 = &three &m[j]; } else { mrg = x5.clone(); x5 = &five &x5; }; m.push(mrg.clone()); }; c += 1; } match h.pop() { Some(v) => v, _ => panic!("nodups_hamming: arg is zero; no elements") } } ``` Substitute the calls to the above code for the calls to "basic_hamming" (three places) in the "main" function above. The output is the same except that the time expended is less (249 milliseconds), for over two and a half times faster. Much faster logarithmic version with low memory use The above versions spend much of their time doing BigUint calculations. The below version eliminates much of that time by using integer powers of 2, 3, and 5 representations and all normal integer calculations except for the final conversion to a BitUint for the final result for about a 30 times speed-up. Another problem is that the above versions use so much memory that they can't compute even the billionth hamming number without running out of memory on a 16 Gigabyte machine. This version greatly reduces the memory use to about O(n^(2/3)) by eliminating no longer required back values in batches so that with about 9 Gigabytes it will calculate the hamming numbers to 1.2e13 (it's limit due to the ranges of the exponents) in a day or so. The code is as follows: ``` fn log_nodups_hamming(n: u64) -> BigUint { if n <= 0 { panic!("nodups_hamming: arg is zero; no elements") } if n < 2 { return BigUint::from(1u8) } // trivial case for n == 1 if n > 1.2e13 as u64 { panic!("log_nodups_hamming: argument too large to guarantee results!") } // constants as expanded integers to minimize round-off errors, and // reduce execution time using integer operations not float... const LAA2: u64 = 35184372088832; // 2.0f64.powi(45)).round() as u64; const LBA2: u64 = 55765910372219; // 3.0f64.log2() 2.0f64.powi(45)).round() as u64; const LCA2: u64 = 81695582054030; // 5.0f64.log2() 2.0f64.powi(45)).round() as u64; #[derive(Clone, Copy)] struct Logelm { // log representation of an element with only allowable powers exp2: u16, exp3: u16, exp5: u16, logr: u64 // log representation used for comparison only - not exact } impl Logelm { fn lte(&self, othr: &Logelm) -> bool { if self.logr <= othr.logr { true } else { false } } fn mul2(&self) -> Logelm { Logelm { exp2: self.exp2 + 1, logr: self.logr + LAA2, .. self } } fn mul3(&self) -> Logelm { Logelm { exp3: self.exp3 + 1, logr: self.logr + LBA2, .. self } } fn mul5(&self) -> Logelm { Logelm { exp5: self.exp5 + 1, logr: self.logr + LCA2, .. self } } } let one = Logelm { exp2: 0, exp3: 0, exp5: 0, logr: 0 }; let mut x532 = one.mul2(); let mut mrg = one.mul3(); let mut x53 = one.mul3().mul3(); // advance as mrg has the former value... let mut x5 = one.mul5(); let mut h = Vec::with_capacity(65536); // vec!(one.clone(); 0); let mut m = Vec::<Logelm>::with_capacity(65536); // vec!(one.clone(); 0); let mut i = 0usize; let mut j = 0usize; for _ in 1 .. n { let cph = h.capacity(); if i > cph / 2 { // drain extra unneeded values... h.drain(0 .. i); i = 0; } if x532.lte(&mrg) { h.push(x532); x532 = h[i].mul2(); i += 1; } else { h.push(mrg); if x53.lte(&x5) { mrg = x53; x53 = m[j].mul3(); j += 1; } else { mrg = x5; x5 = x5.mul5(); } let cpm = m.capacity(); if j > cpm / 2 { // drain extra unneeded values... m.drain(0 .. j); j = 0; } m.push(mrg); } } let o = &h[&h.len() - 1]; let two = BigUint::from(2u8); let three = BigUint::from(3u8); let five = BigUint::from(5u8); let mut ob = BigUint::from(1u8); // convert to BigUint at the end for _ in 0 .. o.exp2 { ob = ob &two } for _ in 0 .. o.exp3 { ob = ob &three } for _ in 0 .. o.exp5 { ob = ob &five } ob } ``` Again, this function can be used with the same "main" as above and the outputs are the same except that the execution time is only 7 milliseconds. It calculates the hamming number to a billion and just over a second and to one hundred billion in just over 100 seconds - O(n) time complexity. As well as eliminating duplicate calculations and calculating using exponents rather than BitUint operations, it also reduces the time required as compared to other similar algorithms by scaling the logarithms of two, three, and five into 64-bit integers so no floating point operations are required. The scaling is such that round-off errors will not affect the order of results for well past the usable range. Memory used is greatly reduced to O(n^(2/3)) by draining the arrays of back values no longer required in batches (rather than one by one) so that less time is used. It also saves time by not requiring as many allocations and de-allocations as the draining is done in place, thus making the current capacity of arrays longer usable. Sequence version As the task actually asks for a sequence of Hamming numbers, any of the above three solutions can easily be adapted to output an Iterator sequence; in this case the last fastest one is converted as follows: ``` extern crate num; // requires dependency on the num library use num::bigint::BigUint; use std::time::Instant; fn log_nodups_hamming_iter() -> Box> { // constants as expanded integers to minimize round-off errors, and // reduce execution time using integer operations not float... const LAA2: u64 = 35184372088832; // 2.0f64.powi(45)).round() as u64; const LBA2: u64 = 55765910372219; // 3.0f64.log2() 2.0f64.powi(45)).round() as u64; const LCA2: u64 = 81695582054030; // 5.0f64.log2() 2.0f64.powi(45)).round() as u64; #[derive(Clone, Copy)] struct Logelm { // log representation of an element with only allowable powers exp2: u16, exp3: u16, exp5: u16, logr: u64 // log representation used for comparison only - not exact } impl Logelm { fn lte(&self, othr: &Logelm) -> bool { if self.logr <= othr.logr { true } else { false } } fn mul2(&self) -> Logelm { Logelm { exp2: self.exp2 + 1, logr: self.logr + LAA2, .. self } } fn mul3(&self) -> Logelm { Logelm { exp3: self.exp3 + 1, logr: self.logr + LBA2, .. self } } fn mul5(&self) -> Logelm { Logelm { exp5: self.exp5 + 1, logr: self.logr + LCA2, .. self } } } let one = Logelm { exp2: 0, exp3: 0, exp5: 0, logr: 0 }; let mut x532 = one.mul2(); let mut mrg = one.mul3(); let mut x53 = one.mul3().mul3(); // advance as mrg has the former value... let mut x5 = one.mul5(); let mut h = Vec::with_capacity(65536); let mut m = Vec::<Logelm>::with_capacity(65536); let mut i = 0usize; let mut j = 0usize; Box::new((0u64 .. ).map(move |it| if it < 1 { (0, 0, 0) } else { let cph = h.capacity(); if i > cph / 2 { h.drain(0 .. i); i = 0; } if x532.lte(&mrg) { h.push(x532); x532 = h[i].mul2(); i += 1; } else { h.push(mrg); if x53.lte(&x5) { mrg = x53; x53 = m[j].mul3(); j += 1; } else { mrg = x5; x5 = x5.mul5(); } let cpm = m.capacity(); if j > cpm / 2 { m.drain(0 .. j); j = 0; } m.push(mrg); } let o = &h[&h.len() - 1]; (o.exp2, o.exp3, o.exp5) })) } fn convert_log2big(o: (u16, u16, u16)) -> BigUint { let two = BigUint::from(2u8); let three = BigUint::from(3u8); let five = BigUint::from(5u8); let (x2, x3, x5) = o; let mut ob = BigUint::from(1u8); // convert to BigUint at the end for _ in 0 .. x2 { ob = ob &two } for _ in 0 .. x3 { ob = ob &three } for _ in 0 .. x5 { ob = ob &five } ob } fn main() { print!("["); for (i, h) in log_nodups_hamming_iter().take(20).map(convert_log2big).enumerate() { if i != 0 { print!(",") } print!(" {}", h) } println!(" ]"); println!("{}", convert_log2big(log_nodups_hamming_iter().take(1691).last().unwrap())); let strt = Instant::now(); // let rslt = convert_log2big(log_nodups_hamming_iter().take(1000000000).last().unwrap()); let mut it = log_nodups_hamming_iter().into_iter(); for _ in 0 .. 100-1 { // a little faster; less one level of iteration let _ = it.next(); } let rslt = convert_log2big(it.next().unwrap()); let elpsd = strt.elapsed(); let secs = elpsd.as_secs(); let millis = (elpsd.subsec_nanos() / 1000000)as u64; let dur = secs 1000 + millis; println!("2^{} times 3^{} times 5^{}", rslt.0, rslt.1, rslt.2); let rs = convert_log2big(rslt).to_str_radix(10); let mut s = rs.as_str(); println!("{} digits:", s.len()); let lg3 = 3.0f64.log2(); let lg5 = 5.0f64.log2(); let lg = (rslt.0 as f64 + rslt.1 as f64 lg3 + rslt.2 as f64 lg5) 2.0f64.log10(); println!("Approximately {}E+{}", 10.0f64.powf(lg.fract()), lg.trunc()); if s.len() <= 10000 { while s.len() > 100 { let (f, r) = s.split_at(100); s = r; println!("{}", f); } println!("{}", s); } println!("This last took {} milliseconds.", dur); } ``` Output: [ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ] 2125764000 2^55 times 3^47 times 5^64 84 digits: Approximately 5.193127804483804E+83 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 17 milliseconds. The above final output is the same as the last one, but the function is called differently; also note that it is somewhat slower than the last version due to the extra function calls required to enumerate over an Iterator. It can enumerate the Hamming numbers up to a billion in about 20 seconds instead of the about ten seconds for the last version - about O(n) time complexity, and has the same O(n^(2/3)) space complexity as the last version. Functional sequence version avoiding duplicates It has been said by some that Rust is basically a functional language; however that isn't quite true in several respects, at least as per the following: It does not guarantee tail call optimization for functions, thus sometimes requiring imperative forms of code to produce that effect. It does not have currying or partial application of function arguments without using kludges of nested function/closure calls. move closures cannot use recursive shared values without using interior mutability inside a reference counted value (required here) Closures are not recursive without using a trick involving shared state reference counted values (demonstrated here). It currently does not have a standard library implementation of a lazily computed non-static value (required to implement a Lazy List, and It accordingly is not as easy as in most other languages to implement Co-Inductive Streams or (also memoized) Lazy Lists (a form of Lazy List as is required here). Many of these come about due to the Rust memory model where pieces of programs "own" data and its disposal but can assign references to other pieces of code (with limits if mutability as required), instead of the Garbage Collected model used by most other functional languages where variables are owned by the system and program code just uses references to that data other than for primitives which are owned by whoever uses them. The lack of the Lazy type and thus the Lazy List type is partly due to Rust's still being relatively unstable, as Lazy requires a "thunk" (a zero argument move closure acting on owned data - FnOnce in Rust), and in Rust these must be boxed (allocated on the heap) to be usable. However, the new versions of Rust allow boxing of the FnOnce closure so it can be used as a Thunk. Jeremy Reems had implemented Lazy and also LazyList, but they haven't been maintained for many years and don't compile. According, I have implemented enough of this functionality as required by this algorithm, as per the following code (tested on Rust version 1.53.0, run in --release mode): Translation of: Haskell Works with: Rust 1.53.0 ``` extern crate num; use num::bigint::BigUint; use std::rc::Rc; use std::cell::{UnsafeCell, RefCell}; use std::mem; use std::time::Instant; // implementation of Thunk closure here... pub struct Thunk<'a, R>(Box R + 'a>); impl<'a, R: 'a> Thunk<'a, R> { #[inline(always)] fn new R>(func: F) -> Thunk<'a, R> { Thunk(Box::new(func)) } #[inline(always)] fn invoke(self) -> R { self.0() } } // actual Lazy implementation starts here... use self::LazyState::; pub struct Lazy<'a, T: 'a>(UnsafeCell<LazyState<'a, T>>); enum LazyState<'a, T: 'a> { Unevaluated(Thunk<'a, T>), EvaluationInProgress, Evaluated(T) } impl<'a, T: 'a> Lazy<'a, T>{ #[inline] pub fn new<'b, F>(thunk: F) -> Lazy<'b, T> where F: 'b + FnOnce() -> T { Lazy(UnsafeCell::new(Unevaluated(Thunk::new(thunk)))) } #[inline] pub fn evaluated(val: T) -> Lazy<'a, T> { Lazy(UnsafeCell::new(Evaluated(val))) } #[inline] fn force<'b>(&'b self) { // not thread-safe unsafe { match self.0.get() { Evaluated(_) => return, // nothing required; already Evaluated EvaluationInProgress => panic!("Lazy::force called recursively!!!"), _ => () // need to do following something else if Unevaluated... } // following eliminates recursive race; drops neither on replace: match mem::replace(&mut self.0.get(), EvaluationInProgress) { Unevaluated(thnk) => { // Thunk can't call force on same Lazy self.0.get() = Evaluated(thnk.invoke()); }, _ => unreachable!() // already took care of other cases above. } } } #[inline] pub fn value<'b>(&'b self) -> &'b T { self.force(); // evaluatate if not evealutated match unsafe { &self.0.get() } { &Evaluated(ref v) => v, // return value _ => { unreachable!() } // previous force guarantees Evaluated } } #[inline] // consumes the object to produce the value pub fn unwrap<'b>(self) -> T where T: 'b { self.force(); // evaluatate if not evealutated match { self.0.into_inner() } { Evaluated(v) => v, _ => unreachable!() // previous code guarantees Evaluated } } } // now for immutable persistent shareable (memoized) LazyList via Lazy above... type RcLazyListNode<'a, T> = Rc<Lazy<'a, LazyList<'a, T>>>; use self::LazyList::; [derive(Clone)] enum LazyList<'a, T: 'a + Clone> { /// The Empty List Empty, /// A list with one member and possibly another list. Cons(T, RcLazyListNode<'a, T>) } impl<'a, T: 'a + Clone> LazyList<'a, T> { #[inline] pub fn cons(v: T, cntf: F) -> LazyList<'a, T> where F: 'a + FnOnce() -> LazyList<'a, T> { Cons(v, Rc::new(Lazy::new(cntf))) } #[inline] pub fn head<'b>(&'b self) -> &'b T { if let Cons(ref hd, ) = self { return hd } panic!("LazyList::head called on an Empty LazyList!!!") } / // not used #[inline] pub fn tail<'b>(&'b self) -> &'b Lazy<'a, LazyList<'a, T>> { if let Cons(, ref rlln) = self { return &rlln } panic!("LazyList::tail called on an Empty LazyList!!!") } / #[inline] pub fn unwrap(self) -> (T, RcLazyListNode<'a, T>) { // consumes the object if let Cons(hd, rlln) = self { return (hd, rlln) } panic!("LazyList::unwrap called on an Empty LazyList!!!") } } impl<'a, T: 'a + Clone> Iterator for LazyList<'a, T> { type Item = T; #[inline] fn next(&mut self) -> Option { if let Empty = self { return None } let oldll = mem::replace(self, Empty); let (hd, rlln) = oldll.unwrap(); let mut newll = rlln.value().clone(); // self now contains tail, newll contains the Empty mem::swap(self, &mut newll); Some(hd) } } // implements worker wrapper recursion closures using shared RcMFn variable... type RcMFn<'a, T> = Rc T + 'a>>>; // #[derive(Clone)] // struct RcMFn<'a, T: 'a>(Rc T + 'a>>>); trait RcMFnMethods<'a, T> { fn create T + 'a>(v: F) -> RcMFn<'a, T>; fn invoke(&self, v: T) -> T; fn set T + 'a>(&self, v: F); } impl<'a, T: 'a> RcMFnMethods<'a, T> for RcMFn<'a, T> { // creates new value wrapper... fn create T + 'a>(v: F) -> RcMFn<'a, T> { Rc::new(UnsafeCell::new(Box::new(v))) } #[inline(always)] // needs to be faster to be worth it fn invoke(&self, v: T) -> T { unsafe { (((self).get()))(v) } } fn set T + 'a>(&self, v: F) { unsafe { self.get() = Box::new(v); } } } type RcMVar = Rc>; trait RcMVarMethods { fn create(v: T) -> Self; fn get(self: &Self) -> T; fn set(self: &Self, v: T); } impl RcMVarMethods for RcMVar { fn create(v: T) -> RcMVar { // creates new value wrapped in RcMVar Rc::new(RefCell::new(v)) } #[inline] fn get(&self) -> T { self.borrow().clone() } fn set(&self, v: T) { self.borrow_mut() = v; } } // finally what the task objective requires... fn hammings() -> Box>> { type LL<'a> = LazyList<'a, Rc>; fn merge<'a>(x: LL<'a>, y: LL<'a>) -> LL<'a> { let lte = { x.head() <= y.head() }; // private context for borrow if lte { let (hdx, tlx) = x.unwrap(); LL::cons(hdx, move || merge(tlx.value().clone(), y)) } else { let (hdy, tly) = y.unwrap(); LL::cons(hdy, move || merge(x, tly.value().clone())) } } fn smult<'a>(m: BigUint, s: LL<'a>) -> LL<'a> { // like map m but faster let smlt = RcMFn::create(move |ss: LL<'a>| ss); let csmlt = smlt.clone(); smlt.set(move |ss: LL<'a>| { let (hd, tl) = ss.unwrap(); let ccsmlt = csmlt.clone(); LL::cons(Rc::new(&m &hd), move || ccsmlt.invoke(tl.value().clone())) }); smlt.invoke(s) } fn u<'a>(s: LL<'a>, n: usize) -> LL<'a> { let nb = BigUint::from(n); let rslt = RcMVar::create(Empty); let crslt = rslt.clone(); // same interior data... let cll = LL::cons(Rc::new(BigUint::from(1u8)), move || crslt.get()); // gets future value // below sets future value for above closure... rslt.set(if let Empty = s { smult(nb, cll) } else { merge(s, smult(nb, cll)) }); rslt.get() } fn rll<'a>() -> LL<'a> { [5, 3, 2].iter() .fold(Empty, |ll, n| u(ll, n) ) } let hmng = LL::cons(Rc::new(BigUint::from(1u8)), move || rll()); Box::new(hmng.into_iter()) } // and the required test outputs... fn main() { print!("["); for (i, h) in hammings().take(20).enumerate() { if i != 0 { print!(",") } print!(" {}", h) } println!(" ]"); println!("{}", hammings().take(1691).last().unwrap()); let strt = Instant::now(); let rslt = hammings().take(1000000).last().unwrap(); let elpsd = strt.elapsed(); let secs = elpsd.as_secs(); let millis = (elpsd.subsec_nanos() / 1000000)as u64; let dur = secs 1000 + millis; println!("{}", rslt); println!("This last took {} milliseconds.", dur); } ``` As can be seen, there is little code necessary for the "hammings" and "main" functions if the rest were available in libraries, as they really should be. Output: [ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 172 milliseconds. In order to run this fast. the BigUint LazyList values are wrapped in a reference counted heap wrapper to make it more efficient for cloning operations as necessary to extract interior values from the nested RcLazyListNode structure. This is reasonably fast, with it a little slower than some languages, but a fairly high percentage of the time is spent on LazyList processing. This is likely due to the many small heap allocations and de-allocations required as well as the time required to process all of the reference counting. At that, on the same machine (Intel Sky Lake i5-6500 @ 3.6 Gigahertz - turbo when single-threaded as here), it is still about eight times faster than F# running the same functional algorithm, although much more "wordy" as also much more "wordy" than the Haskell code from which it was translated. However, it is just a little slower than Java JVM based languages (Scala, Kotlin, Clojure, etc.) and about twice slower than Haskell, likely due to those languages having very efficient memory management using memory pools for frequent small-byte-sizes per allocation/collection as for such functional algorithms, and as well not requiring reference counting due to garbage collection (although sometimes this is about a wash, as garbage collection adds its own overheads). So Rust can be used to implement purely functional algorithms, but it isn't the best at it especially as to conciseness of code. The other (and likely biggest) wart with implementing such functional algorithms in Rust as here is that when there are cyclic references as here, then the reference counting memory management can't handle automatic reclaiming of memory so as to produce a memory leak, which the above code has; As there is no easy way (or perhaps no way) to demote/downgrade those references to being "weak" references for this algorithm, one likely wouldn't be able to use the above method in "production" code and would have to revert to a more imperative algorithm. The memory leaks don't matter for the above code, which runs and exits taking the leaks away on program termination, it would be a problem if using in a library that would be called from a running applications many many times. Functional sequence version avoiding duplicates, increasing speed using logarithms Although we can't eliminate the memory leak of the ahove code, we can increase the speed by eliminating the many BigUint calculations and also reduce the memory used (and thus leaked) by using a LogRep structure instead of the variable length container where the contained BigUint gets constantly bigger with increasing range as per the following code: Works with: Rust 1.53.0 ``` extern crate num; use num::bigint::BigUint; use core::cmp::Ordering; use std::rc::Rc; use std::cell::{UnsafeCell, RefCell}; use std::mem; use std::time::Instant; // implementation of Thunk closure here... pub struct Thunk<'a, R>(Box R + 'a>); impl<'a, R: 'a> Thunk<'a, R> { #[inline(always)] fn new R>(func: F) -> Thunk<'a, R> { Thunk(Box::new(func)) } #[inline(always)] fn invoke(self) -> R { self.0() } } // actual Lazy implementation starts here... use self::LazyState::; pub struct Lazy<'a, T: 'a>(UnsafeCell<LazyState<'a, T>>); enum LazyState<'a, T: 'a> { Unevaluated(Thunk<'a, T>), EvaluationInProgress, Evaluated(T) } impl<'a, T: 'a> Lazy<'a, T>{ #[inline] pub fn new<'b, F>(thunk: F) -> Lazy<'b, T> where F: 'b + FnOnce() -> T { Lazy(UnsafeCell::new(Unevaluated(Thunk::new(thunk)))) } #[inline] pub fn evaluated(val: T) -> Lazy<'a, T> { Lazy(UnsafeCell::new(Evaluated(val))) } #[inline] fn force<'b>(&'b self) { // not thread-safe unsafe { match self.0.get() { Evaluated(_) => return, // nothing required; already Evaluated EvaluationInProgress => panic!("Lazy::force called recursively!!!"), _ => () // need to do following something else if Unevaluated... } // following eliminates recursive race; drops neither on replace: match mem::replace(&mut self.0.get(), EvaluationInProgress) { Unevaluated(thnk) => { // Thunk can't call force on same Lazy self.0.get() = Evaluated(thnk.invoke()); }, _ => unreachable!() // already took care of other cases above. } } } #[inline] pub fn value<'b>(&'b self) -> &'b T { self.force(); // evaluatate if not evealutated match unsafe { &self.0.get() } { &Evaluated(ref v) => v, // return value _ => { unreachable!() } // previous force guarantees Evaluated } } #[inline] // consumes the object to produce the value pub fn unwrap<'b>(self) -> T where T: 'b { self.force(); // evaluatate if not evealutated match { self.0.into_inner() } { Evaluated(v) => v, _ => unreachable!() // previous code guarantees Evaluated } } } // now for immutable persistent shareable (memoized) LazyList via Lazy above... type RcLazyListNode<'a, T> = Rc<Lazy<'a, LazyList<'a, T>>>; use self::LazyList::; [derive(Clone)] enum LazyList<'a, T: 'a + Clone> { /// The Empty List Empty, /// A list with one member and possibly another list. Cons(T, RcLazyListNode<'a, T>) } impl<'a, T: 'a + Clone> LazyList<'a, T> { #[inline] pub fn cons(v: T, cntf: F) -> LazyList<'a, T> where F: 'a + FnOnce() -> LazyList<'a, T> { Cons(v, Rc::new(Lazy::new(cntf))) } #[inline] pub fn head<'b>(&'b self) -> &'b T { if let Cons(ref hd, _) = self { return hd } panic!("LazyList::head called on an Empty LazyList!!!") } #[inline] pub fn unwrap(self) -> (T, RcLazyListNode<'a, T>) { // consumes the object if let Cons(hd, rlln) = self { return (hd, rlln) } panic!("LazyList::unwrap called on an Empty LazyList!!!") } } impl<'a, T: 'a + Clone> Iterator for LazyList<'a, T> { type Item = T; #[inline] fn next(&mut self) -> Option { if let Empty = self { return None } let oldll = mem::replace(self, Empty); let (hd, rlln) = oldll.unwrap(); let mut newll = rlln.value().clone(); // self now contains tail, newll contains the Empty mem::swap(self, &mut newll); Some(hd) } } // implements worker wrapper recursion closures using shared RcMFn variable... type RcMFn<'a, T> = Rc T + 'a>>>; trait RcMFnMethods<'a, T> { fn create T + 'a>(v: F) -> RcMFn<'a, T>; fn invoke(&self, v: T) -> T; fn set T + 'a>(&self, v: F); } impl<'a, T: 'a> RcMFnMethods<'a, T> for RcMFn<'a, T> { // creates new value wrapper... fn create T + 'a>(v: F) -> RcMFn<'a, T> { Rc::new(UnsafeCell::new(Box::new(v))) } #[inline(always)] // needs to be faster to be worth it fn invoke(&self, v: T) -> T { unsafe { (((self).get()))(v) } } fn set T + 'a>(&self, v: F) { unsafe { self.get() = Box::new(v); } } } type RcMVar = Rc>; trait RcMVarMethods { fn create(v: T) -> Self; fn get(self: &Self) -> T; fn set(self: &Self, v: T); } impl RcMVarMethods for RcMVar { fn create(v: T) -> RcMVar { // creates new value wrapped in RcMVar Rc::new(RefCell::new(v)) } #[inline] fn get(&self) -> T { self.borrow().clone() } fn set(&self, v: T) { self.borrow_mut() = v; } } // finally what the task objective requires... [derive(Clone)] struct LogRep {lg: f64, x2: u32, x3: u32, x5: u32} const ONE: LogRep = LogRep { lg: 0f64, x2: 0u32, x3: 0u32, x5: 0u32 }; const LB3: f64 = 1.5849625007211563f64; // log base two of 3f64 const LB5: f64 = 2.321928094887362f64; // log base two of 5f64 impl PartialEq for LogRep { #[inline] fn eq(&self, other: &Self) -> bool { self.lg == other.lg } } impl Eq for LogRep {} impl PartialOrd for LogRep { #[inline] fn partial_cmp(&self, other: &Self) -> Option { self.lg.partial_cmp(&other.lg) } } trait LogRepMults { fn mult2(lr: LogRep) -> LogRep; fn mult3(lr: LogRep) -> LogRep; fn mult5(lr: LogRep) -> LogRep; } impl LogRepMults for LogRep { #[inline] fn mult2(lr: LogRep) -> LogRep { LogRep { lg: lr.lg + 1f64, x2: lr.x2 + 1, x3: lr.x3, x5: lr.x5 } } #[inline] fn mult3(lr: LogRep) -> LogRep { LogRep { lg: lr.lg + LB3, x2: lr.x2, x3: lr.x3 + 1, x5: lr.x5 } } #[inline] fn mult5(lr: LogRep) -> LogRep { LogRep { lg: lr.lg + LB5, x2: lr.x2, x3: lr.x3, x5: lr.x5 + 1 } } } fn logrep2biguint(lr: LogRep) -> BigUint { let two = BigUint::from(2u8); let three = BigUint::from(3u8); let five = BigUint::from(5u8); fn xpnd(vm: u32, n: BigUint) -> BigUint { let mut rslt = BigUint::from(1u8); let mut v = vm; let mut bsm = n; while v > 0u32 { if v & 1u32 != 0u32 { rslt = rslt &bsm } bsm = &bsm.clone() bsm; v = v >> 1; } rslt } xpnd(lr.x2, two) xpnd(lr.x3, three) xpnd(lr.x5, five) } fn hammings() -> Box> { type LR = LogRep; type LL<'a> = LazyList<'a, LR>; fn merge<'a>(x: LL<'a>, y: LL<'a>) -> LL<'a> { let lte = { x.head() <= y.head() }; // private context for borrow if lte { let (hdx, tlx) = x.unwrap(); LL::cons(hdx, move || merge(tlx.value().clone(), y)) } else { let (hdy, tly) = y.unwrap(); LL::cons(hdy, move || merge(x, tly.value().clone())) } } fn smult<'a>(m: fn(LogRep) -> LogRep, s: LL<'a>) -> LL<'a> { // like map m but faster let smlt = RcMFn::create(move |ss: LL<'a>| ss); let csmlt = smlt.clone(); smlt.set(move |ss: LL<'a>| { let (hd, tl) = ss.unwrap(); let ccsmlt = csmlt.clone(); LL::cons(m(hd), move || ccsmlt.invoke(tl.value().clone())) }); smlt.invoke(s) } fn u<'a>(s: LL<'a>, f: fn(LogRep) -> LogRep) -> LL<'a> { let rslt = RcMVar::create(Empty); let crslt = rslt.clone(); // same interior data... let cll = LL::cons(ONE, move || crslt.get()); // gets future value // below sets future value for above closure... rslt.set(if let Empty = s { smult(f, cll) } else { merge(s, smult(f, cll)) }); rslt.get() } fn rll<'a>() -> LL<'a> { [LR::mult5, LR::mult3, LR::mult2].iter() .fold(Empty, |ll, mf| u(ll, mf) ) } let hmng = LL::cons(ONE, move || rll()); Box::new(hmng.into_iter()) } // and the required test outputs... fn main() { print!("["); for (i, h) in hammings().take(20).enumerate() { if i != 0 { print!(",") } print!(" {}", logrep2biguint(h)) } println!(" ]"); println!("{}", logrep2biguint(hammings().take(1691).last().unwrap())); let strt = Instant::now(); let rslt = hammings().take(1000000).last().unwrap(); let elpsd = strt.elapsed(); let secs = elpsd.as_secs(); let millis = (elpsd.subsec_nanos() / 1000000)as u64; let dur = secs 1000 + millis; println!("{}", logrep2biguint(rslt)); println!("This last took {} milliseconds.", dur); } ``` Output: [ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 122 milliseconds. As can be seen, the above version takes about two thirds of the time as the previous version running on the same Intel Skylake i5-6500 - although it still has a memory leak, the size of the leak for a given range will be many times smaller. It still isn't as fast as Haskell running the same algorithm, but it is only about 30% slower and about as fast as most other languages that compile their code to a running executable. Very fast sequence version using imperative code (mutable vectors) and logarithmic approximations for sorting Most of the remaining execution time for the above version is due to the many allocations/deallocations used in implementing the functional lazy list sequence; the following code avoids that overhead by memoizing the pst values using linear vectors with the head and tail values marked by tracking indices: Translation of: Nim ``` extern crate num; use num::bigint::BigInt; use core::fmt::Display; use std::time::Instant; use std::iter; const NUM_ELEMENTS: usize = 1000000; const LB2_2: f64 = 1.0_f64; // log2(2.0) const LB2_3: f64 = 1.5849625007211563_f64; // log2(3.0) const LB2_5: f64 = 2.321928094887362_f64; // log2(5.0) [derive (Clone)] struct LogRep { lr: f64, x2: u32, x3: u32, x5: u32, } impl LogRep { fn int_value(&self) -> BigInt { BigInt::from(2).pow(self.x2) BigInt::from(3).pow(self.x3) BigInt::from(5).pow(self.x5) } #[inline(always)] fn mul2(&self) -> Self { LogRep { lr: self.lr + LB2_2, x2: self.x2 + 1, x3: self.x3, x5: self.x5, } } #[inline(always)] fn mul3(&self) -> Self { LogRep { lr: self.lr + LB2_3, x2: self.x2, x3: self.x3 + 1, x5: self.x5, } } #[inline(always)] fn mul5(&self) -> Self { LogRep { lr: self.lr + LB2_5, x2: self.x2, x3: self.x3, x5: self.x5 + 1, } } } impl Display for LogRep { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { let val = self.int_value(); let x2 = self.x2; let x3 = self.x3; let x5 = self.x5; write!(f, "[{x2} {x3} {x5}]=>{val}") } } const ONE: LogRep = LogRep { lr: 0.0, x2: 0, x3: 0, x5: 0 }; struct LogRepImperativeIterator { s2: Vec, s3: Vec, s5: LogRep, mrg: LogRep, s2i: usize, s3i: usize, } impl LogRepImperativeIterator { pub fn new() -> Self { LogRepImperativeIterator { s2: vec![ONE.mul2()], s3: vec![ONE.mul3()], s5: ONE.mul5(), mrg: ONE.mul3(), s2i: 0, s3i: 0, } } fn iter(&self) -> impl Iterator<Item = LogRep> { iter::once(ONE).chain(LogRepImperativeIterator::new()) } } impl Iterator for LogRepImperativeIterator { type Item = LogRep; #[inline(always)] fn next(&mut self) -> Option<Self::Item> { if self.s2i + self.s2i >= self.s2.len() { self.s2.drain(0..self.s2i); self.s2i = 0; } let result: LogRep; if self.s2[self.s2i].lr < self.mrg.lr { self.s2.push(self.s2[self.s2i].mul2()); result = self.s2[self.s2i].clone(); self.s2i += 1; } else { if self.s3i + self.s3i >= self.s3.len() { self.s3.drain(0..self.s3i); self.s3i = 0; } result = self.mrg.clone(); self.s2.push(self.mrg.mul2()); self.s3.push(self.mrg.mul3()); self.s3i += 1; if self.s3[self.s3i].lr < self.s5.lr { self.mrg = self.s3[self.s3i].clone(); } else { self.mrg = self.s5.clone(); self.s5 = self.s5.mul5(); self.s3i -= 1; } }; Some(result) } } fn main() { LogRepImperativeIterator::new().iter().take(20) .for_each(&|h: LogRep| print!("{} ", h.int_value())); println!(); println!("{} ", LogRepImperativeIterator::new().iter() .take(1691).last().unwrap().int_value()); let t0 = Instant::now(); let rslt = LogRepImperativeIterator::new().iter() .take(NUM_ELEMENTS).last().unwrap(); let elpsd = t0.elapsed().as_micros() as f64; println!("{}", rslt.int_value()); println!("This took {} microseconds for {} elements!", elpsd, NUM_ELEMENTS) } ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This took 6517 microseconds for 1000000 elements! The code above is almost twenty times faster than the previous functional lazy list sequence code due to not losing the time for the many small allocations/deallocations of small heap (reference counted) objects and not having recursive references, also it does not leak memory. This version can calculate the billionth Hamming number in about 8.1 seconds. Extremely fast non-sequence version by calculation of top band of Hamming numbers One might ask "What could possibly be done to further speed up finding Hamming numbers?": the answer is quite a lot, but one has to dump the ability to iterate a sequence as that depends on being able to refer to past calculated values by back pointers to the memorized O(n^(2/3)) arrays or lists and thus quite large amounts of memory. If one just wants to find very large Hamming numbers individually, one looks to the mathematical analysis of Hamming/regular numbers on Wikipedia and finds there is quite an exact relationship between 'n', the sequence number, and the logarithmic magnitude of the resulting Hamming number, and that the error term is directly proportional to the logarithm of that output number. This means that only the band of Hamming values as wide of this error and including the estimated value need to be generated, and that we need only iterate over two of the three prime exponents, thus O(n^(2/3)) time complexity and O(n^(1/3)) space complexity. The following code was adapted from an article in DDJ and the Haskell code with the further refinements to decrease the memory requirements as described above: Translation of: Haskell ``` extern crate num; // requires dependency on the num library use num::bigint::BigUint; use std::time::Instant; fn nth_hamming(n: u64) -> (u32, u32, u32) { if n < 2 { if n <= 0 { panic!("nth_hamming: argument is zero; no elements") } return (0, 0, 0) // trivial case for n == 1 } let lg3 = 3.0f64.ln() / 2.0f64.ln(); // log base 2 of 3 let lg5 = 5.0f64.ln() / 2.0f64.ln(); // log base 2 of 5 let fctr = 6.0f64 lg3 lg5; let crctn = 30.0f64.sqrt().ln() / 2.0f64.ln(); // log base 2 of sqrt 30 let lgest = (fctr n as f64).powf(1.0f64/3.0f64) - crctn; // from WP formula let frctn = if n < 1000000000 { 0.509f64 } else { 0.105f64 }; let lghi = (fctr (n as f64 + frctn lgest)).powf(1.0f64/3.0f64) - crctn; // calculate hi log limit based on log(N) - WP article let lglo = 2.0f64 lgest - lghi; // and a lower limit of the upper "band" let mut count = 0; // need to use extended precision, might go over let mut bnd = Vec::with_capacity(0); let klmt = (lghi / lg5) as u32 + 1; for k in 0 .. klmt { // i, j, k values can be just u32 values let p = k as f64 lg5; let jlmt = ((lghi - p) / lg3) as u32 + 1; for j in 0 .. jlmt { let q = p + j as f64 lg3; let ir = lghi - q; let lg = q + (ir as u32) as f64; // current log value (estimated) count += ir as u64 + 1; if lg >= lglo { bnd.push((lg, (ir as u32, j, k))) } } } if n > count { panic!("nth_hamming: band high estimate is too low!") }; let ndx = (count - n) as usize; if ndx >= bnd.len() { panic!("nth_hamming: band low estimate is too high!") }; bnd.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap()); // sort decreasing order bnd[ndx].1 } fn convert_log2big(o: (u32, u32, u32)) -> BigUint { let two = BigUint::from(2u8); let three = BigUint::from(3u8); let five = BigUint::from(5u8); let (x2, x3, x5) = o; let mut ob = BigUint::from(1u8); // convert to BigUint at the end for _ in 0 .. x2 { ob = ob &two } for _ in 0 .. x3 { ob = ob &three } for _ in 0 .. x5 { ob = ob &five } ob } fn main() { print!("["); for (i, h) in (1 .. 21).map(nth_hamming).enumerate() { if i != 0 { print!(",") } print!(" {}", convert_log2big(h)) } println!(" ]"); println!("{}", convert_log2big(nth_hamming(1691))); let strt = Instant::now(); let rslt = nth_hamming(1000000); let elpsd = strt.elapsed(); let secs = elpsd.as_secs(); let millis = (elpsd.subsec_nanos() / 1000000)as u64; let dur = secs 1000 + millis; println!("2^{} times 3^{} times 5^{}", rslt.0, rslt.1, rslt.2); let rs = convert_log2big(rslt).to_str_radix(10); let mut s = rs.as_str(); println!("{} digits:", s.len()); let lg3 = 3.0f64.log2(); let lg5 = 5.0f64.log2(); let lg = (rslt.0 as f64 + rslt.1 as f64 lg3 + rslt.2 as f64 lg5) 2.0f64.log10(); println!("Approximately {}E+{}", 10.0f64.powf(lg.fract()), lg.trunc()); if s.len() <= 10000 { while s.len() > 100 { let (f, r) = s.split_at(100); s = r; println!("{}", f); } println!("{}", s); } println!("This last took {} milliseconds.", dur); } ``` [ 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36 ] 2125764000 2^55 times 3^47 times 5^64 84 digits: Approximately 5.193127804483804E+83 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 This last took 0 milliseconds. The above code takes too little time to calculate the millionth Hamming numbers to be measured (as seen above), calculates the billionth number in under 10 milliseconds, calculates the trillionth in less than a second, and the thousand trillionth (10^15) in just over a minute (72 seconds). However, the program needs to be tuned for correctness for ranges of about the 100 trillionth value and above as the precision of the log approximation is not sufficient above about that level to maintain the proper sort order, and thus the answers will start to be out by one value or more. The answers are likely correct up to that point as they are the same to a trillion as the equivalent Haskell program, although this version is much faster due to no garbage collection (the Haskell version spends about half its time garbage collecting) and doing the calculations using loops and array/vector accesses rather than the lazy list processing used in the Haskell version. The program should be able to determine the 10^19th hamming number in a few hours and can't quite find the 2^64th (18446744073709551615th) Hamming number due to a slight overflow near the limit. The above code uses the library vector sort capabilities; custom sorting versions could be written but with the reduced array size, sorting is a very small percentage of the execution time and maximum space requirements are only a few 10's of Megabytes so that neither the time nor the space used for sorting are a concern. Note that I'm not knocking Haskell, just that (as here) many Haskell programmers like to use lazy list processing which has its costs; the Haskell version could be re-written to use arrays and functional loops and likely be about the same speed although perhaps not as concise. By simply converting the Haskell program to force strictness and to use this same method of determining the width of the upper band, the Haskell program would have the same time and space complexity as here, but would still be a constant factor of almost eight times slower due to the list processing (with a constant factor for extra space as well). Use of a mutable array or vector would solve that, but unfortunately not as easily as here as there would be the question of "unboxed" versus "boxed" arrays/vectors, and the complexities of implementing the (faster) unboxed type in which to sort the band - in short, not as easy as here in Rust. Scala class Hamming extends Iterator[BigInt] { import scala.collection.mutable.Queue val qs = Seq.fill(3)(new Queue[BigInt]) def enqueue(n: BigInt) = qs zip Seq(2, 3, 5) foreach { case (q, m) => q enqueue n m } def next = { val n = qs map (_.head) min; qs foreach { q => if (q.head == n) q.dequeue } enqueue(n) n } def hasNext = true qs foreach (_ enqueue 1) } However, the usage of closures adds a significant amount of time. The code below, though a bit uglier because of the repetitions, is twice as fast: class Hamming extends Iterator[BigInt] { import scala.collection.mutable.Queue val q2 = new Queue[BigInt] val q3 = new Queue[BigInt] val q5 = new Queue[BigInt] def enqueue(n: BigInt) = { q2 enqueue n 2 q3 enqueue n 3 q5 enqueue n 5 } def next = { val n = q2.head min q3.head min q5.head if (q2.head == n) q2.dequeue if (q3.head == n) q3.dequeue if (q5.head == n) q5.dequeue enqueue(n) n } def hasNext = true List(q2, q3, q5) foreach (_ enqueue 1) } ``` scala> new Hamming take 20 toList res87: List[BigInt] = List(1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36) scala> new Hamming drop 1690 next res88: BigInt = 2125764000 scala> new Hamming drop 999999 next res89: BigInt = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` There's also a fairly mechanical translation from Haskell using purely functional lazy streams Translation of: Haskell ``` val hamming : Stream[BigInt] = { def merge(inx : Stream[BigInt], iny : Stream[BigInt]) : Stream[BigInt] = { if (inx.head < iny.head) inx.head #:: merge(inx.tail, iny) else if (iny.head < inx.head) iny.head #:: merge(inx, iny.tail) else merge(inx, iny.tail) } 1 #:: merge(hamming map ( 2), merge(hamming map ( 3), hamming map (_ 5))) } ``` Use of "force" ensures that the stream is computed before being printed, otherwise it would just be left suspended and you'd see "Stream(1, ?)" ``` scala> (hamming take 20).force res0: scala.collection.immutable.Stream[BigInt] = Stream(1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36) ``` To get the nth code find the n-1th element because indexes are 0 based ``` scala> hamming(1690) res1: BigInt = 2125764000 ``` To calculate the 1000000th code I had to increase the JVM heap from the default ``` scala> hamming(999999) res2: BigInt = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Translation of Haskell code avoiding duplicates One can fix the problems of the memory use of the above code resulting from the entire stream being held in memory due to the use a "val hamming: Stream[BigInt]" by using a function "def hamming(): Stream[BigInt]" and making temporary local variables for intermediate streams so that the beginnings of the streams are garbage collected as the output stream is consumed; one can also implement the other Haskell algorithm to avoid factor duplication by building each stream on successive streams, again with memory conserved by building the least dense first: def hamming(): Stream[BigInt] = { def merge(a: Stream[BigInt], b: Stream[BigInt]): Stream[BigInt] = { if (a.isEmpty) b else { val av = a.head; val bv = b.head if (av < bv) av #:: merge(a.tail, b) else bv #:: merge(a, b.tail) } } def smult(m:Int, s: Stream[BigInt]): Stream[BigInt] = (m s.head) #:: smult(m, s.tail) // equiv to map (m ) s; faster def u(s: Stream[BigInt], n: Int): Stream[BigInt] = { lazy val r: Stream[BigInt] = merge(s, smult(n, 1 #:: r)) r } 1 #:: List(5, 3, 2).foldLeft(Stream.empty[BigInt]) { u } } Usage: ``` scala> hamming() take 20 force res0: scala.collection.immutable.Stream[BigInt] = Stream(1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36) scala> hamming() drop 1690 head res1: BigInt = 2125764000 scala> hamming() drop 999999 head res2: BigInt = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` It only takes under a half second to find the millionth number in the sequence in the last output. Scheme ``` (define-syntax lons (syntax-rules () ((_ lar ldr) (delay (cons lar (delay ldr)))))) (define (lar lons) (car (force lons))) (define (ldr lons) (force (cdr (force lons)))) (define (lap proc . llists) (lons (apply proc (map lar llists)) (apply lap proc (map ldr llists)))) (define (take n llist) (if (zero? n) (list) (cons (lar llist) (take (- n 1) (ldr llist))))) (define (llist-ref n llist) (if (= n 1) (lar llist) (llist-ref (- n 1) (ldr llist)))) (define (merge llist-1 . llists) (define (merge-2 llist-1 llist-2) (cond ((null? llist-1) llist-2) ((null? llist-2) llist-1) ((< (lar llist-1) (lar llist-2)) (lons (lar llist-1) (merge-2 (ldr llist-1) llist-2))) ((> (lar llist-1) (lar llist-2)) (lons (lar llist-2) (merge-2 llist-1 (ldr llist-2)))) (else (lons (lar llist-1) (merge-2 (ldr llist-1) (ldr llist-2)))))) (if (null? llists) llist-1 (apply merge (cons (merge-2 llist-1 (car llists)) (cdr llists))))) (define hamming (lons 1 (merge (lap (lambda (x) ( x 2)) hamming) (lap (lambda (x) ( x 3)) hamming) (lap (lambda (x) ( x 5)) hamming)))) (display (take 20 hamming)) (newline) (display (llist-ref 1691 hamming)) (newline) (display (llist-ref 1000000 hamming)) (newline) ``` Output: (1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36) 2125764000 out of memory Avoiding Generation of Duplicates, including reduced memory use Translation of: Haskell Although the algorithm above is true to the classic Dijkstra version and although the algorithm does require a form of lazy list/stream processing in order to utilize memoization and avoid repeated recalculations/comparisons, the stream implementation can be simplified, and the modified algorithm as per the Haskell code avoids duplicate generations of factors. As well, the following code implements the algorithm as a procedure/function so that it restarts the calculation from the beginning on every new call and so that internal stream variables are not top level so that the garbage collector can collect the beginning of all intermediate and final streams when they are no longer referenced; in this way total memory used (after interspersed garbage collections) is almost zero for a sequence of the first million numbers. Note that Scheme R5RS does not define "map" or "foldl" functions, so these are provided (a simplified "smult" which is faster than using map for this one purpose): ``` (define (hamming) (define (foldl f z l) (define (foldls zs ls) (if (null? ls) zs (foldls (f zs (car ls)) (cdr ls)))) (foldls z l)) (define (merge a b) (if (null? a) b (let ((x (car a)) (y (car b))) (if (< x y) (cons x (delay (merge (force (cdr a)) b))) (cons y (delay (merge a (force (cdr b))))))))) (define (smult m s) (cons ( m (car s)) ;; equiv to map ( m) s; faster (delay (smult m (force (cdr s)))))) (define (u s n) (letrec ((a (merge s (smult n (cons 1 (delay a)))))) a)) (cons 1 (delay (foldl u '() '(5 3 2))))) ;;; test... (define (stream-take->list n strm) (if (= n 0) (list) (cons (car strm) (stream-take->list (- n 1) (force (cdr strm)))))) (define (stream-ref strm nth) (do ((nxt strm (force (cdr nxt))) (cnt 0 (+ cnt 1))) ((>= cnt nth) (car nxt)))) (display (stream-take->list 20 (hamming))) (newline) (display (stream-ref (hamming) (- 1691 1))) (newline) (display (stream-ref (hamming) (- 1000000 1))) (newline) ``` Output: {1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36} 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 The "stream-ref" procedure is zero based as is the Scheme standard for array indices, thus the subtraction of one from the desired nth number in the sequence. Seed7 ``` $ include "seed7_05.s7i"; include "bigint.s7i"; const func bigInteger: min (in bigInteger: a, in bigInteger: b, in bigInteger: c) is func result var bigInteger: min is 0_; begin if a < b then min := a; else min := b; end if; if c < min then min := c; end if; end func; const func bigInteger: hamming (in integer: n) is func result var bigInteger: hammingNum is 1_; local var array bigInteger: hammingNums is 0 times 0_; var integer: index is 0; var bigInteger: x2 is 2_; var bigInteger: x3 is 3_; var bigInteger: x5 is 5_; var integer: i is 1; var integer: j is 1; var integer: k is 1; begin hammingNums := n times 1_; for index range 2 to n do hammingNum := min(x2, x3, x5); hammingNums[index] := hammingNum; if x2 = hammingNum then incr(i); x2 := 2_ hammingNums[i]; end if; if x3 = hammingNum then incr(j); x3 := 3_ hammingNums[j]; end if; if x5 = hammingNum then incr(k); x5 := 5_ hammingNums[k]; end if; end for; end func; const proc: main is func local var integer: n is 0; begin for n range 1 to 20 do write(hamming(n) <& " "); end for; writeln; writeln(hamming(1691)); writeln(hamming(1000000)); end func; ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Sidef ``` func ham_gen { var s = var m = [2, 3, 5] func { var n = [s, s, s].min { |i| s[i].shift if (s[i] == n) s[i].append(n m[i]) } << ^3 return n } } var h = ham_gen() var i = 20; say i.of { h() }.join(' ') { h() } << (i+1 ..^ 1691) say h() ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 ``` Smalltalk Works with: GNU Smalltalk This is a straightforward implementation of the pseudocode snippet found in the Python section. Smalltalk supports arbitrary-precision integers, but the implementation is too slow to try it with 1 million. ``` Object subclass: Hammer [ Hammer class >> hammingNumbers: howMany [ |h i j k x2 x3 x5| h := OrderedCollection new. i := 0. j := 0. k := 0. h add: 1. x2 := 2. x3 := 2. x5 := 5. [ ( h size) < howMany ] whileTrue: [ |m| m := { x2. x3. x5 } sort first. (( h indexOf: m ) = 0) ifTrue: [ h add: m ]. ( x2 = (h last) ) ifTrue: [ i := i + 1. x2 := 2 (h at: i) ]. ( x3 = (h last) ) ifTrue: [ j := j + 1. x3 := 3 (h at: j) ]. ( x5 = (h last) ) ifTrue: [ k := k + 1. x5 := 5 (h at: k) ]. ]. ^ h sort ] ]. (Hammer hammingNumbers: 20) displayNl. (Hammer hammingNumbers: 1690) last displayNl. ``` Works with: Pharo Smalltalk ``` limit := 10 raisedToInteger: 84. tape := Set new. hammingProcess := [:newHamming| (newHamming <= limit) ifTrue: [| index | index := tape scanFor: newHamming. (tape array at: index) ifNil: [tape atNewIndex: index put: newHamming asSetElement. hammingProcess value: newHamming 2. hammingProcess value: newHamming 3. hammingProcess value: newHamming 5]]]. hammingProcess value: 1. sc := tape asSortedCollection. sc first: 20. "a SortedCollection(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36)" sc at: 1691. "2125764000" sc at: 1000000. "519312780448388736089589843750000000000000000000000000000000000000000000000000000000" ``` Works with: Squeak Smalltalk with Xtream package (and probably on Pharo too) This is using the Xtreams package (see The tape is a Heap of associations, the key is a hamming number, the value is its greatest prime factor. Associations responds to <, so can be used in Heap, and are sorted by key. The stream can only move forward, for economy, we don't bother buffering past values. The counterpart is that we have no direct indexing and must keep the position counter by ourself. ``` tape := Heap with: 1 -> 1. hammingStream := [| next | next := tape removeFirst. next value <= 2 ifTrue: [tape add: next key 2 -> 2]. next value <= 3 ifTrue: [tape add: next key 3 -> 3]. next value <= 5 ifTrue: [tape add: next key 5 -> 5]. next key] reading. hammingStream read: 20. "get first 20 values => #(1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36)" hammingStream ++ 1670. "skip the next 1670 values" hammingStream get. "and the 1691th value is => 2125764000". hammingStream ++ (999999 - 1691). "now skip more to position at 999,999". hammingStream get. "and the 1,000,000th value is => 519312780448388736089589843750000000000000000000000000000000000000000000000000000000". tape size. "See how many we have buffered => 24904" ``` SQL This uses SQL99's "WITH RECURSIVE" (more like co-recursion) to build a table of Hamming numbers, then selects out the desired ones. With sqlite it is very fast. It doesn't try to get the millionth number because sqlite doesn't have bignums. ``` CREATE TEMPORARY TABLE factors(n INT); INSERT INTO factors VALUES(2); INSERT INTO factors VALUES(3); INSERT INTO factors VALUES(5); CREATE TEMPORARY TABLE hamming AS WITH RECURSIVE ham AS ( SELECT 1 as h UNION SELECT hn x FROM ham JOIN factors ORDER BY x LIMIT 1700 ) SELECT h FROM ham; sqlite> SELECT h FROM hamming ORDER BY h LIMIT 20; 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 sqlite> SELECT h FROM hamming ORDER BY h LIMIT 1 OFFSET 1690; 2125764000 ``` Tcl This uses coroutines to simplify the description of what's going on. Works with: Tcl version 8.6 ``` package require Tcl 8.6 Simple helper: Tcl-style list "map" proc map {varName list script} { set l {} upvar 1 $varName v foreach v $list {lappend l [uplevel 1 $script]} return $l } The core of a coroutine to compute the product of a hamming sequence. Tricky bit: we don't automatically advance to the next value, and instead wait to be told that the value has been consumed (i.e., is the result of the [yield] operation). proc ham {key multiplier} { global hammingCache set i 0 yield [info coroutine] # Cannot use [foreach]; that would take a snapshot of the list in # the hammingCache variable, so missing updates. while 1 { set n [expr {[lindex $hammingCache($key) $i] $multiplier}] # If the number selected was ours, we advance to compute the next if {[yield $n] == $n} { incr i } } } This coroutine computes the hamming sequence given a list of multipliers. It uses the [ham] helper from above to generate indivdual multiplied sequences. The key into the cache is the list of multipliers. Note that it is advisable for the values to be all co-prime wrt each other. proc hammingCore args { global hammingCache set hammingCache($args) 1 set hammers [map x $args {coroutine ham$x,$args ham $args $x}] yield while 1 { set n [lindex $hammingCache($args) [incr i]-1] lappend hammingCache($args) \ [tcl::mathfunc::min {}[map h $hammers {$h $n}]] yield $n } } Assemble the pieces so as to compute the classic hamming sequence. coroutine hamming hammingCore 2 3 5 Print the first 20 values of the sequence for {set i 1} {$i <= 20} {incr i} { puts [format "hamming[%d] = %d" $i [hamming]] } for {} {$i <= 1690} {incr i} {set h [hamming]} puts "hamming{1690} = $h" for {} {$i <= 1000000} {incr i} {set h [hamming]} puts "hamming{1000000} = $h" ``` Output: ``` hamming{1} = 1 hamming{2} = 2 hamming{3} = 3 hamming{4} = 4 hamming{5} = 5 hamming{6} = 6 hamming{7} = 8 hamming{8} = 9 hamming{9} = 10 hamming{10} = 12 hamming{11} = 15 hamming{12} = 16 hamming{13} = 18 hamming{14} = 20 hamming{15} = 24 hamming{16} = 25 hamming{17} = 27 hamming{18} = 30 hamming{19} = 32 hamming{20} = 36 hamming{1690} = 2123366400 hamming{1000000} = 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` A faster version can be built that also works on Tcl 8.5 (or earlier, if only small hamming numbers are being computed): ``` variable hamming 1 hi2 0 hi3 0 hi5 0 proc hamming {n} { global hamming hi2 hi3 hi5 set h2 [expr {[lindex $hamming $hi2]2}] set h3 [expr {[lindex $hamming $hi3]3}] set h5 [expr {[lindex $hamming $hi5]5}] while {[llength $hamming] < $n} { lappend hamming [set h [expr { $h2<$h3 ? $h2<$h5 ? $h2 : $h5 : $h3<$h5 ? $h3 : $h5 }]] if {$h==$h2} { set h2 [expr {[lindex $hamming [incr hi2]]2}] } if {$h==$h3} { set h3 [expr {[lindex $hamming [incr hi3]]3}] } if {$h==$h5} { set h5 [expr {[lindex $hamming [incr hi5]]5}] } } return [lindex $hamming [expr {$n - 1}]] } Print the first 20 values of the sequence for {set i 1} {$i <= 20} {incr i} { puts [format "hamming[%d] = %d" $i [hamming $i]] } puts "hamming{1690} = [hamming 1690]" puts "hamming{1691} = [hamming 1691]" puts "hamming{1692} = [hamming 1692]" puts "hamming{1693} = [hamming 1693]" puts "hamming{1000000} = [hamming 1000000]" ``` uBasic/4tH uBasic's single array does not have the required size to calculate the 1691st number, let alone the millionth. ``` For H = 1 To 20 Print "H("; H; ") = "; Func (_FnHamming(H)) Next End _FnHamming Param (1) @(0) = 1 X = 2 : Y = 3 : Z = 5 I = 0 : J = 0 : K = 0 For N = 1 To a@ - 1 M = X If M > Y Then M = Y If M > Z Then M = Z @(N) = M If M = X Then I = I + 1 : X = 2 @(I) If M = Y Then J = J + 1 : Y = 3 @(J) If M = Z Then K = K + 1 : Z = 5 @(K) Next Return (@(a@-1)) ``` Output: ``` H(1) = 1 H(2) = 2 H(3) = 3 H(4) = 4 H(5) = 5 H(6) = 6 H(7) = 8 H(8) = 9 H(9) = 10 H(10) = 12 H(11) = 15 H(12) = 16 H(13) = 18 H(14) = 20 H(15) = 24 H(16) = 25 H(17) = 27 H(18) = 30 H(19) = 32 H(20) = 36 0 OK, 0:379 ``` UNIX Shell Works with: ksh93 Works with: Bourne Again SHell version 4+ Large numbers are not supported. ``` typeset -a hamming=(1) q2 q3 q5 function nextHamming { typeset -i h=${hamming[${#hamming[@]}-1]} q2+=( $(( h2 )) ) q3+=( $(( h3 )) ) q5+=( $(( h5 )) ) h=$( min3 ${q2} ${q3} ${q5} ) (( ${q2} == h )) && ashift q2 >/dev/null (( ${q3} == h )) && ashift q3 >/dev/null (( ${q5} == h )) && ashift q5 >/dev/null hamming+=($h) } function ashift { typeset -n ary=$1 printf '%s\n' "${ary}" ary=( "${ary[@]:1}" ) } function min3 { if (( $1 < $2 )); then (( $1 < $3 )) && printf '%s\n'$1 || printf '%s\n'$3 else (( $2 < $3 )) && printf '%s\n'$2 || printf '%s\n'$3 fi } for ((i=1; i<=20; i++)); do nextHamming printf '%d\t%d\n' "$i" "${hamming[i-1]}" done for ((; i<=1690; i++)); do nextHamming; done nextHamming printf '%d\t%d\n' "$i" "${hamming[i-1]}" printf 'elapsed: %s\n' "$SECONDS" ``` Output: 1 1 2 2 3 3 4 4 5 5 6 6 7 8 8 9 9 10 10 12 11 15 12 16 13 18 14 20 15 24 16 25 17 27 18 30 19 32 20 36 1690 2125764000 elapsed: 0.568 Ursala Smooth is defined as a second order function taking a list of primes and returning a function that takes a natural number to the -th smooth number with respect to them. An elegant but inefficient formulation based on the J solution is the following. ``` import std import nat smooth"p" "n" = ~&z take/"n" nleq-< (rep(length "n") ^Ts/~& productK0/"p") <1> ``` This test program main = smooth<2,3,5> nrange(1,20) yields this list of the first 20 Hamming numbers. ``` <1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36> ``` Although all calculations are performed using unlimited precision, the version above is impractical for large numbers. A more hardcore approach is the following. ``` import std import nat smooth"p" "n" = ~&H\"p" -<1>; @NiXS ~&/(1,1); ~&ll~="n"->lr -+ ^\~&rlPrrn2rrm2Zlrrmz3EZYrrm2lNCTrrm2QAXrhlPNhrnmtPA2XtCD ~&lrPrhl2E?/~&l ^|/successor@l ~&hl, ^|/~& nleq-<&l+ ^\~&r ~&l|| product@rnmhPX+- cast %nL main = smooth<2,3,5> nrange(1,20)--<1691,1000000> ``` Output: The great majority of time is spent calculating the millionth Hamming number. < 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36, 2125764000, 519312780448388736089589843750000000000000000000000000000000000000000000000000000000> VBA 'RosettaCode Hamming numbers 'This is a well known hard problem in number theory: 'counting the number of lattice points in a 'n-dimensional tetrahedron, here n=3. Public a As Double, b As Double, c As Double, d As Double Public p As Double, q As Double, r As Double Public cnt() As Integer 'stores the number of lattice points indexed on the exponents of 3 and 5 Public hn(2) As Integer 'stores the exponents of 2, 3 and 5 Public Declare Function GetTickCount Lib "kernel32.dll" () As Long Private Function log10(x As Double) As Double log10 = WorksheetFunction.log10(x) End Function Private Function pow(x As Variant, y As Variant) As Double pow = WorksheetFunction.Power(x, y) End Function Private Sub init(N As Long) 'Computes a, b and c as the vertices '(a,0,0), (0,b,0), (0,0,c) of a tetrahedron 'with apex (0,0,0) and volume N 'volume N=abc/6 Dim k As Double k = log10(2) log10(3) log10(5) 6 N k = pow(k, 1 / 3) a = k / log10(2) b = k / log10(3) c = k / log10(5) p = -b c q = -a c r = -a b End Sub Private Function x_given_y_z(y As Integer, z As Integer) As Double x_given_y_z = -(q y + r z + a b c) / p End Function Private Function cmp(i As Integer, j As Integer, k As Integer, gn() As Integer) As Boolean cmp = (i log10(2) + j log10(3) + k log10(5)) > (gn(0) log10(2) + gn(1) log10(3) + gn(2) log10(5)) End Function Private Function count(N As Long, step As Integer) As Long 'Loop over y and z, compute x and 'count number of lattice points within tetrahedron. 'Step 1 is indirectly called by find_seed to calibrate the plane through A, B and C 'Step 2 fills the matrix cnt with the number of lattice points given the exponents of 3 and 5 'Step 3 the plane is lowered marginally so one or two candidates stick out Dim M As Long, j As Integer, k As Integer If step = 2 Then ReDim cnt(0 To Int(b) + 1, 0 To Int(c) + 1) M = 0: j = 0: k = 0 Do While -c j - b k + b c > 0 Do While -c j - b k + b c > 0 Select Case step Case 1: M = M + Int(x_given_y_z(j, k)) Case 2 cnt(j, k) = Int(x_given_y_z(j, k)) Case 3 If Int(x_given_y_z(j, k)) < cnt(j, k) Then 'This is a candidate, and ... If cmp(cnt(j, k), j, k, hn) Then 'it is bigger dan what is already in hn hn(0) = cnt(j, k) hn(1) = j hn(2) = k End If End If End Select k = k + 1 Loop k = 0 j = j + 1 Loop count = M End Function Private Sub list_upto(ByVal N As Integer) Dim count As Integer count = 1 Dim hn As Integer hn = 1 Do While count < N k = hn Do While k Mod 2 = 0 k = k / 2 Loop Do While k Mod 3 = 0 k = k / 3 Loop Do While k Mod 5 = 0 k = k / 5 Loop If k = 1 Then Debug.Print hn; " "; count = count + 1 End If hn = hn + 1 Loop Debug.Print End Sub Private Function find_seed(N As Long, step As Integer) As Long Dim initial As Long, total As Long initial = N Do 'a simple iterative goal search, takes a handful iterations only init initial total = count(initial, step) initial = initial + N - total Loop Until total = N find_seed = initial End Function Private Sub find_hn(N As Long) Dim fs As Long, err As Long 'Step 1: find fs such that the number of lattice points is exactly N fs = find_seed(N, 1) 'Step 2: fill the matrix cnt init fs err = count(fs, 2) 'Step 3: lower the plane by diminishing fs, the candidates for 'the Nth Hamming number will stick out and be recorded in hn init fs - 1 err = count(fs - 1, 3) Debug.Print "2^" & hn(0) - 1; " 3^" & hn(1); " 5^" & hn(2); "="; If N < 1692 Then 'The task set a limit on the number size Debug.Print pow(2, hn(0) - 1) pow(3, hn(1)) pow(5, hn(2)) Else Debug.Print If N <= 1000000 Then 'The big Hamming Number will end in a lot of zeroes. The common exponents of 2 and 5 'are split off to be printed separately. If hn(0) - 1 < hn(2) Then 'Conversion to Decimal datatype with CDec allows to print numbers upto 10^28 Debug.Print CDec(pow(3, hn(1))) CDec(pow(5, hn(2) - hn(0) + 1)) & String$(hn(0) - 1, "0") Else Debug.Print CDec(pow(2, hn(0) - 1 - hn(2))) CDec(pow(3, hn(1))) & String$(hn(2), "0") End If End If End If End Sub Public Sub main() Dim start_time As Long, finis_time As Long start_time = GetTickCount Debug.Print "The first twenty Hamming numbers are:" list_upto 20 Debug.Print "Hamming number 1691 is: "; find_hn 1691 Debug.Print "Hamming number 1000000 is: "; find_hn 1000000 finis_time = GetTickCount Debug.Print "Execution time"; (finis_time - start_time); " milliseconds" End Sub Output: ``` The first twenty Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 Hamming number 1691 is: 2^5 3^12 5^3= 2125764000 Hamming number 1000000 is: 2^55 3^47 5^64= 519312780448388671875000000000000000000000000000000000000000000000000000000000000000 Execution time 79 milliseconds ``` VBScript Translation of: BBC BASIC ``` For h = 1 To 20 WScript.StdOut.Write "H(" & h & ") = " & Hamming(h) WScript.StdOut.WriteLine Next WScript.StdOut.Write "H(" & 1691 & ") = " & Hamming(1691) WScript.StdOut.WriteLine Function Hamming(l) Dim h() : Redim h(l) : h(0) = 1 i = 0 : j = 0 : k = 0 x2 = 2 : x3 = 3 : x5 = 5 For n = 1 To l-1 m = x2 If m > x3 Then m = x3 End If If m > x5 Then m = x5 End If h(n) = m If m = x2 Then i = i + 1 : x2 = 2 h(i) End If If m = x3 Then j = j + 1 : x3 = 3 h(j) End If If m = x5 Then k = k + 1 : x5 = 5 h(k) End If Next Hamming = h(l-1) End Function ``` Output: ``` H(1) = 1 H(2) = 2 H(3) = 3 H(4) = 4 H(5) = 5 H(6) = 6 H(7) = 8 H(8) = 9 H(9) = 10 H(10) = 12 H(11) = 15 H(12) = 16 H(13) = 18 H(14) = 20 H(15) = 24 H(16) = 25 H(17) = 27 H(18) = 30 H(19) = 32 H(20) = 36 H(1691) = 2125764000 ``` V (Vlang) Translation of: Go Concise version using dynamic-programming ``` import math.big fn min(a big.Integer, b big.Integer) big.Integer { if a < b {return a} return b } fn hamming(n int) []big.Integer { mut h := []big.Integer{len: n} h = big.one_int two, three, five := big.two_int, big.integer_from_int(3), big.integer_from_int(5) mut next2, mut next3, mut next5 := big.two_int, big.integer_from_int(3), big.integer_from_int(5) mut i, mut j, mut k := 0, 0, 0 for m in 1..h.len { h[m] = min(next2, min(next3, next5)) if h[m] == next2 { i++ next2 = two h[i] } if h[m] == next3 { j++ next3 = three h[j] } if h[m] == next5 { k++ next5 = five h[k] } } return h } fn main() { h := hamming(int(1e6)) println(h[..20]) println(h[1691-1]) println(h[h.len-1]) } ``` Output: ``` [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Fast version with no duplicates algorithm using arrays for memoization and logarithmic approximations The V (Vlang) language isn't yet stable enough (version 0.30) to support a fully functional version using generic lazy lists as per the Haskell language versions and in truth is mostly an imperative language anyway; however, it already can do the page task very quickly with a more imperative algorithm using arrays for memoization storage and logarithmic approximations for sorting comparisons to avoid "infinite" precision integer calculations except for the final result values, as per the following code, which is Nim's "ring buffer" version as that is faster due to less copying required: Translation of: Nim ``` // compile with: v -cflags -march=native -cflags -O3 -prod HammingsLogQ.v import time import math.big import math { log2 } import arrays { copy } const num_elements = 1_000_000 struct LogRep { lg f64 x2 u32 x3 u32 x5 u32 } const ( one = LogRep { 0.0, 0, 0, 0 } lb2_2 = 1.0 lb2_3 = log2(3.0) lb2_5 = log2(5.0) ) [inline] fn (lr &LogRep) mul2() LogRep { return LogRep { lg: lr.lg + lb2_2, x2: lr.x2 + 1, x3: lr.x3, x5: lr.x5 } } [inline] fn (lr &LogRep) mul3() LogRep { return LogRep { lg: lr.lg + lb2_3, x2: lr.x2, x3: lr.x3 + 1, x5: lr.x5 } } [inline] fn (lr &LogRep) mul5() LogRep { return LogRep { lg: lr.lg + lb2_5, x2: lr.x2, x3: lr.x3, x5: lr.x5 + 1 } } [inline] fn xpnd(x u32, mlt u32) big.Integer { mut r := big.integer_from_int(1) mut m := big.integer_from_u32(mlt) mut v := x for { if v <= 0 { break } else { if v & 1 != 0 { r = r m } m = m m v >>= 1 } } return r } fn (lr &LogRep) to_integer() big.Integer { return xpnd(lr.x2, 2) xpnd(lr.x3, 3) xpnd(lr.x5, 5) } fn (lr LogRep) str() string { return (&lr).to_integer().str() } struct HammingsLog { mut: // automatically initialized with LogRep = one (defult)... s2 []LogRep = []LogRep { len: 1024, cap: 1024 } s3 []LogRep = []LogRep { len: 1024, cap: 1024 } s5 LogRep = one.mul5() mrg LogRep = one.mul3() s2msk int = 1023 s2hdi int s2nxti int = 1 s3msk int = 1023 s3hdi int s3nxti int } [direct_array_access][inline] fn (mut hl HammingsLog) next() ?LogRep { mut rsltp := &hl.s2[hl.s2hdi] if rsltp.lg < hl.mrg.lg { hl.s2[hl.s2nxti] = rsltp.mul2() hl.s2hdi++ hl.s2hdi &= hl.s2msk } else { mut rslt := hl.mrg rsltp = &rslt hl.s2[hl.s2nxti] = hl.mrg.mul2() hl.s3[hl.s3nxti] = hl.mrg.mul3() s3hdp := &hl.s3[hl.s3hdi] if unsafe { s3hdp.lg < hl.s5.lg } { hl.mrg = s3hdp hl.s3hdi++ hl.s3hdi &= hl.s3msk } else { hl.mrg = hl.s5 hl.s5 = hl.s5.mul5() } hl.s3nxti++ hl.s3nxti &= hl.s3msk if hl.s3nxti == hl.s3hdi { // buffer full: grow it sz := hl.s3msk + 1 hl.s3msk = sz + sz unsafe { hl.s3.grow_len(sz) } hl.s3msk-- if hl.s3hdi == 0 { hl.s3nxti = sz } else { unsafe { vmemcpy(&hl.s3[hl.s3hdi + sz], &hl.s3[hl.s3hdi], int(sizeof(LogRep)) (sz - hl.s3hdi)) } hl.s3hdi += sz } } } hl.s2nxti++ hl.s2nxti &= hl.s2msk if hl.s2nxti == hl.s2hdi { // buffer full: grow it sz := hl.s2msk + 1 hl.s2msk = sz + sz unsafe { hl.s2.grow_len(sz) } hl.s2msk-- if hl.s2hdi == 0 { hl.s2nxti = sz } else { unsafe { vmemcpy(&hl.s2[hl.s2hdi + sz], &hl.s2[hl.s2hdi], int(sizeof(LogRep)) (sz - hl.s2hdi)) } hl.s2hdi += sz } } return rsltp } fn (hmgs HammingsLog) nth_hammings_log(n int) LogRep { mut cnt := 0 if n > 0 { for h in hmgs { cnt++ if cnt >= n { return h } } } panic("argument less than 1 for nth!") } { hs := HammingsLog {} mut cnt := 0 for h in hs { print("$h ") cnt++ if cnt >= 20 { break } } println("") } println("${(HammingsLog{}).nth_hammings_log(1691)}") start_time := time.now() rslt := (HammingsLog{}).nth_hammings_log(num_elements) duration := (time.now() - start_time).microseconds() println("$rslt") println("Above result for $num_elements elements in $duration microseconds.") ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Above result for 1000000 elements in 4881 microseconds. ``` The above result is as computed on an Intel i5-6500 at 3.6 GHz (single-threaded, boosted); the execution time is somewhat variable due to V currently using Garbage Collection by default, but the intention is to eventually use automatic reference counting by default which should make it slightly faster and more consistent other than for any variations caused by the memory allocator. The above version can calculate the billionth Hamming number in about 5.3 seconds. Extremely fast version inserting values into the error band and using logarithmic approximations for sorting The above code is about as fast as one can go generating sequences/iterations; however, if one is willing to forego sequences/iterations and just calculate the nth Hamming number (repeatedly when a sequence is desired, but that is only for the first required task of three and then only for a trivial range), then some reading on the relationship between the size of numbers to the sequence numbers is helpful (Wikipedia: Regular Number). One finds that there is a very distinct relationship and that it quite quickly reduces to quite a small error band proportional to the log of the output value for larger ranges. Thus, the following code just scans for logarithmic representations to insert into a sequence for this top error band and extracts the correct nth representation from that band. It reduces time complexity to O(n^(2/3)) from O(n) for the sequence versions, but even more amazingly, reduces memory requirements to O(n^(1/3)) from O(n^(2/3)) and thus makes it possible to calculate very large values in the sequence on common personal computers. This version uses a multi-precision integer as the representation of the logarithmic approximation of the value for sorting of the error band to extend the precision for accurate results up to almost the 64-bit number range (in about a day on common desktop computers). The code is as follows: Translation of: Nim ``` // compile with: v -cflags -march=native -cflags -O3 -prod HammingsLog.v import time import math.big import math { log2, sqrt, pow, floor } const num_elements = 1_000_000 struct LogRep { lg big.Integer x2 u32 x3 u32 x5 u32 } const ( one = LogRep { big.zero_int, 0, 0, 0 } // 1267650600228229401496703205376 lb2_2 = big.Integer { digits: [u32(0), 0, 0, 16], signum: 1, is_const: true } // 2009178665378409109047848542368 lb2_3 = big.Integer { digits: [u32(11608224), 3177740794, 1543611295, 25] signum: 1, is_const: true } // 2943393543170754072109742145491 lb2_5 = big.Integer { digits: [u32(1258143699), 1189265298, 647893747, 37], signum: 1, is_const: true } smlb2_2 = f64(1.0) smlb2_3 = log2(3.0) smlb2_5 = log2(5.0) fctr = f64(6.0) smlb2_3 smlb2_5 crctn = log2(sqrt(30.0)) ) fn xpnd(x u32, mlt u32) big.Integer { mut r := big.integer_from_int(1) mut m := big.integer_from_u32(mlt) mut v := x for { if v <= 0 { break } else { if v & 1 != 0 { r = r m } m = m m v >>= 1 } } return r } fn (lr LogRep) to_integer() big.Integer { return xpnd(lr.x2, 2) xpnd(lr.x3, 3) xpnd(lr.x5, 5) } fn (lr LogRep) str() string { return lr.to_integer().str() } fn nth_hamming_log(n u64) LogRep { if n < 2 { return one } lgest := pow(fctr f64(n), f64(1.0)/f64(3.0)) - crctn // from WP formula frctn := if n < 1_000_000_000 { f64(0.509) } else { f64(0.105) } lghi := pow(fctr (f64(n) + frctn lgest), f64(1.0)/f64(3.0)) - crctn lglo := f64(2.0) lgest - lghi // and a lower limit of the upper "band" mut count := u64(0) // need to use extended precision, might go over mut band := []LogRep { len: 1, cap: 1 } // give it one value so doubling size works mut ih := 0 // band array insertion index klmt := u32(lghi / smlb2_5) + 1 for k in u32(0) .. klmt { p := f64(k) smlb2_5 jlmt := u32((lghi - p) / smlb2_3) + 1 for j in u32(0) .. jlmt { q := p + f64(j) smlb2_3 ir := lghi - q lg := q + floor(ir) // current log value (estimated) count += u64(ir) + 1 if lg >= lglo { len := band.len if ih >= len { unsafe { band.grow_len(len) } } bglg := lb2_2 big.integer_from_u32(u32(ir)) + lb2_3 big.integer_from_u32(j) + lb2_5 big.integer_from_u32(k) band[ih] = LogRep { lg: bglg, x2: u32(ir), x3: j, x5: k } ih++ } } } band.sort_with_compare(fn(a &LogRep, b &LogRep) int { return b.lg.abs_cmp(a.lg) } ) if n > count { panic("nth_hamming_log: band high estimate is too low!") } ndx := int(count - n) if ndx >= band.len { panic("nth_hamming_log: band low estimate is too high!") } return band[ndx] } for i in 1 .. 21 { print("${nth_hamming_log(i)} ") } println("") println("${nth_hamming_log(1691)}") start_time := time.now() rslt := nth_hamming_log(num_elements) duration := (time.now() - start_time).microseconds() println("$rslt") println("Above result for $num_elements elements in $duration microseconds.") ``` Output: ``` 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Above result for 1000000 elements in 277 microseconds. ``` The output is the same as above except that the execution time is almost too small to be measured; it can produce the billionth Hamming number in about five milliseconds, the trillionth Hamming number in about 440 milliseconds, and the thousand trillionth (which is now possible without error) in about 42.4 seconds. Thus, it successfully extends the usable range of the algorithm to near the maximum expressible 64 bit number in a few hours of execution time on a modern desktop computer although the (2^64 - 1)th Hamming number can't be found due to the restrictions of the expressible range limit in sizing of the required error band. This is in spite of the current Vlang standard library using its own implementation of multi-precision integers rather than the highly optimized "gmp" library used by some languages which could be somewhat faster. Wren Simple but slow Library: Wren-big ``` import "./big" for BigInt, BigInts var primes = [2, 3, 5].map { |p| BigInt.new(p) }.toList var hamming = Fn.new { |size| if (size < 1) Fiber.abort("size must be at least 1") var ns = List.filled(size, null) ns = BigInt.one var next = primes.toList var indices = List.filled(3, 0) for (m in 1...size) { ns[m] = BigInts.min(next) for (i in 0..2) { if (ns[m] == next[i]) { indices[i] = indices[i] + 1 next[i] = primes[i] ns[indices[i]] } } } return ns } var h = hamming.call(1e6) System.print("The first 20 Hamming numbers are:") System.print(h[0..19]) System.print() System.print("The 1,691st Hamming number is:") System.print(h) System.print() System.print("The 1,000,000th Hamming number is:") System.print(h) ``` Output: ``` The first 20 Hamming numbers are: [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24, 25, 27, 30, 32, 36] The 1,691st Hamming number is: 2125764000 The 1,000,000th Hamming number is: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Much faster logarithmic version Translation of: Go Library: Wren-dynamic Library: Wren-long Library: Wren-math A translation of Go's 'extremely fast version inserting logarithms into the top error band'. Not as fast as the statically typed languages but fast enough for me :) ``` import "./dynamic" for Struct import "./long" for ULong import "./big" for BigInt import "./math" for Math var Logrep = Struct.create("LogRep", ["lg", "x2", "x3", "x5"]) var nthHamming = Fn.new { |n| if (n < 2) { if (n < 1) Fiber.abort("nthHamming: argument is zero!") return [0, 0, 0] } var lb3 = 1.5849625007211561814537389439478 var lb5 = 2.3219280948873623478703194294894 var fctr = 6 lb3 lb5 var crctn = 2.4534452978042592646620291867186 var lgest = (n.toNum fctr).cbrt - crctn var frctn = (n < 1000000000) ? 0.509 : 0.106 var lghi = ((n.toNum + lgest frctn) fctr).cbrt - crctn var lglo = lgest 2 - lghi var count = ULong.zero var bnd = [] var klmt = (lghi/lb5).truncate.abs + 1 for (k in 0...klmt) { var p = k lb5 var jlmt = ((lghi - p)/lb3).truncate.abs + 1 for (j in 0...jlmt) { var q = p + j lb3 var ir = lghi - q var lg = q + ir.floor count = count + ir.truncate.abs + 1 if (lg >= lglo) bnd.add(Logrep.new(lg, ir.truncate.abs, j, k)) } } if (n > count) Fiber.abort("nthHamming: band high estimate is too low!") var ndx = (count - n).toSmall if (ndx >= bnd.count) Fiber.abort("nthHamming: band low estimate is too high!") bnd.sort { |a, b| b.lg < a.lg } var rslt = bnd[ndx] return [rslt.x2, rslt.x3, rslt.x5] } var convertTpl2BigInt = Fn.new { |tpl| var result = BigInt.one for (i in 0...tpl) result = result 2 for (i in 0...tpl) result = result 3 for (i in 0...tpl) result = result 5 return result } System.print("The first 20 Hamming numbers are:") for (i in 1..20) { System.write("%(convertTpl2BigInt.call(nthHamming.call(ULong.new(i)))) ") } System.print("\n\nThe 1,691st Hamming number is:") System.print(convertTpl2BigInt.call(nthHamming.call(ULong.new(1691)))) var start = System.clock var res = nthHamming.call(ULong.new(1e6)) var end = System.clock System.print("\nThe 1,000,000 Hamming number is:") System.print(convertTpl2BigInt.call(res)) var duration = ((end-start) 1000).round System.print("The last of these found in %(duration) milliseconds.") ``` Output: ``` The first 20 Hamming numbers are: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 The 1,691st Hamming number is: 2125764000 The 1,000,000 Hamming number is: 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 The last of these found in 16 milliseconds. ``` XPL0 ``` func Hamming(N); \Return 'true' if N is a Hamming number int N; [if N = 1 then return true; if rem(N/2) = 0 then return Hamming(N/2); if rem(N/3) = 0 then return Hamming(N/3); if rem(N/5) = 0 then return Hamming(N/5); return false; ]; int N, C; [N:= 1; C:= 0; loop [if Hamming(N) then [C:= C+1; IntOut(0, N); ChOut(0, ^ ); if C >= 20 then quit; ]; N:= N+1; ]; CrLf(0); N:= 1<<31; \ 8-) repeat N:= N-1 until Hamming(N); IntOut(0, N); ] ``` Output: 1 2 3 4 5 6 8 9 10 12 15 16 18 20 24 25 27 30 32 36 2125764000 Yabasic Translation of: Run BASIC ``` dim h(1000000) for i =1 to 20 print hamming(i)," "; next i print print "Hamming List First(1691) = ",hamming(1691) end sub hamming(limit) local x2, x3,x5,i,j,k,n h(0) =1 x2 = 2: x3 = 3: x5 =5 i = 0: j = 0: k =0 for n =1 to limit h(n) = min(x2, min(x3, x5)) if x2 = h(n) then i = i +1: x2 =2 h(i):end if if x3 = h(n) then j = j +1: x3 =3 h(j):end if if x5 = h(n) then k = k +1: x5 =5 h(k):end if next n return h(limit -1) end sub ``` zkl ``` var BN=Import("zklBigNum"); // only needed for large N fcn hamming(N){ h:=List.createLong(N+1); (0).pump(N+1,h.write,Void); // fill list with stuff h=1; if 1 // regular (64 bit) ints x2:=2; x3:=3; x5:=5; i:=j:=k:=0; else // big ints x2:=BN(2); x3:=BN(3); x5:=BN(5); i:=j:=k:=0; endif foreach n in ([1..N]){ z:=(x2<x3) and x2 or x3; z=(z<x5) and z or x5; h[n]=z; if (h[n] == x2) { x2 = h[i+=1]2 } if (h[n] == x3) { x3 = h[j+=1]3 } if (h[n] == x5) { x5 = h[k+=1]5 } } return(h[N-1]) } [1..20].apply(hamming).println(); hamming(1691).println(); ``` Output: ``` L(1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36) 2125764000 ``` While the other algorithms save [lots of] space, run time still sucks when n > 100,000 so memory usage might as well too. Changing the #if 0 to 1 will use Big Int and lots of space. Output: ``` hamming(0d1_000_000).println(); 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 ``` Direct calculation through triples enumeration OK, I was wrong, calculating the nth Hamming number can be fast and efficient. Translation of: Haskell as direct a translation as I can, except using a nested for loop instead of list comprehension (which makes it easier to keep the count). ``` -- directly find n-th Hamming number, in ~ O(n^{2/3}) time -- by Will Ness, based on "top band" idea by Louis Klauder, from DDJ discussion -- var BN=Import("zklBigNum"); var lg3 = (3.0).log()/(2.0).log(), lg5 = (5.0).log()/(2.0).log(); fcn logval(i,j,k){ lg5k + lg3j + i } fcn trival(i,j,k){ BN(2).pow(i) BN(3).pow(j) BN(5).pow(k) } fcn estval(n){ (6.0lg3lg5n).pow(1.0/3) } #-- estimated logval, base 2 fcn rngval(n){ if(n > 500000) return(2.4496 , 0.0076); #-- empirical estimation if(n > 50000) return(2.4424 , 0.0146); #-- correction, base 2 if(n > 500) return(2.3948 , 0.0723); #-- (dist,width) if(n > 1) return(2.2506 , 0.2887); #-- around (log $ sqrt 30), return(2.2506 , 0.5771); #-- says WP } fcn nthHam(n){ // -> (Double, (Int, Int, Int)) #-- n: 1-based: 1,2,3... d,w := rngval(n); #-- correction dist, width hi := estval(n.toFloat()) - d; #-- hi > logval > hi-w c,b := band(hi,w); #-- total count, the band s := b.sort(fcn(a,b){ a>b }); #-- sorted decreasing, result m := c - n; #-- m 0-based from top nb := b.len(); #-- |band| res := s[m]; #-- result if(w >= 1) throw(Exception.Generic("Breach of contract: (w < 1): " + w)); if(m < 0) throw(Exception.Generic("Not enough triples generated: " +c+n)); if(m >= nb)throw(Exception.Generic("Generated band is too narrow: " +m+nb)); return(res); } fcn band(hi,w){ //--> #-- total count, the band b := Sink(List); cnt := 0; foreach k in ([0 .. (hi/lg5).floor()]){ p := lg5k; foreach j in ([0 .. ((hi-p)/lg3).floor()]){ q := lg3j + p; i,frac := (hi-q).modf(); r := hi-frac; #-- r = i + q cnt+=(i+1); #-- total count if(frac<w) b.write(T(r,T(i,j,k))); #-- store it, if inside band } } return(cnt,b.close()); } ``` ``` fcn printHam(n){ r,t:=nthHam(n); i,j,k:=t; h:=trival(i,j,k); println("Hamming(%,d)-->2^%d 3^%d 5^%d-->\n%s".fmt(n,i,j,k,h)); } printHam(1691); //(5,12,3), 10 digits printHam(0d1_000_000); //(55,47,64), 84 digits printHam(0d10_000_000); //(80,92,162), 182 digits, 80 zeros at end printHam(0d1_000_000_000); //(1334,335,404), 845 digits ``` Output: ``` Hamming(1,691)-->2^5 3^12 5^3--> 2125764000 Hamming(1,000,000)-->2^55 3^47 5^64--> 519312780448388736089589843750000000000000000000000000000000000000000000000000000000 Hamming(10,000,000)-->2^80 3^92 5^162--> 162441050638304318232392153117595750351085388205966408633356724833252116013682098127901554107666015625 <80 zeros> Hamming(1,000,000,000)-->2^1334 3^335 5^404--> 621607575556524486163081633287207200394705651908965270659163240....... ``` ZX Spectrum Basic Translation of: BBC_BASIC 10 FOR h=1 TO 20: GO SUB 1000: NEXT h 20 LET h=1691: GO SUB 1000 30 STOP 1000 REM Hamming 1010 DIM a(h) 1030 LET a(1)=1: LET x2=2: LET x3=3: LET x5=5: LET i=1: LET j=1: LET k=1 1040 FOR n=2 TO h 1050 LET m=x2 1060 IF m>x3 THEN LET m=x3 1070 IF m>x5 THEN LET m=x5 1080 LET a(n)=m 1090 IF m=x2 THEN LET i=i+1: LET x2=2a(i) 1100 IF m=x3 THEN LET j=j+1: LET x3=3a(j) 1110 IF m=x5 THEN LET k=k+1: LET x5=5a(k) 1120 NEXT n 1130 PRINT "H(";h;")= ";a(h) 1140 RETURN Retrieved from " Categories: Programming Tasks Prime Numbers 11l 360 Assembly Ada ALGOL 68 ALGOL W Arturo ATS AutoHotkey AWK BASIC256 BBC BASIC Bc Bracmat Bruijn C C sharp C++ Chapel Clojure CoffeeScript Common Lisp Crystal D Dart DCL Delphi EasyLang Eiffel Elixir Elm Erlang ERRE F Sharp Factor Forth Fortran FreeBASIC FutureBasic FunL Fōrmulæ Go Groovy Haskell Icon Unicon J Java JavaScript Jq Julia Kotlin Lambdatalk Liberty BASIC Logo Lua M2000 Interpreter Mathematica Wolfram Language MATLAB Octave Mojo MUMPS Nim Bigints OCaml Oz PARI/GP Pascal PascalABC.NET Perl Phix Phix/mpfr Picat PicoLisp PL/I Prolog PureBasic Python QBasic Qi Qi examples needing attention Examples needing attention Quackery R Racket Raku Raven REXX Ring RPL Ruby Run BASIC Rust Num Scala Scheme Seed7 Sidef Smalltalk SQL Tcl UBasic/4tH UNIX Shell Ursala VBA VBScript V (Vlang) Wren Wren-big Wren-dynamic Wren-long Wren-math XPL0 Yabasic Zkl ZX Spectrum Basic Pages with too many expensive parser function calls Hidden category: Pages with syntax highlighting errors Cookies help us deliver our services. By using our services, you agree to our use of cookies.
14957
https://www.algebra-class.com/fundamental-counting-principle.html
Algebra Class Making Algebra easier for you! Home Pre-Algebra Topics Algebra Topics Shop Helpful Resources About Contact Algebra Calculator Algebra Cheat Sheet Algebra Practice Test Algebra Readiness Test Algebra Formulas Want to Build Your Own Website? Login Algebra E-Course Login In Sign In / Register Using the Fundamental Counting Principle to Determine the Sample Space As we dive deeper into more complex probability problems, you may start wondering, "How can I figure out the total number of outcomes, also known as the sample space?" We will use a formula known as the fundamental counting principle to easily determine the total outcomes for a given problem. First we are going to take a look at how the fundamental counting principle was derived, by drawing a tree diagram. Example 1 - Tree Diagram A new restaurant has opened and they offer lunch combos for $5.00. With the combo meal you get 1 sandwich, 1 side and 1 drink. The choices are below. Sandwiches: Chicken Salad, Turkey, Grilled Cheese Sides: Chips, French Fries, Fruit Cup Drinks: Soda, Water Draw a tree diagram to find the total number of possible outcomes. We were able to determine the total number of possible outcomes (18) by drawing a tree diagram. However, this technique can be very time consuming. The fundamental counting principle will allow us to take the same information and find the total outcomes using a simple calculation. Take a look. Example 1 -Using the Fundamental Counting Principle Fundamental Counting Principle If you have a ways of doing event 1, b ways of doing event 2, and cways of event 3, then you can find the total number of outcomes by multiplying: a x b x c This principle is difficult to explain in words. To find the total number of outcomes for the scenario, multiply the total outcomes for each individual event. For Example 1: 3 choices of sandwiches • 3 choices of sides • 2 choices of drinks 3 • 3 • 2 = 18 total outcomes As you can see, this is a much faster and more efficient way of determining the total outcomes for a situation. Let's take a look at another example. Example 2 The Bagel Factory offers 12 different kinds of bagels and 4 types of cream cheese. How many possible combinations of bagels and cream cheese are there? Solution: 12 kinds of bagels • 4 types of cream cheese = total outcomes 12 • 4 = 48 There are 48 different combinations of bagels and cream cheese. I would not want to draw a tree diagram for Example 2! However, we were able to determine the total outcomes by using the fundamental counting principle. Let's look at one more example and see how probability comes into play. Example 3 A pair of dice is rolled once. How many possible outcomes are there? What is the probability of rolling doubles? Solution: Use the fundamental counting principle to find the total outcomes: 6 sides on die 1 • 6 sides on die 2 = total outcomes 6 • 6 = 36 There are 36 total outcomes. Finding the probability of rolling doubles: There are 6 sets of doubles (1,1: 2,2: 3,3: 4,4: 5,5: 6,6) 6= 1 6 chances of rolling doubles 36 6 36 total outcomes The probability of rolling doubles is 1/6, or .167 or 16.7% chance. Although you may think that drawing the tree diagrams is fun, it's much easier to use the formula, isn't it? I hope you had fun - now it's time to move on to probability of independent events. Home Probability Fundamental Counting Principle Comments We would love to hear what you have to say about this page! Need More Help With Your Algebra Studies? Get access to hundreds of video examples and practice problems with your subscription! Click here for more information on our affordable subscription options. Not ready to subscribe? Register for our FREE Pre-Algebra Refresher course. ALGEBRA CLASS E-COURSE MEMBERS Click here for more information on our Algebra Class e-courses. Need Help? Try This Online Calculator! Affiliate Products... On this site, I recommend only one product that I use and love and that is Mathway If you make a purchase on this site, I may receive a small commission at no cost to you. About Me Affiliates Privacy Policy Disclaimer Let Us Know How we are doing! send us a message to give us more detail! | | | --- | | Share this page: What’s this? | FacebookX | Enjoy this page? Please pay it forward. Here's how... Would you prefer to share this page with others by linking to it? Click on the HTML link code below. Copy and paste it, adding a note of your own, into your blog, a Web page, forums, a blog comment, your Facebook account, or anywhere that someone would find this page valuable. Copyright © 2009-2020 | Karin Hutchinson | ALL RIGHTS RESERVED. Cookies help us deliver our services. By using our services, you agree to our use of cookies. Learn more.
14958
https://www.quora.com/For-negative-numbers-when-do-I-use-a-minus-sign-and-when-do-I-use-parentheses
For negative numbers, when do I use a minus sign and when do I use parentheses? - Quora Something went wrong. Wait a moment and try again. Try again Skip to content Skip to search Sign In Mathematics Negative Numbers Notation Integer Systems Mathematical Symbols Basic Algebra Arithmetic Real Numbers Algebra 5 For negative numbers, when do I use a minus sign and when do I use parentheses? All related (34) Sort Recommended Jason Schwartz Director of Growth Product at Spotify (product) · Author has 57 answers and 179.9K answer views ·14y In general parentheses is used when doing accounting or dealing with money. A minus sign is used when displaying a calculation or metric where the result is a non-monetary integer. Upvote · Promoted by Coverage.com Johnny M Master's Degree from Harvard University (Graduated 2011) ·Updated Sep 9 Does switching car insurance really save you money, or is that just marketing hype? This is one of those things that I didn’t expect to be worthwhile, but it was. You actually can save a solid chunk of money—if you use the right tool like this one. I ended up saving over $1,500/year, but I also insure four cars. I tested several comparison tools and while some of them ended up spamming me with junk, there were a couple like Coverage.com and these alternatives that I now recommend to my friend. Most insurance companies quietly raise your rate year after year. Nothing major, just enough that you don’t notice. They’re banking on you not shopping around—and to be honest, I didn’t. Continue Reading This is one of those things that I didn’t expect to be worthwhile, but it was. You actually can save a solid chunk of money—if you use the right tool like this one. I ended up saving over $1,500/year, but I also insure four cars. I tested several comparison tools and while some of them ended up spamming me with junk, there were a couple like Coverage.com and these alternatives that I now recommend to my friend. Most insurance companies quietly raise your rate year after year. Nothing major, just enough that you don’t notice. They’re banking on you not shopping around—and to be honest, I didn’t. It always sounded like a hassle. Dozens of tabs, endless forms, phone calls I didn’t want to take. But recently I decided to check so I used this quote tool, which compares everything in one place. It took maybe 2 minutes, tops. I just answered a few questions and it pulled up offers from multiple big-name providers, side by side. Prices, coverage details, even customer reviews—all laid out in a way that made the choice pretty obvious. They claimed I could save over $1,000 per year. I ended up exceeding that number and I cut my monthly premium by over $100. That’s over $1200 a year. For the exact same coverage. No phone tag. No junk emails. Just a better deal in less time than it takes to make coffee. Here’s the link to two comparison sites - the one I used and an alternative that I also tested. If it’s been a while since you’ve checked your rate, do it. You might be surprised at how much you’re overpaying. Upvote · 999 485 999 103 99 17 Related questions More answers below Do parentheses around a number mean negative? How do you write negative numbers with a positive sign and parentheses? Why do we change the sign of all numbers under a parentheses when there is a negative sign outside the parentheses? How do you add two negative numbers? How are parentheses and brackets used in math? Garrick Saito former Corp. Controller, large public company · Author has 33.7K answers and 216M answer views ·14y They mean exactly the same thing. It is a matter of personal preference. Generally, parenthesis are used more often and easier on the eyes (in my opinion). Upvote · 9 2 Subramanyam Pidaparthy 25 years of expertise. · Author has 2.5K answers and 4.5M answer views ·14y It depends on the preferences. To give a "Look" use parentheses. particularly when you are sending the report to higher ups.. There is no rule... but the "minus) some times is not visible so these parentheses is more comfortable. Upvote · David Joyce Ph.D. in Mathematics, University of Pennsylvania (Graduated 1979) · Author has 9.9K answers and 68.4M answer views ·10y Related How and when did people start using the minus sign to label negative numbers? In the 1480s the – sign was used to mean debit and + for credit. They were also used for subtraction and addition in the late 1400s. Besides using words to describe negative numbers, various symbols have been used to denote negative numbers. Brahmagupta (ca. 598–660)used a dot above the number. A + after a number was used in the Bakhshali manuscript. Chuquet (1445–1488) wrote 0−1069 0−1069 to mean −1069−1069. Newton used −− as we do. Upvote · 99 21 Related questions More answers below What is the purpose of parentheses around negative numbers? How do you add negative numbers in parentheses? When would you use parentheses to add or subtract negative numbers? Why? How do you add two negative numbers? When and who started the use of negative numbers? Assistant Bot · 1y When working with negative numbers, the choice between using a minus sign and parentheses can depend on context and clarity. Here are some guidelines: Minus Sign (-): Simple Negative Values: Use a minus sign for individual negative numbers, e.g., -5. Subtraction: Use a minus sign when indicating subtraction, e.g., 10 - 3 = 7. Parentheses: Clarity in Expressions: Use parentheses to clarify expressions, especially in complex calculations. For example, in the expression 5 + (-3), the parentheses make it clear that you are adding a negative number. Avoiding Confusion: In cases where a negative num Continue Reading When working with negative numbers, the choice between using a minus sign and parentheses can depend on context and clarity. Here are some guidelines: Minus Sign (-): Simple Negative Values: Use a minus sign for individual negative numbers, e.g., -5. Subtraction: Use a minus sign when indicating subtraction, e.g., 10 - 3 = 7. Parentheses: Clarity in Expressions: Use parentheses to clarify expressions, especially in complex calculations. For example, in the expression 5 + (-3), the parentheses make it clear that you are adding a negative number. Avoiding Confusion: In cases where a negative number may be confused with a subtraction operation, parentheses can help. For example, instead of writing -a - b, you could use (-a) - b to clarify that a is negative. Grouping: In algebraic expressions, parentheses are used to group numbers and variables, ensuring the correct order of operations. For example, in -(x + y), parentheses indicate that the sum of x and y is negative. Conclusion Use a minus sign when indicating a negative value or performing subtraction. Use parentheses for clarity, especially in complex expressions or when grouping terms. Upvote · Alan Bustany Trinity Wrangler, 1977 IMO · Upvoted by Nathan Hannon , Ph. D. Mathematics, University of California, Davis (2021) · Author has 9.8K answers and 58.5M answer views ·1y Related Can a negative number be written without a negative sign? Of course! The classic example is Balanced Ternary (base three) which has three digits (⊤,0,⊥⊤,0,⊥) representing the values one, zero, and minus one. Here is a count-down from five to minus three: ⊤⊥⊥=3 2−3−1=9−3−1=5⊤⊥⊥=3 2−3−1=9−3−1=5 ⊤⊤=3+1=4⊤⊤=3+1=4 ⊤0⊤0 ⊤⊥⊤⊥ ⊤⊤ 0 0 ⊥⊥ ⊥⊤⊥⊤ ⊥0⊥0 No negative (or positive) signs required! Footnotes Balanced ternary - Wikipedia Your response is private Was this worth your time? This helps us sort answers on the page. Absolutely not Definitely yes Upvote · 99 17 9 6 Sponsored by Amazon Business Buy more, save more. Save time and unlock cost savings with Smart Business Buying. Sign up for a free account today. Sign Up 99 70 Dinos Constantinou Former Telecommunications Traffic Officer at BT Group (1980–1988) · Author has 123 answers and 680K answer views ·1y Related What is the purpose of a minus sign in mathematics? How does a negative number result from using a minus sign? What is a mortgage without a minus sign? To the lendee, it's money owed and to the lender, it is an asset. What is work without a minus sign? To the employee, it is money owed for their services; to the employer, it is money they owe you for the work you've done for them. So it is with mathematics. The minus sign represents a negative number or an operation of taking away a positive amount: a shopkeeper takes your money for goods sold. Upvote · 9 1 Larry Carlson Owner at Carlson Engineering Inc. ·4y Related Do parentheses around a number mean negative? Not in normal mathematical calculations. However in some systems of accounting parenthesis does signify a negative entry. This was originally done when people used a single column non-colored entry in ledgers and early computer accounting programs. By using parenthesis it was easy to find negative entries which were not frequent. Other accounting systems used either two column entries or colored entries with the negative entries in color, usually red. This is why we have sayings like “he is in the red" or “he's back in the black “. In your computer there's probably a setting to change your disp Continue Reading Not in normal mathematical calculations. However in some systems of accounting parenthesis does signify a negative entry. This was originally done when people used a single column non-colored entry in ledgers and early computer accounting programs. By using parenthesis it was easy to find negative entries which were not frequent. Other accounting systems used either two column entries or colored entries with the negative entries in color, usually red. This is why we have sayings like “he is in the red" or “he's back in the black “. In your computer there's probably a setting to change your display and output to any of these formats. I know for certain that excell and Quattro pro have this option Upvote · Sponsored by Grubhub For Merchants Ready to reach Amazon Prime customers? Reach more customers through our Amazon partnership. Prime members receive free Grubhub+. Learn More 9 5 Philip Lloyd Specialist Calculus Teacher, Motivator and Baroque Trumpet Soloist. · Author has 6.8K answers and 52.8M answer views ·5y Related Why do you add when subtracting negative numbers? I would like to give an explanation in simple, easy to understand terms with no mention of specifically “mathematical jargon”. Firstly when we write +3 we mean move 3 units to the right from 0… Continue Reading I would like to give an explanation in simple, easy to understand terms with no mention of specifically “mathematical jargon”. Firstly when we write +3 we mean move 3 units to the right from 0… Upvote · 9 1 Robert Lyon Former Retired, Founder at Legato Systems (company) (1988–2006) · Author has 88 answers and 12.3K answer views ·3y Related Can negative numbers be represented without using a minus sign? Yes. In most digital computers, integers are implemented in two’s complement. The binary encoding ensues that a number and its negative sum to zero. So, the four bit encoding of two is 0010 and its negative is 1110. Upvote · 9 2 Sponsored by MEDvi Prescription Weight Loss Why does Tirzepatide shed fat 2x as fast as Ozempic? Ozempic is made for diabetes. Tirzepatide is specifically for weight loss. It starts at $179/month. Start Now 999 111 Val Sulit Author has 905 answers and 80.4K answer views ·3y Related Can negative numbers be represented without using a minus sign? In ancient times, the Chinese use red counting rods for positive numbers and black counting rods for negative numbers. Chinese counting rods were the predecessors of Chinese abacuses. Upvote · 9 2 Kip Fisher MS in Operations Research (OR), Stanford University (Graduated 1979) · Author has 683 answers and 488.1K answer views ·5y Related When did negative numbers start being used? Thank you for the A2A. I am hardly an expert in the field, but I know enough to give you a general idea of where to look for definitive answers. Negative numbers did not exist in the ancient world. The ancients recognized two distinct types of positive quantities: number and magnitude. Numbers were used to count things. Things like sheep. Nobody felt a need for a number that indicated a complete absence of sheep, so there was no zero. The Pythagoreans (500’s BCE) thought of number as “a multitude of units.” The smallest multitude is 2, so one was not thought of as a number. Just a “unit.” Magnitud Continue Reading Thank you for the A2A. I am hardly an expert in the field, but I know enough to give you a general idea of where to look for definitive answers. Negative numbers did not exist in the ancient world. The ancients recognized two distinct types of positive quantities: number and magnitude. Numbers were used to count things. Things like sheep. Nobody felt a need for a number that indicated a complete absence of sheep, so there was no zero. The Pythagoreans (500’s BCE) thought of number as “a multitude of units.” The smallest multitude is 2, so one was not thought of as a number. Just a “unit.” Magnitudes were use to measure things. Things like lengths or distances. Again, nobody felt a need for a measure that indicated no length or distance. But there was no smallest measure. Just smaller and smaller fractions. Aristotle, writing a century or so after the Pythagoreans, described magnitude as infinitely divisible quantities (which is a fancy way of saying “smaller and smaller fractions.” Most of the theorems in Euclid’s Elements are proven twice: once for number and the other for magnitude. The Elements was written about a century after Aristotle. Certainly, nobody saw any need to speak or think about negative sheep nor negative lengths. The concept made no sense to the ancients. By the third century CE, western mathematicians were aware of simple equations that had no positive solution. The word they used to characterize such equations translates into modern English as “ridiculous” or “absurd.” The first evidence of attempts to develop a concept of negative numbers is some counting rods that were used in China in the last century BCE. Red rods represented positive quantities and black rods represented negative quantities. By the seventh century CE, negative numbers were being used in India to represent debts, while positive numbers were used to represent assets. It should not surprise us that the arena where there was greatest practical need to develop a concept of negative numbers was money. Both borrowers and lenders needed to keep track of who owed how much to whom. Mathematicians all over the world, including the Mediterranean and Europe, kept finding more and more need to have a concept of negative numbers, but acceptance was slow. Famous European mathematicians were arguing about the nature and meaning of negative numbers well into the 19th century. But the development that forced full acceptance of negative numbers was the invention of Calculus in the late 17th century. Calculus makes no sense without full acceptance of negative numbers. It just took a couple of centuries for everybody to figure that out and accept it. Full acceptance of negative numbers turns out to be not much older than we are. Upvote · 9 2 Sarah Madden Earned a Master's Degree Decades After College · Author has 8.4K answers and 51.9M answer views ·Updated 5y Related “Parenthesis” versus “parentheses,” which do you use and where? To Nick Gallimore, The words “thesis” and “parenthesis” both form their plurals by changing “-is” to “-es” (one thesis, two theses; one parenthesis, two parentheses). Perhaps their Latin roots explain the curious way they are pluralized. NOTE: You can distinguish the singular and plural by how they are pronounced in addition to how they are spelled (the “TH” is like “Thank” not “the,” and the ending syllables for the plurals are “seas” not “sis”): SINGULAR: Thes is = THEE-sis PLURAL: Thes es = THEE-seas SINGULAR: Parenthes is = Par-EN-the-sis PLURAL: Parenthes es = Par-EN-the-seas If you are talking ab Continue Reading To Nick Gallimore, The words “thesis” and “parenthesis” both form their plurals by changing “-is” to “-es” (one thesis, two theses; one parenthesis, two parentheses). Perhaps their Latin roots explain the curious way they are pluralized. NOTE: You can distinguish the singular and plural by how they are pronounced in addition to how they are spelled (the “TH” is like “Thank” not “the,” and the ending syllables for the plurals are “seas” not “sis”): SINGULAR: Thes is = THEE-sis PLURAL: Thes es = THEE-seas SINGULAR: Parenthes is = Par-EN-the-sis PLURAL: Parenthes es = Par-EN-the-seas If you are talking about putting a set of parentheses around some text (e.g., this text), you could say, “Remember to put parentheses around the unessential material” or “We like to enclose additional information in parentheses.” If you are discussing only the opening (left side) or closing (right side) parenthesis, you would say, “I forgot to insert the closing parenthesis in that sentence, but the book already went to press.” EDIT: In case you missed the excellent comment by Geof Garvey, a linguistics pro and academic editor, here it is: “One occasional source of confusion is that parenthesis can refer either to one of the marks or to the matter that the parentheses enclose. Occasionally in the most erudite writing you may encounter that second meaning. That sort of parenthesis may also occur between dashes or commas.” —Sarah M. 5/15/2018 — I love to write theses ORIGINAL QUESTION: Parenthesis versus parentheses, which do you use and where? Upvote · 99 20 9 7 9 1 Related questions Do parentheses around a number mean negative? Why do we change the sign of all numbers under a parentheses when there is a negative sign outside the parentheses? How do you write negative numbers with a positive sign and parentheses? Why do negative numbers have a negative sign? Can negative numbers be represented without using a minus sign? What is the purpose of parentheses around negative numbers? How do you add negative numbers in parentheses? When would you use parentheses to add or subtract negative numbers? Why? How do you add two negative numbers? When and who started the use of negative numbers? When did negative numbers start being used? Does putting a minus sign in front of every real number make all those numbers negative? What happens when you add two negative numbers? What is the difference between negative and minus? Why are negative real numbers called “minus” instead of “negative”? Related questions Do parentheses around a number mean negative? How do you write negative numbers with a positive sign and parentheses? Why do we change the sign of all numbers under a parentheses when there is a negative sign outside the parentheses? How do you add two negative numbers? How are parentheses and brackets used in math? Can negative numbers be represented without using a minus sign? Advertisement About · Careers · Privacy · Terms · Contact · Languages · Your Ad Choices · Press · © Quora, Inc. 2025
14959
https://www.sciencedirect.com/science/article/abs/pii/S1477893910000980
Swimming with death: Naegleria fowleri infections in recreational waters - ScienceDirect Skip to main contentSkip to article Journals & Books Access throughyour organization Purchase PDF Patient Access Other access options Search ScienceDirect Article preview Abstract Introduction Section snippets References (39) Cited by (68) Travel Medicine and Infectious Disease Volume 8, Issue 4, July 2010, Pages 201-206 Swimming with death: Naegleria fowleri infections in recreational waters Author links open overlay panel Travis W.Heggie a b Show more Add to Mendeley Share Cite rights and content Summary Naegleria fowleri is a free-living amoeba commonly found in warm freshwater environments such as hot springs, lakes, natural mineral water, and resort spas frequented by tourists. N. fowleri is the etiologic agent of primary amoebic meningoencephalitis (PAM), an acute fatal disease of the central nervous system that results in death in approximately seven days. Previously thought to be a rare condition, the number of reported PAM cases is increasing each year. PAM is difficult to diagnose because the clinical signs of the disease are similar to bacterial meningitis. Thus, the key to diagnosis is physician awareness and clinical suspicion. With the intent of creating awareness among travel medicine practitioners and the tourism industry, this review focuses on the presenting features of N. fowleri and PAM and offers insight into the prevention and treatment of the disease. Introduction The relationship between good health and travel has a long history. The use of mineral spas, pools, community baths, and hot springs has been popular since Roman times and eventually provided an important conceptual base leading to the development of pleasure resorts in Europe over two centuries ago.1, 2 These destinations quickly became popular among upper, middle, and working class people as a way to escape populated urban areas and industrial centers.1 There was also a strong belief in the curative powers of these waters that has carried over to modern times. For example, mineral water treatment is still used to treat arthritis, fibrositis, neuritis, sciatica, and a range of sport injuries.2 In 2007 the global media began reporting on a series of deaths that threatened to change the healthy image of hot springs, spas, and other bodies of warm freshwater.3 The reports were tied to the presence of Naegleria fowleri, an opportunistic free-living pathogenic amoeba protist with a human fatality rate of almost 100%. Known to exist globally in warm bodies of water and naturally and artificially heated aquatic environments, N.fowleri is the etiologic agent of primary amoebic meningoencephalitis (PAM).4, 5, 6, 7 PAM is an acute, fulminant, necrotizing and hemorrhagic meningoencephalitis that leads to death in approximately seven days.8 The early diagnosis of PAM is crucial to survival but making such a diagnosis is difficult because the physical signs of PAM are similar to bacterial meningitis. There is also little time between onset to death to mount an antibody response. Hence, the key to diagnosis rests on clinical suspicion and awareness of N. fowleri as the etiologic agent of PAM. The purpose of this study is to review the existing literature on N. fowleri with the aim of increasing the awareness and suspicion of N. fowleri and PAM among physicians, the tourism industry, and practitioners of travel medicine. This is important if the rapid diagnosis of future incidents is to be made and the effectiveness of treatment improved. Section snippets Origin and epidemiology N. fowleri was first identified as a human pathogen in 1965 when it was described by Fowler and Carter in Australia.9 One year later in 1966, three more fatal cases were reported in Florida.10 In each of the Australian and Florida cases, N. fowleri was acquired while swimming.11 Since that time N. fowleri has been found in warm, fresh or brackish water including swimming pools, ponds, lakes, streams, hot springs, thermally polluted water, and sewage. Warm water does not have to be contaminated Mechanisms of pathogenisis and clinical diagnosis N. fowleri is not considered an opportunistic pathogen because it typically presents in healthy individuals. In fact, PAM often occurs in healthy immunologically intact children and young adults who were exposed during recreational activity in warm bodies of freshwater.4, 21, 22 The organism enters the human host via the nasal route when it is splashed or inhaled into the nose. Forcing water into the nose by diving or jumping into water is common but N. fowleri can become motile even if the Laboratory diagnosis The treating physician should request a wet mount examination of the patient’s unrefrigerated cerebrospinal fluid (CSF) preferably utilizing a microscope with phase-contrast optics that will be able to detect the presence of trophozoites. As previously noted, the ameboid trophozoites will range in approximate size from 7 to 20 μm with a large centrally placed nucleolus. Any movement of the trophozoites will most likely be directional and rapid using eruptive pseudopodia.8 If N. fowleri is Pathophysiology of N. fowleri infections In cases involving N. fowleri infections, the left and right cerebral hemispheres of the brain tend to be soft, noticeably swollen with an exceptional accumulation of fluid.8 The leptomeninges are congested and hyperemic with limited purulent exudates within the sulci, the base of the brain, brainstem, and cerebellum.8 The olfactory bulbs are distinguished by hemorrhagic necrosis and purulent exudates.4, 8, 27, 30 In addition, the cerebral cortex typically displays multiple superficial Treatment Primary amoebic meningoencephalitis (PAM) is a severe, progressive disease with a rapid onset and a high associated mortality.33 Because of the rapid onset and high mortality, there are only a handful of known survivors. One of the best-documented survival cases involved a nine year old female infected while swimming in a California hot spring. In this case the patient was successfully treated with intravenous and intrathecal amphotericin B, intravenous and intrathecal miconazole, and oral Tourist exposure and prevention N. fowleri is widespread in the natural environment and the rate of infection after exposure is unknown. There are even reported cases from arid regions where N. fowleri has been inhaled from dust (in cyst form).23 However, by far and large, the vast majority of infections result from activity in warm freshwater environments. An understanding of N. fowleri is important for specialists in travel medicine due to the potential high risk of exposure segments of the tourism industry have to N. Conflict of interest None declared. Special issue articles Recommended articles References (39) J. Towner et al. History and tourism Ann Tourism Res (1991) I. Cervantes-Sandoval et al. Characterization of brain inflammation during primary amoebic meningoencephalitis Parasitol Intl (2008) F.L. Schuster et al. Free-living amoeba as opportunistic and non-opportunistic Pathogens of humans and animals Intl J Parasitol (2004) S. Gupta Isolation of Naegleria fowleri from pond water in West Bengal, India Trans R Soc Trop Med Hyg (1992) E. Van den Driessche et al. Primary amoebic meningoencephalitis after swimming in stream water Lancet (1973) A.R. Cain et al. IgA and primary amoebic meningoencephalitis Lancet (1979) N.D.P. Barnett et al. Primary amoebic meningoencephalitis with Naegleria fowleri: clinical review Pediatr Neurol (1996) K. Aldape et al. Naegleria fowleri: characterization of secreted histolytic cysteine protease Exp Parasitol (1994) W. Hannisch et al. Primary amebic meningoencephalitis: a review of the clinical literature Wilderness Enviro Med (1997) J. Vargas-Zepeda et al. Successful treatment of Naegleria fowleri meningoencephalitis by using intravenous amphotericin B, fluconazole and rifampicin Arch Med Res (2005) Y. Sukthana et al. Spa, springs and safety Southeast Asian J Trop Med Public Health (2005) Arizona daily star. Brain-eating amoeba kills Arizona boy M. Lebbadi et al. Cocultivation of the amoeba Naegleria fowleri and the Amoebicin-producing strain Bacillus licheniformis M-4 Appl Environ Microbiol (1995) G.S. Visvesvara et al. Pathogenic and opportunistic free-living amoebae: acanthamoeba spp., Balamuthia mandrillaris, Naegleria fowleri, and Sappinia diploidea FEMS Immunol Med Microbiol (2007) M. Fowler et al. Acute pyogenic meningitis probably due to Acanthamoeba sp.: a preliminary report BMJ (1965) C.G. Butt Primary amebic meningoencephalitis N Eng J Med (1966) D.T. John Primary amebic meningoencephalitis and the biology of Naegleria Fowleri Ann Rev Microbiol (1982) A. Lekkla et al. Free-living ameba contamination in natural hot spring in Thailand Southeast Asian J Trop Med Public Health (2005) S. Izumiyama et al. Occurrence and distribution of Naegleria species in thermal waters in Japan J Eukaryot Microbiol (2003) View more references Cited by (68) The therapeutic strategies against Naegleria fowleri 2018, Experimental Parasitology Citation Excerpt : N. fowleri is the etiological agent of Primary Amoebic Meningoencephalitis (PAM), a devastating infection that targets the Central Nervous System (CNS) with high lethality rates (Grace et al., 2015). N. fowleri is a thermophilic amoeboflagellate that have been isolated as a cyst resistant form, a trophozoite proliferative and feeding form and a motile flagellate form (reviewed by Grace et al., 2015; Baig et al., 2014; Heggie, 2010). All these stages have the ability to establish infection (Martinez and Visvesvara, 1997; Schuster and Visvesvara, 2004). Show abstract Naegleria fowleri is a pathogenic amoeboflagellate most prominently known for its role as the etiological agent of the Primary Amoebic Meningoencephalitis (PAM), a disease that afflicts the central nervous system and is fatal in more than 95% of the reported cases. Although being fatal and with potential risks for an increase in the occurrence of the pathogen in populated areas, the organism receives little public health attention. A great underestimation in the number of PAM cases reported is assumed, taking into account the difficulty in obtaining an accurate diagnosis. In this review, we summarize different techniques and methods used in the identification of the protozoan in clinical and environmental samples. Since it remains unclear whether the protozoan infection can be successfully treated with the currently available drugs, we proceed to discuss the current PAM therapeutic strategies and its effectiveness. Finally, novel compounds for potential treatments are discussed as well as research on vaccine development against PAM. ### Naegleria fowleri: Sources of infection, pathophysiology, diagnosis, and management; a review 2020, Clinical and Experimental Pharmacology and Physiology ### Naegleria fowleri after 50 years: Is it a neglected pathogen? 2016, Journal of Medical Microbiology ### Passage of parasites across the blood-brain barrier 2012, Virulence ### The risk of contracting infectious diseases in public swimming pools. A review 2012, Annali Dell Istituto Superiore Di Sanita ### Tourist behaviour and the contemporary world 2011, Tourist Behaviour and the Contemporary World View all citing articles on Scopus View full text Copyright © 2010 Elsevier Ltd. All rights reserved. Substances (1) Generated by ​, an expert-curated chemistry database. Part of special issue Including a Special Issue on Tick-borne Encephalitis Edited by Jane Zuckerman Other articles from this issue Post-traumatic camel-related benign paroxysmal positional vertigo July 2010 Giovanni Ralli, …, Nola Giuseppe ### Linking yellow fever vaccinator approval and renewal with training in travel medicine in New Zealand July 2010 Brigid O’Brien, Peter A.Leggat ### Tick-borne encephalitis virus and the immune response of the mammalian host July 2010 Bastian Dörrbecker, …, Frank T.Hufert View more articles Recommended articles Dynein-based motility of pathogenic protozoa Dyneins: Structure, Biology and Disease, 2018, pp. 418-435 Simon Imhof, Kent L.Hill ### Encéphalite granulomateuse amibienne: à propos d’un cas Pratique Neurologique - FMC, Volume 13, Issue 2, 2022, pp. 124-129 B.Abdouni, …, J.-M.Turmel ### Detection of the free living amoeba Naegleria fowleri by using conventional and real-time PCR based on a single copy DNA sequence Experimental Parasitology, Volume 161, 2016, pp. 35-39 Estelle Régoudis, Michel Pélandakis ### The type 2 statins, cerivastatin, rosuvastatin and pitavastatin eliminate Naegleria fowleri at low concentrations and by induction of programmed cell death (PCD) Bioorganic Chemistry, Volume 110, 2021, Article 104784 Aitor Rizo-Liendo, …, Jacob Lorenzo-Morales ### Application of untargeted metabolomics for the detection of pathogenic Naegleria fowleri in an operational drinking water distribution system Water Research, Volume 145, 2018, pp. 678-686 Zhihao Yu, …, Brian H.Clowers ### Protozoan Waterborne Infections in the Context of Actual Climatic Changes and Extreme Weather Events Encyclopedia of Environmental Health, 2019, pp. 391-399 Maria Cristina Angelici, Panagiotis Karanis Show 3 more articles Article Metrics Citations Citation Indexes 67 Patent Family Citations 1 Policy Citations 4 Captures Mendeley Readers 135 Mentions News Mentions 2 References 1 Social Media Shares, Likes & Comments 32 View details About ScienceDirect Remote access Advertise Contact and support Terms and conditions Privacy policy Cookies are used by this site. Cookie Settings All content on this site: Copyright © 2025 Elsevier B.V., its licensors, and contributors. All rights are reserved, including those for text and data mining, AI training, and similar technologies. For all open access content, the relevant licensing terms apply. Cookie Preference Center We use cookies which are necessary to make our site work. We may also use additional cookies to analyse, improve and personalise our content and your digital experience. For more information, see our Cookie Policy and the list of Google Ad-Tech Vendors. You may choose not to allow some types of cookies. However, blocking some types may impact your experience of our site and the services we are able to offer. See the different category headings below to find out more or change your settings. Allow all Manage Consent Preferences Strictly Necessary Cookies Always active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. Cookie Details List‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. Cookie Details List‎ Targeting Cookies [x] Targeting Cookies These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant adverts on other sites. If you do not allow these cookies, you will experience less targeted advertising. Cookie Details List‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Confirm my choices Your Privacy [dialog closed] × From question to evidence in seconds ScienceDirect AI enhances your research, producing cited responses from full-text research literature. Try for free
14960
https://www.k5learning.com/free-math-worksheets/fourth-grade-4/fractions/adding-fractions-like-denominators
Reading & Math for K-5 Sign UpLog In Math Math by Grade Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grade 6 Numbers Learning numbers Counting Comparing numbers Place Value Rounding Roman numerals Fractions & Decimals Fractions Decimals 4 Operations Addition Subtraction Multiplication Division Order of operations Flashcards Drills & practice Measurement Measurement Money Time Advanced Factoring & prime factors Exponents Proportions Percents Integers Algebra More Shape & geometry Data & graphing Word problems Reading Reading by Grade Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Stories Children's stories Leveled stories Fables Early Reading Phonics Sight words Sentences & passages Comprehension Exercises Context clues Cause & effect Compare & contrast Fact vs. fiction Fact vs. opinion Story Structure Exercises Main idea & details Sequencing Story elements Prediction Conclusions & inferences Kindergarten Early Reading Letters Sounds & phonics Words & vocabulary Reading comprehension Early writing Early Math Shapes Numbers & counting Simple math Early Science & More Science Colors Social skills Other activities Vocabulary Vocabulary by Grade Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Flashcards Dolch sight words Fry sight words Phonics Multiple meaning words Prefixes & suffixes Vocabulary cards Spelling Spelling by Grade Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grammar & Writing By Grade Kindergarten Grade 1 Grade 2 Grade 3 Grade 4 Grade 5 Grammar Nouns Verbs Adjectives Adverbs Pronouns Other parts of speech Writing Sentences Punctuation Capitalization Narrative writing Opinion writing Informative writing Science Science by Grade Kindergarten Grade 1 Grade 2 Grade 3 Cursive Cursive Writing Worksheets Cursive alphabet Cursive letters Cursive letter joins Cursive words Cursive sentences Cursive passages | Bookstore Math Reading Kindergarten Vocabulary Spelling Grammar & Writing More Science Cursive Bookstore Math Reading Kindergarten More Vocabulary Spelling Grammar & Writing Science Cursive Bookstore Breadcrumbs Worksheets Math Grade 4 Fractions Adding fractions (like denominators) Buy Workbook Download & PrintOnly $7.50 Adding fractions with like denominators Adding like fractions worksheets "Like fractions" are fractions with the same denominator. In these fractions worksheets, students add like fractions together. Results may be improper fractions (greater than 1). Worksheet #1 Worksheet #2 Worksheet #3 Worksheet #4 Worksheet #5 Worksheet #6 5 More Become a Member These worksheets are available to members only. Join K5 to save time, skip ads and access more content. Learn More Join Now Similar: Adding mixed numbers (like denominators) Adding a fraction and a mixed number (like denominators) More fractions worksheets Explore all of our fractions worksheets, from dividing shapes into "equal parts" to multiplying and dividing improper fractions and mixed numbers. What is K5? K5 Learning offers free worksheets, flashcards and inexpensive workbooks for kids in kindergarten to grade 5. Become a member to access additional content and skip ads. Help us give away worksheets Our members helped us give away millions of worksheets last year. We provide free educational materials to parents and teachers in over 100 countries. If you can, please consider purchasing a membership ($24/year) to support our efforts. Members skip ads and access exclusive features. Learn about member benefits Join Now Become a Member This content is available to members only. Join K5 to save time, skip ads and access more content. Learn More Join Now
14961
https://pmc.ncbi.nlm.nih.gov/articles/PMC10341171/
Complete Hydatidiform Mole with Lung Metastasis and Coexisting Live Fetus: Unexpected Twin Pregnancy Mimicking Placenta Accreta - PMC Skip to main content An official website of the United States government Here's how you know Here's how you know Official websites use .gov A .gov website belongs to an official government organization in the United States. Secure .gov websites use HTTPS A lock ( ) or https:// means you've safely connected to the .gov website. Share sensitive information only on official, secure websites. Service Alert: Planned Maintenance beginning July 25th Most services will be unavailable for 24+ hours starting 9 PM EDT. Learn more about the maintenance. Search Log in Dashboard Publications Account settings Log out Search… Search NCBI Primary site navigation Search Logged in as: Dashboard Publications Account settings Log in Search PMC Full-Text Archive Search in PMC Advanced Search Journal List User Guide New Try this search in PMC Beta Search View on publisher site Download PDF Add to Collections Cite Permalink PERMALINK Copy As a library, NLM provides access to scientific literature. Inclusion in an NLM database does not imply endorsement of, or agreement with, the contents by NLM or the National Institutes of Health. Learn more: PMC Disclaimer | PMC Copyright Notice Diagnostics (Basel) . 2023 Jul 3;13(13):2249. doi: 10.3390/diagnostics13132249 Search in PMC Search in PubMed View in NLM Catalog Add to search Complete Hydatidiform Mole with Lung Metastasis and Coexisting Live Fetus: Unexpected Twin Pregnancy Mimicking Placenta Accreta Hera Jung Hera Jung 1 Department of Pathology, CHA Ilsan Medical Center, CHA University School of Medicine, Goyang 10414, Republic of Korea; elledriver2008@gmail.com Find articles by Hera Jung 1 Editors: Cinzia Giacometti 1, Kathrin Ludwig 1 Author information Article notes Copyright and License information 1 Department of Pathology, CHA Ilsan Medical Center, CHA University School of Medicine, Goyang 10414, Republic of Korea; elledriver2008@gmail.com Roles Cinzia Giacometti: Academic Editor Kathrin Ludwig: Academic Editor Received 2023 May 26; Revised 2023 Jun 23; Accepted 2023 Jun 24; Collection date 2023 Jul. © 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ( PMC Copyright notice PMCID: PMC10341171 PMID: 37443643 Abstract Twin pregnancy with a complete hydatidiform mole and coexisting fetus (CHMCF) is an exceedingly rare condition with an incidence of about 1 in 20,000–100,000 pregnancies. It can be detected by prenatal ultrasonography and an elevated maternal serum beta-human chorionic gonadotropin (BhCG) level. Herein, the author reports a case of CHMCF which was incidentally diagnosed through pathologic examination without preoperative knowledge. The 41-year-old woman, transferred due to preterm labor, delivered a female baby by cesarean section at 28 + 5 weeks of gestation. Clinically, the surgeon suspected placenta accreta on the surgical field, and the placental specimen was sent to the pathology department. On gross examination, focal vesicular and cystic lesions were identified separately from the normal-looking placental tissue. The pathologic diagnosis was CHMCF and considering the fact that placenta accreta was originally suspected, invasive hydatidiform mole was not ruled out. After radiologic work-up, metastatic lung lesions were detected, and methotrexate was administered in six cycles at intervals of every two weeks. The author presents the clinicopathological features of this unexpected CHMCF case accompanied by pulmonary metastasis, compares to literature review findings, and emphasizes the meticulous pathologic examination. Keywords: complete hydatidiform mole, gestational trophoblastic disease, twin pregnancy 1. Introduction Complete hydatidiform mole (CHM) is one of the gestational trophoblastic diseases originating from fertilization of an empty ovum by a sperm and can be invasive or metastatic . In the early stage of CHM, clinical presentations including vaginal bleeding and a snowstorm appearance of the ultrasound lead to the detection of the disease . Additionally, maternal serum beta-human chorionic gonadotropin (BhCG) level elevation also assists the prenatal diagnosis . Of itself, CHM does not have the fetal part; however, twin pregnancy with a complete hydatidiform mole and a coexisting fetus (CHMCF) has been documented in about 1 in 20,000–100,000 pregnancies and the precise diagnosis of CHMCF can be delayed . Herein, the author reports a case of unexpected CHMCF referred to the pathology department with a clinical impression of placenta accreta in a preterm labor. 2. Case Presentation A 41-year-old G3-P1 multigravida woman, with 28 weeks and 4 days of gestation, was admitted to the author’s institution because of preterm labor and a need for treatment in the neonatal intensive care unit (NICU). The patient had an obstetric history of dilatation and evacuation due to spontaneous abortion 4 years previously at GA (gestational age) 11 weeks, and 3 years previously she delivered a female baby weighing 3.2 kg at GA 41 weeks. The transfer record from an outside hospital presented a low-lying placenta with a suspicion of abruption and pelvic examination result of 3 cm and 50% effacement. In the first ultrasonographic finding of the present institute, the fetus was small for the gestational age (27 + 3 weeks, 5.6 percentile) and due to the fetal position, the heart and extremities were not checked. Along with the low-lying placenta, hypervascularity and high blood flow in the subplacental area to the uterine fundus were identified. Other findings also included bridging vessels and multiple irregular lacunae within the placenta in the color Doppler ultrasound. The previous history of evacuation, maternal age, and ultrasonographic findings suggested the possibility of placenta accreta or placenta increta (Figure 1A). Moreover, there was a 4.4 cm × 3.2 cm × 2.5 cm sized mixed echoic lesion in the cervical canal, and a blood clot was suspected (Figure 1B). Figure 1. Open in a new tab Ultrasonographic findings after admission. (A) Placenta with hypervascularity and high blood flow in subplacental area (yellow arrow); (B) Blood clot in cervical canal (asterisk). After removing the blood clot with a speculum, the membrane bulged, and the length of the cervix became 0 cm with U-shaped funneling. Although magnesium sulfate (Magnesin) and ritodrine (Lavopa) were administrated, labor pain persisted every 5 to 8 min with 30–80 torr. As ultrasound findings suggested placenta accreta, the obstetrician obtained informed consent for cesarean section with the possibility of uterine artery embolization and hysterectomy in case of excessive bleeding. An emergent cesarean section was conducted on the day after admission (at GA 28 + 5 weeks). On the surgical field, the uterus was slightly dextrorotated and enlarged to term size. Bilateral ovaries and fallopian tubes were grossly normal in size and shape. The clear amniotic fluid was noted. A living female baby weighing 1030 gm with an Apgar score of 7 (at 1 min) and 8 (at 5 min) was delivered in the left occiput transverse position. Intraoperatively, the uterus showed no obvious distension over the placental bed and the surface was clear without gross neovascularity. After an initial trial of manual removal of the mildly adherent placenta, bleeding was present but controlled after an intravenous Pitocin (10 unit) injection. Therefore, no further procedure was initiated. Although the operative findings were not fully sufficient for a placenta accreta spectrum (PAS) diagnosis, the preoperative ultrasound and the experienced clinician’s suspicion did not exclude placenta accreta, so the specimen was sent to the pathology department. The patient tolerated the entire procedure well and recovered in a stable condition. On gross examination at the pathology department, the placental specimen consisted of a discoid-shaped placental tissue, weighing 728 gm and measuring 23 cm × 16 cm × 2 cm. The umbilical cord inserted centrally 5 cm apart from the nearest margin and was measured 35 cm in length and 2.2 cm in diameter. On section, it had two arteries and one vein. The amniotic membrane was semitransparent. The fetal surface of the chorionic plate was smooth and semitransparent. The maternal surface was covered by intact cotyledons with blood clots and there were also separated multiple fragments of vesicular tissue, measuring up to 13 cm × 11 cm in aggregates (Figure 2A). Considering the heterogeneous gross findings and clinical suspicion of placenta accreta, sections were obtained at variable portions of the specimen. Microscopic examination demonstrated two distinct areas of villi: (1) hydropic large villi with peripheral trophoblastic hyperplasia and cistern formation; and (2) relatively small normal villi (Figure 2B). The areas of hydropic villi had massive necrotic changes, more than about 80%, and in the viable area, the enlarged villi had an internal cistern formation and circumferential trophoblast hyperplasia with often cytologic atypia (Figure 2C). Villous stromal cells and cytotrophoblasts of hydropic villi area were negative for p57 immunohistochemical staining, the marker for the maternally expressed gene CDKN1C (p57KIP2) (Figure 2D). The histologic and immunohistochemical results were consistent with complete hydatidiform mole. Meanwhile, p57 showed retained expression in normal-looking villi (Figure 2E) and there was multiple defined proliferation of capillary vessels with surface trophoblastic proliferation, consistent with chorangiomas (Figure 2F). The size of the largest chorangioma was measured to 0.5 cm. These whole pathologic findings indicated an unexpected twin pregnancy with CHMCF. Figure 2. Open in a new tab Gross and microscopic findings of CHMCF. (A) Separately identified small vesicles on gross examination (yellow arrow); (B) Two groups of villi: hydropic villi with cistern formation and relatively small normal-looking villi (12.5×); (C) Complete hydatidiform mole area with massive necrosis (asterisk) (12.5×); (D) Negative p57 immunohistochemical staining of complete hydatidiform mole (12.5×); (E) Positive p57 immunohistochemical staining of normal area (100×); (F) Chorangioma (40×). Because the placenta was removed manually during surgery, there was no clear distinctive border. Additionally, the surgeon originally suspected placenta accreta and only placenta was sent for pathologic examination without any uterine tissue, so the possibility of invasive hydatidiform mole was not excluded in the clinical context. The final pathologic report was twin pregnancy with CHMCF and indicated the possibility of invasive hydatidiform mole, so a BhCG level check and radiological work-up for excluding residual or metastasizing lesions were recommended. The BhCG level at 15 days after delivery was 1325 mIU/mL. There was no previous BhCG data because an emergent section was performed. Chest computed tomography (CT) revealed variably sized nodules in both lungs, indicating hematogenous metastasis (Figure 3). Brain CT was normal and abdominopelvic CT showed postpartum uterine enlargement, fatty liver, and borderline hepatosplenomegaly. Figure 3. Open in a new tab Chest computed tomography (CT) highlighting multiple pulmonary metastasis (yellow arrows). (A) Metastatic lesion in right middle lobe on coronal view; (B) Metastatic lesion in left upper lobe on axial view; (C) Metastatic lesions in both lobes on coronal view. Six cycles of methotrexate injection were administered every two weeks. After each cycle, the BhCG level gradually decreased (399–33.9–7.2–2.4–1.2–0.4 mIU/mL). The last BhCG level was 0.2 mIU/mL at five months after delivery and follow-up CT confirmed no evidence of recurrence or metastasis in the chest and abdominopelvic cavity. The preterm baby had respiratory distress syndrome but improved and was discharged with a 2260 gm weight after two months of NICU care. 3. Discussion Gestational trophoblastic disease is categorized by putative trophoblastic cells of placental origin; chorionic villous trophoblasts and intermediate trophoblasts . Of them, hydatidiform mole originates from chorionic villous trophoblasts and is divided into complete, partial, and invasive types . Among them, the pathogenesis of CHM is associated with the presence of a paternal-only genome . The majority (about 80–90%) of cases are caused by duplication of the paternal haploid genome, detected as genome-wide homozygosity (46, XX), and the rest are produced by dispermy, resulting in heterozygosity (46, XX or 46, XY) [1,4]. Rarely, inherited mutation of NLRP7 or KHDC3L have also been identified as causes of familial biparental CHM . Overexpression of the paternal genome leads to failure of normally balanced placental and fetal development . As a result, on microscopic examination, CHM is characterized by enlarged chorionic villi with a cistern formation. Circumferential trophoblastic hyperplasia with cytologic atypia is also a usual finding and p57 immunohistochemical staining is negative in villous stromal cells and cytotrophoblasts. In CHM, fetal parts are normally absent. However, CHMCF cases have been steadily reported with low prevalence (1/20,000–100,000) [3,5,6,7,8,9,10,11,12]. The median gestational age at diagnosis of CHMCF is 15–16 weeks, and the delivery or termination is performed in 21–24 weeks [3,9]. Clinical symptoms include vaginal hemorrhage, preeclampsia, and hyperthyroidism . According to the largest review article of CHMCF by M. Suksai et al., more than half of patients (118/206, 57.28%) have hemorrhage and initial BhCG levels range from 1048 to 2,460,000 mIU/mL with a median level of 367,747 mIU/mL . Ultrasonography can also help the diagnosis, demonstrating snowstorm appearance and a heterogeneous, echogenic mass with cystic appearance . Despite the traditional recommendation for termination of the pregnancy, several studies suggests that the risk of gestational trophoblastic neoplasia after CHMCF is not significantly increased with continuation of the pregnancy [9,10]. M. Suksai et al. reported that 37.86 % (78 of 206) were delivered successfully compared to 22.33% (46 of 203) of miscarriage or intrauterine fetal death, stillbirth, and neonatal death . A better prognosis is statistically associated with the lower prevalence of antenatal maternal complications, such as pregnancy-induced hypertension (PIH), hyperthyroidism (HTD), and hyperemesis gravidarum (HG) . An initial serum BhCG level less than 400,000 mIU/mL is also known as a favorable predictive factor for live births . In the present case, the patient had not been diagnosed with CHMCF before and there was no serum BhCG results due to emergent admission. However, the absence of PIH, HTD, and HG might have contributed to the successful delivery. Placenta accreta was the initial clinical impression when the placental specimen was referred to the pathology department. Distinct vesicular tissues were observed on gross examination by the pathologist, so the hidden molar pregnancy obscured by a normal living fetus could be properly diagnosed. In this pregnancy, the patient was confirmed to be pregnant while living abroad but had entered South Korea during the second trimester due to the COVID-19 pandemic (patient’s delivery date: 9 May 2022). The limitations on hospital visits during the pandemic period of COVID-19 are considered as a possible explanation for the delay in the diagnosis of CHMCF. The significant amount of necrosis might also be another factor that made prenatal diagnosis difficult . Meanwhile, the incidence of chorangioma is 1% and associated with an increased risk of pregnancy complications, including polyhydramnios and preterm delivery . Known risk factors of chorangioma include a maternal age over 30 years, maternal hypertension, twin pregnancy, maternal smoking history, and living at high altitude . In the present case, the placenta of the normal living fetus had multiple chorangiomas, and two factors (maternal age and twin pregnancy) might have contributed to the development of the tumors. As multiple chorangiomas can have some overlapping ultrasonographic findings with molar pregnancy, cautious radiologic reading is also required . Clinically, degenerating molar tissue can mimic placenta accreta . In this case, the clinician’s suspicion of placenta accreta helped the pathologic diagnosis of CHMCF with a possible invasive or metastasizing lesion. As a result, metastatic lesions that might have been missed were found, and the patient had effective chemotherapy. A lack of previous hospital information including ultrasonography and initial serum BhCG was a limitation in this case. Additionally, the author attempted to compare this case with previous reports of CHMCF with lung metastasis. The Medline database was thoroughly searched using the PubMed retrieval service. The keywords used were “complete hydatidiform mole and surviving coexistent twin”, “complete hydatidiform mole twin metastasis”, “complete mole twin lung”, “complete mole twin pulmonary”, “complete mole fetus lung”, and “complete mole fetus pulmonary”. Among the studies, the cases without English publication were excluded. A total of 20 cases were collected, as those with an unspecified metastasis site and limited clinical information were omitted. Including the presented case, the clinical information from the 21 cases is displayed in Table 1. The median maternal age was 34 years. Some pregnancies were achieved by IVF (in vitro fertilization) (3 cases), hMG/hCG (human menopausal gonadotropin/human chorionic gonadotropin) therapy (1 case), and ICSI (intracytoplasmic sperm injection) (1 case). Most of the collected cases were diagnosed by prenatal BhCG or radiologic examination. Only one case in 1982 was diagnosed on delivery . Thirteen cases attempted delivery including cesarean section, but in two cases, the infants died within a few hours. The detection of pulmonary metastasis was usually made after termination/delivery. Only four cases were detected before delivery at a mean GA of 25 weeks (17–32 weeks). Compared to the previous studies, the present case demonstrates the importance of pathologic examination. In most of the cases, coexisting complete hydatidiform mole was recognized in the first or second trimester, unlike this case. It is exceptional that the hidden complete hydatidiform mole and multiple lung metastases that could be harmful to the patient were diagnosed through the accurate pathological examination. Table 1. Literature review of CHMCF with lung metastasis (21 cases). | Author (Year Published) | Maternal Age (years) | Pregnancy Type | GA at Diagnosis | BhCG Level at Diagnosis | Radiologic Finding of Complete Hydatidiform Mole | Detection of Pulmonary Metastasis | Pregnancy Outcome | :---: :---: :---: :---: | | Block and Merrill (1982) | 36 | NS | On delivery | NS | Not obtained | Post OP | Amniotomy and delivery at 35 weeks | | Jinno (1994) | 35 | IVF | 12 weeks | 256,000 mIU/mL | Multiple cystic echoes | GA 17 weeks | Emergent cesarean section at 31 weeks (Infant died 4 h postpartum) | | Osada (1995) | 30 | Natural conception | 24 weeks | 478,000 mIU/mL | Typical molar pregnancy (four fifths) | 7 weeks after delivery | Intrauterine fetal death and evacuation at 25 weeks | | Ishii (1998) | 37 | Natural conception | 22 weeks | NS | NS | NS | Vaginal delivery at 40 weeks | | Bruchium (2000) | 25 | hMG/hCG | NS | 35 MOM | Uterine wall mass | Post OP | Cesarean section at 26 weeks | | Kashimura (2001) | 30 | NS | 13 weeks | 684 ng/mL | Empty gestational sac with microcystic pattern | 5 weeks after termination | Dilatation and evacuation (Termination) | | Steigrad (2004) | NS | NS | First trimester | NS | NS | Post OP | Cesarean section | | Makary (2010) | 19 | NS | 25 weeks | 228,000 mIU/mL | Large cystic mass | 2 months after delivery | Emergent cesarean section at 25 weeks | | Lee (2010) | 39 | IVF-ET | 13 weeks | 1,307,693 mIU/mL | Diffuse vesicular pattern | Post OP | Hysterostomy (Termination) | | Sasaki (2012) | 36 | NS | 15 weeks | 440,000 mIU/mL | Typical classic molar pattern | GA 32 weeks | Spontaneous labor at 33 weeks | | Sanchez-Ferrer (2013) | 28 | Natural conception | 11 weeks | 395,000 mIU/mL | Multiple small cysts and a characteristic snowstorm pattern | Post OP | Suction curettage (Termination) at 13 weeks | | Sanchez-Ferrer (2014) | 35 | Natural conception | First trimester | 963,971 mIU/mL | Mass of vesicular structures with snowstorm pattern | Post OP | Subtotal hysterectomy at 15 weeks (Termination and uterine rupture) | | Peng (2014) | 34 | NS | 20 weeks | 31,0277.7 mIU/mL | Multiple cystic spaces | 4 months after delivery | Cesarean section at 37 weeks | | Himoto (2014) | 34 | Natural conception | 9 weeks | 1,124,200 mIU/mL | Multicystic lesion | Post OP | Artificial abortion (Termination) | | Maeda (2018) | 33 | NS | 24 weeks | 156,800 mIU/mL | Multicystic lesions | GA 29 weeks | Cesarean section and hysterectomy at 31 weeks | | Nobuhara (2018) | 42 | IVF | 45 days | 450,000 miU/mL | Subchorionic hematoma with multivesicular features | 5 weeks after termination | Aspiration curettage at 9 weeks (Termination) and delayed hysterectomy | | Odera (2019) | 34 | NS | 14 weeks | 900,000 mIU/mL | Mixed cystic and solid lesion with internal vascularity | GA 23 weeks | Cesarean section at 23 weeks (Infant died a few hours postpartum) | | Sindiani (2020) | 33 | NS | 13 weeks | 171,820 mIU/mL | A sac filled with a complete molar pregnancy | Post OP | Hysterostomy (Termination) | | Mok (2021) | 34 | NS | 10 weeks | free: 13.225 MoM | Multiple cystic area | Post OP | Emergent cesarean section at 32 weeks | | Alpay (2021) | 33 | ICSI | 12 weeks | 425,000 mIU/mL | Echogenic mass resembling molar placenta | 8 weeks after delivery | Cesarean section at 26 weeks | | Jung (2023) [This work] | 41 | Natural conception | Not done | NS | Not identified | Post OP | Cesarean section at 28 weeks | Open in a new tab GA: gestational age, BhCG: beta-human chorionic gonadotropin, NS: not specified, hMG/hCG: human menopausal gonadotropin/human chorionic gonadotropin, IVF: in vitro fertilization, MoM: multiples of median, ET: embryo transfer, ICSI: intracytoplasmic sperm injection. 4. Conclusions In summary, an unexpected twin pregnancy with CHMCF and extrauterine metastasis which were clinically mimicking placenta accreta is reported. Such uncommon cases can be detected by pathological examinations, so it should always be conducted out of caution even for usual specimens. Furthermore, if placenta accreta is suspected, it is worth considering a serum BhCG check when only limited clinical information is available, such as in this case. Institutional Review Board Statement This study was approved by the Institutional Review Board of CHA Ilsan Medical Center (protocol code: 2022-07-003; date of approval: 25 July 2022) with a waiver of informed consent. Informed Consent Statement Regarding the retrospective nature of this study, the Institutional Review Board waived the requirement for the investigator to obtain signed informed consent. Data Availability Statement All data are contained within the article. Conflicts of Interest The author declares no conflict of interest. Funding Statement This research received no external funding. Footnotes Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. References 1.WHO Classification of Tumours Editorial Board . Female Genital Tumours. International Agency for Research on Cancer; Lyon, France: 2020. [Google Scholar] 2.Seckl M.J., Sebire N.J., Berkowitz R.S. Gestational trophoblastic disease. Lancet. 2010;376:717–729. doi: 10.1016/S0140-6736(10)60280-2. [DOI] [PubMed] [Google Scholar] 3.Lin L.H., Maestá I., Braga A., Sun S.Y., Fushida K., Francisco R.P.V., Elias K.M., Horowitz N., Goldstein D.P., Berkowitz R.S. Multiple pregnancies with complete mole and coexisting normal fetus in North and South America: A retrospective multicenter cohort and literature review. Gynecol. Oncol. 2017;145:88–95. doi: 10.1016/j.ygyno.2017.01.021. [DOI] [PubMed] [Google Scholar] 4.Redline R.W., Boyd T.K., Roberts D.J. Placental and Gestational Pathology. Cambridge University Press; Cambridge, UK: 2017. [Google Scholar] 5.Bristow R.E., Shumway J.B., Khouzami A.N., Witter F.R. Complete hydatidiform mole and surviving coexistent twin. Obstet. Gynecol. Surv. 1996;51:705–709. doi: 10.1097/00006254-199612000-00002. [DOI] [PubMed] [Google Scholar] 6.Vaisbuch E., Ben-Arie A., Dgani R., Perlman S., Sokolovsky N., Hagay Z. Twin pregnancy consisting of a complete hydatidiform mole and co-existent fetus: Report of two cases and review of literature. Gynecol. Oncol. 2005;98:19–23. doi: 10.1016/j.ygyno.2005.02.002. [DOI] [PubMed] [Google Scholar] 7.Aguilera M., Rauk P., Ghebre R., Ramin K. Complete hydatidiform mole presenting as a placenta accreta in a twin pregnancy with a coexisting normal fetus: Case report. Case Rep. Obstet. Gynecol. 2012;2012:405085. doi: 10.1155/2012/405085. [DOI] [PMC free article] [PubMed] [Google Scholar] 8.Sasaki Y., Ogawa K., Takahashi J., Okai T. Complete hydatidiform mole coexisting with a normal fetus delivered at 33 weeks of gestation and involving maternal lung metastasis: A case report. J. Reprod. Med. 2012;57:301–304. [PubMed] [Google Scholar] 9.Suksai M., Suwanrath C., Kor-Anantakul O., Geater A., Hanprasertpong T., Atjimakul T., Pichatechaiyoot A. Complete hydatidiform mole with co-existing fetus: Predictors of live birth. Eur. J. Obstet. Gynecol. Reprod. Biol. 2017;212:1–8. doi: 10.1016/j.ejogrb.2017.03.013. [DOI] [PubMed] [Google Scholar] 10.Johnson C., Davitt C., Harrison R., Cruz M. Expectant Management of a Twin Pregnancy with Complete Hydatidiform Mole and Coexistent Normal Fetus. Case Rep. Obstet. Gynecol. 2019;2019:8737080. doi: 10.1155/2019/8737080. [DOI] [PMC free article] [PubMed] [Google Scholar] 11.Odedra D., MacEachern K., Elit L., Mohamed S., McCready E., DeFrance B., Wang Y. Twin pregnancy with metastatic complete molar pregnancy and coexisting live fetus. Radiol. Case Rep. 2020;15:195–200. doi: 10.1016/j.radcr.2019.11.017. [DOI] [PMC free article] [PubMed] [Google Scholar] 12.Tipiani Rodríguez O., Solís Sosa C., Valdez Alegría G.E., Quenaya Rodríguez R.J., Escalante Jibaja R., Cevallos Pacheco C., Ibarra Lavado O., Bocanegra Becerra Y.L. Invasive hydatidiform mole coexistent with normal fetus. Case report. Rev. Peru. Ginecol. Obstett. 2020;66:1–5. doi: 10.31403/rpgo.v66i2253. [DOI] [Google Scholar] 13.Green C.L., Angtuaco T.L., Shah H.R., Parmley T.H. Gestational trophoblastic disease: A spectrum of radiologic diagnosis. RadioGraphics. 1996;16:1371–1384. doi: 10.1148/radiographics.16.6.8946542. [DOI] [PubMed] [Google Scholar] 14.Okumura M., Fushida K., Francisco R.P.V., Schultz R., Zugaib M. Massive Necrosis of a Complete Hydatidiform Mole in a Twin Pregnancy With a Surviving Coexistent Fetus. Journal. Ultrasound Med. 2014;33:177–179. doi: 10.7863/ultra.33.1.177. [DOI] [PubMed] [Google Scholar] 15.Akbarzadeh-Jahromi M., Soleimani N., Mohammadzadeh S. Multiple Chorangioma Following Long-Term Secondary Infertility: A Rare Case Report and Review of Pathologic Differential Diagnosis. Int. Med. Case Rep. J. 2019;12:383–387. doi: 10.2147/IMCRJ.S227947. [DOI] [PMC free article] [PubMed] [Google Scholar] 16.Block M.F., Merrill J.A. Hydatidiform mole with coexistent fetus. Obstet. Gynecol. 1982;60:129–133. [PubMed] [Google Scholar] 17.Jinno M., Ubukata Y., Hanyu I., Satou M., Yoshimura Y., Nakamura Y. Hydatidiform mole with a surviving coexistent fetus following in-vitro fertilization. Hum. Reprod. 1994;9:1770–1772. doi: 10.1093/oxfordjournals.humrep.a138792. [DOI] [PubMed] [Google Scholar] 18.Osada H., Iitsuka Y., Matsui H., Sekiya S. A Complete Hydatidiform Mole Coexisting with a Normal Fetus Was Confirmed by Variable Number Tandem Repeat (VNTR) Polymorphism Analysis Using Polymerase Chain Reaction. Gynecol. Oncol. 1995;56:90–93. doi: 10.1006/gyno.1995.1015. [DOI] [PubMed] [Google Scholar] 19.Ishii J., Iitsuka Y., Takano H., Matsui H., Osada H., Sekiya S. Genetic differentiation of complete hydatidiform moles coexisting with normal fetuses by short tandem repeat–derived deoxyribonucleic acid polymorphism analysis. Am. J. Obstet. Gynecol. 1998;179:628–634. doi: 10.1016/S0002-9378(98)70055-9. [DOI] [PubMed] [Google Scholar] 20.Bruchim I., Kidron D., Amiel A., Altaras M., Fejgin M.D. Complete Hydatidiform Mole and a Coexistent Viable Fetus: Report of Two Cases and Review of the Literature. Gynecol. Oncol. 2000;77:197–202. doi: 10.1006/gyno.2000.5733. [DOI] [PubMed] [Google Scholar] 21.Kashimura Y., Tanaka M., Harada N., Shinmoto M., Morishita T., Morishita H., Kashimura M. Twin pregnancy consisting of 46, XY heterozygous complete mole coexisting with a live fetus. Placenta. 2001;22:323–327. doi: 10.1053/plac.2000.0613. [DOI] [PubMed] [Google Scholar] 22.Steigrad S.J., Robertson G., Kaye A.L. Serial hCG and ultrasound measurements for predicting malignant potential in multiple pregnancies associated with complete hydatidiform mole: A report of 2 cases. J. Reprod. Med. 2004;49:554–558. [PubMed] [Google Scholar] 23.Makary R., Mohammadi A., Rosa M., Shuja S. Twin gestation with complete hydatidiform mole and a coexisting live fetus: Case report and brief review of literature. Obstet. Med. 2010;3:30–32. doi: 10.1258/om.2009.090038. [DOI] [PMC free article] [PubMed] [Google Scholar] 24.Lee S.W., Kim M.Y., Chung J.H., Yang J.H., Lee Y.H., Chun Y.K. Clinical findings of multiple pregnancy with a complete hydatidiform mole and coexisting fetus. J. Ultrasound Med. 2010;29:271–280. doi: 10.7863/jum.2010.29.2.271. [DOI] [PubMed] [Google Scholar] 25.Sánchez-Ferrer M.L., Machado-Linde F., Martínez-Espejo Cerezo A., Peñalver Parres C., Ferri B., López-Expósito I., Abad L., Parrilla J.J. Management of a Dichorionic Twin Pregnancy with a Normal Fetus and an Androgenetic Diploid Complete Hydatidiform Mole. Fetal Diagn. Ther. 2013;33:194–200. doi: 10.1159/000338926. [DOI] [PubMed] [Google Scholar] 26.Sánchez-Ferrer M.L., Hernández-Martínez F., Machado-Linde F., Ferri B., Carbonel P., Nieto-Diaz A. Uterine rupture in twin pregnancy with normal fetus and complete hydatidiform mole. Gynecol. Obstet. Investig. 2014;77:127–133. doi: 10.1159/000355566. [DOI] [PubMed] [Google Scholar] 27.Peng H.H., Huang K.G., Chueh H.Y., Adlan A.S., Chang S.D., Lee C.L. Term delivery of a complete hydatidiform mole with a coexisting living fetus followed by successful treatment of maternal metastatic gestational trophoblastic disease. Taiwan. J. Obstet. Gynecol. 2014;53:397–400. doi: 10.1016/j.tjog.2013.02.005. [DOI] [PubMed] [Google Scholar] 28.Himoto Y., Kido A., Minamiguchi S., Moribata Y., Okumura R., Mogami H., Nagano T., Konishi I., Togashi K. Prenatal differential diagnosis of complete hydatidiform mole with a twin live fetus and placental mesenchymal dysplasia by magnetic resonance imaging. J. Obstet. Gynaecol. Res. 2014;40:1894–1900. doi: 10.1111/jog.12441. [DOI] [PubMed] [Google Scholar] 29.Maeda Y., Oyama R., Maeda H., Imai Y., Yoshioka S. Choriocarcinoma with multiple lung metastases from complete hydatidiform mole with coexistent fetus during pregnancy. J. Obstet. Gynaecol. Res. 2018;44:1476–1481. doi: 10.1111/jog.13677. [DOI] [PubMed] [Google Scholar] 30.Nobuhara I., Harada N., Haruta N., Higashiura Y., Watanabe H., Watanabe S., Hisanaga H., Sado T. Multiple metastatic gestational trophoblastic disease after a twin pregnancy with complete hydatidiform mole and coexisting fetus, following assisted reproductive technology: Case report and literature review. Taiwan. J. Obstet. Gynecol. 2018;57:588–593. doi: 10.1016/j.tjog.2018.06.020. [DOI] [PubMed] [Google Scholar] 31.Sindiani A., Obeidat B., Alshdaifat E. Successful Management of the First Case of a Metastasized Complete Mole in Form of Twin Pregnancy in Jordan. Am. J. Case Rep. 2020;21:e923395. doi: 10.12659/ajcr.923395. [DOI] [PMC free article] [PubMed] [Google Scholar] 32.Mok Z.W., Merchant K., Yip S.L. Management of a complete hydatidiform mole with a coexisting live fetus followed by successful treatment of maternal metastatic gestational trophoblastic disease: Learning points. BMJ Case Rep. 2021;14:e235028. doi: 10.1136/bcr-2020-235028. [DOI] [PMC free article] [PubMed] [Google Scholar] 33.Alpay V., Kaymak D., Erenel H., Cepni I., Madazli R. Complete Hydatidiform Mole and Co-Existing Live Fetus after Intracytoplasmic Sperm Injection: A Case Report and Literature Review. Fetal Pediatr. Pathol. 2021;40:493–500. doi: 10.1080/15513815.2019.1710790. [DOI] [PubMed] [Google Scholar] Associated Data This section collects any data citations, data availability statements, or supplementary materials included in this article. Data Availability Statement All data are contained within the article. Articles from Diagnostics are provided here courtesy of Multidisciplinary Digital Publishing Institute (MDPI) ACTIONS View on publisher site PDF (1.5 MB) Cite Collections Permalink PERMALINK Copy RESOURCES Similar articles Cited by other articles Links to NCBI Databases On this page Abstract 1. Introduction 2. Case Presentation 3. Discussion 4. Conclusions Institutional Review Board Statement Informed Consent Statement Data Availability Statement Conflicts of Interest Funding Statement Footnotes References Associated Data Cite Copy Download .nbib.nbib Format: Add to Collections Create a new collection Add to an existing collection Name your collection Choose a collection Unable to load your collection due to an error Please try again Add Cancel Follow NCBI NCBI on X (formerly known as Twitter)NCBI on FacebookNCBI on LinkedInNCBI on GitHubNCBI RSS feed Connect with NLM NLM on X (formerly known as Twitter)NLM on FacebookNLM on YouTube National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894 Web Policies FOIA HHS Vulnerability Disclosure Help Accessibility Careers NLM NIH HHS USA.gov Back to Top
14962
https://zh-yue.wikipedia.org/wiki/%E6%95%B8%E5%AD%B8%E7%AC%A6%E8%99%9F
Published Time: 2013-07-14T06:32:47Z 數學符號 - 維基百科,自由嘅百科全書 跳去內容 [x] 主目錄 主目錄 移去側欄 收埋 導覽 頭版 目錄 正嘢 時人時事 是但一版 關於維基百科 聯絡處 交流 說明書 城市論壇 社區大堂 最近修改 專門版 查嘢 搵嘢 [x] 外表 捐畀維基百科 開戶口 簽到 [x] 個人架生 捐畀維基百科 開戶口 簽到 數學符號 [x] 57種語言 አማርኛ العربية مصرى Български বাংলা Bosanski Català Čeština Cymraeg Deutsch English Español Eesti Euskara فارسی Suomi Võro Français Galego עברית हिन्दी Magyar Հայերեն Interlingua Bahasa Indonesia Italiano 日本語 Қазақша ಕನ್ನಡ 한국어 Kurdî Lingua Franca Nova Lombard Македонски മലയാളം Bahasa Melayu Nederlands Norsk nynorsk Norsk bokmål Polski پښتو Português Română Русский Simple English Slovenčina Slovenščina Soomaaliga Sunda Svenska ไทย Türkçe Українська Tiếng Việt 吴语 中文 文言 改拎 文章 討論 [x] 粵語 简 閱 改 睇返紀錄 [x] 架撐 架撐 移去側欄 收埋 動作 閱 改 睇返紀錄 基本 有乜連過嚟 連結頁嘅更改 上載檔案 固定連結 此版明細 引用呢篇文 攞短網址 下載QR碼 打印/匯出 下載PDF 印得嘅版本 第啲維基項目 維基同享 維基數據項 外表 移去側欄 收埋 出自維基百科,自由嘅百科全書 呢篇文夾雜其他語言,唔合乎粵語嘅用字、句式或者文法。 請改善。如果想睇吓似粵語口語嘅文應該點寫,可以睇吓Wikipedia:正文同埋Wikipedia:好文入面貼堂嘅範例。 呢個表唔完整,歡迎加料。 呢一篇文有特別字。當冇適當嘅展示支援,你可能會見到問號、盒或者係其它符號。 喺數學上,有一啲經常喺數學表達式中出現嘅符號,稱為數學符號(讀音:sou 3 hok 6 fu 4 hou 62)。數學工作者熟悉呢啲符號,唔係每次用都加以說明。所以,對於數學初學者,下面嘅表列出好多常見嘅符號包括名、讀法同埋應用領域。另外,第三欄有一個非正式嘅定義,第四欄有個簡單嘅例子。 有時不同符號有相同含義,這些符號在不同的上下文中有不同的含義。 | 符號 | 名 | 定義 | 舉例 | --- --- | | 讀法 | | 數學領域 | | = | 等號 | x= y 表示 x 同 y 係相同的數值或個數值係相等。 | 1+ 1= 2,2+2=4 | | 等於 | | 所有領域 | | ≠ | 不等號 | x≠y 表示 x 同 y 唔係相同嘅嘢或者個值唔相等。 | 1 ≠ 2 | | 不等於 | | 所有領域 | | < > | 絕對不等號 | x<y 表示 x 細過 y。 x>y 表示 x 大過 y。 | 3<4 5>4 | | 小於,大於 | | 序理論 | | ≤ ≥ | 不等號 | x≤y 表示 x 細過或等於 y。 x≥y 表示 x 大過或等於 y。 | 3≤4;5≤5 5≥4;5≥5 | | 小於等於,大於等於 | | 序理論 | | + | 加號 | 6 + 3 表示 6 加 3。 | 6 + 3 = 9 | | 加 | | 算術 | | − | 減號 | 36 − 5 表示 36 減 5 。 | 36 − 5 = 31 | | 減 | | 算術 | | 負號 | −3 表示 3 嘅負數。 | −(−5) = 5 | | 負 | | 算術 | | 補集 | A−B 表示包含所有屬於 A 但唔屬於 B 嘅元素嘅集合。 | {1,2,4}−{1,3,4}= {2} | | 減 | | 集合論 | | × | 乘號 | 6 × 3 表示 6 乘以 3。 | 6 × 3 = 18 | | 乘以 | | 算術 | | 直積 | X×Y 表示所有第一個元素屬於 X,第二個元素屬於 Y 嘅有序對嘅集合。 | {1,2} × {3,4} = {(1,3),(1,4),(2,3),(2,4)} | | … 和…嘅直積 | | 集合論 | | 向量積 | u×v 表示向量u 同 v 嘅向量積。 | (1,2,5) × (3,4,−1) = (−22, 16, − 2) | | 向量積 | | 向量代數 | | ÷ / | 除號 | 6 ÷ 3 或 6 / 3 表示 6 除以 3 或 3 除 6。 | 6 ÷ 3 = 2 12/4 = 3 | | 除以 | | 算術 | | {\displaystyle {\sqrt {}}} {\displaystyle {\sqrt {\ }}} | 根號 | x{\displaystyle {\sqrt {x}}}表示佢嘅平方係 x 嘅正數。 | 4=+2{\displaystyle {\sqrt {4}}=+2} | | …嘅平方根 | | 實數 | | 復根號 | 若果用極坐標表示複數 z = r exp(i φ)(滿足 -π<φ≤π),則 √z = √r exp(i φ/2)。 | −1=i{\displaystyle {\sqrt {-1}}=i} | | …嘅平方根 | | 複數 | | || | 絕對值 | |x| 表示實數軸(或復平面)上 x 同 0 嘅距離。 | |3| = 3, |-5| = |5| |i| = 1, |3+4 i| = 5 | | …嘅絕對值 | | 絕對值 | | ! | 階乘 | n! 表示連乘積 1×2×…×n。 | 4! = 1 × 2 × 3 × 4 = 24 | | …嘅階乘 | | 組合論 | | ~ | 概率分佈 | X ~ D 表示隨機變量X 概率分佈係 D。 | X ~ N(0,1):標準正態分佈 | | 滿足分佈 | | 統計學 | | ⇒ → ⊃ | 實質蘊涵 | A⇒B 表示 A 真則 B 亦真;A 假則 B 唔定。 → 可能同 ⇒ 一樣,或者有下面會提到嘅函數嘅意思。 ⊃ 可能同 ⇒ 一樣,或者有下面會提到嘅父集嘅意思。 | x = 2⇒x 2 = 4 為真,但 x 2 = 4 ⇒x = 2 一般情況下為假(因為 x 可以係 −2)。 | | 推出,若…則 … | | 命題邏輯 | | ⇔ ↔ | 實質等價 | A⇔B 表示 A 真則 B 真,A 假則 B 假。 | x+ 5= y+2⇔x+ 3= y | | 當且僅當 | | 命題邏輯 | | ¬ ˜ | 邏輯非 | 命題 ¬A 為真當且僅當 A 為假。 將一條斜線穿過一個符號相當於將 "¬" 放喺該符號前面。 | ¬(¬A)⇔A x≠y⇔¬(x=y) | | 非,否 | | 命題邏輯 | | ∧ | 邏輯與或交運算 | 若 A 為真且 B 為真,則命題 A∧B 為真;否則為假。 | n< 4∧n>2⇔n= 3,當 n 係自然數 | | 與 | | 命題邏輯,格理論 | | ∨ | 邏輯或或並運算 | 若 A 或 B(或都)為真,則命題 A∨B 為真;若兩者都假則命題為假。 | n≥ 4∨n≤ 2⇔n≠ 3,當 n 係自然數 | | 或 | | 命題邏輯,格理論 | | ⊕ ⊻ | 異或 | 若 A 同 B 啱啱有一個為真,則命題 A⊕B 為真。 A⊻B 嘅意義相同。 | (¬A) ⊕A 恆為真,A⊕A 恆為假。 | | 異或 | | 命題邏輯,布爾代數 | | ∀ | 全稱量詞 | ∀x: P(x) 表示 P(x) 對於所有 x 為真。 | ∀n∈N: n 2≥n | | 對所有;對任意;對任一 | | 謂詞邏輯 | | ∃ | 存在量詞 | ∃x: P(x) 表示存在至少一個 x 令 P(x) 為真。 | ∃n∈N: n 為偶數 | | 存在 | | 謂詞邏輯 | | ∃! | 唯一量詞 | ∃!x: P(x) 表示得一個 x 令 P(x) 為真。 | ∃!n∈N: n+ 5= 2 n | | 存在唯一 | | 謂詞邏輯 | | := ≡ :⇔ | 定義 | x:= y 或 x≡y 表示 x 定義為 y 嘅一個名字(注意:≡ 亦可以表示其它意思,例如全等)。 P:⇔Q 表示 P 定義為 Q 嘅邏輯等價。 | cosh x:= (1/2)(exp x+ exp(−x)) A XOR B:⇔ (A∨B)∧¬(A∧B) | | 定義為 | | 所有領域 | | { , } | 集合括號 | {a,b,c} 表示 a, b,c 組成嘅集合。 | N= {0,1,2,…} | | …嘅集合 | | 集合論 | | {: } { | } | 集合構造記號 | {x: P(x)} 表示所有滿足 P(x) 嘅 x 嘅集合。 {x| P(x)} 和 {x: P(x)} 嘅意義相同。 | {n∈N: n 2<20}= {0,1,2,3,4} | | 滿足…嘅集合 | | 集合論 | | ∅ {} | 空集 | ∅ 表示冇元素嘅集合。 {} 嘅意義相同。 | {n∈N: 1<n 2< 4}= ∅ | | 空集 | | 集合論 | | ∈ ∉ | 元素歸屬性質 | a∈S 表示 a 屬於集合 S;a∉S 表示 a 唔屬於 S。 | (1/2)−1∈N 2−1∉N | | 屬於;唔屬於 | | 所有領域 | | ⊆ ⊂ | 子集 | A⊆B 表示 A 嘅所有元素屬於 B。 A⊂B 表示 A⊆B 但 A≠B。 | A∩B⊆A;Q⊂R | | …嘅子集 | | 集合論 | | ⊇ ⊃ | 父集 | A⊇B 表示 B 嘅所有元素屬於 A。 A⊃B 表示 A⊇B 但 A≠B。 | A∪B⊇B;R⊃Q | | …嘅父集 | | 集合論 | | ∪ | 並集 | A∪B 表示包含所有 A 同 B 嘅元素但唔包含任何其他元素嘅集合。 | A⊆B⇔A∪B= B | | …同…嘅並集 | | 集合論 | | ∩ | 交集 | A∩B 表示包含所有同時屬於 A 同 B 嘅元素嘅集合。 | {x∈R: x 2= 1}∩N= {1} | | …同…嘅交集 | | 集合論 | | \ | 補集 | A\ B 表示所有屬於 A 但唔屬於 B 嘅元素嘅集合。 | {1,2,3,4} \ {3,4,5,6} = {1,2} | | 減;除去 | | 集合論 | | ( ) | 函數應用 | f(x) 表示 f 喺 x 嘅值。 | f(x):= x 2,則 f(3)= 3 2= 9。 | | f(x) | | 集合論 | | 優先組合 | 先執行括號內嘅運算。 | (8/4)/2= 2/2= 1;8/(4/2)= 8/2= 4 | | | | 所有領域 | | ƒ:X →Y | 函數箭頭 | ƒ:X→Y 表示 ƒ 從集合 X 映射到集合 Y。 | 設 ƒ:Z→N 定義為 ƒ(x)= x 2。 | | 從…到… | | 集合論 | | o | 復合函數 | f o g 係一個函數,令 (f o g)(x) = f(g(x))。 | 若 f(x) = 2 x,且 g(x) = x + 3,則 (f o g)(x) = 2(x + 3)。 | | 復合 | | 集合論 | | N ℕ | 自然數 | N 表示 {1,2,3,…},另一定義睇自然數。 | {|a|: a∈Z}= N | | N | | 數 | | Z ℤ | 整數 | Z 表示 {…,−3,−2,−1,0,1,2,3,…}。 | {a: |a|∈N}= Z | | Z | | 數 | | Q ℚ | 有理數 | Q 表示 {p/q: p,q∈Z, q≠0}。 | 3.14∈Q π∉Q | | Q | | 數 | | R ℝ | 實數 | R 表示 {lim n→∞a n: ∀n∈N: a n∈Q, 極限存在}。 | π∈R √(−1)∉R | | R | | 數 | | C ℂ | 複數 | C 表示 {a+bi: a,b∈R}。 | i= √(−1)∈C | | C | | 數 | | ∞ | 無窮 | ∞ 係擴展嘅實數軸上大過任何實數嘅數;通常出現喺極限中。 | lim x→0 1/|x|= ∞ | | 無窮 | | 數 | | π | 圓周率 | π 表示圓周長同直徑嘅比。 | A= π r 2 係半徑為 r 嘅圓嘅面積 | | pi | | 幾何 | | |||| | 範數 | ||x|| 係賦範向量空間元素 x 嘅範數。 | ||x+y|| ≤ ||x|| + ||y|| | | …嘅範數;…嘅長度 | | 線性代數 | | ∑ | 求和 | ∑k=1 n a k 表示 a 1+ a 2+ …+ a n. | ∑k=1 4 k 2= 1 2+ 2 2+ 3 2+ 4 2= 1+ 4+ 9+ 16= 30 | | 從…到…嘅和 | | 算術 | | ∏ | 求積 | ∏k=1 n a k 表示 a 1 a 2···a n. | ∏k=1 4(k+ 2)= (1 + 2)(2+ 2)(3+ 2)(4+ 2)= 3× 4× 5× 6= 360 | | 從…到…嘅積 | | 算術 | | 直積 | ∏i=0 n Y i 表示所有 (n+1)-元組 (y 0,…,y n)。 | ∏n=1 3R = Rn | | …嘅直積 | | 集合論 | | ' | 導數 | f'(x)函數 f 喺 x 點嘅導數,亦就係,嗰度嘅切線斜率。 | 若 f(x)=x 2, 則 f'(x)=2 x | | … 撇; …嘅導數 | | 微積分 | | ∫ | 不定積分 或 反導數 | ∫f(x)d x 表示導數為 f 嘅函數。 | ∫x 2 d x= x 3/3 | | …嘅不定積分; …嘅反導數 | | 微積分 | | 定積分 | ∫a b f(x)d x 表示 x-軸和 f 喺 x= a 和 x= b 之間嘅函數圖像所夾成嘅帶符號面積。 | ∫0 b x 2 d x= b 3/3; | | 從…到…以…為變量嘅積分 | | 微積分 | | ∇ | 梯度 | ∇f (x 1, …, x n) 偏導數組成嘅向量 (df / dx 1, …, df / dx n). | 若 f (x,y,z) = 3 xy + z 2 則 ∇f=(3 y, 3 x, 2 z) | | …嘅(del或nabla或梯度) | | 微積分 | | ∂ | 偏導數 | 設有 f (x 1, …, x n), ∂f/∂x i 係 f 嘅對於x i 嘅當其他變量保持不變時嘅導數。 | 若 f(x,y) = x 2 y, 則 ∂f/∂x = 2xy | | …嘅偏導數 | | 微積分 | | 邊界 | ∂M 表示 M 嘅邊界 | ∂{x: ||x|| ≤ 2} = {x: || x || = 2} | | …嘅邊界 | | 拓撲 | | 次數 | ∂f(x) 表示 f(x) 嘅次數(亦記作degf(x)) | | | …嘅次數 | | 多項式 | | ⊥ | 垂直 | x⊥y 表示 x 垂直於 y; 更一般嘅 x 正交於 y. | 若 l⊥m 同 m⊥n 則 l || n. | | 垂直於 | | 幾何 | | 底元素 | x = ⊥ 表示 x 係最細嘅元素。 | ∀x: x∧⊥ = ⊥ | | 底元素 | | 格理論 | | ⊧ | 蘊含 | A⊧B 表示 A 蘊含 B,喺 A 成立嘅每個 模型中,B 亦成立。 | A⊧A∨¬A | | 蘊含; | | 模型論 | | ⊢ | 推導 | x⊢y 表示 y 由 x 導出。 | A→B⊢¬B→¬A | | 從…導出 | | 命題邏輯、謂詞邏輯 | | ◅ | 正則子群 | N◅G 表示 N 係 G 嘅正則子群。 | Z(G) ◅G | | 係…嘅正則子群 | | 群論 | | / | 商群 | G/H 表示 G模其子群 H 嘅商群。 | {0, a, 2 a, b, b+a, b+2 a} / {0, b} = {{0, b}, {a, b+a}, {2 a, b+2 a}} | | 模 | | 群論 | | ≈ | 同構 | G≈H 表示 G 同構於 H | Q / {1, −1} ≈V, 其中 Q 係四元數群V 係 克萊因四群。 | | 同構於 | | 群論 | | ∝ | 正比 | G∝{\displaystyle \propto }H 表示 G 正比於 H | 若 Q∝{\displaystyle \propto }V,则 Q=KV | | 正比於 | | 所有領域 | 出面網頁 [編輯] Compendium of Mathematical Symbols List of LaTeX Mathematical Symbols Jeff Miller: Earliest Uses of Various Mathematical Symbols TCAEP - Institute of Physics | 睇 傾 改 現代數學同數學各領域 | | 純粹同應用數學·數學史·數學之美 | | 數學證明 | 直接證明 數學歸納法 否定證明 反證法 構造法 分類證明 | | | 數學基礎 | 數學哲學 數學邏輯 範疇論 集合論 類型論 資訊理論 數學符號 數學物體 | | 數論 | 數(質數) 算術 代數數論 解析數論 | | 代數學 | 基本代數 線性代數(多重) 抽象代數(交換代數·羣論·伽華理論) 場論 泛代數 同調代數 | | 幾何學 | 歐幾里得幾何 代數幾何 解析幾何 微分幾何 離散幾何 拓撲學(代數拓撲) | | 數學分析 | 微積分 微分方程 實分析同複分析 測度 諧波分析 泛函分析 | | 離散數學 | 組合數學 圖論 序理論 博弈論 | | 應用數學 | 數理物理學 數理化學 數理生物學 數理心理學 數理社會學 控制理論 決策論 概率論同統計學 數理經濟學 運籌學 數理金融學 | | 運算數學 | 電腦科學 運算理論 數值分析 最佳化 電腦代數 | | 拉雜相關 | 邏輯 理論電腦科學 哲學(古希臘哲學·理性主義) 科學(形式·系統) 科技 藝術 樂理 | | 數學主題·數學類 | 由「 屬於4類: 維基百科清理 夾雜非粵文嘅文章 數學符號 數學表示法 屬於2隱類: 全部需要清理嘅文 未搞掂嘅一覽 呢版上次改係2024年11月26號 (禮拜二) 06:30 嘅事。 呢度嘅所有文字係根據Creative Commons Attribution-ShareAlike 牌照 4.0嘅條款發佈;可能會有附加嘅條款。 利用呢個網站,你同意利用條款同埋私隱政策。Wikipedia® 係Wikimedia Foundation, Inc. 嘅註冊商標,一個非牟利機構。 私隱政策 關於維基百科 免責聲明 行為準則 開發人員 統計 Cookie聲明 手提版 改預覽設定 查嘢 搵嘢 數學符號 57種語言加主題
14963
https://www.education.com/lesson-plan/antonyms-are-opposites/
SKIP TO CONTENT Lesson Plans 2nd Grade Lesson Plans Lesson Plan Antonyms are Opposites! Opposites attract! In this lesson, your students will practice matching antonyms. They will expand their vocabulary by knowing more words to use! Grades: Second Grade Third Grade View aligned standards L.2.5 L.3.1.g Learning Objectives Students will be able to generate a definition for the term antonym. Students will be able to identify pairs of antonyms. Introduction (5 minutes) Begin the lesson by showing your students Antonyms are Opposites. Watch the video twice to solidify the concept. Encourage your students to sing along as the song progresses. Explain that antonyms are opposites.
14964
https://www.opt.math.tugraz.at/~cela/papers/bibch.pdf
Quadratic and Three Dimensional Assignmen ts: An Annotated Bibliograph y Rainer E. Burkard y Erand a C  ela y Abstract The amoun t of literature on quadratic assignmen t and related problems has already gro wn so m uc h that o v erviewing it to determine the most relev an t dev elopmen ts and the most recen t trends b ecomes more and more dicult. This pap er pro vides a collection of references on quadratic and three dimensional assignmen t problems together with brief annotations. W e consider all asp ects of the quadratic assignmen t problem (QAP) ranging form linearizations and equiv alen ts form ulatio ns to p olynomiall y solv able sp ecial cases and asymptotic b eha vior. Similarly , the most imp ortan t researc h directions on three dimensional assignmen t problems (D AP) are co v ered. Hereb y this pap er lies somewhere in the middle ground b et w een a pure bibliograph y and a surv ey article. It will so on app ear as a separate c hapter in \Annotated Bibliographies in Com binatorial Optimization", edited b y M. Dell'Amico, F. Maoli and S. Martello. Concen trating on con tributions whic h app eared in   or later and fo cusing on the most recen t results, w e consider, ho w ev er, also seminal w ork on the ro ots of the problems at hand and surv ey pap ers whic h can serv e as precious sources of related literature. Con ten ts . Intr oduction . Books and Sur veys . R oots of the Quadra tic Assign-ment Pr oblem (QAP): Basic F a cts and Complexity . Lineariza tions of the QAP . Lo wer Bounds f or the QAP . Exa ct Algorithms f or the QAP . Heuristics f or the QAP . Simulate d anne aling appr o aches . T abu se ar ch appr o aches . Genetic algorithms . Asymptotic Beha vior of the QAP . Pol ynomiall y Sol v able Cases of the QAP 0. Some Applica tions . Codes and D a t a f or the QAP . Generaliza tions . The -Dimensional Assignment Pr oblem (-D AP): Basic F a cts and Complexity . The F a cial Str ucture of the -D AP . Algorithms f or the -D AP . Pol ynomiall y Sol v able Cases of the -D AP This researc h has b een partially supp orted b y the Sp ezialforsc h ungs b erei c h F 00 \Optimierung und Kon-trolle" / Pro jektb ereic h Diskrete Optimierung. y TU Graz, Institut f  ur Mathematik B, Steyrergasse 0, A-00 Graz, Austria.  Quadra tic and -Dimensional Assignments: A Bibliography  In tro duction Giv en t w o n n matrices A and B the quadratic assignmen t problem (QAP) of size n can b e stated as follo ws min S n n X i= n X j = a (i)(j ) b ij ; where S n is the set of p erm utations of f; ; : : : ; ng. Initially the QAP arose as a mathematical mo del of a lo cation problem concerning economic activities. In the con text of lo cation problems whic h still remain a ma jor application of the QAP , n facilities and n lo cations are giv en. Matrix A is the o w matrix, i.e. a ij is the o w of materials mo ving from facilit y i to facilit y j , and matrix B is the distance matrix, i.e. b k l is the distance b et w een facilities k and l . The cost of sim ultaneously assigning facilit y i to lo cation k and facilit y j to lo cation l is a ij b k l . The ob jectiv e of the QAP consists of nding an assignmen t of the facilities to the lo cations with the minim um o v erall cost. No w ada ys, a large v ariet y of other pr actic al applic ations of the QAP are kno wn, includin g suc h areas as sc heduling, man ufacturing, parallel and distributed computing, statistical data analysis and c hemistry . F rom the the or etic al p oint of view , other com binatorial optimization and graph theoretical problems can b e form ulated as QAPs. Just to men tion some w ell kno wn examples consider the tra v eling salesman problem, the turbine problem, the linear ordering problem, graph partitioning problems, subgraph isomorphism and maxim um clique problems. Due to its theoretical and practical relev ance, but also due to its c omplexity , the QAP has b een sub ject of extensiv e researc h since its rst o ccurrence in  . In the last decade w e ha v e seen a dramatic increase of the size of NP-hard com binatorial optimization problems whic h can b e ecien tly solv ed in practice. Unfortunately , the QAP is not one of them; QAP instances of size larger than 0 are still considered in tractable. Th us, the QAP still remains a c hallenging problem from b oth theoretical and practical p oin t of view. The researc h done on the QAP co v ers more or less all of its asp ects. With the in ten tion to iden tify new structural com binatorial prop erties a n um b er of alternativ e form ulations for the QAP ha v e b een giv en. Ranging from equiv alen t Bo olean linear and mixed in teger linear programs to the trace form ulation, they ha v e led to div erse lo w er b ounding pro cedures and exact solution metho ds for this problem. It is probably remark able that quite di eren t approac hes ha v e b een applied to this end: com binatorial metho ds, eigen v alue computation and subgradien t and nonsmo oth optimization tec hniques. The resulting lo w er b ounds ha v e b een incorp orated in cutting planes and branc h and b ound algorithms for the QAP , the latter b eing considered the most ecien t. Recen tly , parallel implemen tation of branc h and b ound metho ds ha v e enabled the solution of test instances of size 0. Ho w ev er, ev en the most sophisticated implemen tations of exact algorithms fail in solving real size QAPs and heuristics still remain the unique mean to solv e medium to large size instances of the problem. Among the large v ariet y of heuristics prop osed for the QAP the so called metaheuristics, tabu searc h, sim ulated annealing and genetic algorithms, seem to b e the most ecien t. As these metho ds are based on neigh b orho o d searc h, they are also appropriate for parallel implemen tations. This enables in turn the heuristic solution of real life problems. Unfortunately , there is no guaran tee on the qualit y of the solutions pro duced b y these metho ds and no tigh t b ounds for large sized QAPs are kno wn. This is not surprising when considering that ev en the appro ximation problem for QAPs is in general NP-hard. On the other side, under certain probabilistic conditions the random QAP b ecomes in some sense trivial as the size of the problem increases. Another researc h direction on QAPs concerns restricted v ersions of the problem. Clearly , Quadra tic and -Dimensional Assignments: A Bibliography most of the e orts fo cus on iden tifying p olynomially solv able cases of the QAP . Ho w ev er, the iden ti cation of pro v ably NP-hard cases helps on understanding structural prop erties of the problem. Recen tly , QAPs whose co ecien t matrices ha v e a sp ecial com binatorial structure ha v e b een in v estigated leading to some new p olynomially solv able cases. Ho w ev er, only a few results of this t yp e are kno w and a lot remains to do in this direction. Another ob ject of researc h w ork related to QAPs concerns its generalizations. Giv en the large area of applications of this problem, its n umerous generalizations and related problems should not b e surprising. The generalizations ma y b e related to the structure of the problem co ecien ts or to the set of feasible solutions. Tw o w ell kno wn examples are probably the biquadr atic assignment pr oblem (BiQAP) and the semi-quadr atic assignment pr oblem (SQAP). Another widely kno wn and w ell studied assignmen t problem is the multidimensional assign-ment pr oblem (MAP), in particular the thr e e dimensional assignment pr oblem (-D AP). There are t w o w ell distinguished v ersions of the -D AP: the axial -D AP and the planar -D AP . In the -D AP of size n w e are giv en three disjoin t sets I , J , K of cardinalit y n eac h and a w eigh t c ij k asso ciated with eac h ordered triplet (i; j; k ) I J K . In the axial -D AP w e w an t to nd a minim um (maxim um) w eigh t collection of n pairwise disjoin t triplets as ab o v e, whereas in the planar -D AP the goal is to nd n triplets forming n disjoin t sets of n disjoin t triplets eac h. The m ultidimensional assignmen t problem arises as a generalization of the axial -D AP when n-tuples are considered instead of triplets. Both the axial and the planar -D AP are kno wn to b e NP-hard and ha v e sev eral applications with resp ect to sc heduling and time-tabling problems. A recen t application of the MAP concerns data asso ciation problems in m ultitarget trac king and m ultisensor data fusion. The axial (planar) -D AP is a close relativ e to the (solid) transp ortation problem. This relationship has b een probably helpful on studying the facial structure of these problems. Some classes of facet de ning inequalities and corresp onding separation algorithms ha v e b een deriv ed. Among algorithms kno wn for -D APs some branc h and b ound metho ds in v olving Lagrangean relaxation and subgradien t optimization can b e men tioned, the axial problem b eing the mostly studied. Recen tly , a tabu searc h algorithm for the planar -D AP has b een prop osed. Finally , some in v estigations ha v e b een done on sp ecial cases of the axial -D AP . These in v es-tigations concern problems whose co ecien ts ha v e a sp ecial structure. e.g. are decomp osable or ful ll the triangle inequalit y , or p ossess Monge-lik e prop erties. It turns out that in most of the cases the problems remain NP-hard, unless their co ecien ts ful ll additional, more restrictiv e conditions. The latter lead then to p olynomially solv able and p olynomially appro ximable cases, resp ectiv ely . There exists an abundan t literature on the QAP and its generalizations. In dra wing up this bibliograph y , w e ha v e concen trated on publications that app eared in   or later, fo cusing on the most recen t con tributions. Ho w ev er, seminal w ork related to the ro ots of the QAP or review articles whic h con tain a large n um b er of p oin ters to relev an t previous w ork ha v e also b een men tioned. When reviewing pap ers related to algorithmic asp ects of the problem, w e ha v e only rep orted on those whic h presen t the b est computational results, unless relev an t theoretical con tribution is pro vided. W e hop e to ha v e not o v erlo ok ed an y imp ortan t con tribution on the considered problems. Ho w ev er, w e w ould b e pleased to hear ab out additional relev an t w ork in this area and w e w ould highly appreciate an y related p oin ters. Quadra tic and -Dimensional Assignments: A Bibliography  Bo oks and Surv eys The follo wing surv ey pap ers can serv e as a general in tro duction to quadratic assignmen t prob-lems. Co v ering all asp ects of researc h on QAPs, these pap ers pro vide also a large n um b er of p oin ters to the ro ots of the QAP and to earlier surv eys whic h are not listed in this section. G. Fink e, R.E. Burk ard, F. Rendl ( ). Quadratic assignmen t problems. S. Martello, G. Lap orte, M. Minoux, C. Rib eiro (eds.). Surveys in Combinatorial Optimization , Ann. Discr. Math. , North-Holland, Amsterdam, {. This surv ey fo cuses on the \trace form ulation" of the QAP . The eigen v alue approac h for the lo w er b ound computation in the case of symmetric QAPs is in tro duced together with a reduction sc heme for impro ving the resulting b ounds. S.W. Hadley , F. Rendl, H. W olk o wicz ( 0). Bounds for the quadratic assignmen t problem us-ing con tin uous optimization tec hniques. Inte ger Pr o gr amming and Combinatorial Optimization, Univ ersit y of W aterlo o Press, {. This article reviews lo w er b ounding pro cedures for the QAP based on con tin uous optimization tec hniques. Eigen v alue tec hniques, reduced gradien t metho ds, trust region metho ds, sequen tial quadratic programming and sub di eren tial calculus are applied to appro ximations and relax-ations of the QAP . R.E. Burk ard ( ). Lo cations with Spatial In teractions: The Quadratic Assignmen t Problem. P .B. Mirc handani, R.L. F rancis (eds.). Discr ete L o c ation The ory , John Wiley & Sons, {. This surv ey resumes kno wn results related to (mixed) in teger programming form ulations, b ounding pro cedures, exact algorithms and heuristics for QAPs. Some t ypical applications of the QAP are describ ed and a n um b er of pap ers describing less t ypical applications are referenced. Moreo v er, the asymptotic b eha vior of the QAP is describ ed. P .M. P ardalos, F. Rendl, H. W olk o wicz ( ). The quadratic assignmen t problem: A surv ey and recen t dev elopmen ts. DIMA CS Series Discr. Math. The or. Comp. Sci. , {. This is the most recen t surv ey on the QAP . It app eared as an in tro ductory article in the pro ceedings b o ok of the DIMA CS w orkshop on quadratic assignmen t and related problems. It fo cuses on recen t results concerning computation of lo w er b ounds, computational complexit y , heuristic approac hes for the QAP and its generalizations. Quadr atic Assignment and R elate d Pr oblems , Pro c. DIMA CS W orkshop on Quadratic Assign-men t Problems, P .M. P ardalos, H. W olk o wicz (eds.). DIMA CS Series Discr. Math. Theor. Comp. Sci. . This b o ok o ers a collection of up-to-date con tributions on computational approac hes to the QAP and its applications. Ro ots of the Quadratic Assignmen t Problem (QAP): Basic F acts and Complexit y T.C. Ko opmans, M.J. Bec kmann ( ). Assignmen t problems and the lo cation of economic activities. Ec onometric a , {. Quadra tic and -Dimensional Assignments: A Bibliography  This is the rst o ccurrence of the standard QAP form ulation: min P n i;j = a (i)(j ) b ij , where ranges o v er all p erm utations of f; ; : : : ; ng. The QAP is deriv ed as a mathematical form ula-tion of a problem arising along with the lo cation of economic activities. P .C. Gilmore ( ). Optimal and sub optimal algorithms for the quadratic assignmen t problem. SIAM J. Appl. Math. 0, 0{. This pap er in tro duces the so called Gilmor e-L aw ler b ounds whic h still remain one of the most imp ortan t and frequen tly used b ounds for the QAP . Based on these b ounds, t w o heuristic approac hes are prop osed. E.L. La wler ( ). The quadratic assignmen t problem. Management Sci. , { . A more general QAP is in tro duced, where the ob jectiv e is the minimization of a double sum of the form P n i;j = d  (i) (j )ij o v er all p erm utations of f; ; : : : ; ng. The problem co ecien ts d ij k l form an arra y with n  elemen ts. Moreo v er, the author deriv es an equiv alen t in teger pro-gramming form ulation for this problem and describ es the computation of lo w er b ounds. C.E. Nugen t, T.E. V ollmann, J. Ruml ( ). An exp erimen tal comparison of tec hniques for the assignmen t of facilities to lo cations, Op er. R es. , 0{. An impro v emen t metho d com bined with random elemen ts is prop osed. This metho d is com-pared with deterministic impro v emen t algorithms on a set of QAP test instances. No w ada ys these instances are kno wn as Nugent's pr oblems and are frequen tly used for exp erimen tal pur-p oses. G.W. Gra v es, A.B. Whinston ( 0). An algorithm for the quadratic assignmen t problem. Management Sci. , {. The authors deriv e form ulas for the mean and the v ariance of the ob jectiv e function v alue of the QAP . Moreo v er, en umerativ e algorithms are prop osed whic h exploit this statistical informa-tion. R.E. Burk ard ( ). Quadratisc he Bottlenec kprobleme. Op er. R es. V erfahr en , {. The QAP with b ottlenec k ob jectiv e function is in tro duced. The goal is to minimize max i;j n a (i)(j ) b ij o v er all p erm utations of f; ; : : : ; ng. Moreo v er, the author prop oses lo w er b ounds for the b ottlene ck QAP to b e incorp orated in branc h and b ound algorithms. S. Sahni, T. Gonzalez ( ). P-complete appro ximation problems. J. A CM , {. The computational complexit y of the QAP is in v estigated sho wing that this problem is strongly NP-hard. Moreo v er, it is sho wn that the existence of a p olynomial -appro ximate algorithm for QAPs implies P = N P . M. Queyranne ( ). P erformance ratio of p olynomial heuristics for triangle inequalit y quadratic assignmen t problems. Op er. R es. L ett. , {. The author considers QAPs with co ecien t matrices ful lling the triangle inequalit y . It is sho wn that for suc h QAPs no p olynomial heuristic algorithm with b ounded asymptotic p erfor-mance ratio exists unless P = N P . K.A. Murth y , P . P ardalos and Y. Li ( ). A lo cal searc h algorithm for the quadratic assignmen t Quadra tic and -Dimensional Assignments: A Bibliography  problem. Informatic a ,  -. A new neigh b orho o d for QAPs is prop osed whic h is similar to the Kernighan-Lin neigh b or-ho o d for the graph partitioning problem. It is sho wn that the corresp onding lo cal searc h problem is PLS-complete.  Linearizations of the QAP The QAP can b e equiv alen tly form ulated as a Bo olean, an in teger or a mixed in teger linear pro-gram. There exists a large n um b er of suc h equiv alen t form ulations for the QAP . This approac h is particularly fruitful concerning the computation of lo w er b ounds. L. Kaufman, F. Bro ec kx ( ). An algorithm for the quadratic assignmen t problem using Benders' decomp osition. Eur op e an J. Op er. R es. , 0{. The authors prop ose an equiv alen t form ulation for the QAP as a mixed in teger linear program with n real v ariables, n in teger v ariables and O (n ) constrain ts. This is one of the \smallest" linearizations of the QAP with resp ect to the n um b er of v ariables and constrain ts. M.S. Bazaraa, H.D. Sherali ( 0). Benders' partitioning sc heme applied to a new form ulation of the quadratic assignmen t problem. Naval R es. L o g. Quart. , {. An equiv alen t form ulation of the QAP as a mixed in teger linear program with a highly sp e-cialized structure is prop osed. This form ulation whic h in v olv es n Bo olean v ariables, n (n )= real v ariables and n linear constrain ts, p ermits the e ectiv e use of the partitioning sc heme of Benders. W.P . Adams, H.D.Sherali ( ). A tigh t linearization and an algorithm for zero-one quadratic programming problems. Management Sci. , { 0. A linearization for a class of linearly constrained 0  quadratic programming problems con taining the QAP is prop osed. It is sho wn that this linearization is tigh ter than other ones existing in the literature. Moreo v er, an implicit en umeration algorithm whic h mak es use of the strength of this linearization is deriv ed. W.P . Adams, T.A. Johnson ( ). Impro v ed linear programming-based lo w er b ounds for the quadratic assignmen t problem. DIMA CS Series Discr. Math. The or. Comp. Sci. , {. A new mixed 0- linear form ulation for the QAP is prop osed. By appropriately surrogating selected constrain ts and com bining v ariables, most of the kno wn linear form ulations for the QAP can b e obtained. Moreo v er, most of the resulting b ounding tec hniques can b e describ ed in terms of the Lagrangean dual of this new form ulation of the QAP . A dual-ascen t pro cedure is prop osed for sub optimally solving a relaxation of the dual of the new QAP form ulation deriving also new lo w er b ounds.  Lo w er Bounds for the QAP N. Christo des, M. Gerrard ( ). A graph theoretic analysis of b ounds for the quadratic assignmen t problem. P .Hansen (ed.). Studies on Gr aphs and Discr ete Pr o gr amming , North-Holland, {. Quadra tic and -Dimensional Assignments: A Bibliography  The authors consider a sp ecial v ersion of the QAP , where the feasible solutions corresp ond to isomorphisms of graphs. This v ersion of the QAP is p olynomially solv able in the case that the co ecien t matrices are w eigh ted adjacency matrices of isomorphic trees or other simple graphs, eg. wheels or cycles. The latter solv able cases can b e used to generate lo w er b ounds for the general QAP . The follo wing three pap ers deal with Lagrangean tec hniques for computing lo w er b ounds. A.M. F rieze, J. Y adegar ( ). On the quadratic assignmen t problem. Discr. Appl. Math. ,  { . The relationship b et w een the Gilmore-La wler b ounds for the QAP on reduced matrices and a Lagrangean relaxation of a particular mixed 0  linear form ulation for the QAP is in v estigated. The Gilmore-La wler b ounds obtained b y in v olving an \optimal" reduction are dominated b y the con tin uous relaxation of the prop osed linear form ulation for the QAP . A.A. Assad, W. Xu ( ). On lo w er b ounds for a class of quadratic 0;  programs. Op er. R es. L ett. , {0. P . Carraresi, F. Malucelli ( ). A new lo w er b ound for the quadratic assignmen t problem. Op er. R es. 0, Suppl. No. , S{S. In these t w o pap ers iterativ e metho ds are used for generating non-decreasing sequences of lo w er b ounds for the QAP . In eac h iteration the problem is reform ulated and a lo w er b ound for the new form ulation is computed. The reform ulation is based on a Lagrangean dual-ascen t pro cedure and on the information giv en b y the dual v ariables arising along with the lo w er b ound computation, resp ectiv ely . The follo wing four pap ers use the trace form ulation of the QAP for generating lo w er b ounds based on eigen v alue computations. F. Rendl ( ). Ranking scalar pro ducts to impro v e b ounds for the quadratic assignmen t problem. Eur op e an J. Op er. R es. 0, {. The author reconsiders the eigen v alue b ounds for QAPs prop osed b y Fink e, Burk ard and Rendl ( ) as describ ed in Section . In the case when the linear term resulting after the reduction mostly in uences the ob jectiv e function, the eigen v alue b ound can b e impro v ed b y ranking the k -b est solutions of the linear term. S.W. Hadley , F. Rendl, H. W olk o wicz ( ). Symmetrization of nonsymmetric quadratic as-signmen t problems and the Ho man-Wielandt inequalit y , Line ar A lg. Appl. , {. A tec hnique is prop osed to transform a nonsymmetric QAP in to an equiv alen t QAP on Her-mitian matrices. The eigen v alue b ound for symmetric QAPs is extended to the general problem and Ho man-Wielandt-t yp e eigen v alue inequalities for general matrices are deriv ed. F. Rendl, H. W olk o wicz ( ). Applications of parametric programming and eigen v alue maxi-mization to the quadratic assignmen t problem. Math. Pr o gr am. , -. The classical eigen v alue b ounds for QAPs on symmetric matrices can b e impro v ed b y apply-ing sp ecial reduction sc hemes. The authors deriv e an \optimal" reduction taking sim ultaneously in to accoun t the quadratic term and the linear term of the ob jectiv e function. This in v olv es a Quadra tic and -Dimensional Assignments: A Bibliography  steep est-ascen t algorithm based on sub di eren tial calculus. S.W. Hadley , F. Rendl, H. W olk o wicz ( ). A new lo w er b ound via pro jection for the quadratic assignmen t problem. Math. Op er. R es. , { . The standard eigen v alue b ounds for the QAP are impro v ed. The new b ounds mak e use of a tigh ter relaxation on orthogonal matrices with constan t ro w and column sums. The additional constrain ts of the new relaxation are pro jected in to the space of orthogonal matrices of size n , where n is the size of the giv en QAP . F or b ounding the quadratic part of the pro jected program standard eigen v alue approac hes are used. S.E. Karisc h, F. Rendl ( ). Lo w er b ounds for the quadratic assignmen t problem via triangle decomp ositions. Math. Pr o gr am. , -. QAPs where one of the co ecien t matrices is the distance matrix of a grid graph are consid-ered. The problem is decomp osed in to a trivially solv able QAP and a so called \residual QAP". A lo w er b ound for the residual problem is computed via pro jection and nonsmo oth optimization tec hniques are used to deriv e an appropriate decomp osition.  Exact Algorithms for the QAP A large v ariet y of exact algorithms has b een prop osed for the QAP among whic h the branc h and b ound metho ds generally yield the b etter results. Man y of these algorithms are men tioned and/or describ ed in the surv eys cited in Section . A new er dev elopmen t is deriv ed b y Edw ards in the follo wing pap er. C.S. Edw ards ( 0). A branc h and b ound algorithm for the Ko opmans-Bec kmann quadratic assignmen t problem. Math. Pr o gr am. Study , {. The branc h and b ound approac h is based on the trace form ulation of the QAP whic h allo ws to e ectiv ely use a binary branc hing rule. The p erformance of branc h and b ound algorithms dep ends signi can tly on the ecience and on the qualit y of the in v olv ed lo w er b ounds. The p erformance of suc h algorithms can b e also impro v ed b y a smart use of the a v ailable hardw are in parallel implemen tations. The follo wing three pap ers describ e some parallel branc h and b ound algorithms for the QAP . C. Roucairol ( ). A parallel branc h and b ound algorithm for the quadratic assignmen t prob-lem. Discr. Appl. Math. , {. P . P ardalos, J. Crouse (  ). A parallel algorithm for the quadratic assignmen t problem. Pr o c. Sup er c omputing Conf., {0. J. Clausen, M. P erreg  ard, Solving large quadratic assignmen t problems in parallel. Computa-tional Opt. Appl. (to app ear) T. Mautor, C. Roucairol ( ). A new exact algorithm for the solution of quadratic assignmen t problems, Discr. Appl. Math. , { . It is sho wn ho w to exploit the symmetries on the cost matrix in order to reduce the branc h and b ound tree. The prop osed branc h and b ound algorithm uses p olytomic branc hing rule and Quadra tic and -Dimensional Assignments: A Bibliography outp erforms most of the other branc h and b ound sc hemes existing at that time. N. Christo des, E. Bena v en t (  ). An exact algorithm for the quadratic assignmen t problem on a tree. Op er. R es. , 0{. A sp ecial case of the QAP is considered, where the o w matrix is the w eigh ted adjacency matrix of a tree. A branc h and b ound metho d for this NP-hard sp ecial case is deriv ed. An in-teger programming form ulation for this problem is giv en and its Lagrangean relaxation is solv ed b y using a dynamic programming sc heme. This approac h pro duces tigh t lo w er b ounds. M.E. Dy er, A.M. F rieze, C.J.H. McDiarmid ( ). On linear programs with random costs. Math. Pr o gr am. , {. The authors deriv e an in teresting result on the size of branc h and b ound trees for random QAPs. The considered lo w er b ounds arise as solutions of an LP relaxation of the Bo olean linear programming form ulation of the QAP giv en b y F rieze and Y adegar ( ) and describ ed in Sec-tion ). It is sho wn that in case of binary branc hing the n um b er of the explored branc hing no des gro ws sup er-exp onen tially with probabilit y tending to  as the site of the problem approac hes in nit y .  Heuristics for the QAP There is a large v ariet y of heuristics for the QAP ranging from construction and deterministic impro v emen t metho ds to tabu searc h and sim ulation based algorithms. In the follo wing w e will only men tion some of the most recen t approac hes. Y. Li, P .M. P ardalos, M. Resende ( ). A greedy randomized adaptiv e searc h pro cedure for the quadratic assignmen t problem. DIMA CS Series Discr. Math. The or. Comp. Sci. , {. The authors prop ose an impro v emen t metho d whic h com bines greedy elemen ts with prob-abilistic asp ects. The so called GRASP sho ws a go o d computational b eha vior in man y QAP instances from QAPLIB (see Section 0 b elo w). . Sim ulated annealing approac hes R.E. Burk ard, F. Rendl ( ). A thermo dynamically motiv ated sim ulation pro cedure for com-binatorial optimization problems. Eur op e an J. Op er. R es. ,  {. This is one of the rst applications of simulate d anne aling (SA) to the QAP . It is sho wn that SA outp erforms most of the existing heuristics for the QAP at that time. M.R. Wilhelm, T.L. W ard ( ). Solving quadratic assignmen t problems b y \sim ulated anneal-ing". IIE T r ans.  , 0- . SA is impro v ed b y in tro ducing \equilibria" comp onen ts whic h comply with the statistic me-c hanics bac kground of the underlying Metrop olis' algorithm. D.T. Connolly ( 0). An impro v ed annealing sc heme for the QAP . Eur op e an J. Op er. R es. , {00. Quadra tic and -Dimensional Assignments: A Bibliography 0 A new elemen t of the annealing sc heme, the so called optimal temp er atur e , is in tro duced. The corresp onding algorithm yields a promising impro v emen t of the trade-o b et w een computation time and solution qualit y . . T abu searc h approac hes One of the rst applications of tabu searc h to the QAP and a parallel implemen tation of tabu searc h for QAPs can b e found in the follo wing t w o pap ers, resp ectiv ely . J. Sk orin-Kap o v ( 0). T abu searc h applied to the quadratic assignmen t problem. ORSA J. Comput. , {. J. Chakrapani, J. Sk orin-Kap o v ( ). Massiv ely parallel tabu searc h for the quadratic assign-men t problem. A nn. Op er. R es. , {. The p erformance of tabu searc h algorithms dep ends v ery m uc h on the size of the tabu list and on the w a y this list is handled. Tw o of the most e ectiv e strategies leading to a go o d trade-o b et w een the div ersi cation and the in tensi cation of the searc h are presen ted in the follo wing t w o pap ers. E. T aillard ( 0). Robust tab o o searc h for the quadratic assignmen t problem. P arallel Comput. , {. R. Battiti, G. T ecc hiolli ( ). The reactiv e tabu searc h. ORSA J. Comput. , {0. . Genetic algorithms W e b eliev e that among genetic approac hes for QAPs, the follo wing is a remark able con tribution. R.K. Ah uja, J.B. Orlin, A. Tiv ari ( ). A greedy genetic algorithm for the quadratic assignmen t problem. Working p ap er , Sloan Sc ho ol of Managemen t, MIT. This genetic algorithm attempts to strik e a balance b et w een div ersit y and a bias to w ards tter individuals. Appropriate greedy elemen ts are com bined to this end with genetic ingredien ts lik e new crosso v er sc hemes, tournamen ting, p erio dic lo cal optimization and an immigration rule that promotes div ersit y .  Asymptotic Beha vior of the QAP Under natural probabilistic constrain ts on the input data, the QAP sho w an in teresting asymp-totic b eha vior. Namely , the ratio b et w een the \b est" and the \w orst" v alue of the ob jectiv e function approac hes  with probabilit y tending to  as the size of the problem approac hes in n-it y . This b eha vior w as rst sho wn b y Burk ard and Finc k e for sum and b ottlenec k ob jectiv es: R.E. Burk ard, U. Finc k e ( ). The asymptotic probabilistic b eha vior of quadratic sum assign-men t problems. Z. Op er. R es. , {. R.E. Burk ard, U. Finc k e ( ). On random quadratic b ottlenec k assignmen t problems. Math. Pr o gr am. , {. Quadra tic and -Dimensional Assignments: A Bibliography  J.B.G. F renk, M. v an Hou w eninge, A.H.G. Rinno o y Kan ( ). Asymptotic prop erties of the quadratic assignmen t problem. Math. Op er. R es. 0, 00-. The range of the con v ergence in the ab o v e men tioned b eha vior is impro v ed from \with prob-abilit y" to \almost sure". W.T. Rhee ( ). A note on asymptotic prop erties of the quadratic assignmen t problem. Op er. R es. L ett. ,  {00. The results of F renk, Hou w eninge and Rinno o y Kan ( ) are impro v ed b y pro viding a sim-pler pro of and sharp er estimations for the almost sure con v ergence in the asymptotic b eha vior of the QAP . W.T. Rhee ( ). Sto c hastic analysis of the quadratic assignmen t problem. Math. Op er. R es. , { . The maximization v ersion of the QAP is considered. A simple greedy approac h is describ ed whic h pro duces a go o d appro ximation of the optimal solution with o v erhelming probabilit y . This complies with the previous results on the asymptotic b eha vior of the QAP . E. Bonomi, J.-L. Lutton ( ). The asymptotic b eha vior of quadratic sum assignmen t problems: A statistical mec hanics approac h. Eur op e an J. Op er. R es. , {00. A statistical mec hanics approac h, based on the Boltzmann distribution and the Metrop olis algorithm, is applied to study the asymptotic b eha vior of the QAP . The authors deriv e in this w a y the same result as Burk ard and Finc k e ( ) and p erform n umerical exp erimen ts whic h con rm this theoretical result. In the follo wing t w o pap ers a com binatorial condition whic h guaran tees an analogous asymp-totic b eha vior for general com binatorial optimization problems is singled out. The rst pap er sho ws con v ergence with probabilit y whereas in the second one an almost sure con v ergence is pro v en. R.E. Burk ard, U. Finc k e ( ). Probabilistic asymptotic prop erties of some com binatorial optimization problems. Discr. Appl. Math. , { . W. Szpank o wski ( ). Com binatorial optimization problems for whic h almost ev ery algorithm is asymptotically optimal!. Optimization ,  {. P olynomially Solv able Cases of the QAP The rst p olynomially solv able cases of the QAP w ere iden ti ed b y Christo des and Ger-rard ( ) as men tioned in Section . These results are extended and generalized to minimal v ertex series-parallel (MVSP) digraphs as describ ed in the follo wing pap er. F. Rendl ( ). Quadratic assignmen t problems on series parallel digraphs. Z. Op er. R es. 0, {. It is sho wn that the general v ersion of the QAP on isomorphic MVSPs is NP-hard. Ho w ev er, MVSP digraphs whic h do not con tain the bipartite digraph K ; as v ertex induced subgraph lead to p olynomially solv able cases. A p olynomial time algorithm is prop osed for solving the latter. Quadra tic and -Dimensional Assignments: A Bibliography  Recen t in v estigation on p olynomially solv able cases of the QAP rely on sp ecial com binatorial prop erties of the in v olv ed co ecien t matrices, as sho wn in the follo wing t w o pap ers. R.E. Burk ard, E. C  ela, G. Rote, G.J. W o eginger ( ). The quadratic assignmen t problem with an an ti-Monge and a T o eplitz matrix: Easy and hard cases. Math. Pr o gr am. (to app ear) R.E. Burk ard, E. C  ela, V.M. Demidenk o N.N. Metelski, G.J. W o eginger ( ). Easy and hard cases of the quadratic assignmen t problem: A surv ey . Working p ap er , Graz Univ ersit y of T ec hnology . 0 Co des and Data for the QAP R.E. Burk ard, U. Derigs ( 0). Assignmen t and Matc hing Problems: Solution Metho ds with F OR TRAN-Programs. L e ctur e Notes Ec on. Math. Sys. , Springer-V erlag, Berlin. This b o ok con tains F OR TRAN co des for exact and heuristic algorithms for the QAP . A p oin ter to the source les can b e found at ac.at/ e karisch/qaplib/ R.E. Burk ard, S.E. Karisc h, F. Rendl ( ). QAPLIB -a quadratic assignmen t problem library , Eur op e an J. Op er. R es. , { . Up dated v ersion Marc h  . This pap er describ es a library of test instances for the QAP . F or eac h instance the b est kno wn solutions and the corresp onding ob jectiv e function v alues are giv en. The up dated v ersion pro vides also the b est kno wn lo w er b ounds for eac h test instance. This library can b e found at ac.at/ e karisch/qaplib/ and is also a v ailable p er anon ymous ftp at ftp.tu-graz.ac.at/pub/papers /qaplib. The follo wing t w o pap ers prop ose algorithms for generating QAP instances with kno wn op-timal solution. G.S. P alub ec kis ( ). Generation of quadratic assignmen t test problems with kno wn optimal solutions. (in Russian). U.S.S.R. Comput. Maths. Math. Phys. , { . Y. Li, P .M. P ardalos ( ). Generating quadratic assignmen t test problems with kno wn optimal p erm utations. Computational Opt. Appl. , {. The authors sho w that the test instances generated b y the P alub ec kis algorithm are \easy" in the sense that the corresp onding optimal v alue of the ob jectiv e function can b e computed in p olynomial time. The pro of relies on the fact that the in v olv ed co ecien t matrices are Euclidean. The P alub ec kis' idea is then generalized to generate test instance with kno wn optimal solution whose co ecien t matrices are not Euclidean.  Generalizations A natural generalization of the QAP , the so called biquadr atic assignment pr oblem (BiQAP), arises in the VLSI design. The BiQAP co ecien ts are organized in four dimensional arra ys and an instance of size n lo oks as follo ws: min  S n n X i;j;k ;l= a  (i) (j ) (k ) (l) b ij k l Quadra tic and -Dimensional Assignments: A Bibliography  where S n is the set of p erm utations of f; ; : : : ; ng. The follo wing t w o pap ers generalize previous w ork on QAPs to deriv e linearizations, lo w er b ounds and heuristic approac hes for the BiQAP . Moreo v er, the asymptotic b eha vior of the BiQAP is in v estigated and it is sho wn that it is anal-ogous to that of the QAP . R.E. Burk ard, E. C  ela, B. Klinz ( ). On the biquadratic assignmen t problem. DIMA CS Series Discr. Math. The or. Comp. Sci. , {. R.E. Burk ard, E. C  ela ( ). Heuristics for biquadratic assignmen t problems and their compu-tational comparison. Eur op e an J. Op er. R es. , {00. A n um b er of applications for the so called semi-quadr atic assignment pr oblem (SQAP) are describ ed in the follo wing three pap ers. The SQAP has the same ob jectiv e function as the QAP , whereas the feasible solutions do not need to b e p erm utations but simply injectiv e functions mapping f; ; : : : ; ng in to itself. The follo wing references pro vide also p oin ters to b ounding pro-cedures, heuristics and p olynomially sp ecial cases of the SQAP . R.J. F reeman, D.C. Gogert y , G.W. Gra v es, R.B.S. Bro oks ( ). A mathematical mo del of supply supp ort for space op erations. Op er. R es. , {. V.F. Magirou, J.Z. Milis (  ). An algorithm for the m ultipro cessor assignmen t problem. Op er. R es. L ett. , {. F. Malucelli, D. Pretolani ( ). L ower b ounds for the quadr atic semi-assignment pr oblem. T ec hnical Rep ort , Cen tre des Rec herc hes sur les transp orts, Univ ersit  e de Mon tr  eal.  The -Dimensional Assignmen t Problem (-D AP): State-men t and Complexit y The m ultidimensional assignmen t problem (MAP) and some of its applications w ere in tro duced in: W.P . Piersk alla ( ). The m ultidimensional assignmen t problem. Op er. R es. , {. The simplest MAP is the -D AP . Moreo v er, most of the results obtained for the -D AP can b e naturally extended to the MAP , to o. O. Leue ( ). Metho den zur L osung -dimensionaler Zuordn ugsprobleme. A ngewandte Math-ematik , -. Tw o v ersions of the -D AP are stated: the axial -D AP and the planar -D AP . The feasible solutions of the axial -D AP are pair of p erm utations of f; : : : ; ng, whereas there is a one-to-one relation b et w een latin squares and the feasible solutions of the planar -D AP . A.M F rieze ( ). A bilinear programming form ulation of the -dimensional assignmen t prob-lems. Math. Pr o gr am. , { . The author considers a sligh tly generalized form of the axial -D AP and giv es an equiv alen t form ulation of this problem as a bilinear program. This form ulation is exploited then to deriv e a necessary optimalit y condition for the axial -D AP . Quadra tic and -Dimensional Assignments: A Bibliography  While the axial -D AP is NP-hard in the strong sense b y a standard observ ation, the NP-hardness of the planar -D AP is sho wn in the follo wing pap er: A.M. F rieze ( ). Complexit y of a -dimensional assignmen t problem. Eur op e an J. Op er. R es. , {. V.A. Jemelic hev, M.M. Ko v aliev, M.K. Kra vtso v ( ). Polytop es, gr aphs and optimisation, Cam bridge Univ. Press. Multidimensional assignmen t problems are closely related to the m ulti-index transp ortation problems. This b o ok pro vides a detailed analysis of the m ulti-index transp ortation problem concerning in particular its p olyhedral structure.  The F acial Structure of the -D AP The facial structure of the MAP pla ys an imp ortan t role in deriving ecien t algorithms of branc h and cut t yp e. The facial structure of the axial -D AP has b een in v estigated in the follo wing three pap ers. The authors iden tify facet de ning equalities for the corresp onding p olytop e and deriv e also separation algorithms for these facets. E. Balas, M.J. Saltzman (  ). F acets of the three-index assignmen t p olytop e. Discr. Appl. Math. , 0{ . E. Balas, L. Qi ( ). Linear-time separation algorithms for the three-index assignmen t p oly-top e. Discr. Appl. Math.  {. L. Qi, E. Balas, G. Gw an ( ). A new facet class and a p olyhedral metho d for the three-index assignmen t problem. D.-Z. Du (ed.). A dvanc es in Optimization, Klu w er Academic, {. The facial structure of planar -D AP has b een in v estigated in the follo wing t w o pap ers. R.E. Burk ard, R. Euler, Grommes ( ). On latin squares and the facial structure of related p olytop es. Discr. Math. , -. R. Euler ( ). Odd cycles and a class of facets of the axial -index assignmen t p olytop e. Applic ationes Mathematic ae (Zastosowania Matematyki) XIX, {.  Algorithms for the -D AP P . Hansen, L. Kaufman ( ). A primal-dual algorithm for the three dimensional assignmen t problem. Cahiers Centr e  Etudes R e ch. Op er. , {. The authors apply a mo di ed v ersion of the Hungarian metho d for the linear (t w o dimen-sional) assignmen t problem to the axial -D AP . R.E. Burk ard, K. F r ohlic h ( 0). Some remarks on three dimensional assignmen t problems. Metho ds Op er. R es. , {. An exact solution metho d for the axial -D AP is deriv ed. This metho d com bines reduction steps with lo w er b ounds computation b y subgradien t optimization within a branc h and b ound sc heme. Quadra tic and -Dimensional Assignments: A Bibliography  A.M. F rieze, J. Y adegar ( ). An algorithm for solving -dimensional assignmen t problems with applications to sc heduling a teac hing practice. Op er. R es. ,  { . The authors prop ose a subgradien t optimization metho d for solving a Lagrangean relaxation of a sligh tly generalized maximization v ersion of the axial -D AP . This algorithm pro duces quite go o d solutions on test instances with real life and random input data. E. Balas, M.J. Saltzman ( ). An algorithm for the three-index assignmen t problem. Op er. R es. , 0{. A branc h and b ound algorithm for the axial -D AP is deriv ed. The computation of lo w er b ound in v olv es subgradien t tec hniques for solving a Lagrangean relaxation of the problem whic h incorp orates a class of facets de ning inequalities. A no v el branc hing strategy exploits the struc-ture of the -D AP to reduce the size of the en umeration tree. R.E. Burk ard, R. Rudolf ( ). Computational in v estigations on -dimensional axial assignmen t problems. Belgian J. Op er. R es. Stat. Comp. Sci. , { . This computational study compares di eren t branc hing rules and b ounding pro cedures for the axial -D AP . A. P o ore ( ). P artitioning m ultiple data sets: Multidimensional assignmen t and Lagrangean relaxation. DIMA CS Series Discr. Math. The or. Comp. Sci. , {. A Lagrangean relaxation metho d for a class of MAPs is prop osed. The relaxed problem is again a MAP and its maximization in v olv es nonsmo oth optimization tec hniques. The algorithm is illustrated and tested on instances arising as mathematical mo des of real life data asso ciation problems. A. P o ore, A. Rob ertson ( ). A new Lagrangean relaxation based algorithm for a class of m ultidimensional assignmen t problems. Computational Opt. Appl. (to app ear) A new Lagrangean relaxation metho d for sparse MAPs is prop osed. The relaxed problem is a linear (t w o dimensional) assignmen t problem, whereas the computation of the Lagrangean m ultipliers in v olv es non-smo oth optimization metho ds. An in teresting application of the MAP arises along with data asso ciation in m ultitarget trac k-ing as describ ed in the follo wing t w o pap ers. A. P o ore ( ). Multidimesional assignmen t form ulation of data asso ciation problems arising from m ultitarget and m ultisensor trac king. Computational Opt. Appl. , {. A. P o ore ( ). Multidimensional assignmen t and m ultitarget trac king. DIMA CS Series Discr. Math. The or. Comp. Sci.  ,  { . Compared to the axial -D AP less w ork has b een done on the planar -D AP . The three fol-lo wing pap ers describ e branc h and b ound and heuristic metho ds for this problem. M. Vlac h ( ). A branc h and b ound metho d for the three index assignmen t problem. Ekonomicko-Matematicky Obzor , { . A straigh tforw ard branc h and b ound metho d for the planar -D AP is describ ed. Quadra tic and -Dimensional Assignments: A Bibliography  D. Magos, P . Miliotis ( ). An algorithm for the planar three-index assignmen t problem. Eur op e an J. Op er. R es. , {. A branc h and b ound algorithm for the planar -D AP is prop osed and tested. It in v olv es p oly-tomic branc hing and an impro v emen t metho d for the computation of the upp er b ounds. The computation of lo w er b ound is based on Lagrangean relaxations solv ed b y subgradien t metho ds. D. Magos ( ). T abu searc h for the planar three-index assignmen t problem. J. Glob al Opt. , {. A tabu searc h algorithm for the planar -D AP is prop osed and tested on problems of size  to . The algorithm com bines standard tabu searc h elemen ts suc h as xed (v ariable) tabu list size and frequency-based memory with a new neigh b orho o d structure in the set of latin squares.  P olynomially Solv able Cases of the -D AP P olynomially solv able sp ecial cases of the axial -D AP w ere singled out in the follo wing t w o pap ers. R.E. Burk ard, R. Rudolf, G.J. W o eginger ( ). Three-dimensional axial assignmen t problems with decomp osable cost-co ecien ts. Discr. Appl. Math. , - . This pap er in v estigates -D APs of the form min ; S n n X i= a i b (i) c (i) where S n is the set of p erm utations of f; ; : : : ; ng, and sho ws that in general this problem is NP-hard. Additional conditions on the problem co ecien ts (a i ), (b i ) and (c i ),   i  n, lead, ho w ev er, to p olynomially solv able sp ecial cases. Finally , it is sho wn that the maximization v er-sion of the -D AP is also p olynomially solv able pro vided that all co ecien ts are non-negativ e. D. F ortin, R. Rudolf ( ). We ak algebr aic Monge arr ays . SFB Rep ort , Institute of Mathe-matics, Graz Univ ersit y of T ec hnology . The authors generalize Monge prop erties to m ultidimensional arra ys and giv e an explicit op-timal solution for MAPs on arra ys ha ving suc h prop erties. Other t yp es of sp ecial cost co ecien ts for the MAP and the axial -D AP are considered in the follo wing t w o pap ers, resp ectiv ely . Though for the considered sp ecial cost co ecien ts the problems remain NP-hard, p olynomial appro ximation sc hemes can b e giv en. H.-J. Bandelt, Y. Crama, F.C.R. Spieksma ( ) Appro ximation algorithms for m ultidimen-sional assignmen t problems with decomp osable costs. Discr. Appl. Math.  , {0. Y. Crama, F.C.R. Spieksma ( ). Appro ximation algorithms for three-dimensional assignmen t problems with triangle inequalities. Eur op e an J. Op er. R es. 0, { . Ac kno wledgmen t. W e w ould lik e to thank R  udiger Rudolf for helpful suggestions and remarks concerning references on three dimensional assignmen t problems.
14965
https://www.youtube.com/watch?v=bMccdk8EdGo
R-squared, Clearly Explained!!! StatQuest with Josh Starmer 1500000 subscribers 5091 likes Description 270384 views Posted: 18 Nov 2022 R-squared is one of the most useful metrics for understanding how two quantitate things, like weight and height, are related. For a complete index of all the StatQuest videos, check out: If you'd like to support StatQuest, please consider... Patreon: ...or... YouTube Membership: ...buying one of my books, a study guide, a t-shirt or hoodie, or a song from the StatQuest store... ...or just donating to StatQuest! Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter: StatQuest 298 comments Transcript: step Quest step Quest step Quest stat Quest stat Quest is brought to you by the friendly people in the genetics department at the University of North Carolina at Chapel Hill hello and welcome to stat quest in this video we're going to talk about r squared r squared is a metric of correlation that is easy to compute and intuitive to interpret most of us are already familiar with correlation and the standard metric of it plain old r correlation values that are close to 1 or negative one are good and tell you that two quantitative variables for example weight and size are strongly related correlation values close to zero are lame some of you may be asking why should we care about r squared we already have regular r some of you might just be asking what is r squared r squared is very similar to its hipper cousin R but interpretation is easier for example it's not obvious that when R equals 0.7 that's twice as good a correlation as when R equals 0.5 however r squared equals 0.7 is what it looks like it's 1.4 times as good as r squared equals 0.5 the other thing that I like about r squared is that it's easy and intuitive to calculate let's start with an example here we're plotting mouse weight on the y-axis with high weights towards the top and low weights towards the bottom and mouse identification numbers on the x-axis with ID numbers one through seven we can calculate the mean or average of the mouse weights and plot it as a line that spans the graph we can calculate the variation of the data around this mean as the sum of the squared differences of the weight for each Mouse I where I is an individual Mouse represented by a red dot and the mean the difference between each data point is squared so that the points below the mean don't cancel out the points above the mean now what if instead of ordering our mice by their identification number we ordered them by their size instead of using identification number on the x-axis we have Mouse size with the smallest size on the left side and the largest size on the right side all we have done is reorder the data on the x-axis the mean and variation are the exact same as before here we show the mean again as a black bar that spans the graph in the exact same location as it was before also the distances between the dots and the line have not changed just the order of the dots here's a question for you given that we know an individual Mouse's size is the mean or average weight the best way to predict that individual Mouse's weight well the answer is no we can do way better all we have to do is fit a line to the data now we can predict weight with our line you tell me you have a large Mouse I can look at my line and make a good guess about the weight here's another question does the blue line that we just drew fit the data better than the mean if so how much better by I it looks like the blue line fits the data better than the mean how do we quantify that difference r squared in the bottom of the graph I've drawn the equation for r squared we're going to walk through it one step at a time the first part of the equation is just the variation around the mean we already calculated that it's just the sum of the squared differences of the actual data values from the mean the second part of the equation is the variation around our new Blue Line This is calculated in a very similar way here we just want the sum of the squared differences between the actual data points and our new Blue Line the numerator which is the difference between the variation around the mean and the variation around the blue line is then divided by the variation around the mean this makes r squared range from 0 to 1 because the variation around the line will never be greater than the variation around the mean and it will never be less than zero this division also makes r squared a percentage and we'll talk more about that in just a second now we'll walk through an example where we calculate things one step at a time first we'll start with the variation around the mean in this case that equals 32. the variation around the blue line is only six which is what we suspected since it appears to fit the data much better once we've calculated the variation around the mean and the variation around our Blue Line we can plug these values in to our formula for r squared after plugging in our values we get r squared equals 32 minus 6 over 32. after subtracting 6 from 32 we get 26 doing the division 26 divided by 32 gives us 0.81 or 81 percent this means that there is 81 percent less variation around the line than the mean in other words the size weight relationship accounts for 81 percent of the total variation this means that most of the variation in the data is explained by the size weight relationship here's another example in this example we're comparing two possibly uncorrelated variables on the y-axis we have mouse weight again but on the x-axis we now have time spent sniffing a rock like before we calculate the variation around the mean and just like before we got 32. however this time when we calculated the variation around the Blue Line we got a much larger value 30. now we just plug those values into our formula for r squared by doing the math we see that r squared equals 0.06 or 6 percent thus there's only six percent less variation around the line than the mean in other words the sniff weight relationship accounts for only six percent of the total variation this means that hardly any of the variation in the data is explained by the sniff weight relationship now when someone says the statistically significant r squared was 0.9 you can think to yourself very good the relationship between the two variables explains 90 percent of the variation in the data and when someone else says the statistically significant r squared was 0.01 you can think to yourself dag who cares if that relationship is significant it only accounts for one percent of the variation in the data something else must explain the remaining 99 percent what about plain old r how is it related to r squared r squared is just the square of r now when someone says the statistically significant r was 0.9 and we're talking about just plain old r you can think to yourself 0.9 times 0.9 equals 0.81 very good the relationship between the two variables explains 81 percent of the variation in the data and when someone else says the statistically significant R that's plain old R was 0.5 you can think to yourself 0.5 times 0.5 equals 0.25 the relationship accounts for 25 percent of the variation in the data that's good if there are a million other things accounting for the remaining 75 percent and bad if there's only one thing I like r squared more than just plain old R because it's easier to interpret here's an example how much better is R equals 0.7 than R equals 0.5 well if we convert those numbers to r squared we see that when r squared equals 0.7 squared it actually equals 0.5 which means 50 percent of the original variation is explained by the relationship when r squared equals 0.5 squared which equals 0.25 we see that only 25 percent of the original variation is explained by the relationship with r squared it's easy to see that the first correlation is twice as good as the second explaining 50 percent of the original variation is twice as good as only explaining 25 percent of the original variation that said r squared does not indicate the direction of the correlation because squared numbers are never negative if the direction of the correlation isn't obvious you can say the two variables were positively or negatively correlated with r squared equals dot dot dot whatever that value may be these are the two main ideas for r squared r squared is the percentage of variation explained by the relationship between two variables and also if someone gives you a value for plain old R just Square it in your head you'll understand what's going on a whole lot better we've reached the end of our stat Quest tune in next time for an exciting Adventure into the land of statistics
14966
https://static1.squarespace.com/static/5fe101b108d85d5e817a934a/t/60acfa7b68de3f2969621292/1621949051841/Diophantine_Equations_Hvitfeldtska.pdf
Diophantine Equations Hugo Berg, Joakim Colpier, Lycka Drakengren, Kevin Haagensen Str¨ omberg, Yuanqi Peng Hvitfeldtska gymnasiet, G¨ oteborg June 7, 2020 Contents 1 Introduction 2 2 Finding a Factorization 2 2.1 Exercises for the reader . . . . . . . . . . . . . . . . . . . . . . . 4 3 Congruences 4 3.1 Exercises for the reader . . . . . . . . . . . . . . . . . . . . . . . 6 4 Inequalities 6 4.1 Creating extra equations . . . . . . . . . . . . . . . . . . . . . . . 7 4.2 Using symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 4.3 Minimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 4.4 Exercises for the reader . . . . . . . . . . . . . . . . . . . . . . . 8 5 Pythagorean Triples 9 5.1 Exercises for the reader . . . . . . . . . . . . . . . . . . . . . . . 11 1 1 Introduction Diophantine equations are algebraic equations where we only seek the integer solutions. We might wish for a purely algebraic approach for solving these equa-tions in general. That however is usually hard, for most Diophantine equations will require the use of number theory to be solved. Some tricks will be shown here, but as always the best teacher is you. Make sure to practice solving lots of problems. Linear Diophantine equations will not be presented here, although they are the most fundamental type of Diophantine equations and are helpful in under-standing modular arithmetic. The reader might be interested in reading the article about linear Diophantine equations on Brilliant Math & Science Wiki.1 2 Finding a Factorization The word factorization denotes the writing of an expression as a product of two other expressions (two factors of the original expression, hence the name). For example, 6 can be factored as 2 · 3, and x2 + x as x(x + 1). The Fundamental Theorem of Arithmetic states that every integer greater than 1 is either a prime or can be factored as a product of prime numbers in a unique way. As a result of the theorem, knowing the factorization of an expression provides a way of solving Diophantine equations. In a Diophantine equation, if we have a factored expression on one side and an integer on the other side, we can due to the Fundamental Theorem of Arithmetic determine all possible values of the factors by looking at the prime factorization of the integer. We will demonstrate how it works in later examples. We will start by refreshing the reader’s memory and giving some new useful identities to know. We encourage you to ascertain their validity by trying them out by yourself. x2 ± 2xy + y2 = (x ± y)2 (1) x2 −y2 = (x + y)(x −y) (2) x3 ± y3 = (x ± y)(x2 ∓xy + y2) (3) x3 + y3 + z3 −3xyz = (x + y + z)(x2 + y2 + z2 −xy −yz −xz) (4) Knowing these formulas can prove a useful skill, as they are very common and their simplicity makes them easy to handle. Example 2.1. For what integers a and b does the equality a2 −4ab = −4b2 + 9 hold? 1Linear Diophantine Equations (n.d.). On Brilliant Math & Science Wiki. Accessible at: 2 Solution. From identity (1), we can see that (x −y)2 = x2 −2xy + y2. Also, we know that a2 −4ab = −4b2 + 9 is equivalent to a2 −2a(2b) + (2b)2 = 9, and can therefore conclude that (a −2b)2 = 9. The number 9 can be factored as 9 · 1, 3 · 3, (−9) · (−1) and (−3) · (−3). Because the left side of the equation is the product of two identical integers (a −2b), we can conclude that a −2b = ±3, giving us a = ±3 + 2b. It is also worth noting that if a product of two factors is equal to the n:th power of an integer, and the factors have no common divisor greater than 1, the factors are also n:th powers of some integers. That is to say, if x and y are co-prime integers and xy = zn where z and n are integers with n ≥0, then x = an and y = bn for some integers a and b. This can be verified by looking at the individual prime divisors of z, which occur in multiples of n. Since they cannot be split up between two co-prime factors, the prime power is either a factor in x or in y. As a result, both x and y will consist of prime factors with powers being multiples of n, which means that x and y in turn are n:th powers. Remark. Two integers are said to be relatively prime (or co-prime) if their greatest common divisor is 1. The numbers (a, b, c) are pairwise relatively prime if (a, b), (b, c) and (c, a) are all pairs of relatively prime integers. Example 2.2. (Baltic Way 1995) The positive integers a, b, c are pairwise relatively prime, a and c are odd and the numbers satisfy the equation a2 + b2 = c2. Prove that b + c is a square of an integer. Solution. We can rewrite the equation in a more convenient form as a2 = c2−b2. Factoring the right hand side using identity (2) yields a2 = (c + b)(c −b). As previously explained, it is sufficient to prove that b + c and c −b are relatively prime for b + c and c −b to be squares. Let us assume the opposite, that b + c and c−b share a divisor d > 1. Let b+c = dx and c−b = dy, where x and y are integers. Adding the equations together gives 2c = d(x + y), and subtracting them yields 2b = d(x −y). This means that either d divides both b and c, or 2 divides d. We can exclude the first case since b and c are relatively prime. If d were even, b and c would have had the same parity, which would in turn imply that a is even, which would be a contradiction. This means that we can exclude the second case as well. Hence c+b and c−b must be relatively prime, meaning that b + c is the square of an integer (which is also true for c −b), which was to be proven. While an expression might remind you of a certain factorization, it might not always be possible to directly factorize it. In that case, it is often useful to add a term to make the factorization possible. A common strategy is to add an integer to both sides of a Diophantine equation, ending up with a factorizable expression on one side and an expression from which you can determine the factors on the other side. Example 2.3 (Pythagoras Enigma 2019). Find all integer solutions to the equation x3 + y3 −3xy = 3. 3 Solution. Trying to immediately factor the expression in the left hand side will not lead to much progress. Nevertheless, the expression closely resembles the left hand side of identity (4), where z is replaced by 1. Comparing the expressions x3 + y3 −3xy and x3 + y3 + z3 −3xyz where z = 1, we see that they only differ by the number 1. Therefore, we can add 1 to both sides of the equation x3 + y3 −3xy = 3 to make the left hand side factorizable. Consequently, using identity (4), the obtained equation x3 + y3 + 1 −3xy = 4 can be rewritten as (x + y + 1)(x2 + y2 + 1 −xy −x −y) = 4. (5) There are six possible ways to split up the number 4 between the two factors, namely (x+y+1, x2+y2+1−xy−x−y) = (1, 4), (4, 1), (2, 2), (−1, −4), (−4, −1) or (−2, −2). For the first factor of the left hand side in equation (5) to be even, x and y must have different parity, making the second factor odd. Hence, we can exclude the alternatives (2, 2) and (−2, −2). The other alternatives, i.e. (x + y + 1, x2 + y2 + 1 −xy −x −y) = (1, 4), (4, 1), (−1, −4) or (−4, −1), lead to the systems of equations ( x + y = 0 xy = −1 , ( x + y = 3 xy = 2 , ( x + y = −2 3xy = 11 , and ( x + y = −5 3xy = 32 , respectively, which follows from rewriting the second factor as (x + y)2 + 1 − 3xy −(x + y). The two latter can be excluded, since 11 and 32 are not divisible by 3. The two first equations give, by substituting the upper equation into the lower, the solutions (x, y) = (1, −1), (−1, 1), (1, 2) and (2, 1). These are hence the integer solutions to the equation. 2.1 Exercises for the reader Exercise 2.1 (Baltic Way 2003). Let a and b be positive integers. Prove that if a3 + b3 is the square of an integer, then a + b is not a product of two different prime numbers. Exercise 2.2 (Baltic Way 1997). A rectangle can be divided into n equal squares. The same rectangle can also be divided into n + 76 equal squares. Find all possible values of n. Exercise 2.3 (Skolornas matematikt¨ avling 2009). Find all solutions in positive integers to the equation 1 x + 1 y = 1 101. 3 Congruences While often not containing many complicated terms or expressions, even quite normal-looking Diophantine equations can hide enormous amounts of complex-ity. Being able to reduce the mental complexity of some equation should there-fore be very helpful, and as it turns out, using what is called modular arithmetic and congruences is one of the most powerful and fundamental tools at our dis-posal. The basic definition in this section is therefore the one of congruence. 4 Definition 3.1. Two integers a, b, are said to be congruent ”modulo” another integer n if n|a −b, and we denote this by a ≡b mod n. Remark. If (all mod some fixed positive integer n) a ≡a′, and b ≡b′ then a + b ≡a′ + b′ and ab ≡a′b′. For example, since 02 ≡0, 12 ≡1, 22 ≡4 ≡0, and 32 ≡9 ≡1 mod 4, this implies that integer squares can only be congruent to 0 or 1 mod 4. For instance, the 24-hour clock is an example of a system of integers mod 24. Another system you have used before is the fact that two odd numbers or two even numbers sum to an even number, and only the sum of one even and one odd number sum to an odd number; these are in fact statements about the integers mod 2. We introduce below, an important and useful theorem in number theory and the study of Diophantine equations. Theorem 3.1 (Fermat’s little theorem). Let p be a prime number, then for any integer a, we have ap ≡a mod p Moreover, if a is not divisible by p, we can get ap−1 ≡1 mod p Example 3.1 (Baltic Way 2012). Find all integer solutions a, b, c of a2 + b2 + c2 = 20122012. Solution. Let us first factorize the RHS. We immediately see that 20122012 = 10001 · 2012 = 10001 · 4 · 503, and we consider the equation mod 8. As 10001 = 8 · 1250 + 1 and 503 = 480 + 23 = 8 · 62 + 7, we have that 20122012 ≡4 · 7 ≡4 mod 8. Let us explore what n2 can be congruent to mod 8. As we can see from the n mod 8 0 1 2 3 4 5 6 7 n2 mod 8 0 1 4 1 0 1 4 1 table, since our sum of integer squares is congruent to 4 mod 8, the only possi-ble combinations of congruences for (a2, b2, c2) are (0, 0, 4) and (4, 4, 4) (in any permutation). Thus the integers a, b, c, can only be even. Now let a = 2 · a′, b = 2 · b′, c = 2 · c′ and we get 4(a′2 + b′2 + c′2) = 20122012 = 4 · 503 · 10001, 5 which in turn gives a′2 + b′2 + c′2 = 503 · 10001. Now we combine the fact that 503·10001 ≡7 mod 8 and look at our table once again, noting that no sum of three squares is congruent to 7 mod 8, and so no solutions to our original equation can exist. Example 3.2 (Andreescu et al.2, 2010, p.224, modified). Prove that the equa-tion 8xy −x −y = 2z4 has no solution in positive integers. Solution. Assume there exists a positive integer solution. Multiplying by 8 and adding 1 gives us the equation (8x−1)(8y −1) = 16z4 +1. Suppose p is a prime divisor of 8x −1. Then p is also a divisor of 16z4 + 1, thus 16z4 = (4z2)2 ≡−1 mod p. Since p is not a divisor of z, Fermat’s Little Theorem shows that (4z2)p−1 ≡1 mod p. We also know that p is odd, so ((4z2)2) p−1 2 ≡(−1) p−1 2 ≡1 mod p and p−1 2 must be even. As a result, p ≡1 mod 4. Looking at the prime factorization of 8x −1, we can see that all factors are congruent to 1 mod 4, meaning that their product (i.e. 8x −1) is also congruent to 1 mod 4. However, 8x −1 ≡−1 mod 4, which leads to a contradiction. This means that there are no solutions in positive integers to the equation. 3.1 Exercises for the reader Exercise 3.1 (Baltic Way 2016). For which integers n=1, 2 . . . 6 does the equa-tion an + bn = cn + n, have a solution in integers? Exercise 3.2 (USAMO 1979). Determine all non-negative integer solutions, apart from permutations, of the equation n4 1 + n4 2 + n4 3 + . . . + n4 15 = 1599. Exercise 3.3 (AwesomeMath 2007). Find all non-negative integer solutions (a, b, c) of 4ab −a −b = c2. 4 Inequalities Discovering bounds on variables and expressions can be very useful in Diophan-tine equations since we don not have a continuous span of solutions but rather single points on the numberline. That means we easily can remove large quan-tities and get finitely many possibilities that can be tested in cases. We present three useful techniques for this endeavor. 2Andreescu et al. (2010). Introduction to Diophantine Equations Berlin: Springer 6 4.1 Creating extra equations A common technique is to use the fact that squares of real numbers, and hence integers, are non-negative. We can with this technique limit the number of options for some variable and get a finite number of possible values. This works with |x| or any other function that has a lower bound on its range. Example 4.1. Find all integral solutions to the following system of equations. ( x + y + z = 60, (x −4y)2 + (y −2z)2 = 2 Solution. The integer squares in the second equation must be both 1 for their sum to be 2, since both squares are integers greater than or equal to zero. That gives: ( x = 4y ± 1 y = 2z ± 1 . We have reduced all infinitely many values to two possibilities for x −4y and two for y −2z, which is four combinations in total. Now, if we express x and z in terms of y, we get when we plug into the first equation: 4y ± 1 + y + y 2 ± 1 2 = 60, which gives 11y ± 2 ± 1 = 120. Here the only way for 120±1±2 to be a multiple of 11 is if 11y = 120+2−1 or y = 11. Using our plus and minus choices we get x = 4 · 11 −1 = 43 and 11 = 2 · z −1 which gives z = 6. This solves the original equations, so we arrive at our answer of x = 43, y = 11, z = 6. 4.2 Using symmetry Using symmetry is another useful technique which often allows us to eliminate one variable at once. We order the variables in the equation to use properties of the smallest or largest one. Example 4.2 (Andreescu et al., 2010, p.14). Solve 1 x + 1 y + 1 z = 3 5 in positive integers. Solution. Without loss of generality let x ≤y ≤z (sometimes we cannot order the variables and instead we can only choose which of the variables will be the 7 smallest or largest one). This gives 3 x ≥3 5 which gives x ∈{1, 2, 3, 4, 5}. We can eliminate x = 1 since it is too large. If x = 2 we get 1 y + 1 z = 1 10 →y = 10 + 100 z−10 so z −10|100. Restrict-ing ourselves to a finite amount of values for y and z, the solutions are easily found: (2, 11, 110), (2, 12, 60), (2, 14, 35), (2, 15, 30), (2, 20, 20). Remember that the permutations of these solutions also work. The rest of the cases are left as an exercise to the reader. 4.3 Minimization Minimization is a technique taking advantage of the fact that, given some solu-tions in positive integers to an equation, one of them is the smallest one. Using this, we can disprove the existence of solutions through a proof by contradiction. We start by assuming that a solution to an equation exists. If that leads us to the existence of an infinite strictly decreasing sequence of positive integer solu-tions, we have arrived at a contradiction. In turn, we can deduce that there are no positive integer solutions to the equation. Of course, we will need to define what the smallest solution is for equations involving more than one variable. We can for instance do this by looking at the sum of the variables. For example: if (x, y) and (p, q) are solutions, then (x, y) is smaller than (p, q) if x + y < p + q. Example 4.3 (Andreescu et al., 2010, p.49). Solve x3 + 2y3 = 4z3 in positive integers. Solution. Let (a, b, c) be a solution minimized for x+y+z. In other words there is no solution (p, r, q) such that p + r + q < a + b + c. Because we seek positive values such a solution must exist. Notice that a3 = 4c3 −2b3 = 2(2c3 −b3), or in other words a3 and thus a is even. Letting a = 2k where k is a positive integer gives 8k3 + 2b3 = 4c3 − →4k3 + b3 = 2c3 − →b3 = 2(c3 −2k3), so b is also even. Let b = 2m where m is a positive integer. From this we get 4m3 = c3 −2k3 − →c3 = 4m3 + 2k3 = 2(2m3 + k3), so c is also even, and letting c = 2n, where n is a positive integer, gives another solution x = k, y = m, z = n to the initial equation. This means that our original solution is not minimized for x + y + z since k + m + n < a + b + c, which is a contradiction, and thus no solutions exist. 4.4 Exercises for the reader Exercise 4.1. Prove that no solutions in integers exist for the equation a b = √p where p is a prime number. Exercise 4.2. Finish all the cases in Example 4.2. Exercise 4.3. Find all integer solutions to the equation √a + √ b = √ 14. 8 5 Pythagorean Triples You have probably encountered the equation x2 +y2 = z2 from the Pythagorean Theorem, describing the relation between the side lengths x, y, z of a right angled triangle. If the side lengths are all positive integers, they form a so called Pythagorean triple. For instance, (x, y, z) = (3, 4, 5) is a Pythagorean triple, and (5, 12, 13) is another. Note that if we multiply the side lengths of a Pythagorean triangle with a positive factor k, the triangle still remains right angled since we have only scaled it. Hence, if (x, y, z) is a Pythagorean triple, all triples (kx, ky, kz), where k is a positive integer, are also Pythagorean. If we could find all Pythagorean triples (x, y, z) with x, y, z pairwise co-prime, we would know all positive integer solutions to the equation x2 + y2 = z2 (indeed, two of the numbers x, y, z cannot share a common factor which is not a divisor in the third, a consequence of the condition x2 + y2 = z2). Such triples are called primitive Pythagorean triples. As we can see in the following theorem, they are infinitely many. Theorem 5.1. Every primitive Pythagorean triple (x, y, z) with y even can be expressed in the form x = m2 −n2, y = 2mn, z = m2 + n2, where m and n are relatively prime integers of different parity with m > n > 0. Proof. Firstly, we can easily check that these indeed form a primitive Pythagorean triple. We have that x2 + y2 = (m2 −n2)2 + 4m2n2 = m4 + 2m2n2 + n4 = (m2 + n2)2 = z2. Also, any prime divisor of y (except 2) is a divisor of m or n, and hence not in m2 ± n2 since it would then divide both m and n. A common prime divisor of x and z would divide both their sum and their difference, i.e. 2m2 and 2n2, and therefore also m and n (if not 2) which is not possible since m and n are co-prime. Neither do any pair of the numbers x, y, z share the factor 2, since if m and n have different parity, x and z are both odd. Hence, x, y, z are pairwise relatively prime. They are also positive integers due to the fact that m > n > 0. We must also prove that there are no other primitive Pythagorean triples. Any primitive Pythagorean triple (x, y, z) satisfies x2 + y2 = z2, where x, y, z are pairwise relatively prime. If both x and y were odd, we would have had z even and the right hand side divisible by 4. Since the square of an odd integer 2k + 1 can be written as 4k2 + 4k + 1, which is congruent to 1 modulo 4 (see Section 3), the sum of two odd squares is therefore congruent to 2 modulo 4 and the left hand side can therefore not be divisible by 4. We can therefore assume that y is even, while x and z are odd. Replacing a, b, c in Example 2.2 by (x, y, z), we can deduce that y + z and z −y are squares of integers. Write y + z = s2 and z −y = t2, where s and t are integers. Since y and z have different parity, s and t are odd, which means s + t and s −t are even. Hence, s + t = 2m and s −t = 2n for integers m and n. Since x, y and z are positive and x2 + y2 = z2, z must be greater than y which makes s and t non-zero. Hence, we can assume s and t are positive. Also s2 −t2 = 2z > 0, so s is greater than t. This means that m and n also are positive integers. They are of different parity since otherwise m + n is even and 9 therefore 2(m+n) divisible by 4, making (s+t)+(s−t) = 2s divisible by 4 and s even, a contradiction. They are also relatively prime, since a common divisor of m and n divides both their sum and their difference, hence s and t, which are relatively prime. This again leads to a contradiction. Finally, m > n since 2m = s + t > s −t = 2n. Solving for s and t, we get s = m + n and t = m −n, yielding y + z = (m + n)2 and z −y = (m −n)2. Solving for y and z, y = 2mn and z = m2 + n2. It follows that x = p z2 −y2 = m2 −n2, and the proof is finished. These convenient formulas for primitive Pythagorean triples provide a way for us to handle the condition x2 + y2 = z2 in Diophantine equations. Example 5.1. Show that the equation a4 + b4 = c2 has no solution in positive integers. Solution. We will assume there exist such solutions, and try to arrive at a contradiction. If a solution (a, b, c) exists, the numbers x = a2, y = b2 and z = c satisfy the Pythagorean equation x2 + y2 = z2. Common prime factors of any pair of the numbers (a, b, c) will be factors of the third as well. Noting that the exponent of the prime factors must be multiples of 4 in both sides of the equation, we can cancel them out without changing the equation. Hence, we can assume that a2, b2 and c are co-prime, thus forming a primitive Pythagorean triple. We can further assume that (a, b, c) is the solution with the smallest value of c. We can without loss of generality assume that b2 is even and write a2 = m2 −n2, b2 = 2mn and c = m2 + n2 with m, n being co-prime integers of different parity, and m > n > 0. Now, we directly see that (a, n, m) also form a primitive Pythagorean triple, since m and n are co-prime and can therefore not share a divisor with a for the Pythagorean equation to hold. This means we can again use the parametrization of primitive Pythagorean triples and write, since a is odd, a = s2−t2, n = 2st and m = s2+t2 with s, t being co-prime integers of different parity and s > t > 0. Now, since n = 2st, we have b2 = 2mn = 4stm. Since s, t are co-prime, the equation m = s2 + t2 implies that s, t and m are pairwise relatively prime. This means that s, t and m all must be squares of positive integers for their product to equal b2/4, which is a square (see Section 1). We will hence write s = u2, t = v2 and m = w2, where u, v and w are positive integers, pairwise co-prime. We can therefore rewrite the equation m = s2 + t2 as u4 + v4 = w2. We have thus obtained another solution to the initial equation, namely (u, v, w), where u, v and w are pairwise relatively prime. Since c = m2 +n2 = w4 +n2 is strictly greater than w4, which is in turn greater than or equal to w, we obtain the inequality w < c, contradicting the fact that c is minimal. Hence, there are no solutions in positive integers to the equation a4 + b4 = c2 (See Section 4.3). 10 5.1 Exercises for the reader Exercise 5.1. Find all solutions in positive integers to the system of equations ( a2 + b2 = c2 b2 + c2 = d2 . 11
14967
https://mathematica.stackexchange.com/questions/188673/orthogonal-projection-of-vector-onto-plane
Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Orthogonal Projection of vector onto plane Ask Question Asked Modified 6 years, 8 months ago Viewed 2k times 1 $\begingroup$ I'm currently trying to learn Mathematica, and I've got some linear algebra tasks to solve with it. I've gotten quite far but now I'm stuck on this one exercise. The instructions are: With the help of Mathematica-commands, draw a new picture, where you can see the orthogonal projection of the vector onto the plane. It should look something like this: Now, I started out by drawing the vector in the 3D plane with this code: Graphics3D [ { Thick , Arrow [ { { 0 , 0 , 0} , { 1 , ˆ’1 , 2 } } ] , InfinitePlane [ { { 1 , 0 , 0} , { 1 , 1 , 1} , { 0 , 0 , 1 } } ] } , Axes -> True , AxesLabel -> { "X" , "Y" , "Z" } ] This gave me the 3D image in the picture above, without the projection (the dashed line) obviously. But now I'm stuck, and my question is, how would I get the orthogonal projection of the vector? Thanks in advance. graphics linear-algebra vector Share Improve this question edited Jan 2, 2019 at 1:49 Michael E2 258k2121 gold badges370370 silver badges828828 bronze badges asked Jan 1, 2019 at 12:15 jhndoe2jhndoe2 10311 silver badge88 bronze badges $\endgroup$ 8 $\begingroup$ Maybe it will help you to know the normal of the plane? You can obtain it by plane = InfinitePlane[{{1, 0, 0}, {1, 1, 1}, {0, 0, 1}}]; Normalize[ Cross[plane - plane, plane - plane]]. $\endgroup$ Henrik Schumacher – Henrik Schumacher 2019-01-01 16:17:16 +00:00 Commented Jan 1, 2019 at 16:17 $\begingroup$ Sadly enough, that doesn't work for me. The result comes back saying "Part specification plane is longer than depth of the object." $\endgroup$ jhndoe2 – jhndoe2 2019-01-01 17:03:25 +00:00 Commented Jan 1, 2019 at 17:03 $\begingroup$ Hm. Weird. Did you really execute all code I posted? $\endgroup$ Henrik Schumacher – Henrik Schumacher 2019-01-01 17:07:01 +00:00 Commented Jan 1, 2019 at 17:07 $\begingroup$ Ah, nevermind. I got it to work now. This gave me three normals, which all had the value of 1/Sqrt. How do I proceed from here? $\endgroup$ jhndoe2 – jhndoe2 2019-01-01 17:15:52 +00:00 Commented Jan 1, 2019 at 17:15 1 $\begingroup$ X-posted in Wolfram Community: community.wolfram.com/groups/-/m/t/1580738 $\endgroup$ Daniel Lichtblau – Daniel Lichtblau 2019-01-01 21:08:14 +00:00 Commented Jan 1, 2019 at 21:08 | Show 3 more comments 2 Answers 2 Reset to default 3 $\begingroup$ Try this version p0 = {0, 0, 0}; p1 = {1, -1, 2}; p2 = {1, 0, 0}; p3 = {1, 1, 1}; p4 = {0, 0, 1}; gr1 = Graphics3D[{Thick, Arrow[{p0, p1}],InfinitePlane[{p2,p3,p4}]},Axes -> True, AxesLabel-> {"X", "Y","Z"}]; v = p1 - p0; n1 = p2 - p3; n2 = p3 - p4; n = Cross[n1, n2]; pl = p0; pp = p2; equs = Thread[pl + lambda v == pp + mu n1 + nu n2]; sol = Solve[equs, {lambda, mu, nu}]; pb = p0 + lambda v /. sol; vern = n/Norm[n]; prjn = (v.vern) vern; prjP = v - prjn; gr2 = Graphics3D[{Thick, Dashed, Red, Arrow[{pb, prjP + pb}]}]; gr3 = Graphics3D[{Thick, Green, Arrow[{pb, prjn}]}]; Show[gr1, gr2, gr3] In green the component along the normal to the plane and in dashed red the projection onto the plane. NOTE The line segment $\mu p_0+(1-\mu)p_1$ for $0 \le \mu \le 1$ is supported by the line $L\to p_l + \lambda (p_1-p_0) = p_l + \lambda \vec v$. The plane containing the three points $p_2,p_3,p_4$ can be defined as $\Pi\to p_p+\mu(p_2-p_3)+\nu(p_3-p_4) = p_p + \mu\vec n_1+\nu\vec n_2$ The intersection point $p_b = L\cap\Pi$ is obtained by solving for $(\lambda^,\mu^,\nu^)$ the linear system $p_l + \lambda \vec v = p_p + \mu\vec n_1+\nu\vec n_2$ and then $p_b = p_l+\lambda^ \vec v$ The plane normal is obtained as $\vec n = \vec n_1\times\vec n_2$ and the $\vec v$ component regarding $\vec n$ is obtained as $ \vec v_{\vec n} = \left(\vec v\cdot\frac{\vec n}{|\vec n|}\right)\frac{\vec n}{|\vec n|}$ and finally $\vec v_{\Pi} = \vec v-\vec v_{\vec n}$ Share Improve this answer edited Jan 2, 2019 at 15:50 answered Jan 1, 2019 at 21:18 CesareoCesareo 4,21899 silver badges1212 bronze badges $\endgroup$ 5 $\begingroup$ Thank you for the answer! Let's see if I get the steps right. First you get the normal to the plane by taking the cross-product between n1 and n2. But here's where I kind of get lost. What does the following functions mean? Do you think you could explain the next couple of lines for me? I could just post this, but I want to understand aswell. Thank you very much. $\endgroup$ jhndoe2 – jhndoe2 2019-01-02 14:58:25 +00:00 Commented Jan 2, 2019 at 14:58 $\begingroup$ @wznd Attached an explanation. $\endgroup$ Cesareo – Cesareo 2019-01-02 15:54:13 +00:00 Commented Jan 2, 2019 at 15:54 $\begingroup$ Thank you for that. Now all I'm wondering is three lines of code, that I can't seem to understand, even with your great explanation. I'll post a picture with the lines of code that I don't understand. I don't know if it's to much to ask, but if it would be possible for you to explain the functions in words, or a bit simpler to a math novice like me, I'd greatly appreciate it. Here's the section that I don't really understand: gyazo.com/707fd1863efd810e6ea58f295651d5ef $\endgroup$ jhndoe2 – jhndoe2 2019-01-02 16:03:13 +00:00 Commented Jan 2, 2019 at 16:03 $\begingroup$ @wznd Removing the semicolon at the end of the command facilitates the understanding. $\endgroup$ Cesareo – Cesareo 2019-01-02 16:57:06 +00:00 Commented Jan 2, 2019 at 16:57 $\begingroup$ Yeah, I did that initially and it help somewhat. But I'm still unsure about what the equs, solve and prjn variables specifically mean/do. $\endgroup$ jhndoe2 – jhndoe2 2019-01-02 17:20:41 +00:00 Commented Jan 2, 2019 at 17:20 Add a comment | 2 $\begingroup$ The code in Cesareo's answer can be shortened slightly. Using the same set of initial points as in the other answer: p0 = {0, 0, 0}; p1 = {1, -1, 2}; p2 = {1, 0, 0}; p3 = {1, 1, 1}; p4 = {0, 0, 1}; Some intermediate vectors: d = p1 - p0; nrm = Cross[p2 - p3, p3 - p4]; Use RegionIntersection[] to find the point of intersection: pin = First[RegionIntersection[InfinitePlane[{p2, p3, p4}], InfiniteLine[{p0, p1}]]] {1/4, -1/4, 1/2} From there: Graphics3D[{Arrow[Tube[{p0, p1}]], {Opacity[2/3], InfinitePlane[{p2, p3, p4}]}, {Green, Arrow[Tube[{pin, pin + Normalize[nrm]}]]}, {Red, Arrow[Tube[{pin, pin + d - Projection[d, nrm]}]]}, {Blue, Sphere[pin, 0.03]}}, Axes -> True, AxesLabel -> {"X", "Y", "Z"}] Note the use of the Projection[] function. Share Improve this answer answered Jan 6, 2019 at 7:18 J. M.'s missing motivationJ. M.'s missing motivation 126k1111 gold badges411411 silver badges590590 bronze badges $\endgroup$ Add a comment | Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions graphics linear-algebra vector See similar questions with these tags. The Overflow Blog The history and future of software development (part 1) Getting Backstage in front of a shifting dev experience Featured on Meta Spevacus has joined us as a Community Manager Introducing a new proactive anti-spam measure Linked 1 Orthogonal projection of line onto plane Related How to make a point's position time-dependent given a formula for the next step? Orthogonal vector in two dimensions VectorScale Explanation 2 2D vector treatment and visualisation multiplication of vector spaces 1 How to texture an annulus on a cylinder? 2 Problem verifying expression with 3D vectors Solving system of equations for coordinates Hot Network Questions How do you emphasize the verb "to be" with do/does? Why is the fiber product in the definition of a Segal spaces a homotopy fiber product? Is encrypting the login keyring necessary if you have full disk encryption? How many stars is possible to obtain in your savefile? Why do universities push for high impact journal publications? how do I remove a item from the applications menu Spectral Leakage & Phase Discontinuites The geologic realities of a massive well out at Sea Checking model assumptions at cluster level vs global level? Find non-trivial improvement after submitting Calculating the node voltage в ответе meaning in context Riffle a list of binary functions into list of arguments to produce a result How to rsync a large file by comparing earlier versions on the sending end? Exchange a file in a zip file quickly How long would it take for me to get all the items in Bongo Cat? The rule of necessitation seems utterly unreasonable Do we need the author's permission for reference Any knowledge on biodegradable lubes, greases and degreasers and how they perform long term? Two calendar months on the same page Can a cleric gain the intended benefit from the Extra Spell feat? I'm having a hard time intuiting throttle position to engine rpm consistency between gears -- why do cars behave in this observed way? Clinical-tone story about Earth making people violent Numbers Interpreted in Smallest Valid Base more hot questions Question feed
14968
https://ocw.mit.edu/courses/18-01sc-single-variable-calculus-fall-2010/478c26455ea963b7ba6bf795d6f4ecfc_MIT18_01SCF10_Ses83a.pdf
Polar Coordinates and Area How would we calculate an area using polar coordinates? Our basic increment of area will be shaped like a slice of pie. The slice of pie shown in Figure 1 has rdθ r dθ Figure 1: A slice of pie with radius r and angle dθ. a piece of a circular arc along its boundary with arc length r dθ. We’ll say that dA equals the area of the slice. How do we express dA in terms of r and θ? The total area of the pie this was sliced from is πr2 . To find the area dA we note that the proportion of the total area covered equals the proportion of arc length covered. So: dA dθ = πr2 2πr dA = r dθ 2πr · πr2 dA = 1 r 2 dθ 2 This is the basic formula for an increment of area in polar coordinates. We want to use polar coordinates to compute areas of shapes other than circles. In this case r will be a function of θ. The distance between the curve and the origin changes depending on what angle our ray is at. Our center point of reference is the origin; we think of rays emerging from the origin at some angle θ; r(θ) is, roughly, the distance we must travel along that ray to get to the curve. To find the area of a shape like this, we break it up into circular sectors with angle Δθ. Since the curve is not a circle the circular sectors won’t perfectly cover the region, so we just approximate the area of a wedge between the curve and the origin by: ΔA ≈ 1 r 2Δθ. 2 If we take the limit as Δθ approaches zero our sum of sector areas will approach 1 r=f(θ) Figure 2: A slice from an oddly shaped pie. the exact area and we get: dA = 1 r 2 dθ. 2 This is very similar to letting Δx go to zero in a Riemann sum of rectangle areas. In the limit, we have: A = Z θ2 1 r 2 dθ. 2 θ1 Remember that we’re assuming r is a function of θ. 2 MIT OpenCourseWare 18.01SC Single Variable Calculus Fall 2010 For information about citing these materials or our Terms of Use, visit:
14969
https://www.youtube.com/watch?v=guBVW5PiHLs
How to Convert Fractions to Decimals Math with Mr. J 1700000 subscribers 57068 likes Description 5449350 views Posted: 14 Apr 2020 Welcome to How to Convert Fractions to Decimals with Mr. J! Need help with converting fractions to decimals? You're in the right place! Whether you're just starting out, or need a quick refresher, this is the video for you if you need help with how to change fractions to decimals. Mr. J will go through fraction to decimal examples and explain the steps of how to convert a fraction to a decimal using long division. About Math with Mr. J: This channel offers instructional videos that are directly aligned with math standards. Teachers, parents/guardians, and students from around the world have used this channel to help with math content in many different ways. All material is absolutely free. Click Here to Subscribe to the Greatest Math Channel On Earth: Follow Mr. J on Twitter: @MrJMath5 Email: math5.mrj@gmail.com Music: Hopefully this video is what you're looking for when it comes to converting fractions to decimals. Have a great rest of your day and thanks again for watching! Transcript: welcome to math with mr. J in this video I'm going to show you how to convert a fraction to a decimal and if you take a look at the top of your screen it says divide the numerator by the denominator and round if needed so that's exactly what we are going to do now I'm going to do a few of these by hand a long division problem to show you exactly what's going on and then the others I will give you the answer that a calculator will give you and show you how to interpret everything so let's jump right into number one where we have two fifths or two over five so here again divide the numerator by the denominator so 2/5 and this fraction is less than a whole so our decimal is going to be less than a whole as well because this decimal is going to be equivalent to 2/5 so we can't do 2/5 right we can't take a whole group of 5 out of that two so we need to extend our division problem by putting a decimal and a zero so now we can think of that as 20 bring our decimal straight up how many whole groups of five can we pull out of 20 well 4 4 times 5 is 20 subtract and we get a zero and that tells us we are done so 2/5 is equal to 4 tenths number 2 9 25th and as well so 9 divided by 25 so we need to extend our division problem with the decimal and a 0 because we can't do 9/25 and get a whole number we can't pull a group of 25 out of nine so now we think of this as 90 how many whole groups of 25 out of 90 well 3 3 times 25 is 75 subtract we get 15 so we did not get a zero right away like number one so we can extend this division problem by putting another zero on the end a zero to the right of a decimal doesn't change the values so we're not changing the problem at all now we can bring that zero down and we have 150 divided by 25 and we can pull 6 whole 25 out of 150 6 times 25 is 150 and we get that clean cut zero so we do not need to go any further we are done and that problem kind of ran into the top problem there but our answer is thirty six hundredths so nine 25th equal to 36 hundredths let's take a look at number three now number three if we were to plug in 3 / 3/16 into a calculator we would get the following decimal and it goes to the ten thousandths so it's typical to either round a decimal to the thousands or hundreds so we're going to round to the thousandth in this video so we would take a look at what's in the thousands look next door that five says round up we are closer to one hundred eighty eight thousandths so our rounded answer would be one hundred eighty eight thousandths so that rounding step depends on what you're doing with the problem maybe you wouldn't round that decimal depending on the situation and as we'll see with number four and five we can have decimals that are much longer than just to the tenth in this place so speaking in number four here we have one over three or one third and I'm going to show you this by hand and hopefully you'll notice a pattern as I start doing this one so 1/3 so again this is just like number 1 and 2 where we wrote them out we can't pull a whole 3 out of that one so we extend with a decimal and a 0 bring that decimal straight up so we look at it as a 10 so how many whole threes can we pull out of 10 well 3 that gets us tonight 3 times 3 is 9 subtract we get 1 remember we want that clean cut zero to tell us that we are done so we need to add another zero drop it so we have another 10 how many whole threes out of 10 well 3 3 times 3 is 9 and our pattern is going to start here subtract add another 0 and drop it so we have another 10 three threes out of 10 3 times 3 is 9 subtract a 1 and you're probably getting the point here it's going to go on forever so it's a repeating decimal so our answer this is one we would want to round and if we round it to the thousandth we have a 3 there look next door it says stay the same so our answer is 333 thousandths or if you have a repeating decimal you can write whatever numbers repeating and put a bar over it and that bar signifies that that digit just repeats okay so two ways to do that you can round it off or the bar shows that that digit repeats so number five we actually have an improper fraction so this is going to be above one whole its greater than a whole so if you plug 17 / 11 or 17 / 11 in on a calculator you're going to get 154 54 54 and it's just going to be 54 s repeating so again we can round to the thousandths so a 5 there look look next door that says that 4 says stay the same so our rounded answer would be five hundred forty-five thousandths so one and five hundred forty-five thousandths or we can use the bar method I forgot to circle my answers for number four there just notice that or we can use the bar method so one and a fifty four repeats so we can put our bar above the fifty four to show that that will continually repeat number six seventeen over twenty so 17 divided by 20 is going to give us eighty five hundredths so it cuts off in the hundredths place so no need to round that one works out nicely so there you have it there's how you convert a fraction to a decimal divide the numerator by the denominator and then interpret your answer do you need to round is it a repeating decimal or maybe it cuts off in the tenths hundredths or thousandths place thanks so much for watching until next time peace
14970
https://brainly.com/question/38312967
[FREE] For what value of $\theta$ is $\sec \theta$ undefined? A. 90° B. 180° C. 270° D. i and iii only - brainly.com 4 Search Learning Mode Cancel Log in / Join for free Browser ExtensionTest PrepBrainly App Brainly TutorFor StudentsFor TeachersFor ParentsHonor CodeTextbook Solutions Log in Join for free Tutoring Session +39,4k Smart guidance, rooted in what you’re studying Get Guidance Test Prep +43,3k Ace exams faster, with practice that adapts to you Practice Worksheets +8,5k Guided help for every grade, topic or textbook Complete See more / Mathematics Expert-Verified Expert-Verified For what value of θ is sec θ undefined? A. 90° B. 180° C. 270° D. i and iii only 1 See answer Explain with Learning Companion NEW Asked by breanacopprue5445 • 09/22/2023 0:00 / 0:15 Read More Community by Students Brainly by Experts ChatGPT by OpenAI Gemini Google AI Community Answer This answer helped 1532897 people 1M 5.0 2 Upload your school material for a more relevant answer As per the trigonometric functions, secθ is undefined at the value of 90° and 270°(option d) The secant function is undefined when the cosine of an angle is equal to zero because you cannot divide by zero. Mathematically, this can be expressed as: secθ = 1/cosθ So, secθ is undefined when cosθ = 0. Now, let's consider the given options: a) 90°: At 90 degrees, cos(90°) = 0. Therefore, sec(90°) is undefined. b) 180°: At 180 degrees, cos(180°) = -1, which is not zero. So, sec(180°) is defined. c) 270°: At 270 degrees, cos(270°) = 0. Therefore, sec(270°) is undefined. d) i and iii only: Options i (90°) and iii (270°) are the values for which secθ is undefined, as explained above. So, for what value of θ is secθ undefined? The answer is d) i and iii only. To know more about trigonometric function here brainly.com/question/31540769 SPJ1 Answered by BakkiyaLakshmi •7.2K answers•1.5M people helped Thanks 2 5.0 (1 vote) Expert-Verified⬈(opens in a new tab) This answer helped 1532897 people 1M 5.0 2 Upload your school material for a more relevant answer The secant function sec θ is undefined at angles where cos θ=0, which occurs at 9 0∘ and 27 0∘. Therefore, the answer is option D: i and iii only. Explanation To determine for what value of θ the secant function sec θ is undefined, we need to understand the relationship of secant to cosine. The secant of an angle is defined as: sec θ=cos θ 1​ This means that sec θ will be undefined at angles where cos θ=0 since division by zero is not possible. The cosine function equals zero at specific angles: At 9 0∘: cos(9 0∘)=0 At 27 0∘: cos(27 0∘)=0 At 0∘, 18 0∘, and other angles, the cosine function does not equal zero. Evaluating the options provided: A. 9 0∘: This makes sec(9 0∘) undefined. B. 18 0∘: Here, sec(18 0∘) is defined because cos(18 0∘)=−1. C. 27 0∘: This makes sec(27 0∘) undefined. D. i and iii only: This indicates that both 90° and 270° are the correct angles where secant is undefined. Thus, the correct answer is: D. i and iii only. Examples & Evidence For example, at θ=9 0∘, the cosine is zero, making the secant undefined. Similarly, at θ=27 0∘, the cosine is also zero, further confirming that secant is undefined at these angles. The definitions of secant and cosine from trigonometric identities outline that sec θ=c o s θ 1​ and confirm that cos(9 0∘)=0 and cos(27 0∘)=0. Thus, this leads to sec θ being undefined at these angles. Thanks 2 5.0 (1 vote) Advertisement breanacopprue5445 has a question! Can you help? Add your answer See Expert-Verified Answer ### Free Mathematics solutions and answers Community Answer 4.5 7 For what value of θ is csc θ undefined? (100 Points!) 90° 270° 315° 360° Help please Community Answer 4.1 5 For what values of Θ is tanΘ undefined? 180° 135° 315° 270° Community Answer 5.0 4 For what value of Θ is cscΘ undefined?0°180°270° II only I and II only I and III only I, II, and III Community Answer 1 For what value of Θ is cotΘ undefined? 0° 180° 270° Community Answer 8 What are the values of x when tan(x) is undefined? A. -270°, -90°, 90°, and 270° B. -270°, -90°, 0°, 90°, and 270° C. -360°, -180°, 0°, 180°, and 360° D. -360°, 0° and 360° Community Answer 5.0 19 Find the value of sin θ, if tan θ=4; 180° < θ <270° Community Answer 4.6 12 Jonathan and his sister Jennifer have a combined age of 48. If Jonathan is twice as old as his sister, how old is Jennifer Community Answer 11 What is the present value of a cash inflow of 1250 four years from now if the required rate of return is 8% (Rounded to 2 decimal places)? Community Answer 13 Where can you find your state-specific Lottery information to sell Lottery tickets and redeem winning Lottery tickets? (Select all that apply.) 1. Barcode and Quick Reference Guide 2. Lottery Terminal Handbook 3. Lottery vending machine 4. OneWalmart using Handheld/BYOD Community Answer 4.1 17 How many positive integers between 100 and 999 inclusive are divisible by three or four? New questions in Mathematics | Number of Siblings | Number of Students | :---: | | 0 | 4 | | 1 | 18 | | 2 | 10 | | 3 | 8 | What is the experimental probability that a 10th-grade student chosen at random has at least one, but no more than two, siblings? Round to the nearest whole percent. A. 65% B. 70% C. 75% D. 80% The following equation has two real solutions, one of which is zero. 12 x 2 7​−108 x 3=0 Find the other (non-zero) solution. x-5 0 7 \ The table of ordered pairs shows the coordinates of two points on the graph of a line. Which equation describes the line?\na. y=x+7\nc. y=−5 x+2\nb. y=x−7\nd. y=5 x+2 Write an equation of the line that passes through a pair of points: A. y=x+3 B. y=x−3 C. y=−x+2 D. y=−x−2 Previous questionNext question Learn Practice Test Open in Learning Companion Company Copyright Policy Privacy Policy Cookie Preferences Insights: The Brainly Blog Advertise with us Careers Homework Questions & Answers Help Terms of Use Help Center Safety Center Responsible Disclosure Agreement Connect with us (opens in a new tab)(opens in a new tab)(opens in a new tab)(opens in a new tab)(opens in a new tab) Brainly.com Dismiss Materials from your teacher, like lecture notes or study guides, help Brainly adjust this answer to fit your needs. Dismiss
14971
https://artofproblemsolving.com/wiki/index.php/AM-GM_Inequality?srsltid=AfmBOopSwM5RUW6rpDo-huH2zm_7BSEWv7aPb0IoUTWfMu7mCmrfpVq9
Art of Problem Solving AM-GM Inequality - AoPS Wiki Art of Problem Solving AoPS Online Math texts, online classes, and more for students in grades 5-12. Visit AoPS Online ‚ Books for Grades 5-12Online Courses Beast Academy Engaging math books and online learning for students ages 6-13. Visit Beast Academy ‚ Books for Ages 6-13Beast Academy Online AoPS Academy Small live classes for advanced math and language arts learners in grades 2-12. Visit AoPS Academy ‚ Find a Physical CampusVisit the Virtual Campus Sign In Register online school Class ScheduleRecommendationsOlympiad CoursesFree Sessions books tore AoPS CurriculumBeast AcademyOnline BooksRecommendationsOther Books & GearAll ProductsGift Certificates community ForumsContestsSearchHelp resources math training & toolsAlcumusVideosFor the Win!MATHCOUNTS TrainerAoPS Practice ContestsAoPS WikiLaTeX TeXeRMIT PRIMES/CrowdMathKeep LearningAll Ten contests on aopsPractice Math ContestsUSABO newsAoPS BlogWebinars view all 0 Sign In Register AoPS Wiki ResourcesAops Wiki AM-GM Inequality Page ArticleDiscussionView sourceHistory Toolbox Recent changesRandom pageHelpWhat links hereSpecial pages Search AM-GM Inequality In algebra, the AM-GM Inequality, also known formally as the Inequality of Arithmetic and Geometric Means or informally as AM-GM, is an inequality that states that any list of nonnegative reals' arithmetic mean is greater than or equal to its geometric mean. Furthermore, the two means are equal if and only if every number in the list is the same. In symbols, the inequality states that for any real numbers , with equality if and only if . The AM-GM Inequality is among the most famous inequalities in algebra and has cemented itself as ubiquitous across almost all competitions. Applications exist at introductory, intermediate, and olympiad level problems, with AM-GM being particularly crucial in proof-based contests. Contents 1 Proofs 2 Generalizations 2.1 Weighted AM-GM Inequality 2.2 Mean Inequality Chain 2.3 Power Mean Inequality 3 Problems 3.1 Introductory 3.2 Intermediate 3.3 Olympiad 4 See Also Proofs Main article: Proofs of AM-GM All known proofs of AM-GM use induction or other, more advanced inequalities. Furthermore, they are all more complex than their usage in introductory and most intermediate competitions. AM-GM's most elementary proof utilizes Cauchy Induction, a variant of induction where one proves a result for , uses induction to extend this to all powers of , and then shows that assuming the result for implies it holds for . Generalizations The AM-GM Inequality has been generalized into several other inequalities. In addition to those listed, the Minkowski Inequality and Muirhead's Inequality are also generalizations of AM-GM. Weighted AM-GM Inequality The Weighted AM-GM Inequality relates the weighted arithmetic and geometric means. It states that for any list of weights such that , with equality if and only if . When , the weighted form is reduced to the AM-GM Inequality. Several proofs of the Weighted AM-GM Inequality can be found in the proofs of AM-GM article. Mean Inequality Chain Main article: Mean Inequality Chain The Mean Inequality Chain, also called the RMS-AM-GM-HM Inequality, relates the root mean square, arithmetic mean, geometric mean, and harmonic mean of a list of nonnegative reals. In particular, it states that with equality if and only if . As with AM-GM, there also exists a weighted version of the Mean Inequality Chain. Power Mean Inequality Main article: Power Mean Inequality The Power Mean Inequality relates all the different power means of a list of nonnegative reals. The power mean is defined as follows: The Power Mean inequality then states that if , then , with equality holding if and only if Plugging into this inequality reduces it to AM-GM, and gives the Mean Inequality Chain. As with AM-GM, there also exists a weighted version of the Power Mean Inequality. Problems Introductory For nonnegative real numbers , demonstrate that if then . (Solution) Find the maximum of for all positive . (Solution) Intermediate Find the minimum value of for . (Source) Olympiad Let , , and be positive real numbers. Prove that (Source) See Also Proofs of AM-GM Mean Inequality Chain Power Mean Inequality Cauchy-Schwarz Inequality Inequality Retrieved from " Categories: Algebra Inequalities Definition Art of Problem Solving is an ACS WASC Accredited School aops programs AoPS Online Beast Academy AoPS Academy About About AoPS Our Team Our History Jobs AoPS Blog Site Info Terms Privacy Contact Us follow us Subscribe for news and updates © 2025 AoPS Incorporated © 2025 Art of Problem Solving About Us•Contact Us•Terms•Privacy Copyright © 2025 Art of Problem Solving Something appears to not have loaded correctly. Click to refresh.
14972
https://www.utmb.edu/pedi_ed/CoreV2/Cardiology/cardiologyV2/cardiologyV24.html
Heart Murmurs Cardiology A Chapter in Core Concepts of Pediatrics, 2nd Edition print all Contents Intro Cardiology Cycle The Cardiac Cycle Phases of the Cardiac Cycle: Sounds Heart Sounds and Murmurs Murmers Heart Murmurs Systolic Murmur Grades based on the intensity of the murmur Summary of Heart Murmers Systolic Diastolic Continuous LRShunts Left to Right Shunts VSD Ventricular Septal Defects Anatomy Pathophysiology Clinical Presentations Making the diagnosis Natural History Management Patent Ductus Arteriosus Anatomy Anatomy Pathophysiology Clinical presentations Making the diagnosis Natural history and management Atrial Septal Defects (ASDs) Anatomy Pathophysiology Clinical presentation Making the diagnosis Management Atrioventricular Defect (AVSD) Anatomy Pathophysiology Clinical presentations Making the diagnosis Management Quick Checks Obstruct Obstructive Cardiac Lesions Arterial switch (PS) Anatomy Clinical presentations Making the diagnosis Management Aortic Stenosis (AS) Anatomy Clinical presentation Making the diagnosis Management Coarctation of the Aorta Anatomy Pathophysiology Clinical presentation Making the diagnosis Management Quick Checks D TGA Cyanotic Cardiac Lesions Complete Transposition of the Great Arteries (D-TGA) Anatomy Pathophysiology Clinical presentation Making the Diagnosis Management L TGA Corrected Transposition of the Great Arteries (L-TGA) Pathology Making the diagnosis Management Persistent Truncus Arteriosus (TA) Anatomy Pathophysiology Clinical Presentation Making the Diagnosis Management Tetralogy of Fallot (TOF) Anatomy (figure) Pathophysiology Clinical manifestations Making the Diagnosis Management TOF with Pulmonary Atresia: TOF with Absent Pulmonary Valve: Total Anomalous Pulmonary Venous Connection (TAPVC) Pathophysiology Clinical manifestations Making the Diagnosis Management TA Tricuspid Atresia (TA) Clinical presentation, EKG and, imaging Management Pulmonary Atresia with Intact Ventricular Septum (PA-IVS) Clinical presentation, EKG, imaging Management Ebstein Anomaly of the Tricuspid Valve (EA) Clinical presentations, EKG, imaging Management Hypoplastic Left Heart Syndrome Pathophysiology Clinical presentations Making the Diagnosis Management Surgical management of HLHS: C Myo Cardiomyopathies & Myocarditis Cardiomyopathies Dilated cardiomyopathy Pathology Pathophysiology Clinical presentation Making the diagnosis Management Pericard Constrictive Pericarditis Clinical presentation: Making the diagnosis: Management: R C Myo Restrictive Cardiomyopathy Clinical presentation: Making the diagnosis: Management: Myocarditis Clinical presentation Making the diagnosis Management Hypertrophic Obstructive Cardiomyopathy Pathology Clinical Presentation Making the diagnosis Management Pain Chest Pain Causes of chest pain in children Evaluation of chest pain in children Management EKG Inte Pediatric EKG Interpretation Before you read the EKG, look for: Basic EKG interpretation Arrhythm Arrhythmias Bradyarrhythmias Sinus bradycardia Atrioventricular Block First degree AV block Second degree AV block Third degree AV block Tachyarrhythmias Sinus tachycardia Premature atrial complexes (PACs) Atrial Flutter and Atrial Fibrillation Supraventricular Tachycardia (SVT) Ventricular arrhythmias Premature ventricular contractions (PVCs) Ventricular tachycardia (VT) Ventricular Fibrillation (VF) LQTS Long QT Syndrome (LQTS) Pathophysiology Clinical Making the Diagnosis Management Quick Check Kawasaki Kawasaki Disease (Mucocutaneous Lymph Node Syndrome) Pathogenesis Clinical presentation Making the Diagnosis Classic KD is diagnosed by the following criteria: Other clinical signs: Management (Figure): References Endocard Infective Endocarditis Pathogenesis Pathology Microbiology Clinical presentation Making the diagnosis Management Antimicrobial prophylaxis: (2007 AHA guidelines) References ARF Rheumatic Fever (ARF) Pathology Clinical presentation Making the diagnosis Major Criteria: Minor Criteria: Management Principles: General treatment of the acute episode: Cardiac management: Prevention Quick Checks - Kawasaki Disease, Endocarditis, Rheumatic fever Prev Next Page 4 of 20 Heart Murmurs Systolic Murmur Grades based on the intensity of the murmur Summary of Heart Murmers Systolic Diastolic Continuous About this book Chapter Index Heart Murmurs Murmurs are additional sounds generated by turbulent blood flow in the heart and blood vessels. Murmurs may be systolic, diastolic or continuous. Systolic Murmur Grades based on the intensity of the murmur I/VI: Barely audible II/VI: Faint but easily audible III/VI: Loud murmur without a palpable thrill IV/VI: Loud murmur with a palpable thrill V/VI: Very loud murmur heard with stethoscope lightly on chest VI/VI: Very loud murmur that can be heard without a stethoscope Systolic murmurs are the most common types of murmurs in children and based on their timing within systole, they are classified into: a) Systolic ejection murmurs (SEM, crescendo-decrescendo) result from turbulent blood flow due to obstruction (actual or relative) across the semilunar valves, outflow tracts or arteries. The murmur is heard shortly after S1 (pulse). The intensity of the murmur increases as more blood flows across an obstruction and then decreases (crescendo-decrescendo or diamond shaped). Innocent murmurs are the most common cause of SEM (see below). Other causes include stenotic lesions (aortic and pulmonary stenosis, coarctation of the aorta, tetralogy of Fallot) or relative pulmonary stenosis due to increased flow from an ASD Crescendo decrescendo murmur b) Holosystolic (regurgitant) murmurs start at the beginning of S1 (pulse) and continue to S2. Examples: ventricular septal defect (VSD), mitral and tricuspid valve regurgitation. Holosystolic mumur c) Decrescendo systolic murmur is a subtype of holosystolic murmur that may be heard in patients with small VSDs. In the latter part of systole, the small VSD may close or become so small to not allow discernible flow through and the murmur is no longer audible. Decrescendo murmur Diastolic murmurs are usually abnormal, and may be early, mid or late diastolic. More information: Examples of innocent murmu Stills murmur Pulmonary flow murmur Peripheral pulmonary stenosis (PPS) Venous hum Early diastolic murmurs immediately follow S2. Examples: aortic and pulmonary regurgitation. Mid-diastolic murmurs (rumble) are due to increased flow (relative stenosis) through the mitral (VSD) or the tricuspid valves (ASD). Late diastolic murmurs are due to pathological narrowing of the AV valves. Example: rheumatic mitral stenosis. Tricuspid stenosis is very rare in children. Continuous murmurs are heard during both systole and diastole. They occur when there is a constant shunt between a high and low pressure blood vessel. Examples: patent ductus arteriosus (PDA) and systemic arterio-venous fistulas. This may also occur in surgically placed shunts such as a BT shunt between the aorta and the pulmonary artery. Innocent murmurs are common in children and have the following characteristics: Grade III or less in intensity An otherwise a normal cardiac examination and normal heart sounds No associated cardiac symptoms Change in intensity with body position (e.g. louder in supine position) Summary of Heart Murmers Table showing the common systolic, diastolic and continuous heart murmurs Systolic SEM: Innocent murmurs, obstructive lesions, ASD Holosystolic: VSD, MR, TR (mitral and tricuspid insufficiency) Decrescendo: usually with small VSDs (as VSD almost closes by the end of systole Diastolic Early: AI, PI (aortic and pulmonary insufficiency) Mid: relative mitral stenosis (VSD) or relative tricuspid stenosis (ASD) Late: Rheumatic MS (mitral stenosis) Continuous Usually vascular in origin when a high-pressure vessel communicates with a low-pressure vessel e.g. PDA (beyond the neonatal period), BT shunt, AV malformation anywhere in the body (heart, lungs, brain, liver or pregnant uterus) Obstructive lesions include AS, PS, Coarctation of the aorta, TOF, etc. Table showing the common heart murmurs audible at different age Immediately after birth PDA or obstructive lesions Shortly after birth (a few hours to few weeks)VSD, PDA, PPS (peripheral pulmonary stenosis) 1-4 years Innocent murmurs, ASD Teenage Innocent murmur, HOCM or MVP/MR _Obstructive lesions include AS, PS, Coarctation of the aorta, TOF, etc_ toc | return to top | previous page | next page Content ©2017. Some Rights Reserved. Date last modified: July 7, 2017. Ashraf Aly and Soham Dusgupta Dept. of Pediatrics University of Texas Medical Branch.
14973
https://www.amjmed.com/article/S0002-9343(17)30257-7/pdf
Single High-Sensitivity Cardiac Troponin I to Rule Out Acute Myocardial Infarction Yader Sandoval, MD, a Stephen W. Smith, MD, b Sara A. Love, PhD, c,d Anne Sexter, MPH, c Karen Schulz, DC, cFred S. Apple, PhD c,d a Division of Cardiology, Hennepin County Medical Center and Minneapolis Heart Institute, Abbott Northwestern Hospital, Minn; bDepartment of Emergency Medicine, Hennepin County Medical Center and University of Minnesota, Minneapolis; c Minneapolis Medical Research Foundation, Minn; dDepartment of Laboratory Medicine and Pathology, Hennepin County Medical Center and University of Minnesota, Minneapolis. ABSTRACT BACKGROUND: This study examined the performance of single high-sensitivity cardiac troponin I (hs-cTnI) measurement strategies to rule out acute myocardial infarction. METHODS: This was a prospective, observational study of consecutive patients presenting to the emergency department (n ¼ 1631) in whom cTnI measurements were obtained using an investigational hs-cTnI assay. The goals of the study were to determine 1) negative predictive value (NPV) and sensitivity for the diagnosis of acute myocardial infarction, type 1 myocardial infarction, and type 2 myocardial infarction; and 2) safety outcome of acute myocardial infarction or cardiac death at 30 days using hs-cTnI less than the limit of detection (LoD) ( <1.9 ng/L) or the High-STEACS threshold ( <5 ng/L) alone and in combination with normal electrocardiogram (ECG). RESULTS: Acute myocardial infarction occurred in 170 patients (10.4%), including 68 (4.2%) type 1 myocardial infarction and 102 (6.3%) type 2 myocardial infarction. For hs-cTnI <LoD (27%), the NPV and sensitivity for acute myocardial infarction were 99.6% (95% con fi dence interval 98.9%-100%) and 98.8 (97.2%-100%). For hs-cTnI <5 ng/L (50%), the NPV and sensitivity for acute myocardial infarction were 98.9% (98.2%-99.6%) and 94.7% (91.3%-98.1%). In combination with a normal ECG, 1) hs-cTnI <LoD had an NPV of 99.6% (98.9%-100%) and sensitivity of 99.4% (98.3%-100%); and 2) hs-cTnI <5 ng/L had an NPV of 99.5% (98.8%-100%) and sensitivity of 98.8% (97.2%-100%). The NPV and sensitivity for the safety outcome were excellent for hs-cTnI <LoD alone or in combination with a normal ECG, and for hs-cTnI <5 ng/L in combination with a normal ECG. CONCLUSION: Strategies using a single hs-cTnI alone or in combination with a normal ECG allow the immediate identi fi cation of patients unlikely to have acute myocardial infarction and who are at very low risk for adverse events at 30 days. Ó 2017 Elsevier Inc. All rights reserved.  The American Journal of Medicine (2017) 130, 1076-1083 KEYWORDS: Acute myocardial infarction; High-sensitivity cardiac troponin; Troponin High-sensitivity (hs) cardiac troponin (cTn) I and T assays are analytically superior assays compared with contempo-rary cTn assays and are able to measure cTn at very low concentrations with excellent precision. 1-3 Both hs-cTnI and hs-cTnT assays are available and clinically used worldwide, with only the hs-cTnT assay recently cleared for use in the United States by the US Food and Drug Administration. 3,4 The ability to measure very low hs-cTn concentrations with clinically acceptable imprecision has allowed the development of new rule-out strategies, which have sug-gested that both acute myocardial infarction and/or myocardial injury can be safely excluded with a single measurement at presentation. 5-14 Two particular strategies have gained attention. The fi rst strategy is based on the use of an assay ’s limit of detection (LoD), an analytical Funding: See last page of article. Con fl ict of Interest: See last page of article. Authorship: See last page of article. Requests for reprints should be addressed to Fred S. Apple, PhD, Hennepin County Medical Center, Clinical Laboratories P4, 701 Park Avenue, Minneapolis, MN 55415. E-mail address: apple004@umn.edu 0002-9343/$ -see front matter Ó 2017 Elsevier Inc. All rights reserved. CLINICAL RESEARCH STUDY threshold below the 99th percentile. 3,14 The second strategy, the High-STEACS approach, consists of using a hs-cTnI concentration (assay dependent) threshold selected on the basis of a clinical need, rather than an analytical threshold. 13 This approach was derived and validated in the High-STEACS cohort study, in which a single hs-cTnI concen-tration <5 ng/L was shown to identify patients at very low risk for cardiac events. 13 Both ap-proaches have been shown to have excellent negative predictive values (NPVs) for acute myocar-dial infarction. 5-14 Studies examining the use of single measurements to rule out acute myocardial infarction have primarily been performed outside the United States, in select co-horts of patients with chest pain, with the intent to exclude type 1 myocardial infarction. 5-7,12 No large study has tested and compared the rule-out of acute myocardial infarction, including type 1 and 2 myocardial infarc-tion, using the 1) LoD and 2) High-STEACS approaches using an hs-cTnI assay in a US population. The goals of the present study were to examine the diagnostic perfor-mance of these two approaches for 1) ruling out acute myocardial infarction, including type 1 and 2 myocardial infarction; and 2) examine the safety outcomes for acute myocardial infarction or cardiac death at 30 days. METHODS Study Design and Population Following institutional review board approval, we pro-spectively included consecutive, unselected patients pre-senting from February 4, 2014 through May 9, 2014 in whom initial pre-set serial cTnI measurements at 0, 3, 6, and 9 hours were ordered on clinical indication at Hennepin County Medical Center (Minneapolis, MN) to rule in and rule out acute myocardial infarction (Use of TROPonin In Acute coronary syndromes [UTROPIA]; NCT02060760). For inclusion, patients needed a baseline cTnI measurement at presentation and at least one additional cTnI measured within 24 hours of presentation before discharge and at least one 12-lead electrocardiogram (ECG) performed. Exclusion criteria were age <18 years, ST-segment eleva-tion myocardial infarction, pregnancy, trauma, declined to participate on research as documented on information disclosure, did not present through the emergency depart-ment, or were transferred from an outside hospital. For pa-tients with more than one presentation during the study period, we included only the fi rst. Cardiac Troponin I Assays Fresh ethylenediaminetetraacetic acid plasma samples were simultaneously measured with both the contemporary cTnI (clinically used) and hs-cTnI (investigational) assays on the ARCHITECT i1000 SR or i2000 SR analyzers (Abbott Di-agnostics, Abbott Park, IL). Only the hs-cTnI assay data were used for the present study. Sex-speci fi c 99th percentile upper reference limits (URL) for the hs-cTnI assay were 16 ng/L for females and 34 ng/L for males; % coef fi cients of variation were 5.3% at 15 ng/L and <20% at the LoD of 1.9 ng/L. 15,16 Event Adjudication All cases with at least one hs-cTnI concentration >99th percentile were adjudicated according to the Third Universal De fi nition of Myocardial Infarction consensus recommendations by two clini-cians after review of all available medical records, including 12-lead ECG, echocardiography, angiography, hs-cTnI values, and clinical presentation. 17 Cases with an adjudication discrep-ancy were reviewed and adjudicated by a third senior clinician. To guide the adjudication of acute myocardial infarction in relation to the presence or absence of a signi fi cant rise and/or fall of cTnI, an algorithm was developed for the hs-cTnI assay on the basis of biological variation, 15 with the primary purpose of ensuring that changes within biological variation were not deemed abnormal. If the initial hs-cTnI value was below the sex-speci fi c 99th percentile cutoff, then a rise of >69% and/or fall of >41% on serial sampling were used to suggest a signi fi cant dynamic rise and/or fall. Conversely, if the initial hs-cTnI value was above the 99th percentile, then a change of at least >20% was used. For the diagnosis of acute myocardial infarction, a rise and/or fall with at least one value above the 99th percentile occurring in appropriate clinical circumstances consistent with acute myocardial ischemia was required, plus at least one additional myocardial infarction criteria: 1) ischemic symptoms, 2) development of pathologic Q waves in the 12-lead ECG, 3) ECG changes indicative of new ischemia, 4) imaging evidence of new loss of viable myocardium or new regional wall motion abnormality, or 5) identi fi cation of an intracoronary thrombus by angiography or autopsy. 17 Patients adjudicated as myocardial infarction were further classi fi ed into myocardial infarction subtypes. 17 Type 1 myocardial infarction was de fi ned as myocardial infarction related to atherosclerotic plaque rupture, ulcera-tion, fi ssuring, erosion, or dissection with resulting intra-luminal thrombus. 17 Type 2 myocardial infarction was CLINICAL SIGNIFICANCE  Strategies using a single high-sensitivity cardiac troponin I measurement at pre-sentation in combination with a normal electrocardiogram allow the immediate identi fi cation of patients unlikely to have acute myocardial infarction and who are at very low risk for adverse events at 30 days.  The implementation of these approaches may reduce overcrowding, facilitate early discharge in selected patients, expedite triaging, and reduce costs. Sandoval et al Single Hs-cTnI to Rule out AMI 1077 de fi ned as myocardial infarction secondary to an ischemic imbalance between myocardial oxygen supply and/or de-mand not due to atherothrombosis. 17-19 For type 2myocardial infarction to be adjudicated, cases were required to have a rise and/or fall of cTnI with at least 1 value above the 99th percentile plus at least 1 additional myocardial infarction criteria according to the Universal De fi nition of Myocardial Infarction, including the objective evidence or documentation of supply/demand imbalance. 17-19 Study Outcomes The diagnostic outcomes examined were 1) acute myocar-dial infarction, 2) type 1 myocardial infarction, and 3) type 2 myocardial infarction during the index hospitalization. The safety outcome was a composite of acute myocardial infarction or cardiac death at 30 days, including events occurring during the index hospitalization. Statistical Analyses Categorical variables are shown as percentages. Continuous variables are shown as mean values standard deviation. The diagnostic and safety outcomes were examined for 1) LoD ( <1.9 ng/L) and 2) High-STEACS ( <5 ng/L) threshold based on a single hs-cTnI at presentation alone and in combination with a normal ECG. The ECGs were catego-rized as normal according to previously described criteria 14 (Supplementary Methods, available online). Diagnostic performance statistics were sensitivity, speci fi city, positive predictive value (PPV), and negative predictive value (NPV); 95% con fi dence intervals (CIs) were ascertained using binomial proportions. Subgroup analyses were per-formed on early presenters, de fi ned as individuals who had their fi rst cTnI sample obtained 2 hours after symptom onset. All analysis was done using SAS version 9.4 (SAS Institute, Cary, NC). RESULTS Baseline characteristics are shown in Table 1 . Among the 1631 patients who met inclusion criteria, 444 patients (27%) had hs-cTnI <LoD at presentation. Using the High-STEACS threshold, 812 patients (50%) had hs-cTnI <5ng/L at presentation. A total of 601 patients (37%) had a normal ECG. During the index hospitalization, acute myocardial infarction occurred in 170 patients (10.4%), including 68 (4.2%) type 1 and 102 (6.3%) type 2myocardial infarctions. Rule-Out Using the LoD Alone and in Combination with a Normal ECG In patients with hs-cTnI <LoD at presentation (27% of pa-tients), independent of ECG fi ndings, the NPV and sensi-tivity for acute myocardial infarction were 99.6% (95% CI, 98.9%-100%) and 98.8% (95% CI, 97.2%-100%), respec-tively ( Table 2 ). Using hs-cTnI <LoD alone, 2 of 170 patients with acute myocardial infarction were missed, corresponding to a miss rate of 1.2% (or 2 of 444 patients with an hs-cTnI <LoD, 0.5%). In comparison with hs-cTnI <LoD alone, the addition of a normal ECG (16% of patients) offered a NPV of 99.6% (95% CI, 98.9%-100%) and sensitivity of 99.4% (95% CI, 98.3%-100%) for acute myocardial infarction. Using hs-cTnI <LoD with a normal ECG, only 1 of 170 patients with acute myocardial infarction was missed, corresponding to a miss rate of 0.6% (or 1 of 254 patients with an hs-cTnI <LoD and a normal ECG, 0.4%). At 30 days, the NPV and sensitivity for acute myocardial infarction or cardiac death were 99.6% (95% CI, 98.9%-100%) and 98.8% (95% CI, 97.2%-100%) for hs-cTnI <LoD alone, and 99.6% (95% CI, 98.8%-100%) and 99.4% (95% CI, 98.3%-100%) for hs-cTnI <LoD with a normal ECG ( Figure , Table 3 ). Using hs-cTnI <LoD alone, 2 of 171 events (1.2%) were missed (or 2 of 444 patients with an hs-cTnI <LoD, 0.5%); whereas using hs-cTnI <LoD with a normal ECG only 1 of 171 events (0.6%) was missed (or 1 of 254 patients with a hs-cTnI <LoD and a normal ECG, 0.4%). For ruling out type 1 myocardial infarction alone, base-line hs-cTnI <LoD alone resulted in a NPV of 99.8% (95% CI, 99.3%-100%) and sensitivity of 98.5% (95% CI, Table 1 Baseline Characteristics Characteristic Value Study cohort (n) 1631 Age (y), mean (SD) 57 (15) Female gender 720 (44) Hypertension 1074 (66) Diabetes mellitus 496 (43) Dyslipidemia 696 (43) Coronary artery disease 371 (23) Prior myocardial infarction 190 (12) Prior percutaneous coronary intervention 150 (9) Prior coronary artery bypass graft 73 (4) Congestive heart failure 231 (14) Atrial fi brillation 129 (8) Peripheral vascular disease 42 (3) Cerebrovascular disease 153 (9) Renal insuf fi ciency, nondialysis 161 (10) End-stage renal disease on hemodialysis 80 (5) History of tobacco use 969 (59) Chest discomfort 835 (51) Dyspnea 680 (42) Arm and/or shoulder discomfort 250 (15) Jaw and/or neck discomfort 98 (6) Epigastric discomfort 93 (6) Nausea and/or vomiting 381 (23) Fatigue 444 (27) Baseline hs-cTnI concentrations <1.9 ng/L 444 (27) Baseline hs-cTnI concentrations <5 ng/L 812 (50) Normal 12-lead ECG 601 (37) Values are number (percentage) unless otherwise noted. ECG ¼electrocardiogram; SD ¼standard deviation. 1078 The American Journal of Medicine, Vol 130, No 9, September 2017 95.7%-100%). In combination with a normal ECG, hs-cTnI <LoD resulted in an NPV of 99.6% (95% CI, 98.8%-100%) and sensitivity of 98.5% (95% CI, 95.7%-100%). For type 1 myocardial infarction, the sensitivity for the safety outcome was 98.6% (95% CI, 95.8%-100%) using either hs-cTnI <LoD alone or in combination with a normal ECG. For ruling out type 2 myocardial infarction alone, base-line hs-cTnI <LoD alone resulted in an NPV of 99.8 (95% CI, 99.3%-100%) and sensitivity of 99.0% (95% CI, 97.1%-100%). In combination with a normal ECG, hs-cTnI <LoD resulted in an NPV and sensitivity of 100% (95% CI, 100%-100%). For type 2 myocardial infarction, the sensitivity for the safety outcome was 99.1% (95% CI, 97.3%-100%) using hs-cTnI <LoD alone and 100% (95% CI, 100%-100%) in combination with a normal ECG. In early presenters the NPV and sensitivity for the diagnostic and safety outcomes was 100% (95% CI, 100%-100%) using hs-cTnI <LoD alone or in combination with a normal ECG ( Table 4 ). Rule-Out Using the High-STEACS Threshold Alone and in Combination with a Normal ECG In patients with hs-cTnI <5 ng/L at presentation (50% of patients), independent of ECG fi ndings, the NPV and sensitivity for acute myocardial infarction were 98.9% (95% CI, 98.2%-99.6%) and 94.7% (95% CI, 91.3%-98.1%), respectively. Using hs-cTnI <5 ng/L alone, 9 of 170 patients with acute myocardial infarction were missed, corresponding to a miss rate of 5.3% (or 9 of 812 patients with hs-cTnI <5 ng/L, 1.1%). The addition of a normal ECG to a hs-cTnI <5 ng/L (25% of patients) showed a NPV of 99.5% (95% CI, 98.8%-100%) and a sensitivity of 98.8% (95% CI, 97.2%-100%) for acute myocardial infarction (Table 2 ). Using hs-cTnI <5 ng/L with a normal ECG, 2 of 170 patients with acute myocardial infarction were missed, corresponding to a miss rate of 1.2% (or 2 of 406 patients with hs-cTnI <5 ng/L and a normal ECG, 0.5%). At 30 days, the NPV and sensitivity for acute myocardial infarction or cardiac death was 98.9% (95% CI, 98.2%-99.6%) and 94.7% (95% CI, 91.4%-98.1%) for hs-cTnI <5 ng/L alone, and 99.5% (95% CI, 98.8%-100%) and 98.8% (95% CI, 97.2%-100%) for hs-cTnI <5 ng/L with a normal ECG (Table 3 ). Using hs-cTnI <5 ng/L alone, 9 of 171 events (5.3%) were missed (or 9 of 812 patients with hs-cTnI <LoD, 1.1%), whereas when using hs-cTnI <5 ng/L with a normal ECG, only 2 of 171 (1.2%) were missed (or 2 of 406 patients with hs-cTnI <5 ng/L and a normal ECG, 0.5%). For ruling out type 1 myocardial infarction alone, base-line hs-cTnI <5 ng/L alone resulted in an NPV of 99.5% (95% CI, 99.0%-100%) and sensitivity of 94.1% (95% CI, 88.5%-99.7%). In combination with a normal ECG, hs-cTnI <5 ng/L resulted in a NPV of 99.8% (95% CI, 99.3%-100%) and sensitivity of 98.5% (95% CI, Table 2 Use of a Single hs-cTnI at Presentation Alone and in Combination with a Normal 12-lead ECG for the Diagnosis of Acute Myocardial Infarction (Type 1 and 2 Myocardial Infarction), Type 1 Myocardial Infarction Alone, and Type 2 Myocardial Infarction Alone Parameter LoD (1.9 ng/L) High-STEACS ( <5 ng/L) Baseline hs-cTnI <LoD Baseline hs-cTnI <LoD and Normal ECG Baseline hs-cTnI <5 ng/L Baseline hs-cTnI <5 ng/L and Normal ECG Acute myocardial infarction Proportion qualifying 444/1631 (27) 254/1631 (16) 812/1631 (50) 406/1631 (25) Proportion of missed MIs 2/170 (1.2) 1/170 (0.6) 9/170 (5.3) 2/170 (1.2) NPV 99.6 (98.9-100) 99.6 (98.8-100) 98.9 (98.2-99.6) 99.5 (98.8-100) Sensitivity 98.8 (97.2-100) 99.4 (98.3-100) 94.7 (91.3-98.1) 98.8 (97.2-100) PPV 14.2 (12.2-16.1) 12.3 (10.5-14.0) 19.7 (16.9-22.4) 13.7 (11.8-15.6) Speci fi city 30.3 (27.9-32.6) 17.3 (15.4-19.3) 55.0 (52.4-57.5) 27.7 (25.4-30.0) Type 1 myocardial infarction Proportion qualifying 443/1529 (29) 254/1529 (17) 807/1529 (53) 405/1529 (27) Proportion of missed MIs 1/68 (1.5) 1/68 (1.5) 4/68 (5.9) 1/68 (1.5) NPV 99.8 (99.3-100) 99.6 (98.8-100) 99.5 (99.0-100) 99.8 (99.3-100) Sensitivity 98.5 (95.7-100) 98.5 (95.7-100) 94.1 (88.5-99.7) 98.5 (95.7-100) PPV 6.2 (4.7-7.6) 5.3 (4.0-6.5) 8.9 (6.8-10.9) 6.0 (4.6-7.3) Speci fi city 30.3 (27.9-32.6) 17.3 (15.4-19.3) 5.5 (5.2-5.8) 27.7 (25.4-30.0) Type 2 myocardial infarction Proportion qualifying 443/1563 (28) 253/1563 (16) 808/1563 (52) 405/1563 (26) Proportion of missed MIs 1/102 (0.98) 0/102 (0) 5/102 (4.9) 1/102 (0.98) NPV 99.8 (99.3-100) 100 (100-100) 99.4 (98.8-99.9) 99.8 (99.3-100) Sensitivity 99.0 (97.1-100) 100 (100-100) 95.1 (90.9-99.3) 99.0 (97.1-100) PPV 9.0 (7.3-10.7) 7.8 (6.3-9.2) 12.9 (10.5-15.2) 8.7 (7.1-10.4) Speci fi city 30.3 (27.8-32.6) 17.3 (15.4-19.3) 55.0 (52.4-57.5) 27.7 (25.4-30.0) Values are number (percentage) or percentage (95% con fi dence interval). ECG ¼ electrocardiogram; LoD ¼ limit of detection; MI ¼ myocardial infarction; NPV ¼ negative predictive value; PPV ¼ positive predictive value. Sandoval et al Single Hs-cTnI to Rule out AMI 1079 95.7%-100%). For type 1 myocardial infarction, the sensi-tivities for the safety outcome were 94.3% (95% CI, 88.9%-99.7%) using hs-cTnI <5 ng/L alone and 98.6% (95% CI, 95.8%-100%) in combination with a normal ECG. For ruling out type 2 myocardial infarction alone, base-line hs-cTnI <5 ng/L alone resulted in a NPV of 99.4% (95% CI, 98.8%-99.9%) and sensitivity of 95.1% (95% CI, 90.9%-99.3%). In combination with a normal ECG, hs-cTnI <5 ng/L alone resulted in a NPV of 99.8% (95% CI, 99.3%-100%) and sensitivity of 99.0% (95% CI, 97.1%-100%). For type 2 myocardial infarction, the sensi-tivities for the safety outcome were 95.4% (95% CI, 91.4%-99.3%) using hs-cTnI <5 ng/L alone and 99.1% (95% CI, 97.3%-100%) in combination with a normal ECG. In early presenters, hs-cTnI <5 ng/L alone resulted in an NPV of 98.5% (95% CI, 96.5%-100%) and sensitivity of Table 3 Safety Outcome: Risk Strati fi cation at 30 Days for Acute Myocardial Infarction and Cardiac Death (Including Events During Index Hospitalization) Using a Single hs-cTnI at Presentation Alone and in Combination with a Normal 12-Lead ECG Parameter LoD (1.9 ng/L) High-STEACS ( <5 ng/L) Baseline hs-cTnI <LoD Baseline hs-cTnI <LoD and Normal ECG Baseline hs-cTnI <5 ng/L Baseline hs-cTnI <5 ng/L and Normal ECG Acute myocardial infarction Proportion of missed events 2/171 (1.2) 1/171 (0.6) 9/171 (5.3) 2/171 (1.2) NPV 99.6 (98.9-100) 99.6 (98.8-100) 98.9 (98.2-99.6) 99.5 (98.8-100) Sensitivity 98.8 (97.2-100) 99.4 (98.3-100) 94.7 (91.4-98.1) 98.8 (97.2-100) Type 1 myocardial infarction Proportion of missed events 1/70 (1.4) 1/70 (1.4) 4/70 (5.7) 1/70 (1.4) NPV 99.8 (99.3-100) 99.6 (98.8-100) 99.5 (99.0-100) 99.8 (99.3-100) Sensitivity 98.6 (95.8-100) 98.6 (95.8-100) 94.3 (88.9-99.7) 98.6 (95.8-100) Type 2 myocardial infarction Proportion of missed events 1/108 (0.9) 0/108 (0) 5/103 (4.6) 1/108 (0.9) NPV 99.8 (99.3-100) 100 (100-100) 99.4 (98.8-99.9) 99.8 (99.3-100) Sensitivity 99.1 (97.3-100) 100 (100-100) 95.4 (91.4-99.3) 99.1 (97.3-100) Values are number (percentage) or percentage (95% con fi dence interval). ECG ¼ electrocardiogram; LoD ¼ limit of detection; NPV ¼ negative predictive value. Figure Safety outcome: risk strati fi cation at 30 days for acute myocardial infarction and cardiac death. Columns for proportion of patients qualifying for each approach (limit of detection, <1.9 ng/L, and High-STEACS, <5 ng/L) with and without a normal result on electrocardiogram (ECG), and corresponding sensitivities for the safety outcome. 1080 The American Journal of Medicine, Vol 130, No 9, September 2017 94.9% (95% CI, 88.0%-100%) for acute myocardial infarction, whereas in combination with a normal ECG, hs-cTnI <5 ng/L had a NPV and sensitivity of 100% (95% CI, 100%-100%) ( Table 4 ). Similarly, hs-cTnI <5 ng/L alone had an NPV 98.5% (95% CI, 96.5%-100%) and sensitivity of 95.0% (95% CI, 88.3%-100%) for the safety outcome of acute myocardial infarction or cardiac death at 30 days, whereas in combination with a normal ECG, both the NPV and sensitivity were 100% (95% CI, 100%-100%). DISCUSSION Several fi ndings are unique to our study evaluating the LoD and High-STEACS threshold rule-out strategies using a single hs-cTnI at presentation, alone and in combination with a normal ECG. First, we demonstrate that both strate-gies are excellent in safely ruling out acute myocardial infarction when used in combination with a normal ECG, as demonstrated by the very high NPV and sensitivity achieved for both the diagnostic and safety outcomes, including an excellent performance in early presenters. The use of these strategies allows the immediate identi fi cation of patients in whom the clinical presentation is unlikely to be due to an acute myocardial infarction (type 1 and 2 myocardial infarction) and who are at very low risk for adverse events at 30 days. The implementation of these approaches may reduce overcrowding, facilitate early discharge in selected patients, expedite triaging and reduce costs. Second, our study provides novel insights into the per-formance of single measurement rule-out strategies across myocardial infarction subtypes, including both type 1 and 2 myocardial infarctions. Our fi ndings suggest that both the LoD and High-STEACS approaches in combination with a normal ECG are excellent in safely ruling out both type 1 and 2 myocardial infarction. Our study uniquely demonstrates that these rule-out approaches have an excel-lent clinical performance in a heterogeneous, all-comers cohort of patients undergoing hs-cTnI measurements on clinical indication, regardless of the presence or absence of chest pain, re fl ective of US practice. In contrast, most studies assessing rule-out strategies (outside the United States) often based their fi ndings on select cohorts of pa-tients with chest pain (5-7, 12) without providing detailed insights as to whether the rule-out strategies are applicable across the spectrum of patients with acute myocardial infarction, including both type 1 and 2 myocardial infarction. Third, among patients with a normal ECG, the High-STEACS approach seems more ef fi cient because it applies to a larger proportion of patients than the LoD. In our study, we demonstrate that a single hs-cTnI <LoD, regardless of ECG fi ndings, offers an excellent NPV and sensitivity for both the diagnostic and safety outcome, an approach applying to a similar proportion of patients than the one seen when combining a baseline hs-cTnI <5 ng/L and a normal ECG. The highest proportion of patients qualifying for rule-out was seen with the High-STEACS approach using hs-cTnI alone, which applied to 50% of patients. However, although the High-STEACS approach using hs-cTnI alone offered a very high NPV, the achieved sensitivity (approx-imately 95%) for both the diagnostic and safety outcome may not meet the desired acceptable event miss rate (approximately 1% miss rate or 99% sensitivity), 20 amatter of recent debate. 21,22 However, when combined with a normal ECG, the High-STEACS approach offered an excellent NPV and sensitivity for the diagnostic and safety outcomes. Prior non-US studies examining single measurement rule-out strategies have mostly examined hs-cTnT, with few non-US studies assessing the Abbott hs-cTnI assay. Similar to our fi ndings, the Advantageous Predictors of Acute Table 4 Use of a Single hs-cTnI at Presentation Alone and in Combination with a Normal 12-Lead ECG for 1) Diagnosis of Acute Myocardial Infarction and 2) 30-Day Risk Strati fi cation for Acute Myocardial Infarction or Cardiac Death in Early Presenters Early Presenters (n ¼262) LoD (1.9 ng/L) High-STEACS ( <5 ng/L) Baseline hs-cTnI <LoD Baseline hs-cTnI <LoD and Normal ECG Baseline hs-cTnI <5 ng/L Baseline hs-cTnI <5 ng/L and Normal ECG Diagnostic outcome, acute myocardial infarction Proportion qualifying 78/262 (30) 41/262 (16) 137/262 (52) 63/262 (24) Proportion of missed MIs 0/39 (0) 0/39 (0) 2/39 (5.1) 0/39 (0) NPV 100 (100-100) 100 (100-100) 98.5 (96.5-100) 100 (100-100) Sensitivity 100 (100-100) 100 (100-100) 94.9 (88.0-100) 100 (100-100) PPV 21.2 (15.3-27.1) 17.7 (12.6-22.7) 29.6 (21.6-37.6) 19.6 (14.1-25.1) Speci fi city 35.0 (28.7-41.2) 18.4 (13.3-23.5) 60.5 (54.1-67.0) 28.3 (22.3-34.2) Safety outcome, 30-day acute myocardial infarction or cardiac death Proportion of missed events 0/40 (0) 0/40 (0) 2/40 (5) 0/40 (0) NPV 100 (100-100) 100 (100-100) 98.5 (96.5-100) 100 (100-100) Sensitivity 100 (100-100) 100 (100-100) 95.0 (88.3-100) 100 (100-100) Values are number (percentage) or percentage (95% con fi dence interval). ECG ¼electrocardiogram; LoD ¼limit of detection; MI ¼myocardial infarction; NPV ¼negative predictive value; PPV ¼positive predictive value. Sandoval et al Single Hs-cTnI to Rule out AMI 1081 Coronary Syndrome Evaluation (APACE) investigators (Switzerland) also examined the Abbott hs-cTnI assay using the <1.9 ng/L threshold for their calculations in 1567 pa-tients and reported both a sensitivity and NPV of 100%. 11 Similarly, Carlton et al 12 (England) examined hs-cTnI <1.2 ng/L with a nonischemic ECG and reported a sensi-tivity of 99.0% and NPV of 99.5%. Contrary to our fi ndings, using hs-cTnI <2 ng/L with a nonischemic ECG, Carlton et al 12 reported a sensitivity of 97.9% and an NPV of 99.3%. Additionally, using hs-cTnI <5 ng/L (High-STEACS threshold) with a nonischemic ECG, Carlton et al 12 reported a sensitivity of 94.5% and an NPV of 99.2%. Whether differences in the ECG adjudica-tion alone explain the difference in the achieved sensitivity is uncertain. These observations highlight that other factors not related to hs-cTnI concentrations alone may in fl uence the diagnostic performance. Last, the present study complements our recent work using the LoD to rule out acute myocardial injury. An hs-cTnI <LoD demonstrates excellent a) sensitivity and NPV for ruling out acute myocardial injury 14 and for ruling out type 1 and type 2 myocardial infarction, and b) risk strati- fi cation at 30 days for acute myocardial infarction or cardiac death. We note that our fi ndings are limited to one hs-cTnI assay (Abbott Diagnostics) and emphasize that independent studies need to be carried out for other hs-cTn assays. 1 CONCLUSIONS Single measurement rule-out strategies using very low hs-cTnI concentrations such as the LoD and the High-STEACS approaches are excellent in safely ruling out acute myocardial infarction, including type 1 and 2, particularly when combined with a normal ECG. Both rule-out strategies quickly identify patients at low risk for acute myocardial infarction or cardiac death at 30 days, representing a potential opportunity to improve care and reduce costs. References Apple FS, Sandoval Y, Jaffe AS, Ordonez-Llanos J; for the IFCC Task Force on Clinical Applications of Cardiac Bio-Markers. Cardiac troponin assays: guide to understanding analytical characteristics and their impact on clinical care. Clin Chem . 2017;63:73-81. 2. Apple FS, Jaffe AS, Collinson P, et al. IFCC educational materials on selected analytical and clinical applications of high sensitivity cardiac troponin assays. Clin Biochem . 2015;48:201-203. 3. Sandoval Y, Smith SW, Apple FS. Present and future of cardiac troponin in clinical practice: a paradigm shift to high-sensitivity assays. Am J Med . 2016;129:354-365. 4. Korley FK, Jaffe AS. Preparing the United States for high-sensitivity cardiac troponin assays. J Am Coll Cardiol . 2013;61:1753-1758. 5. Body R, Carley S, McDowell G, et al. Rapid exclusion of acute myocardial infarction in patients with undetectable troponin using a high-sensitivity assay. J Am Coll Cardiol . 2011;58:1332-1339. 6. Bandstein N, Ljung R, Johansson M, Holzmann MJ. Undetectable high-sensitivity cardiac troponin T level in the emergency department and risk of myocardial infarction. J Am Coll Cardiol . 2014;63:2569-2578. 7. Thelin J, Melander O, Ohlin B. Early rule-out of acute coronary syn-drome using undetectable levels of high sensitivity troponin T. Eur Heart J Acute Cardiovasc Care . 2015;4:403-409. 8. Carlton EW, Cullen L, Than M, Gamble J, Khattab A, Greaves K. A novel diagnostic protocol to identify patients suitable for discharge after a single high-sensitivity troponin. Heart . 2015;101: 1041-1046. 9. Body R, Burrows G, Carley S, et al. High-sensitivity cardiac troponin T concentrations below the limit of detection to exclude acute myocardial infarction: a prospective evaluation. Clin Chem .2015;61:983-989. 10. Vafaie M, Slagman A, Möckel M, et al. Prognostic value of unde-tectable hs troponin T in suspected acute coronary syndrome. Am J Med . 2016;129:274-282. 11. Rubini Giménez M, Hoeller R, Reichlin T, et al. Rapid rule out of acute myocardial infarction using undetectable levels of high-sensitivity cardiac troponin. Int J Cardiol . 2013;168:3896-3901. 12. Carlton E, Greenslade J, Cullen L, et al. Evaluation of high-sensitivity cardiac troponin I levels in patients with suspected acute coronary syndrome. JAMA Cardiol . 2016;1:405-412. 13. Shah AS, Anand A, Sandoval Y, et al. High-sensitivity cardiac troponin I at presentation in patients with suspected acute coronary syndrome: a cohort study. Lancet . 2015;386:2481-2488. 14. Sandoval Y, Smith SW, Shah AS, et al. Rapid rule-out of acute myocardial injury using a single high-sensitivity cardiac troponin I measurement. Clin Chem . 2017;63:369-376. 15. Sandoval Y, Smith SW, Schulz KM, et al. Diagnosis of type 1 and type 2 myocardial infarction using a high-sensitivity cardiac troponin I assay with sex-speci fi c 99th percentiles based on the third universal de fi ni-tion of myocardial infarction classi fi cation system. Clin Chem .2015;61:657-663. 16. Love SA, Sandoval Y, Smith SW, et al. Incidence of undetectable, measurable, and increased cardiac troponin I concentrations above the 99th percentile using a high-sensitivity vs. a contemporary assay in patients presenting to the emergency department. Clin Chem . 2016;62: 1115-1119. 17. Thygesen K, Alpert JS, Jaffe AS, et al. Third universal de fi nition of myocardial infarction. J Am Coll Cardiol . 2012;60:1581-1598. 18. Sandoval Y, Thygesen K. Myocardial infarction type 2 and myocardial injury. Clin Chem . 2017;63:101-107. 19. Sandoval Y, Smith SW, Thordsen SE, Apple FS. Supply/demand type 2 myocardial infarction: should we be paying more attention? J Am Coll Cardiol . 2014;63:2079-2087. 20. Than M, Herbert M, Flaws D, et al. What is an acceptable risk of major adverse cardiac event in chest pain patients soon after discharge from the Emergency Department? A clinical survey. Int J Cardiol .2013;166:752-754. 21. Carlton E, Cullen L, Body R. Appropriate use of high-sensitivity cardiac troponin levels in patients with suspected acute myocardial infarction-reply. JAMA Cardiol . 2017;2:229-230. 22. Chapman AR, Shah AS, Mills NL. Appropriate use of high-sensitivity cardiac troponin levels in patients with suspected acute myocardial infarction. JAMA Cardiol . 2017;2:228. Funding: The UTROPIA study (NCT02060760) is partially funded through a grant from 1) Abbott Diagnostics, who had no role in the design and conduction of the study; including data collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the fi nal manuscript; and 2) the Minneapolis Medical Research Foundation. Con fl ict of Interest: SWS is a consultant for Alere and advisor for Roche Clinical Diagnostics. SAL is a research principal investigator through the Minneapolis Medical Research Foundation (MMRF), not salaried, for Biokit, Hytest Ltd, and Instrumentation Laboratory; and is on the editorial board of the Journal of Applied Laboratory Medicine . FSA is a consultant for Philips Healthcare Incubator and Metanomics Healthcare; is on the Board of Directors for HyTest Ltd; has received honoraria from Instrumentation Laboratory and Abbott POC; is a research principal investigator through the MMRF, not salaried, for Abbott Diagnostics, Roche Diagnostics, Siemens Healthcare, Alere, Ortho-Clinical Diagnostics, 1082 The American Journal of Medicine, Vol 130, No 9, September 2017 Nanomix, Becton Dickinson, and Singulex; and is Associate Editor for Clinical Chemistry . Authorship: All authors had access to the data and a role in writing the manuscript. SUPPLEMENTARY DATA Supplementary methods accompanying this article can be found in the online version at amjmed.2017.02.032. Sandoval et al Single Hs-cTnI to Rule out AMI 1083 SUPPLEMENTARY METHODS A normal 12-lead electrocardiogram (ECG) was de fi ned as an entirely normal ECG (including those with normal variation ST elevation) or where there were nondiagnostic ST-T wave abnormalities. Sinus bradycardia, prolonged PR interval, low voltage, right or left atrial hypertrophy, right ventricular conduction delay, and occasional premature atrial beats were all within normal for the purpose of this study. All ECGs with atrial fi brillation, sinus tachycardia, high-degree atrioventricular block, premature ventricular contractions, bundle branch block, intraventricular conduc-tion delay ( >120 ms), paced rhythm, left ventricular hy-pertrophy, pathologic Q waves, ST segment depression 0.05 mV in 2 contiguous leads, T wave inversion ( 0.15 mV in 2 contiguous leads with prominent R wave or R/S ratio >1), or ST elevation were considered abnormal. Nonspeci fi c ST-T wave abnormalities were slight variations in ST or T that were <1.5 mm of abnormal T wave inver-sion in 2 consecutive leads or up to 0.5 mm ST depression in 2 consecutive leads or both, or T wave fl attening. 1083.e1 The American Journal of Medicine, Vol 130, No 9, September 2017
14974
https://puzzling.stackexchange.com/questions/2968/how-do-i-count-hamiltonian-paths-of-grid
Skip to main content How do I count Hamiltonian paths of grid? Ask Question Asked Modified 10 years, 10 months ago Viewed 1k times This question shows research effort; it is useful and clear 4 Save this question. Show activity on this post. Let's say there is a 5x5 grid with a starting position in the upper left corner, and an ending position is lower left corner. How do I count how many Hamiltonian paths there are through the grid? I know the answer is 86 for a 5x5 grid. But how should I calculate this? (4x4 is 8, 3x3 is 2) I want to learn calculate larger grids. logical-deduction calculation-puzzle checkerboard Share CC BY-SA 3.0 Improve this question Follow this question to receive notifications edited Oct 17, 2014 at 13:30 LeppyR64 13.6k22 gold badges5151 silver badges6969 bronze badges asked Oct 17, 2014 at 3:28 TlaxinTlaxin 4922 bronze badges 5 2 The same topic exists on Math.SE as well as SO. There is no easy way to calculate Hamiltonian Paths. I seem to remember the complexity being O(n^22^n) for finding whether or not a Hamiltonian path exists at all. – LeppyR64 Commented Oct 17, 2014 at 3:49 mathoverflow.net/a/36378/58988 – d'alar'cop Commented Oct 17, 2014 at 4:04 apparently there are 208, not 86 – d'alar'cop Commented Oct 17, 2014 at 4:05 No there are 86. The example you linked is travelling from bottom left to top right (diagonally opposed). Tlaxin's post travels from top left to bottom left. – LeppyR64 Commented Oct 17, 2014 at 4:09 I also count 86. – Florian F Commented Oct 17, 2014 at 22:21 Add a comment | 1 Answer 1 Reset to default This answer is useful 6 Save this answer. Show activity on this post. The Hamiltonian path problem is NP-complete. This means there's no reasonable way for you to find all of these paths yourself on a sufficiently large grid. You might be able to find a few of them manually, but you will have no way of knowing if you've found all of them, even if you have. Counting them with certainty takes an unreasonable amount of computational time - we know of no faster way. Share CC BY-SA 3.0 Improve this answer Follow this answer to receive notifications answered Oct 17, 2014 at 8:02 user20user20 2 4 This is true for the general case. But it is not necessarily true in a special case such as the square grid or a plane graph. – Florian F Commented Oct 17, 2014 at 8:57 3 True, Florian. My intuition says there's no closed-form formula, because a natural divide-and-conquer approach splits the problem up into many count-P complete problems. BTW, the sequence oeis.org/A000532/list lists the currently known values. – Lopsy Commented Oct 17, 2014 at 16:09 Add a comment | Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions logical-deduction calculation-puzzle checkerboard See similar questions with these tags. Upcoming Events 2025 Community Moderator Election ends in 7 hours Featured on Meta 2025 Community Moderator Election Related 2 Capture The Runaway Mouse 9 Toroidal Heyacrazy: Rainstorm 12 Perambulating ants 0 How many roads in a Tak square? 9 Plants vs Zombies! 13 Remove the magic from a 3x3 Magic Square 10 Painting a Checkerboard 2 What is the minimum number of unshaded cells in a square Nurikabe grid such that no two unshaded regions are the same size? Hot Network Questions Understanding the practical application of Intel's _mm256_shuffle_epi8 definition How do I push back on an impossible scope? Is neutrally skewed the correct interpretation of a box plot with equal length arms? Can a minor armed attack justify a large-scale military response under international law? Drum finger roll notation What’s the point of passing an ordinance that only recommends residents limit smartphone use to 2 hours/day if there are no penalties for exceeding it Are there Minoan loans into Greek? How do ordinals manage to descend finitely? Reverse engineering images from old Japanese videogame Justifying an 'analog horror' aesthetic in a 'future' setting High noise in photodiode circuit for pulse oximeter What exactly is the Schwarzschild radius? How do electrons microscopically drift in the conductor? Function to return free memory in Sinclair ZX Spectrum BASIC? Relation between the truth of P→Q when P is false and the principle of explosion Is there not a leap between moral facts and the normativity of facts? Add grid with different layout than geometry Can I expect flight prices around a public holiday from Germany to Spain to drop within the next 7 months? How to draw a picture like this in tikz? How can I parse a string to a float in C in a way that isn't affected by the current locale? "Intermediate value theorem" for manifolds Why Apache Alias directive doesn't have precedence to Rewrite? Looking for a single noun to describe an ugly looking person or, more broadly, an ugly image Memorizing inconsistent key names in pgfplots more hot questions Question feed By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device.
14975
http://download.caltech.se/download/standarder/I-CAL-GUI-013_Calibration_Guide_No._13_web.pdf
Guidelines on the Calibration of Temperature Block Calibrators EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) I-CAL-GUI-013/v4.0/2017-09 Authorship and Imprint This document was developed by the EURAMET e.V., Technical Committee for Thermometry. Authors: Yves Hermier (LNE-INM, France), Martti Heinonen (MIKES-VTT, Finland), Dolores del Campo (CEM, Spain), Richard Rusby (NPL, United Kingdom), Mikkel Bo Nielsen (DTI, Denmark) Version 4.0 (09/2017) Version 3.0 (02/2015) Version 2.0 (03/2011) Version 1.0 (07/2007) EURAMET e.V. Bundesallee 100 D-38116 Braunschweig Germany E-Mail: secretariat@euramet.org Phone: +49 531 592 1960 Official language The English language version of this publication is the definitive version. The EURAMET Secretariat can give permission to translate this text into other languages, subject to certain conditions available on application. In case of any inconsistency between the terms of the translation and the terms of this publication, this publication shall prevail. Copyright The copyright of this publication (EURAMET Calibration Guide No. 13, version 4.0 – English version) is held by © EURAMET e.V. 2007. The text may not be copied for resale and may not be reproduced other than in full. Extracts may be taken only with the permission of the EURAMET Secretariat. ISBN 978-3-942992-43-5 Image on cover page by PTB. Guidance Publications This document gives guidance on measurement practices in the specified fields of measurements. By applying the recommendations presented in this document laboratories can produce calibration results that can be recognized and accepted throughout Europe. The approaches taken are not mandatory and are for the guidance of calibration laboratories. The document has been produced as a means of promoting a consistent approach to good measurement practice leading to and supporting laboratory accreditation. The guide may be used by third parties e.g. National Accreditation Bodies, peer reviewers witnesses to measurements etc., as a reference only. Should the guide be adopted as part of a requirement of any such party, this shall be for that application only and EURAMET secretariat should be informed of any such adoption. On request EURAMET may involve third parties in stakeholder consultations when a review of the guide is planned. If you are interested, please contact the EURAMET Secretariat. No representation is made nor warranty given that this document or the information contained in it will be suitable for any particular purpose. In no event shall EURAMET, the authors or anyone else involved in the creation of the document be liable for any damages whatsoever arising out of the use of the information contained herein. The parties using the guide shall indemnify EURAMET accordingly. Further information For further information about this document, please contact your national contact person of the EURAMET Technical Committee for Thermometry (see www.euramet.org). EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) Guidelines on the Calibration of Temperature Block Calibrators Purpose This document has been produced to enhance the equivalence and mutual recognition of calibration results obtained by laboratories performing calibrations of temperature block calibrators. Content 1 SCOPE .................................................................................................................................. 2 2 CALIBRATION CAPABILITY ................................................................................................ 2 3 CHARACTERISATION ......................................................................................................... 2 3.1 General .......................................................................................................................... 2 3.2 Axial temperature homogeneity along the borings in the measurement zone ............... 3 3.3 Temperature differences between the borings .............................................................. 3 3.4 Influence upon the temperature in the measurement zone due to different loading ...... 3 3.5 Stability with time ........................................................................................................... 3 3.6 Temperature deviation due to heat conduction .............................................................. 3 4 CALIBRATION ...................................................................................................................... 4 4.1 Measurements ............................................................................................................... 4 4.2 Uncertainties .................................................................................................................. 4 4.2.1 Deviation of the temperature shown by the indicator of the block calibrator from the temperature in the measurement zone ....................................................................... 4 4.2.2 Temperature distribution in the measurement zone ................................................... 4 4.3 Uncertainty as a result of the temperature deviation due to heat conduction ................ 5 5 REPORTING RESULTS ....................................................................................................... 5 ANNEX A: Example of an uncertainty budget .......................................................................... 6 ANNEX B: Procedure for the determination of the influence of axial temperature distribution . 8 B.1.1 Determination of the temperature in three points using a sensor of short length ....... 8 B.1.2 Direct determination of temperature differences by means of a differential thermocouple .............................................................................................................. 8 B.1.3 Determination of the temperature at two points ......................................................... 8 ANNEX C: Recommendations of the EURAMET TECHNICAL COMMITTEE "Thermometry" for the use of temperature block calibrators ........................................................... 9 EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 2 1 SCOPE 1.1 This Guideline applies to temperature block calibrators in which a controllable temperature is realized in a solid-state block with the aim of calibrating thermometers whose sensing element is inserted into the borings. A temperature block calibrator comprises at least the block located within a temperature-regulating device, and a temperature sensor with indicator (the built-in controlling thermometer) to determine the block temperature. Warning: The calibration must not be confused with the characterisation of the device. The characterisation consists in determining the thermal behaviour of the device (spatial and temporal uniformity). The calibration consists in establishing the relation between the temperature generated at a given place (usually a volume) of the device (unambiguously specified) and the value read on the temperature indicator. A previous characterisation of the device is necessary for associating the uncertainties of the calibration. 1.2 This Guideline is valid in the temperature range from -100 °C to +1300 °C. The temperature ranges stated by the manufacturer shall not be exceeded. 2 CALIBRATION CAPABILITY 2.1 This Guideline is only applicable to temperature block calibrators that meet the following requirements: The borings used for calibrations shall have a zone of known temperature homogeneity (in the following referred to as measurement zone), whose position is exactly specified, and suitable for the thermometer to be calibrated. The measurement zone will in general be at the lower end of the boring. If the measurement zone is situated at another place, this shall explicitly be stated. 2.2 It shall be ensured that calibration is possible under the following conditions: In the temperature range from –100 °C to +660 °C, the inside diameter of the boring or, if present, a bushing inserted to adapt the diameter of the boring, may be at most 0.5 mm larger than the outside diameter of the thermometer to be calibrated; in the temperature range from +660 °C to +1300 °C, this value may be at most 1.0 mm. As an alternative, an equally good or better thermal contact may be established by suitable heat-conveying means or medium, such as an oil, subject to compatibility with the materials of the block and the thermometer, and the temperature of use. The thermal contact is a vital uncertainty contribution at very high precision calibrations and must be evaluated, especially if no heat-conveying means are used. In all cases the calibration setup (thermometers and dry block calibrator) must be designed that conduction of heat along their length does not give rise to excessive error and uncertainty (especially at high temperatures) so. This is usually one of the dominant source of uncertainty in the uncertainty budget of a thermometer calibration. 3 CHARACTERISATION 3.1 General 3.1.1 When a temperature block calibrator is used or calibrated, the characteristics of the temperature distribution in the measurement zone (defined in sections 3.2 to 3.5) must be investigated and documented. 3.1.2 All investigations shall be carried out under the measurement conditions stated in sections 2.1 and 2.2. 3.1.3 If adapter bushings are required to comply with the requirement of section 2.2, these preferably will be made of the material proposed by the manufacturer. EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 3 3.1.4 If the temperature block calibrator has one or several borings in which a bushing is used, it is to be agreed with the customer which bushing (or bushings) is (are) to be used. If bushings are used, the diameters are to be investigated in the same way as the borings in the temperature block calibrator. Unambiguous marking of the bushings is required. 3.1.5 The thermometer used for the investigations according to sections 3.2 to 3.4 (test thermometer) need not be calibrated, as these tests are performed to measure the temperature differences. The sensitivity at the measuring temperature shall, however, be known with a sufficiently small uncertainty. The sensitivity can usually be taken from the respective standard and is to be checked by a control measurement (possibly at a different temperature). The stability of the thermometers used during the characterisation shall be tested. 3.1.6 The investigations described in the following Sections 3.2 to 3.5 are to be carried out. 3.2 Axial temperature homogeneity along the borings in the measurement zone The influence of the temperature distribution in the measurement zone along the borings (axial temperature distribution in each boring) is to be determined in such a way that it can be taken into account in the uncertainty budget of the calibration. Potential methods are presented in Annex B. The necessary investigations are to be carried out at the operating temperature showing the greatest difference from the ambient temperature (both positive and negative). If it is assumed that the influence of the temperature distribution at other operating temperatures can be estimated by linear interpolation, this must be checked by tests at additional temperatures. 3.3 Temperature differences between the borings The greatest temperature difference occurring between the borings is to be determined. At least the measurement of the temperature difference between (opposite) borings situated at as great a distance from each other as possible is to be determined. To eliminate the influence of temperature variations with time, the temperature differences with respect to an additional test thermometer in the temperature block calibrator could be determined. 3.4 Influence upon the temperature in the measurement zone due to different loading In the case of use of several borings in the dry block, more detailed investigations into the influence on the temperature in the measurement zone due to different loadings can be made upon customer request. In this case, the results for loading with only one thermometer and with all borings loaded are compared. Loadings with thermometers can be simulated by loadings with metal or ceramic tubes. The measurements are to be carried out at least at the temperature with the largest temperature difference from the ambient temperature (both positive and negative). 3.5 Stability with time Depending of the temperature, a sufficient time to reach the thermal equilibrium must be reserved in order to make proper measurements. This point is particularly important in case of on-site use. The maximum range of temperatures indicated by a sensor in the measurement zone over a at last a 30 minute period, when the system has reached equilibrium, shall be determined. Measurements are to be performed at the highest and at the lowest test temperature 3.6 Temperature deviation due to heat conduction Note that the thermometer used for the characterization may influence thermally the area under calibration due to heat losses, depending on the sensor design. EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 4 4 CALIBRATION The deviation of the indication of the built-in (or optional external) controlling thermometer from the temperature in the measurement zone must be established by a calibration. If the control of the block calibrator is set from either the external or internal thermometer – this must be remarked and agreed with the customer. The temperature in the measurement zone of the temperature block calibrator is determined with a standard thermometer, which is traceable to national standards. 4.1 Measurements The calibration is performed using the standard thermometer in the central boring or in a particularly marked boring. The calibration points must be define with the customer. At each calibration point, two measurement series are carried out, in which the average for the deviation of the indication of the built-in controlling thermometer from the temperature in the measurement zone is determined. The sequence of the calibration point is done for one measurement series at increasing temperatures and for the other at decreasing temperatures. However, at least two measurement series are to be recorded, between which the operating temperature of the calibrator is changed. The values measured in the series at increasing and decreasing temperatures are averaged for each calibration point. The calibration result (deviation of the temperature measured with the standard thermometer from the indication of the calibrator) is documented, for instance in mathematical, graphical, or in tabular form. 4.2 Uncertainties The uncertainty to be stated as the uncertainty of the calibration of the temperature block calibrator is the measurement uncertainty with which the temperature in a boring of the calibrator can be stated. This uncertainty is a component that must be used in the calculation of the uncertainty when a thermometer is calibrated against the temperature in a boring of the calibrator. An example of the calculation of the measurement uncertainty is given in the Annex A. The following contributions to the uncertainty of measurement shall be taken into account: 4.2.1 Deviation of the temperature shown by the indicator of the block calibrator from the temperature in the measurement zone The contributions are essentially to be attributed to the calibration of the standard thermometer, the measurement performed with the standard thermometer, display resolution unit and differences between the measurements at decreasing and increasing temperature (hysteresis). 4.2.2 Temperature distribution in the measurement zone Additional deviations of the indication of the built-in controlling thermometer from the temperature in the measurement zone are caused by the temperature distribution in the block, the loading of the block, and the stability with time. These additional deviations are assumed to be uncorrelated. The contribution ui to the measurement uncertainty is derived from the greatest temperature difference measured (tmax – tmin): ui² (t) = (tmax – tmin)²/3 EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 5 The contributions to the uncertainties according to sections 3.1 to 3.5 are to be linearly interpolated between the calibration points. Near room temperature, however, the contribution to the uncertainty in a temperature range which symmetrically extends around ambient temperature can be assumed to be constant. Example: Upon initial calibration of a temperature block calibrator in the temperature range -30 °C < t < +200 °C, carried out at an ambient temperature of 20 °C, the following is found as the greatest temperature differences in the homogeneous zone: 0.3 °C at t = -30 °C and 0.6 °C at t = +200 °C. In the temperature range of 20 °C ± 50 °C, i.e. from -30 °C to +70 °C, the greatest temperature difference occurring can be assumed to be 0.3 °C; in the temperature range from +70 °C to +200 °C, linear interpolation between 0.3 °C and 0.6 °C is to be carried out. 4.3 Uncertainty as a result of the temperature deviation due to heat conduction Uncertainty contributions which are the result of temperature deviations due to heat conduction of thermometers shall be determined in all cases (that this arises both from the standard thermometer and an external controlling/customer reference thermometer). 5 REPORTING RESULTS The calibration certificate in which the results of measurements are reported should be set out with due regard to the ease of assimilation by the user to avoid the possibility of misuse or misunderstanding. At least the deviation of the indication of the built-in controlling thermometer from the temperature in the measurement zone in the calibration together with their corresponding uncertainties and the description of the measurement zone should be reported. It is recommended to enclose with each calibration certificate the “Recommendations of the EURAMET Technical Committee for Thermometry for the use of temperature block calibrators” (see Annex C). The results of the investigations are to be documented in the calibration certificate. EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 6 ANNEX A: Example of an uncertainty budget Calibration of a temperature block calibrator at a temperature of 400 °C (see warning in 1.1) The temperature tS which has to be assigned to the identified measurement zone of the dry block calibration is determined by a calibrated measurement system (thermometer associated with its indicator). The deviation from the temperature tR read on the built-in temperature indicator: δt = (tR – tS)+ δtS + δti + δtH + δtB + δtL + δtV where the sources of correction and uncertainty are identified as follows: δtS Standard thermometer uncertainty δti Resolution of the controlling thermometer δtH Hysteresis in the increasing and decreasing branches of the measuring cycle δtB Inhomogeneity of temperature in the boring δtL Loading of the block with other thermometers δtV Temperature variations during the time of measurement This situation is chosen in order to indicate in the uncertainty budget an achievable uncertainty when calibrating the dry block. As indicated emphatically before, the uncertainty on the calibration of thermometers using a dry block will be usually much more larger in practice compared to the one presented in this example, because of the heat losses along the stem of the thermometer, that depends on the design of the sensor. The following values are used in the example, and are for illustration only. δtS Standard thermometer uncertainty The standard thermometer uncertainty covers hysteresis, drift, non-linearity, selfheat, calibration and others. It was estimated to be U = 0.03 °C (coverage factor k = 2). NB: If the standard was calibrated in a liquid bath, the bias and uncertainties due to different "self heating" must be taken into account in the uncertainty budget. δti Resolution of the controlling thermometer The controlling thermometer has a scale interval of 0.1 °C, giving temperature resolution limits of ±0.05 °C with which the temperature of the block can be uniquely set. Note: If the indication of the built-in controlling thermometer is not given in units of temperature, the resolution limits shall be converted into equivalent temperature values by multiplying the indication with the relevant instrument constant. δtH Hysteresis effects The temperatures indicated show a deviation due to hysteresis in cycles of increasing and decreasing temperatures which is estimated to be within ±0.05 °C. EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 7 δtB Inhomogeneity of temperature in the boring The deviations due to axial inhomogeneity of the temperature in the calibration boring have been estimated from readings for different immersion depths to be within 0.5 °C. δtL Block loading The influence of maximum loading on the temperature of the central hole was found to be 0.05 °C (arbitrary but realistic value, strongly dependent of cases). δtV Temperature stability Temperature variations due to lack of temperature stability during the measuring cycle of 30 min are estimated to be within ±0.03 °C. Uncertainty budget on the temperature deviation δt: Quantity x i Source of uncertainty Estimate (°C) Coverage interval (°C) Distribution Divisor Uncertainty contribution (°C) tR - tS 0.48 δtS Standard thermometer uncertainty 0.00 0.03 normal 2 0.015 δti Resolution of indicator 0.00 0.10 rectangular 23 0.029 δtH Hysteresis effects 0.00 0.05 rectangular 23 0.014 δtB Axial inhomogeneity 0.00 0.5 rectangular () 3 0.289 δtL Loading effects 0.00 0.05 rectangular () 3 0.029 δtV Stability in time 0.00 0.06 rectangular 23 0.017 δt 0.48 0.294 () () asymmetric distribution () which leads to an expanded (k = 2) uncertainty of: 0.6 °C Reported result The temperature to be assigned to the measurement zone when the temperature indicator shows 400 °C is 399.5 °C ± 0.6 °C. The reported expanded uncertainty of measurement is stated as the standard uncertainty of measurement multiplied by the coverage factor k = 2. EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 8 ANNEX B: Procedure for the determination of the influence of axial temperature distribution Temperature block calibrators for the calibration of thermometers are usually used in different set-ups, sensing elements of different length being located in different areas of the measurement zone. As a result, the axial temperature distribution along the boring in the measurement zone makes a contribution to the uncertainty of calibration (which frequently dominates all other contributions). The determination of the axial temperature distribution is complicated because the thermometers themselves influence the temperature distribution. This influence can be complex, as, for example, a thermometer immersed to different depths leads to different heat conductions, which may, however, act on the transient behaviour of the block calibrator. B.1.1 Determination of the temperature in three points using a sensor of short length A thermometer with a maximum sensor length of 5 mm is used to determine the temperature at the lower end, in the middle and at the upper end of the measurement zone. The thermometer outside diameter should be ≤ 6 mm. In the temperature range from –100 °C to 250 °C, Pt resistance thermometers and in the range from 250 °C to 1300 °C thermocouples (including Pt/Pd thermocouples) are to be preferred. Example: For a temperature block calibrator with a measurement zone 40 mm in length at the lower end of the boring, measurements under the following conditions are necessary: (1) thermometer touching the lower end, (2) raised/withdrawn 20 mm, (3) raised 40 mm, (4) thermometer touching the lower end. B.1.2 Direct determination of temperature differences by means of a differential thermocouple Here the temperature difference is directly measured using a differential thermocouple, the two junctions being about 25 mm apart. The differences can be measured at several points in the boring, from the lowest point (touching the lower end) upwards. The correct measurement of the temperature difference should be checked prior to the use of the differential thermocouples. It is also possible to introduce two sheathed thermocouples with a small outside diameter together into the boring. While the first thermocouple remains at the lower end, the temperature differences from the second thermocouple are determined, which is at a known distance from the first thermocouple (for example, 20 mm and 40 mm). When both thermocouples are immersed to the same depth, an adjustment for the zero temperature difference is possible. B.1.3 Determination of the temperature at two points If the temperature distribution is determined using a thermometer with a relatively long sensing element, shifting the thermometer by 40 mm (the usual length of the homogeneous zone of the block calibrator) will not be reasonable. Even so, for some calibrators a measurement at two different immersion depths (for example, touching the bottom and raised 20 mm) can furnish sufficient information about the influence of the temperature distribution on the contribution to the uncertainty of measurement. It is to be noted that in accordance with section 4.2, the contribution to the uncertainty of measurement is determined in this case according to ui²(t) = (t1 – t2)²/3. EURAMET Calibration Guide No. 13 Version 4.0 (09/2017) 9 ANNEX C: Recommendations of the EURAMET TECHNICAL COMMITTEE "Thermometry" for the use of temperature block calibrators Results reported in the calibration certificate have been obtained following the EURAMET Guideline cg-13. When the calibrator is used, the following points shall nevertheless be taken into consideration: The calibration of temperature block calibrators mainly relates to the temperature of the block. The temperature of the thermometer to be calibrated in the block can deviate from this temperature. When a thermometer of the same type is used under measurement conditions identical to those during calibration, it can be assumed that the errors of measurement during the calibration of ideal thermometers are not greater than the uncertainties stated in the calibration certificate. If this is not the case (for instance use of different inserts or thermometers to those during calibration), the user of the block calibrator should confirm that the calibration results are still valid. Unless otherwise stated in the calibration certificate, it shall be ensured that  the measuring element is in the measurement zone,  the inside diameter of the boring used in the calibrator (and of the bushing, if present) is in the temperature range from -100 °C to +660 °C at most 0,5 mm, and in the temperature range from +660 °C to +1300 °C at most 1,0 mm, larger than the outside diameter of the thermometer to be calibrated. If this requirement cannot be met, the customer must be aware that there will be a significant uncertainty contribution. When thermometers are calibrated, an additional error of measurement due to heat conduction shall be taken into account. A good test for potential temperature deviations due to heat conduction is to check whether the display of the test thermometer changes when the thermometer is lifted up by 20 mm. Note that contributions to the uncertainty of measurement due to the thermometer to be calibrated (e.g. inhomogeneities of thermocouples) are not included in the measurement uncertainty of the calibrator. The data given in the calibration certificate are decisive for the calibration, not the manufacturer's specifications. Before starting calibration, please discuss by all means the calibration and operating conditions with your calibration laboratory. In all cases, the user must provide himself the means to control the metrological quality of the instrument. EURAMET e.V. Bundesallee 100 38116 Braunschweig Germany EURAMET e.V. is a non-profit association under German law. Phone: +49 531 592 1960 Fax: +49 531 592 1969 E-mail: secretariat@euramet.org
14976
https://ens-lyon.hal.science/ensl-00168943/document
HAL Id: ensl-00168943 Preprint submitted on 30 Aug 2007 HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci-entific research documents, whether they are pub-lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL , est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Polymer and surface roughness effects on the drag crisis for falling spheres Nicolas Lyotard, Woodrow Shew, Lydéric Bocquet, Jean-François Pinton To cite this version: Nicolas Lyotard, Woodrow Shew, Lydéric Bocquet, Jean-François Pinton. Polymer and surface rough-ness effects on the drag crisis for falling spheres. 2007. ￿ensl-00168943￿ ensl-00168943, version 1 - 30 Aug 2007 EPJ manuscript No. (will be inserted by the editor) Polymer and surface roughness effects on the drag crisis for falling spheres Nicolas Lyotard 1, Woodrow L. Shew 1, Lyd´ eric Bocquet 2, and Jean-Fran¸ cois Pinton 1 1 Laboratoire de Physique de l’ ´Ecole Normale Sup´ erieure de Lyon, CNRS UMR5672, 46 all´ ee d’Italie F-69007 Lyon, France 2 Laboratoire de Physique de la Mati` ere Condens´ ee et Nanostructures Universit´ e Lyon I, 43 Bd du 11 Novembre 1918, 69622 Villeurbanne Received: date / Revised version: date Abstract. We make time resolved velocity measurements of steel spheres in free fall through liquid using a continuous ultrasound technique. We explore two different ways to induce large changes in drag on the spheres: 1) a small quantity of viscoelastic polymer added to water and 2) altering the surface of the sphere. Low concentration polymer solutions and/or a pattern of grooves in the sphere surface induce an early drag crisis, which may reduce drag by more than 50 percent compared to smooth spheres in pure water. On the other hand, random surface roughness and/or high concentration polymer solutions reduce drag progressively and suppress the drag crisis. We also present a qualititative argument which ties the drag reduction observed in low concentration polymer solutions to the Weissenberg number and normal stress difference. PACS. 47.85.lb Drag reduction – 47.32.Ff Separated flows – 47.63.mc High Reynolds number motions 1 Introduction Reduction of drag in turbulent flows due to a small quan-tity of viscoelastic polymer added to the fluid has been the subject of intense research for more than 50 years (e.g. [1, 2]). For example, the addition of as little as 5 parts per million (ppm) of polyacrylamide to turbulent pipe flow can result in an increase in flow speed of 80 percent for a given imposed pressure drop . Similar flows with rough or structured wall surfaces have also been shown to exhibit reduction in drag (e.g. [4,5]). Our experiments address drag reduction by similar mechanisms for bluff bodies, which has received far less attention in spite of the po-tential impact on a broad range of phenomena and appli-cations (aircraft, underwater vehicles and ballistics, pre-dicting hail damage, sports ball aerodynamics, fuel pellets, etc.). The aim of our work is to explore the influence of poly-mer additives in the fluid as well as sphere surface struc-ture on the drag experienced by free falling spheres. Before we review the literature on these topics, let us first recall the main characteristics and terminology of high Reynolds number flow around spheres. (Reynolds number is defined as Re = U D/ν where U is sphere speed, D is sphere diam-eter, and ν is the kinematic viscosity of the fluid.) In the range 10 4 < Re < 10 7, two basic phenomena are responsi-ble for the most prominent flow features: flow separation and the transition to turbulence in the sphere boundary layer. For 200 < Re < Re ∗ w ≈ 3 × 10 5 flow separation oc-curs. (The w subscript distinguishes the value for smooth spheres in pure water from the different cases discussed later.) In this regime, laminar flow extends from the up-stream stagnation point to slightly downstream of the flow separation point, i.e. the turbulence develops downstream from the separation point. In contrast, at Re just above Re ∗ w , the boundary layer becomes turbulent upstream of the flow separation point. The resulting change in the ve-locity profile abruptly moves the separation point down-stream. Since the drag on the sphere is dominated by pres-sure drag (form drag), this jump in the separation location results in a severe drop in drag, the so-called drag cri-sis [6,7,8,9]. In this range of Re , friction drag contributes not more than 12 percent to the total drag on a smooth sphere . Although indirectly, our investigation is essen-tially exploring the effects of polymer additives and sphere surface structure on the dynamics of boundary layer sep-aration and transition to turbulence. We now briefly review studies which directly address these issues. Both Ruszczycky and D. A. White measured drag on a falling sphere in aqueous polymer so-lutions at Re < Re ∗ w . Ruszczycky studied relatively high concentrations between 2500 and 15000 ppm (by weight) of poly(ethylene oxide) (4 × 10 6 molecular weight (MW)) and guar gum (unknown MW) for a range of sphere sizes from 9.5 to 25.4 mm in diameter. Maximum drag reduc-tion of 28 percent was found for a 25.4 mm sphere in 5000 ppm guar gum solution. For higher concentrations 2 Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres (15000 ppm) the drag was found to increase compared to water, probably because such high concentrations tend to be rather viscous. D. A. White used the same polymer at smaller concentrations with a similar range of sphere sizes and found a 45 percent maximum reduction in drag for a 75 ppm solution. A. White and more recently Watan-abe et al. investigated a range of Re spanning the drag crisis. Their work suggests that at polymer concen-trations above about 30 ppm, the drag crisis is replaced by a gradual decrease in drag which manifests as drag reduc-tion for Re < Re ∗ w and drag enhancement for supercritical Re > Re ∗ w . This is consistent with the observations of D. A. White and Ruszczycky below the drag crisis, as well as water tunnel measurements with circular cylinders . At smaller polymer concentrations (5 to 10 ppm) the sit-uation is less clear. A. White’s measurements show erratic variation of drag as Re is increased, while Watanabe et al. report no change in behavior compared to water. Cylin-der studies, in contrast, show a more sharply defined drag crisis at low polymer concentration . One of the goals of our work is to better understand the nature of low con-centration polymer effects near the drag crisis. Concerning free falling rough spheres, to our knowl-edge, only one experimental work exists in the literature. In this short, qualitative study, A. White explored the combined effects of surface roughness and polymer addi-tives . He found that roughening the sphere surface shifts the wake separation point downstream, reducing drag, but with both a rough surface and polymers added the separation point shifts back upstream, increasing drag. Our observations add to White’s intriguing results. Wind tunnel measurements for both spheres and cylinders indicate that the drag crisis is shifted to lower Re when the surface is roughened. The roughened surface triggers an early transition to turbulence in the boundary layer. Golf balls are made with surface dimples in order to reduce drag by a very similar mechanism . Furthermore, Maxworthy showed that adding a trip on the upstream surface of a smooth sphere induces a tur-bulent boundary layer and early drag crisis . We are aware of no fixed sphere studies addressing roughness and polymer effects together. We add a note of caution to the reader that fixed (wind tunnel) and free-falling spheres may not behave the same. The first case corresponds to a fixed velocity of the upstream flow, while the second cor-responds to a constant force driving the motion. Unlike the fixed sphere, a falling sphere cannot exist in a steady state with Re very close to Re ∗ w ; it is not a stable solu-tion. Furthermore, even at terminal fall speed the wake is never truly steady. It is dynamically active with long-lived non-axisymmetric spatial structure. As a result, the “ter-minal” fall velocity of a sphere fluctuates in both direction and magnitude, which may lead to small discrepancies in comparing to wind tunnel data or to other free-fall exper-iments. This paper is organized as follows. The next section presents the experimental procedures and equipment. In section 3 we present our measurements for the free fall of smooth or roughened spheres in water and in solutions Fig. 1. SETUP: the vertical velocity of falling steel spheres is measured with an ultrasound device. The fluid is tap water, pure or with small amounts of pomymer additives. Smooth, grooved, and rough spheres are tested. containing small polymer amounts. We discuss our results in terms of changes in drag with varying Reynolds number, polymer concentration and surface conditions. In the last section, we link our results at low polymer concentrations to the effects of a coil-stretch transition . 2 Measurement system and technique We measure the fall velocity of steel spheres (ball bear-ings with density ρ = 7 .8 g/cm 3) with diameters rang-ing from 3 mm to 80 mm. Two types of sphere surfaces are investigated in addition to the polished smooth sur-face (see photos in fig. 1). The first type, grooved spheres, have have a regular pattern of grooves machined into the surface. The grooves are 500 μm deep, 1 mm wide. The second type corresponds to roughened surface, produced either by sanding the smooth polished spheres or by glu-ing onto the surface a single layer beads. In the first case, changes in the surface height are of the order of 10 mi-crons. In the second case, we have used spherical glass Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres 3 beads 700 μm in diameter. The fluid vessel is 2 m tall and 30 × 30 cm in lateral dimension with walls made of 2 cm thick acrylic plate. The tank is filled with either tap wa-ter or a dilute aqueous solution of polyacrylamide (MW 5×10 6, granulated form, Sigma-Aldrich). The polymer so-lutions range in concentration between 5 and 200 ppm by weight. The polymers are mixed first with 2 liters of water with a magnetic stirrer for at least 8 hours and then mixed with another 180 liters of water for 5 minutes in the ex-perimental vessel. Tests with colored dye in the fluid con-firm that this procedure effectively mixes the fluid. These polymer concentrations are in the dilute regime, signifi-cantly below the estimated overlap concentration of 1200 ppm. The Weissenberg number W i , defined as the ratio of polymer relaxation time τR ∼ 10 −4 to flow time scale (see section 4.2 for details), ranges between 0.8 and 2.3. The spheres are released at the top of the vessel using an electromagnet. The speed of the ball is obtained using a continuous ultrasound technique. This technique is de-scribed in more detail in previous publications [20,21], but we briefly describe it here. One ultrasound transducer po-sitioned at the top of the vessel emits sound at 2.8 MHz into the fluid. As the sphere falls it scatters sound at a Doppler shifted frequency which is measured with a sec-ond ultrasound transducer located near the emitter. The recorded signal is processed to recover the vertical compo-nent of the sphere velocity. The processing entails mixing the recorded signal with a 2.8 MHz sinusoid, low pass fil-tering, decimating to a lower sample rate, and finally using a parametric time-frequency analysis algorithm (MVA, see ref. ) to recover the time varying Doppler shifted fre-quency. The resulting absolute precision for the velocity measurement is about 2 cm/s with a relative precision in mm/s. With typical fall speeds of several m/s, this is bet-ter than 1% precision. At such high Reynolds numbers (10 4 − 10 6), the flow in the wake contains significant non-axisymmetric flow structures , which often cause some lateral motion of the sphere. We present data only from trajectories that remained at least one sphere diameter away from the vessel walls throughout the fall. Based on studies of tunnel blockage effects for fixed spheres, we ex-pect that walls have less than 5% influence on measured drag coefficients . Furthermore, any wall influence is similar for the different polymer solutions and sphere sur-faces allowing for meaningful comparisons between the dif-ferent cases. 3 Experimental results In this section, we present our observations in the form of either drag coefficient estimates or velocity time se-ries. Each presented measurement is the result of aver-aging over several trajectories under the same conditions. We find that each drop is reproducible up to instanta-neous differences of a several percent. We first discuss our measurements of smooth spheres falling in water, which provide a baseline for comparisons to the results from our experiments with polymer solutions and rough spheres. Next, we present measurements of smooth sphere behavior 00.2 0.4 0.6 0.8 1 0 0.5 1 1.5 2 2.5 3 3.5 4 Velocity [m/s] time [s] D=03mm D=06mm D=10mm D=20mm D=30mm D=40mm D=50mm D=80mm 00.1 0.2 0.3 0.4 0.5 1 2 3 4 5 DRAG CRISIS 012345678910 11 12 13 14 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1.1 t / τ u(t) / U T D=06mm D=20mm D=30mm D=50mm D=80mm 10 310 410 510 6 0.01 0.02 0.03 0.04 0.05 Re τ / (D/U T) Fig. 2. WATER & SMOOTH SPHERES: (a) Time series of the spheres vertical velocity u(t) during their free fall. The inset shows the drop – with a non zero initial velocity – of a 60 mm sphere: as its velocity reaches 3.5 m/s it meets the drag crisis. (b) Comparison of the experimental data with a model u(t) = UT (1 − exp( −t/τ )) exponential evolution. The parameters ( τ, U T ) are obtained using a multidimensional un-constrained nonlinear minimization (Nelder-Mead) with MAT-LAB. The inset shows the evolution of the characteristic time τ with the Reynolds number. Note the sharp change in behavior near the drag crisis. in polymer solutions. Then we explore the consequences of surface grooves or roughness in water. And finally, we address the combined case of altered-surface spheres in polymer solutions. 3.1 Water We show in fig. 2(a) the fall velocity time series for the spheres with diameter D varying between 6 mm and 80 mm. As the spheres are released from rest, they accelerate un-til a terminal velocity UT is reached – although for the larger spheres the water tank is not sufficiently tall for this steady state to be reached. The dynamics at the onset of motion is quite complex. Added mass effects, as well as wake-induced lift forces and history forces play a role [19, 20]. However when the Reynolds number is large the dom-inant forces at work during the vertical fall of the sphere are the gravitational force FB = 1 /6( ρS − ρF )πD 3g and 4 Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres 10 410 510 6 0 0.1 0.2 0.3 0.4 0.5 246 8 246 8 246 8 CD Re Fig. 3. DRAG COEFFICIENT measurements for smooth spheres in water. Red Circles – our data; solid circles – Achen-bach (wind tunnel) . an effective drag force FD = 1 /8CDπρ F D2U 2 T , where CD is the usual drag coefficient; ρF and ρS are the fluid and sphere densities. In the steady state, these forces balance and one may then compute the drag coefficient as CD = 4 3(ρS /ρ F − 1) Dg U 2 T . (1) We note here that, unlike wind tunnel experiments, the velocity is not prescribed so that both CD and Re are empirically computed from the data – the equation above may be viewed as an implicit relationship for CD (Re )Re 2 as a function of the control parameters of the experiment. We observe that during the approach to terminal speed, the trajectories for different sphere sizes are fully charac-terized by one time scale τ and the terminal speed UT .We may extract τ and UT from each velocity time se-ries by fitting the data to an exponential of the form, u(t) = UT (1 − e−t/τ ). In agreement with previous obser-vations , the exponential is simply an effective tool used to extract τ and UT and does not accurately repre-sent the more complex dynamics of the true trajectory. When scaled by τ and UT , all the time series in Fig. 2(a) collapse onto one curve, verifying the importance of these two characteristic quantities. Using the exponential fit on the entire time series, we take advantage of our good res-olution in both time and velocity magnitude to obtain ac-curate measurements of UT even though the fall distance is only 2 m. Since this method integrates the whole time record of the fall, it also avoids possible errors incurred by taking single point measurements as has been done in past studies. Furthermore, inspection of the entire veloc-ity time series is often very instructive, clearly revealing the drag crisis in some cases – see for instance the inset of fig. 2(a), where a 60 mm sphere is shown to accelerate again as its Reynolds number exceeds Re ∗ w .We compare in fig. 3 our measurements of drag coeffi-cients for smooth spheres in water to the free fall measure-ments of A. White as well as the wind tunnel mea-surements of Achenbach . We find an excellent agree-ment with White’s data. In particular, we find that the critical Reynolds for the drag crisis is Re ∗ w = 2 .8 10 5, a value that serves as a reference for comparison with the 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.5 1 1.5 2 2.5 3 3.5 4 time [s] velocity [m/s] 0 ppm 5 ppm 10 ppm 25 ppm 50 ppm 100 ppm 200 ppm Drag Crisis 020 40 60 80 100 120 140 160 180 200 0 5 10 15 20 25 30 35 40 45 polymer concentration [ppm] % change in U T Fig. 4. POLYMERS : (a) Fall velocity time series of a 40 mm sphere in water, and polymer solutions with concentration in-creasing from 5 to 200 ppm. In the 5 and 10 ppm solutions the sphere undergoes a drag crisis where none existed for the pure water case. (b) Percentage change of terminal velocity UT for increasing polymer concentration compared to pure water case for 40 mm sphere. fall of spheres with modified surfaces and in water with additives. We also note that both White’s data and ours suggest that the value of the drag coefficient just after the drag crisis for the free fall of spheres (imposed force case) is twice that observed in wind tunnel experiments (imposed velocity case). 3.2 Polymer solution We first present velocity time series for a 40 mm sphere falling in a range of polymer concentrations in fig. 4(a). We observe that at all concentrations the sphere termi-nal velocity is larger than in pure water; drag is reduced. This effect is greatest at small polymer concentrations, as demonstrated in fig. 4(b): drag reductions over 30% have been observed for polymer concentration less than 20 ppm, while at higher concentrations the change is 10-25%. In the 5 and 10 ppm solutions, one observes a sud-den acceleration of the sphere once it achieves a veloc-ity of about 2.5 m/s; this is the drag crisis. Examining the data for a range of sphere sizes in the 10 ppm so-lution (see Fig. 5(a)), we see that the critical Reynolds Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres 5 01234567x 10 5 0 0.1 0.2 0.3 0.4 0.5 Re CD water 10ppm 200ppm Fig. 5. POLYMERS : Drag coefficient measurements for smooth spheres in water (solid circles) compared to polymer solution (open circles). (a) In 10 ppm solution the drag crisis is shifted to lower Re . (b) In 200 ppm solution the drag crisis is replaced by a gradual decrease in drag. number is then Re ∗ polymer ∼ 1.0 10 5, almost a third of the value Re ∗ w ∼ 2.8 10 5 in pure water. On the other hand, at higher polymer concentrations, we do not observe a jump in the velocity time series and their is no discontinuity in the drag CD(Re ) curve. One observes that for high poly-mer concentrations, the drag is reduced at Re < Re ∗ w ,but enhanced for Re > Re ∗: in pure water a drag cri-sis would have occured and dramatically reduced CD but this does not happen when the polymer concentration ex-ceeds about 100ppm as shown in Fig. 5(b). Instead the value of drag decreases continuously. These observations are consistent with the experiments of previous investi-gations using poly(ethylene oxide) in a similar range of concentrations [12,13]. We have not been able to reach Reynolds numbers high enough to determine whether the drag would reach a common asymptotic limit. 3.3 Rough and grooved surfaces in water In exploring surface structure effects, we concentrate our attention on 30 and 40 mm spheres, whose Re in pure wa-ter lies just below the drag crisis. The time series in fig. 6 illustrate the different behaviors for the different surfaces we studied. In pure water, both the 30 mm grooved sphere and rough sphere behaves the same as the 30 mm smooth sphere — cf. Fig. 6(a). In contrast, adding grooves to the 40 mm sphere induces a drag crisis, as shown in Fig. 6(b). The 40 mm rough sphere showed moderate drag reduc-tion, but not a well defined crisis. Indeed, the dynamics in Fig. 6(c) shows that the terminal velocity is increased com-pared to the smooth sphere, but there is no clear change in the acceleration as in the case of the grooved sphere, Fig. 6(b). Grooves are thus able to shift the drag crisis from Re ∗ w ∼ 2.8 10 5 to Re ∗ grooves ∼ 0.8 10 5. In the case of the 40 mm sphere, the terminal velocity increases from 2.5 m/s to 3.4 m/s, corresponding to a drag reduction of 46%. For the rough sphere, a drag reduction is also observed but it is limited to a 20% gain. This difference in behavior is not yet understood. One may note that a rough surface destabilizes the boundary layer but also increases friction and dissipation. 00.2 0.4 0.6 0.8 0 0.5 1 1.5 2 2.5 3 3.5 time [s] velocity [m/s] (a) 30 mm −water smooth grooves rough 00.2 0.4 0.6 0.8 0 0.5 1 1.5 2 2.5 3 3.5 time [s] velocity [m/s] (b) 40 mm −water smooth grooves rough Fig. 6. ROUGH & GROOVED SPHERES: velocity time se-ries for grooved (dashed line) and rough spheres (dahs-dotted line) compared to smooth spheres (solid line) in pure water. The grooved surface induces an early drag crisis. Finally, we have observed that sanded spheres (rugos-ity of the order of 10 μm) with a diamater of 30 and 40 mm showed no change compared to smooth spheres. This in-dicates that surfaces modifications must exceed the thick-ness of the viscous sub-layer in order to produced measur-able effects on the dynamics. 3.4 Rough and grooved surfaces in polymer solution We now examine the changes in the above described be-havior when polymer is added to the water. We find that the two regimes of low and high concentration – section 3.2 – are affected differently by adding grooves to the sphere surface. Results for the grooved spheres are presented in fig. 7. At low concentration the shift of the drag crisis to lower Re due to polymer is exaggerated by adding grooves to the sphere; Re ∗ w is shifted even lower. Indeed, in a 5 ppm solution, the 30 mm grooved sphere experiences the drag crisis, whereas the same sphere in water as well as the smooth 30 mm sphere in 5 ppm solution do not. We find that the Re ∗ grooved+poly ∼ 6 10 4, a further gain of 20% com-pared to polymers alone. The same behavior is observed for the 40 mm grooved sphere at low polymer concentra-tion. At higher polymer concentration, the spheres behave identically with or without grooves. The rough sphere did not exhibit the same behavior. Rather, the surface roughness seems to suppress the drag crisis, in agreement with the observations of A. White . Our results are presented in Fig.8, for a 40 mm sphere. When the surface is smooth, one observes as before the shift in the drag crisis and a very large terminal velocity at low polymer concentration (10 ppm), as well as a re-duced drag at high concentration (200 ppm). However, for 6 Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres 00.2 0.4 0.6 0.8 0 0.5 1 1.5 2 2.5 3 3.5 time [s] velocity [m/s] (a) 30 mm low ppm smooth 0ppm smooth 5ppm grooved 5ppm smooth 10ppm grooved 10ppm 00.2 0.4 0.6 0.8 0 0.5 1 1.5 2 2.5 3 3.5 time [s] velocity [m/s] (c) 40 mm low ppm smooth 0ppm smooth 5ppm grooved 5ppm smooth 10ppm grooved 10ppm 00.2 0.4 0.6 0.8 0 0.5 1 1.5 2 2.5 3 3.5 time [s] velocity [m/s] (b) 30 mm high ppm smooth 0ppm smooth 100ppm smooth 200ppm grooved 100ppm grooved 200ppm 00.2 0.4 0.6 0.8 0 0.5 1 1.5 2 2.5 3 3.5 time [s] velocity [m/s] (d) 40 mm high ppm smooth 0ppm smooth 100ppm smooth 200ppm grooved 100ppm grooved 200ppm Fig. 7. GROOVED SPHERES & POLYMERS: At low poly-mer concentration (left column) adding grooves to the sphere induces an even earlier drag crisis compared to the smooth sphere. At high concentration (right column) grooves do not change the observed dynamics. 00.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.5 1 1.5 2 2.5 3 3.5 4 time [s] velocity [m/s] smooth 0ppm rough 0ppm smooth 10ppm rough 10ppm smooth 200ppm rough 200ppm Fig. 8. ROUGH SPHERE: Adding polymer causes nearly no change in the behavior of rough spheres apparently suppressing the drag crisis independent of the polymer concentration. the rough sphere all dynamical v(t) curves are very close. The rough spheres experience no further decrease in drag in the polymer solutions, compared to what is already in-duced by the surface roughness. In fact, there even may be a slight increase in drag (of the order of 5%) when the rough sphere falls in the water and polymer solution, at any concentration. 4 Discussion 4.1 Experimental summary We have conducted a series of experiments using precise and time resolved ultrasound velocity measurements to compare the behavior of rough and smooth steel spheres falling through water or dilute aqueous polymer solutions. Remarkably, we find that in low concentration polymer so-lutions (5 to 20 ppm) the drag crisis happens at a lower Reynolds number than in water. By adding a pattern of shallow grooves to the sphere surface, we shift the drag crisis to even lower Re . Adding grooves to a sphere in pure water also shifts the drag crisis to lower Re . On the other hand, a sphere roughened with a layer of 700 μmbeads glued to its surface never experiences a drag crisis, exhibiting nearly the same drag with or without polymers. The drag on a rough sphere is slightly less than that on a smooth sphere. For higher concentration polymer solu-tions (100 - 200 ppm) and smooth spheres the drag crisis is suppressed and replaced by a more gradual decrease in drag as Re is increased. This high concentration behavior is largely unchanged by adding grooves to the sphere sur-face. Our measurements seem to indicate that for low con-centrations the polymers are able to induce the transi-tion to turbulence but have little effect on the location of flow separation whether laminar or turbulent. That is, low polymer concentrations induce an early drag crisis, but do not greatly change the drag before and after the crisis, so that we may conclude that the location of the separation points have not been significantly changed. In fact, we have observed that the dynamical behavior v(t) is quite well modelled by a simple shift in the CD (Re ) curve, coupled to a simple dynamical equation in which only the drag force is accounted for. At high concentration and Re < Re ∗ w (i.e. laminar flow separation), drag is reduced, which implies that the sepa-ration location is pushed downstream on the sphere sur-face. On the other hand, for the case of turbulent flow separation ( Re > Re ∗ w ), θs apparently shifts upstream, which manifests as an increase in drag. Surface roughness is commonly understood to induce an early transition to boundary layer turbulence , which may explain the shift in Re ∗ w observed for the grooved sphere. On the other hand, it is difficult to explain in the same context our observation of rather weak drag reduc-tion and apparent suppression of the drag crisis for the rough sphere. Perhaps friction drag is significant in this case. Further investigation of this curious behavior is left for future work. 4.2 Drag crisis and normal stress difference In this section we try to rationalize the effect of the poly-mers on the observed drag reduction. We follow ideas pro-posed for drag reduction in pipes and much devloped since (see for instance ). Specifically, a change of con-formation of the polymer is argued to be the source of the modification of the drag crisis. As discussed above, the drag crisis is the result of the destabilization of the laminar boundary layer . At a critical Reynolds number the boundary layer becomes tur-bulent, shifting the separation line downstream and re-ducing accordingly the drag on the sphere. The polymer has a priori little effect on the parameters influencing this Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres 7 boundary layer transition, like the viscosity η. Indeed the polymer concentration is smaller than the overlap con-centration ξ⋆, separating the dilute from the semi-dilute regime – for the polymers under consideration, this is estimated to be ξ⋆ ≃ 1200 ppm. The shear viscosity of the polymer solutions in water, ηP , is related to the poly-mer density according to ηP = ηw (1 + 1 .49 ξ/ξ ⋆) with ηw the water viscosity . Thus for the low concentrations under consideration here, ξ ≪ ξ⋆, the viscosity is close to that of water η ∼ ηw .However this estimate assumes that the polymers’ struc-ture is not affected by the flow. Velocity gradients may locally induce a stretching of the polymer, which can be quantified by the Weissenberg number defined as W i =˙γτ R, with ˙ γ a deformation rate and τR the polymer re-laxation time. Typically, for W i < 1 the polymer is in a coil state, while for W i > 1 stetching occurs. Let us esti-mate W i in our geometry. The relaxation time is typically τR ∼ ηw R3 g /k B T , with Rg the radius of gyration of the polymer, Rg ∼ bN νF (b the monomer size and νF ≃ 3/5the Flory exponent). For the polymers under investiga-tion, τR ∼ 10 −4s. On the other hand the deformation rate is estimated as the shear rate in the boundary layer, i.e. ˙γ ∼ U/δ , with U the sphere velocity and δ ∼ √νD/U the typical thickness of the boundary layer ( a the diameter of the sphere). This gives W i ∼ U 3/2τR √νD , (2) which can be rewritten W i ∼ ( Re Re c )3/2 , (3) with a critical Reynolds number Re c defined as Re c = ( D2 ντ R )2/3 . (4) At Re c the polymer is thus expected to undergo - within the boundary layer - a coil-stretched transition and the drag will be accordingly be affected (as we discuss here-after). This point is confirmed experimentally in Fig. 9 where the drag coefficient is plotted versus the reduced Reynolds number Re/Re c : the ’drag-crisis’ is always found to occur for Re ∼ Re c for the different cases investigated. While a full rescaling is not expected in this plot, this figure points to the relevance of the Weissenberg number as a key parameter to the polymer induced drag crisis: it does show that the drag crisis transition with polymers, i.e. when the drag coefficient strongly decreases, occurs at a Reynolds number of the order of the critical Reynolds number, Re ∼ Re c. This indicates that the drag crisis criterion with polymers corresponds to W i ∼ 1, as also observed in earlier works. At this level, the previous discussion suggests that the polymer effect on the drag crisis is associated with a con-formation change. The question of the polymer-flow cou-pling however remains, and in particular the origin of an earlier destabilisation of the boundary layer. 0123456 Re/Re c 00.1 0.2 0.3 0.4 0.5 CD Fig. 9. Drag coefficient versus the reduced Reynolds number Re/Re c. The different symbols correspond to various concen-trations of polymers: ( ◦) 10ppm; (square) 25ppm; ( ⋄) 50ppm; (△) 100ppm; ( ▽) 200ppm. Lines are a guide to the eye. For a given polymer concentration, the different experimental points correspond to different size of the falling sphere (from left to right, D= 3 ,6,10 ,20 ,30 ,40 ,50 ,60mm). First, as the polymers in the boundary layer become stretched, most of the properties of the polymer solution in this region will change dramatically : the typical size of the polymer increases indeed from the radius of gyra-tion to the much larger contour length of the polymer, L ≫ Rg . This affects the relaxation time which now be-comes τR ∼ ηw L3/k B T , and therefore the viscosity which increases typically by a factor ( L/R g)3 = N 3(1 −νF ) ≫ 1. However, increasing the viscosity in the boundary layer amounts to a decrease in the local Reynolds number : this would lead to a re-stabilisation of the laminar boundary layer, an effect which is opposite to the experimental ob-servation. Another origin has therefore to be found. We suggest that the destabilisation of the boundary layer originates in a very large normal stress difference occuring when the polymer is in its strechted state. Normal stress difference is a non-newtonian effect which is commonly observed in polymeric solutions . This is known to lead for example to the Weissenberg (rod-climbing) effect. In our geometry, the normal stress difference is expected to be proportional to the square of the shear-rate according to ∆σ = σxx − σyy = ΨP ˙γ2 (5) with ΨP a transport coefficient ; σxx , σ yy are the normal components of the stress tensor in the x and y directions, with {x, y } local coordinates respectively parallel and per-pendicular to the sphere surface (curvature effects are ne-glected). Let us show that this term does destabilize the bound-ary layer. Classically, the boundary layer is destabilized by a negative pressure gradient term due to a decrease of the fluid velocity Ue(x) in the outer layer : −∇ Pe = ρU e(x)∇Ue(x), with Ue(x) the fluid velocity outside the boundary layer. A stability analysis of the boundary layer with such a pressure gradient leads to a destabilization at 8 Nicolas Lyotard et al.: Polymer and surface roughness effects on the drag crisis for falling spheres a reduced Reynolds number Re δ = U δ/ν ∼ 600, corre-sponding to Re ∼ 10 5. The normal stress difference adds a contribution to this term, leading to an supplementary effective pressure gradient term − ∇ Peff = ρU e(x)∇Ue(x) + ΨP ∇ ˙γ2, (6) where ˙ γ ≃ Ue(x)/δ (x) and δ(x) ≃ √νx/U e(x) the local thickness of the boundary layer. It is easy to verify that this supplementary contribution to the effective pressure gradient will be negative - and therefore destabilizing -, before the classical contribution ρU e(x)∇Ue(x). Moreover in the stretched state - for W i > 1-, one may verify that this contribution is dominant as compared to the classical one. The ratio ∆ between these two terms is of order ∆ ∼ ΨP ˙γ2/ρU 2 e . Using ΨP ∼ ηP τP with ηP the polymer con-tribution to the viscosity and τP the polymer relaxation time , one deduces ∆ ∼ U τ P /D ∼ (L/R g)3/√Re c (for Re = Re c). In our case, with Re c ∼ 10 5, ( L/R g)3 = N 3(1 −νF ) ∼ 2.10 5 (N ≃ 35 .10 3), one has ∆ ∼ 10 3 ≫ 1. This term thus leads to a strong destabilization as soon as the polymer is stretched. To summarize, for Re ≥ Re c the coil-stretched tran-sition occurs for the polymer in the boundary layer, and the existence of a normal stress difference induces a strong destabilization of the laminar boundary layer. This sce-nario gives the trends of the underlying mechanisms lead-ing to a shift of the drag crisis even for very small amounts of polymers. For the polymer additive to have an effect, the critical Reynolds number has to be lower than the critical Reynolds number for the drag crisis in pure water, Re ⋆w: Re c = ( D2 ντ R )2/3 < Re ⋆w. This provides a condition in terms of the size of the falling object but also a min-imal polymer weight (since τR ∝ N 3ν ). To go further, a more detailed stability analysis of the boundary layer with the supplementary normal stress difference is needed. We leave this point for further studies. References J.L. Lumley, Ann. Rev. Fluid Mech. 1, (1969) p. 367. 2. N.S. Berman, Ann. Rev. Fluid Mech. 10 , (1978) p. 47. 3. B.A. Toms, Proc. Int. Congress. Rheology Amsterdam (North-Holland, Amsterdam 1949) 2 p. 135. 4. M. Vlachogiannis and T.J. Hanratty, Exp. Fluids 36 , (2004) p. 685. 5. H.L. Petrie, S. Deutsch, T.A. Brungart, and A.A. Fontaine, Exp. Fluids 35 , (2003) p. 8. 6. E. Achenbach, J. Fluid Mech. 54 , (1972) p. 565. 7. S. Taneda, J. Fluid Mech. 85 , (1978) p. 187. 8. T. Maxworthy, Trans. ASME, J App Mech 36 , (1969) p. 598. 9. G. Constantinescu and K. Squires, Phys. Fluids 16 , (2004) p. 1449. 10. M.A. Ruszczycky, Nature 206 , (1965) p. 614. 11. D.A. White, Nature 212 , (1966) p. 277. 12. A. White, Nature 216 , (1967) p. 995. 13. K. Watanabe, H. Kui, and I. Motosu, Rheol. Acta 37 ,(1998) p. 328. 14. T. Sarpkaya, P.G. Rainey, and R.E. Kell, J. Fluid Mech. 57 , (1973) p. 177. 15. A. White, Nature 211 , (1966) p. 1390. 16. E. Achenbach, J. Fluid Mech. 65 , (1974) p. 113. 17. Y. Nakamura and Y. Tononari, J. Fluid Mech. 123 , (1982) p. 363. 18. J. Choi, W.-P. Jeon, and H. Choi, Phys. Fluids 18 , (2006) 041702. 19. Maxey and Riley paper. 20. N. Mordant and J.-F. Pinton, Eur. Phys. J. B 18 , (2000) p. 343 . 21. N. Mordant, P. Metz, O. Michel, and J.-F. Pinton, Rev. Sci. Instr. 76 (2005) 025105. 22. A. Acharya, R.A. Mashkelhar, and J. Ulbrecht, Rheol. Acta, 15 , (1976) p. 471. 23. H. Schlichting, Boundary layer theory (Mac-Graw Hill, New York, 1968). 24. M. Doi and S. Edwards, The theory of polymer dynamics (Clarendon press, Oxford, 1986) 25. J. L. Lumley, Symp. Math. 9, 315 (1972); and J. Polym. Sci., Part D: Macromol.Rev. 7, 263 (1973) 26. E. Balkovsky, A. Fouxon, and V. Lebedev, Phys. Rev. Lett. 84, 4765 (2000) 27. P.G. deGennes, J. Chem. Phys. 60 , (1974) 5030 28. K.R. Sreenivasan, C.M. White, J. Fluid Mech. 409 , 149 (2000). R. Benzi, V.S. L’vov, I. Procaccia, V. Tiberkevich V, Europhys. Lett. 68 (6), 825 (2004). A. Celani, S. Musacchio, D. Vincenzi, J. Stat. Phys. 118 (3-4), 531 (2005)
14977
https://www.collinsdictionary.com/us/dictionary/english/expert
English Italian Spanish Portuguese Hindi Chinese Korean Japanese More English Italiano Português 한국어 简体中文 Deutsch Español हिंदी 日本語 English French German Italian Spanish Portuguese Hindi Chinese Korean Japanese Definitions Summary Synonyms Sentences Pronunciation Collocations Conjugations Grammar Credits × Definition of 'expert' COBUILD frequency band expert (ɛkspɜrt ) Word forms: plural experts 1. countable noun An expert is a person who is very skilled at doing something or who knows a lot about a particular subject. ...a yoga expert. Synonyms: specialist, authority, professional, master More Synonyms of expert 2. adjective Someone who is expert at doing something is very skilled at it. The Japanese are expert at lowering manufacturing costs. expertly adverb [ADV with v] Shopkeepers expertly rolled spices up in bay leaves. 3. adjective [ADJ n] If you say that someone has expert hands or an expert eye, you mean that they are very skillful or experienced in using their hands or eyes for a particular purpose. Harvey cured the pain with his own expert hands. 4. adjective [ADJ n] Expert advice or help is given by someone who has studied a subject thoroughly or who is very skilled at a particular job. We'll need an expert opinion. More Synonyms of expert Collins COBUILD Advanced Learner’s Dictionary. Copyright © HarperCollins Publishers American English pronunciation ! It seems that your browser is blocking this video content. To access it, add this site to the exceptions or modify your security settings, then refresh this page. British English pronunciation ! It seems that your browser is blocking this video content. To access it, add this site to the exceptions or modify your security settings, then refresh this page. You may also like English Quiz ConfusablesSynonyms of 'expert'Language Lover's BlogFrench Translation of 'expert'Translate your textPronunciation PlaylistsWord of the day: 'bibliopegy'Spanish Translation of 'expert'English GrammarCollins AppsEnglish Quiz ConfusablesSynonyms of 'expert'Language Lover's BlogFrench Translation of 'expert'Translate your textPronunciation PlaylistsWord of the day: 'bibliopegy'Spanish Translation of 'expert'English GrammarCollins AppsEnglish Quiz ConfusablesSynonyms of 'expert'Language Lover's Blog COBUILD frequency band expert in American English (ˈɛkspərt ; for adj., also ɛksˈpɜrt ; ɪkˈspɜrt ) adjective 1. very skillful; having much training and knowledge in some special field 2. of or from an expert an expert opinion noun 3. a person who is very skillful or highly trained and informed in some special field Webster’s New World College Dictionary, 4th Edition. Copyright © 2010 by Houghton Mifflin Harcourt. All rights reserved. Derived forms expertly (ˈexpertly) adverb expertness (ˈexpertness) noun Word origin ME < OFr < L expertus, pp. of experiri: see peril COBUILD frequency band expert in American English ( noun & verb ˈekspɜːrt, adjective ˈekspɜːrt, ɪkˈspɜːrt) noun 1. a person who has special skill or knowledge in some particular field; specialist; authority a language expert 2. Military a. the highest rating in rifle marksmanship, above that of marksman and sharpshooter b. a person who has achieved such a rating adjective 3. (often fol. by in or at) possessing special skill or knowledge; trained by practice; skillful or skilled an expert driver to be expert at driving a car 4. pertaining to, coming from, or characteristic of an expert expert work expert advice transitive verb 5. to act as an expert for SYNONYMS 1. connoisseur, master. 3. experienced, proficient, dexterous. See skillful.ANTONYMS 3. unskillful. Most material © 2005, 1997, 1991 by Penguin Random House LLC. Modified entries © 2019 by Penguin Random House LLC and HarperCollins Publishers Ltd Derived forms expertly adverb expertness noun Word origin [1325–75; ME (adj.) ‹ L expertus, ptp. of experīrī to try, experience] COBUILD frequency band expert in British English (ˈɛkspɜːt ) noun 1. a person who has extensive skill or knowledge in a particular field adjective 2. skilful or knowledgeable 3. of, involving, or done by an expert an expert job Collins English Dictionary. Copyright © HarperCollins Publishers Derived forms expertly (ˈexpertly) adverb expertness (ˈexpertness) noun Word origin C14: from Latin expertus known by experience, from experīrī to test; see experience Examples of 'expert' in a sentence expert These examples have been automatically selected and may contain sensitive content that does not reflect the opinions or policies of Collins, or its parent company HarperCollins. We welcome feedback: report an example sentence to the Collins team. Read more… We asked two experts to share their thoughts on where the priorities should lie. The Guardian (2015) But constitutional experts think the high court challenge is unlikely to succeed. The Guardian (2016) The experts say business would suffer. The Guardian (2016) Can she name any educational experts that back her grammar school plans? The Guardian (2016) Experts are now meeting to assess the damage. The Guardian (2019) If unsure, we seek the advice of those we consider to be experts in the field of medicine. The Sun (2016) All it takes is an expert eye. Times, Sunday Times (2010) The resources below provide expert advice and practical tips for overcoming issues in group dynamics. Christianity Today (2000) It comes as experts warn unemployment will soar to three million over the next two years. The Sun (2008) To the expert eye, the right plant can be extremely valuable. Times, Sunday Times (2014) Quotations An expert is a man who has made all the mistakes which can be made in a very narrow fieldNiels Henrik David Bohr An expert is one who knows more and more about less and lessNicholas Murray Butler An expert is someone who knows some of the worst mistakes that can be made in his subject and who manages to avoid themWerner HeisenbergDer Teil und das Ganze Related word partners expert appoint an expert art expert baffle experts become an expert computer expert expert assistance expert care expert help expert judge expert judgment expert knowledge expert practitioner expert prediction expert skills expert support expert testimony expert tips experts are divided experts predict experts question experts recommend experts speculate experts stress experts suggest experts warn field experts finance expert financial expert fitness expert foremost expert handwriting expert health expert hire an expert independent expert industry expert intelligence expert legal expert medical expert military expert outside expert property expert renowned expert resident expert respected expert safety expert security expert so-called expert technical expert top expert wildlife expert Trends of expert View usage over: Source: Google Books Ngram Viewer In other languages expert British English: expert /ˈɛkspɜːt/ NOUN An expert is a person who is very skilled at doing something or who knows a lot about a particular subject. Our team of experts will be on hand to offer help and advice between 12 noon and 7pm daily. American English: expert /ˈɛkspɜrt/ Arabic: خَبِير Brazilian Portuguese: perito Chinese: 专家 Croatian: stručnjak Czech: odborník Danish: ekspert Dutch: expert European Spanish: experto Finnish: asiantuntija French: expert German: Experte Greek: ειδικός Italian: esperto Japanese: 専門家 Korean: 전문가 Norwegian: ekspert Polish: fachowiec European Portuguese: perito Romanian: specialist Russian: эксперт Spanish: experto Swedish: expert Thai: ผู้เชี่ยวชาญ Turkish: uzman Ukrainian: експерт Vietnamese: chuyên gia Translate your text for free Browse alphabetically expert experimentative experimenter effect experimentist expert expert advice expert advisor expert appraisal All ENGLISH words that begin with 'E' Related terms of expert art expert expert care expert help expert tips top expert View more related words ## Wordle Helper ## Scrabble Tools Quick word challenge Quiz Review Question: 1 - Score: 0 / 5 FARM ANIMALS What is this an image of? sheepturkeybullhorse FARM ANIMALS Drag the correct answer into the box. goose sheep cow alpaca FARM ANIMALS What is this an image of? cowchickendonkeybull FARM ANIMALS What is this an image of? sheepdogdonkeybullcow FARM ANIMALS What is this an image of? pigsheepalpacaduck Your score: New collocations added to dictionary Collocations are words that are often used together and are brilliant at providing natural sounding language for your speech and writing. Read more Study guides for every stage of your learning journey Whether you're in search of a crossword puzzle, a detailed guide to tying knots, or tips on writing the perfect college essay, Harper Reference has you covered for all your study needs. Read more Updating our Usage There are many diverse influences on the way that English is used across the world today. We look at some of the ways in which the language is changing. Read our series of blogs to find out more. Read more Area 51, Starship, and Harvest Moon: September’s Words in the News I’m sure a lot of people would agree that we live in strange times. But do they have to be so strange that Area 51 is making headlines? And what’s this about fish the look like aliens. September’s Words in the News explain all. Read more What is an adjective? Learn how adjectives enhance nouns by specifying attributes, their uses in sentences, and the difference between attributive and predicative forms. Read more Scrabble: Six-letter stems Six-letter stems in Scrabble provide players with a great foundation for building longer and higher-scoring words. Here are some you can learn. Read more Sympathy or empathy? Discover the key differences between sympathy and empathy, including their meanings, usage, and emotional involvement. Read more Scrabble: Should you change your letters? If you have a bad bunch of Scrabble letters you can change your letters. We give a few tips on how to best go about this. Read more Collins English Dictionary Apps Download our English Dictionary apps - available for both iOS and Android. Read more Collins Dictionaries for Schools Our new online dictionaries for schools provide a safe and appropriate environment for children. And best of all it's ad free, so sign up now and start using at home or in the classroom. Read more Word lists We have almost 200 lists of words from topics as varied as types of butterflies, jackets, currencies, vegetables and knots! Amaze your friends with your new-found knowledge! Read more Create an account and sign in to access this FREE content Register now or log in to access × Register for free on collinsdictionary.com Unlock this page by registering for free on collinsdictionary.com Access the entire site, including our language quizzes. Customize your language settings. (Unregistered users can only access the International English interface for some pages.) Submit new words and phrases to the dictionary. Benefit from an increased character limit in our Translator tool. Receive our weekly newsletter with the latest news, exclusive content, and offers. Be the first to enjoy new tools and features. It is easy and completely free! Already registered? Log in here Collins TRANSLATOR LANGUAGE English English Dictionary Thesaurus Word Lists Grammar English Easy Learning Grammar English Grammar in Spanish Grammar Patterns English Usage Teaching Resources Video Guides Conjugations Sentences Video Learn English Video pronunciations Build your vocabulary Quiz English grammar English collocations English confusables English idioms English images English usage English synonyms Thematic word lists French English to French French to English Grammar Pronunciation Guide Conjugations Sentences Video Build your vocabulary Quiz French confusables French images German English to German German to English Grammar Conjugations Sentences Video Build your vocabulary Quiz German confusables German images Italian English to Italian Italian to English Grammar Conjugations Sentences Video Build your vocabulary Quiz Italian confusables Italian images Spanish English to Spanish Spanish to English Grammar English Grammar in Spanish Pronunciation Guide Conjugations Sentences Video Build your vocabulary Spanish grammar Portuguese English to Portuguese Portuguese to English Grammar Conjugations Video Build your vocabulary Hindi English to Hindi Hindi to English Video Build your vocabulary Chinese English to Simplified Simplified to English English to Traditional Traditional to English Quiz Mandarin Chinese confusables Mandarin Chinese images Traditional Chinese confusables Traditional Chinese images Video Build your vocabulary Korean English to Korean Korean to English Video Build your vocabulary Japanese English to Japanese Japanese to English Video Build your vocabulary GAMES Quiz English grammar English collocations English confusables English idioms English images English usage English synonyms Thematic word lists French French images German grammar German images Italian Italian images Mandarin Chinese Traditional Chinese Spanish Wordle Helper Collins Conundrum SCHOOLS School Home Primary School Secondary School BLOG RESOURCES Collins Word of the Day Paul Noble Method Word of the Year Collins API By clicking “Accept All Cookies”, you agree to the storing of cookies on your device to enhance site navigation, analyze site usage, and assist in our marketing efforts.
14978
https://www.youtube.com/watch?v=c7A45ppPlXk
Completing the Square and Vertex Form of Quadratic Equations Patrick J 1400000 subscribers 2726 likes Description 347889 views Posted: 13 May 2011 Thanks to all of you who support me on Patreon. You da real mvps! $1 per month helps!! :) !! Completing the Square and Vertex Form of Quadratic Equations - How to complete the square and vertex form of quadratic equations is explained. For more free math videos, visit and click on the 'Free Video Lessons' tab on the left! Just Math Tutoring 381 comments Transcript: Introduction all right in this video we're going to talk about quadratic equations and we're going to talk about vertex form of quadratic equations and also completing the square if you're given a quadratic equation in standard form and again remember a quadratic equation is a polom where the highest power on the x is a two so x^2 + 3x + 8 would be a quadratic equation um again the highest power on this one is a two notice also the X has a power of one so this is in fact a quadratic equation in standard form what we call standard form there's basically no parentheses and everything's multiplied out likewise y = 3x^2 + PK X minus 42 would also be a quadratic equation in standard form so what we want to do is talk about vertex form and vertex form is when you write a quadratic equation as the following there's some number a out front x - h^ 2ar + K so this is going to be what's called vertex form and the reason why they call this vertex form is because well you can read the vertex off from this when you write it this way so this quadratic equation is going to have a Vertex at the point H comma K notice even though inside it says negative H you basically use the opposite sign so if it was -3 the vertex the x coordinate would be a positive3 but what whatever the K value is if it's positive or negative you use that same positive or negative value so that doesn't change and the parabola is going to open upwards if this value a is greater than zero and it's going to open downwards if this value of a is less than zero Example so for example suppose we have the following quadratic equation already in vertex form and we'll just graph it real quick here so suppose it looks like y equals 2 X - 1 qu^ 2ar plus I don't know how about three okay this thing is going to have a vertex at the point positive 1 comma 3 and it's going to open upwards because the a value which is two well that's certainly greater than zero and that again tells you that it opens upwards so when we go to graph our Parabola again this is why it's called vertex form because you're given the vertex kind of for free when it's written like this so you go over one unit up one two three units that's going to be where the vertex is and remember the vertex is where the parabola either bottoms out or tops out and then it's just going to open upwards opens upwards and it has that familiar U shape to it so again not the best graph in the world but um a rough idea as to what this Parabola would look like also whatever the X coordinate is notice that this Parabola is symmetric about that line so we would say that the line x = 1 would be the axis of symmetry for this Parabola okay maybe let's graph one more and then we'll talk about putting these things into vertex form by completing the square so suppose this one it was y = 3 x + 2^ 2ar + 4 okay again for this one now the vertex is going to be at the point again you take the opposite sign so -2 comma 4 and it's going to open downwards because of the -3 so the only thing that matters in terms of an opening up or down is the number out front and the only thing that affects the vertex are these two numbers outside okay so the vertex and this tells you it opens up or down this number out front all right so again we can graph this without too much trouble it says you go over to NE -2 up 1 2 3 4 and that's where our vertex is going to be B and in this case again because of the -3 it's opening down so we'll just make this thing go down let's see you could even figure out a little bit better for example where it's going to hit the Y AIS and maybe let's do that so it's going to hit the Y AIS when the x coordinate is zero so if you plug zero into your quadratic equation we'll get -3 2 2^ 2 + 4 well 2^ 2ar is 4 -3 is -12 -12 + 4 is8 so this thing should actually be Crossing Way far down here atg8 so my graph is pretty sloppy here so it should cross way down there and again that's supposed to be a u certainly supposed to have this Parabola shape again forgive my Artistry um I'm not the best artist in the world and I am being a little a little sloppy on this one so I apologize for that um but again that's the basic idea that's the vertex the axis of symmetry here would be the line x = -2 and that's it so that's the good thing again about having these quadratic equations in vertex form is that you get again the vertex for free so again sorry about my sloppy graph this should be a parabola so let's talk about completing the square now to put a quadratic equation into vertex form and I'm going Completing Square to do two of these but I'm not going to graph them and completing the square ends up being a very useful trick um for those of you taking calculus maybe you've forgotten and you're doing partial fractions and you need a little refresher well here's how you complete the square um in that setting you're using it again to integrate so we're not graphing but the tricks are the same so suppose we're given this quadratic equation Y = x^2 + 4x - 3 and we want to put this into vertex form okay so it's not a it's it's definitely a quadratic equation but it's certainly not invertex form so the trick is for completing the square so completing the square what you do is you basically put the X terms in a set of parentheses okay that's easy enough and what you need to do is you have to make sure that the coefficient on the x is a one and in this case it is if not you have to factor that number out and in the next example I'll do one where the coefficient on the X squ is not a one all right so then what you do this is kind of the the part where everything happens so I'm going to leave myself a little space here and just rewrite everything though as it is whatever the number is in front of the X term probably should have picked a different number here but that's okay you take 1/2 of that number so I'm going to take 1/2 of the number four that gives me two and then you take this number two and you square it so 2^ squar is positive4 and that's what's going to go back inside of the parentheses okay now we have to be careful though because if we were to multiply if we were to get rid of the parentheses we would have an X2 a + 4x a + 4 and a minus 3 but if you look back at the original there was no plus4 in there okay everything else was in there but we've basically just thrown a plus4 into this problem out of nowhere so to keep the equation balanced you can either think about adding four to the left side or equivalently you could just simply subtract the four from the right side okay so this is the tricky part to completing the square that kind of throws people off again so you take half of the number in front of the X you square that number you throw it back in the parentheses and this is a special case too because notice there's no coefficient out in front of the parenthesis well there is it's a one but when the coefficient is a one Whatever number you add inside you also subtract it and the point is is now you can actually write x^2 + 4x + 4 as x + 2 x + 2 well -3 - 4 is -7 and I can now write this as x + 2^ 2ar - 7 and this is now in vertex form okay so this is the trick completing the square will help you put things into vertex form okay so notice how it looks like this form we had at the very beginning it says you need a number out front there's an X plus or minus some number squared and then some number hanging out well that's what we have here you could think about the a value in this case as being one and now it looks a lot like that okay again I'm not going to graph it but the vertex of this quad quadratic equation would be in this case at -2 you take the opposite sign -7 let's do one more completing the square problem here where the coefficients aren't quite as nice and again I am going to make my numbers work out sort of nicely you may end up with fractions that you have to square you know all kinds of weird things can happen with the numbers but that's okay the procedure stays the same so suppose we have 2x^2 + 12x minus 4 okay I'm going to do the same thing as before I'm going to group my X terms together so the 2x^2 + 12x are going to go together my Min -4 is just hanging out we want the coefficient on the x s to be one well again it's not a one in this case whatever number that is you have to factor that out front you only Factor it out of the X terms as well -4 is not going to change it's going to stay -4 well if I factor a two out I'll have x^2 + 6x inside of the parentheses because again 2x 2 x^2 is 2x^2 2 POS 6X is pos2 12 x again my minus 4 still hanging out so here comes the completing the square portion now it's like what we did before so I have my x^2 + 6x I'm going to throw a number inside of the parentheses that wasn't there and again this one's going to things are going to change a little bit because of this coefficient of the two so whatever number is in front of the X we take one half of that so 1 half of six is three then we take that number and square it and that's what goes back inside of the parenthesis so I'm going to throw in a plus 9 that was not there before okay now this is where we also have to be a little careful if you just subtract nine that's not going to be correct in this case again it all depends on this coefficient out front so let's multiply out what we have right now compare it to what we started with so if you multiply 2 x^2 well you'll have your 2x^2 2 6X you'll get your positive 12x which is good I've already got a -4 which is good so the extra thing is going to come when I multiply the two and this positive 9 that wasn't there before so if I multiply 2 POS 9 I'm going to get POS 18 but there's no plus 18 in this original problem my original equation so to get rid of the plus 18 that's going to result I need to now subtract 18 okay so the moral of the story is whoops sorry the moral of the story is whatever coefficient is out front of the parentheses multiply that by the new thing that you threw in there and then you're going to have to subtract that away okay so again play with this multiply it back out if this step is a little confusing to you and see that in fact you do get the original thing that you had to begin with because that's all we're doing we want the same thing back we're just changing how it looks a little bit all right so now we can write this in vertex form so my two is out front x^2 + 6 x + 9 well how does that factor I think it factors as x + 3 x + 3 and the trick is whatever when you take half of the number whatever that is in our case it was positive3 that's what's going to go inside the parentheses so if you end up with weird fractions you can basically just plug that number right in and that's how it's going to factor so just a little shortcut so x + 3^ 2ar -4 - 18 is 22 and now we have written our original quadratic equation that was in standard form we now have it in vertex form so one last time the vertex of this Parabola would be at -3 comma -22 and it would open upwards because of our a value of positive2 okay so again you know I picked Rel relatively nice numbers in these examples hopefully to make it clear if if originally we had a say an 11x well when you factor 2 out of 11 you would have 11 over two then you'd have to square that and then things would start getting pretty tedious with fractions but again the procedure would be exactly the same so again this is the trick on completing the square I hope it makes a little bit of sense I know it can be a little tricky but follow this little recipe and again with the exception of having some weird numbers weird fractions this procedure will work every time so good luck I hope this makes some sense again I know that completing the square is confusing for people if it's been a while since you've done it or certainly if it's the first time you've seen it so um don't get discouraged just keep practicing and again just look for the procedure and just try to recognize that hey I'm really doing the same thing every time here so good luck
14979
https://www.sciencedirect.com/science/article/pii/S1674987117301305
Skip to article My account Sign in View PDF Geoscience Frontiers Volume 9, Issue 4, July 2018, Pages 1117-1153 Research Paper Origins of building blocks of life: A review Author links open overlay panel, rights and content Under a Creative Commons license Open access Highlights €¢ This review includes the whole stage of chemical evolution of life. €¢ The availabilities of P and N on the early Earth were discussed. €¢ Geochemical and geological settings favorable for the life's origin are proposed. Abstract How and where did life on Earth originate? To date, various environments have been proposed as plausible sites for the origin of life. However, discussions have focused on a limited stage of chemical evolution, or emergence of a specific chemical function of proto-biological systems. It remains unclear what geochemical situations could drive all the stages of chemical evolution, ranging from condensation of simple inorganic compounds to the emergence of self-sustaining systems that were evolvable into modern biological ones. In this review, we summarize reported experimental and theoretical findings for prebiotic chemistry relevant to this topic, including availability of biologically essential elements (N and P) on the Hadean Earth, abiotic synthesis of life's building blocks (amino acids, peptides, ribose, nucleobases, fatty acids, nucleotides, and oligonucleotides), their polymerizations to bio-macromolecules (peptides and oligonucleotides), and emergence of biological functions of replication and compartmentalization. It is indicated from the overviews that completion of the chemical evolution requires at least eight reaction conditions of (1) reductive gas phase, (2) alkaline pH, (3) freezing temperature, (4) fresh water, (5) dry/dry-wet cycle, (6) coupling with high energy reactions, (7) heating-cooling cycle in water, and (8) extraterrestrial input of life's building blocks and reactive nutrients. The necessity of these mutually exclusive conditions clearly indicates that life's origin did not occur at a single setting; rather, it required highly diverse and dynamic environments that were connected with each other to allow intra-transportation of reaction products and reactants through fluid circulation. Future experimental research that mimics the conditions of the proposed model are expected to provide further constraints on the processes and mechanisms for the origin of life. Graphical abstract Keywords Astrobiology Biochemistry Chemical evolution Extraterrestrial life Hadean Earth Hydrothermal systems Cited by (0) : Peer-review under responsibility of China University of Geosciences (Beijing). © 2018 China University of Geosciences (Beijing) and Peking University. Production and hosting by Elsevier B.V.
14980
https://ccgenetics.github.io/guidelines-genetic-diversity-indicators/docs/2_Theoretical_background/Ne-500.html
Ne 500 indicator | Genetic Diversity Indicators Skip to main content Genetic Diversity Indicators Overview Background Genetic diversity and indicators Ne 500 indicator Populations maintained indicator DNA-based monitoring indicator What is a population (a first, simple answer) Quickstart Species list How to - guides How to define populations How to establish a reference period Extinct and extant populations How to estimate population sizes How to get the Ne:Nc ratio and when the ratio CAN'T be used How to account for uncertainty Example assessments Data collection Data sources Web-tool for data collection (KoboToolBox) KoboToolBox help Recommended minimum data and metadata variables for calculating genetic diversity indicators Calculations and reporting Equations and example calculations Calculating country indicator values Measuring temporal change R scripts for calculation and reporting Glossary References and other resources Contact & How to cite This site uses Just the Docs, a documentation theme for Jekyll. Docs Repository Background Ne 500 indicator Ne 500 indicator Effective population size (Ne) is a well-accepted metric for measuring the rate of loss of genetic diversity within populations. The effective population size is the genetic complement of the population’s census size; where census size influences ecological aspects of a population, genetic factors of a population are influenced by Ne. The Ne controls the rates of both random (genetic drift) allele frequency change and allele loss, as well as and random increases in inbreeding. When Ne is below 500, genetic diversity loss accelerates through genetic drift and increases in inbreeding (mating among related individuals). Why does this matter? Let’s consider an example of overharvesting fish populations, which has reduced population sizes and contributed to the loss of unique alleles and genes over time. Coupled with climate change, this has severely impacted the ability of overharvested species’ to adapt and recover. In the case of North Atlantic cod (Gadus morhua), a supergene associated with migratory behavior has been lost from several populations (Matschiner et al., 2022). This could change the species’ distribution, altering marine ecosystems, and eventually lead to the extinction of the species. As explained below (see figure), an Ne above 500 (which typically corresponds to a census population size of 5,000) will maintain genetic diversity within populations for a long time. In other words, Ne 500 is a “sufficient” size to prevent loss of genetic diversity within populations. See What is a population for a background on how to define a population in the context of the genetic diversity indicators. Ne below 500 is the approximate point when populations are less able to adapt via natural selection, and start to experience genetic loss. We note that Ne below 50 will lead to very rapid increases in inbreeding, losses of fitness, and changes in the genetic composition of populations, causing high risk of extinction in the short-term. The Ne 500 and Ne 50 thresholds are useful to conservation management and recovery programs (Mace et al 2008). Because of the need to maintain genetic diversity and adaptive capacity for the long term, the Ne 500 indicator is a key genetic indicator. The relationship between effective population size and genetic diversity. Left: The effective population size (Ne) is represented with the lines on the plot, with time (generations) on the x axis and genetic diversity on y axis (adapted from Willi et al. 2021). Small populations lose genetic diversity over time more rapidly than large populations, often leading to inbreeding depression and ultimately the complete collapse of the population (extinction). Populations above Ne 500 are capable of maintaining genetic diversity into the long-term. Right: The Ne 500 measures the proportion of populations large enough to avoid the loss of genetic diversity. The Ne 500 indicator is derived by (a) comparing the effective population size (Ne) of each population to a critical threshold, 500, (b) counting the number of populations above the threshold (and therefor,e maintaining genetic diversity), and (c) dividing this number by the total number of existing populations since the reporting started to be done (see section Measuring temporal change for why). For example: a species has 5 populations, 3 of which are above Ne 500. The indicator value for this species would be 3/5 = 0.6. For a detailed explanation on calculating the Ne 500 indicator across multiple species see Hoban et al. (2023b) and Hoban et al. 2024, and the Calculations section of these guidelines. The values for the Ne 500 indicator range between 0 and 1, with 0 indicating all populations have Ne < 500 (no populations are large enough to sustain genetic diversity) and 1 indicating that all populations have Ne > 500 (all populations are large enough to sustain genetic diversity). The Ne 500 indicator is likely the best evidence of genetic status and risk of genetic erosion when DNA sequencing is not available (the case for most species globally). It is feasible and scalable for many species per country. This indicator is feasible and scalable for many species per country (see Hoban et al. 2024). How to get the Ne? The Ne of a population can be estimated with statistical methods and DNA sequence data, when that is available. But for nearly all species, DNA data is not yet available. For many species, it is sufficient and appropriate to obtain the Ne by using a simple transformation of census size (Nc, or the number of mature individuals) and an Ne to Nc ratio. An Ne:Nc conversion ratio of 0.1 is generally a conservative and suitable ratio to calculate Ne (although typical ratios may range from 0.1 to about 0.3 in many vertebrates and plants - this is a generalization). By applying a 0.1 Ne:Nc ratio, Ne = 500 translates to a threshold of Nc = 5,000 mature individuals. How to Estimate Ne. The effective population size (Ne) can be obtained by analyzing the genetic diversity of a sample of the population, using statistical methods and DNA sequence data. If genetic data is not available, Ne can be obtained from a simple transformation of the census size (Nc, the number of mature individuals) through an Ne:Nc ratio. See section How to estimate Ne? for more details on how to obtain Ne data from genetic data or census sizes, and see the Equations section for more details on calculating the Ne 500 indicator. Previous: Genetic diveristy and indicatorsNext: Populations maintained indicator Back to top
14981
http://www.chinaheritagequarterly.org/scholarship.php?searchterm=018_oldlibraries.inc&issue=018
CHINA HERITAGE QUARTERLYChina Heritage Project, The Australian National UniversityISSN 1833-8461 No. 18, June 2009 HomeEditorialFeaturesArticlesNew ScholarshipBehind the ScenesLinks Search NEW SCHOLARSHIP Lingering Traces: In Search of China's Old Libraries | China Heritage Quarterly Lingering Traces: In Search of China's Old Libraries Wei Li 韋力 Introduced and translated by Duncan Campbell And thus moved by the extent to which, in the end, all material objects are prone to ruin and that even those made of metal or of stone can not last forever, despite their solidity, I conceived of the desire to make a record of all the inscriptions left to us from former ages in order that they may be preserved. (因感夫物之終弊雖金石之堅不能以自久於是始欲集錄前世之遺文而藏之) —Ouyang Xiu 歐陽修 (1007-72), 'On the Tang Dynasty Stele of the Hall of the Confucius Temple' (Tang Kongzimiao tang bei 唐孔子廟堂碑), Colophons on the Collected Inscriptions of the Past (Ji gu lu bawei 集古錄跋尾) Fig. 1 An illustration of the translation of the sutras, found attached to a Tangut script version of the Xianzai xianjie qian fo mingjing 現在賢劫千佛名經, dated 1068-85. From Bastions of Civilization: A History and Exploration of Ancient Book Conservation [Wenming de shouwang: guji baohu de lishi yu tansuo 文明的守望:古籍保護的歷史輿探索], Beijing: Beijing Tushuguan Chubanshe, 2006, p.100. Wei Li 韋力 (b.1964), a native of Tianjin, is both a noted contemporary private Chinese book collector and an author of considerable eloquence. An inveterate collector of things from an early age, including grain ration coupons (liangpiao 粮票), his interest in book collecting was quickened in 1981 when he read in the paper an account of a book-buying excursion to Hong Kong made in the 1950s by the bibliophile Zheng Zhenduo 鄭振鐸 (1898-1958). As the result of years of painstaking effort, Wei Li's own library in Beijing, the Studio of the Angelica and the Orchid (Zhilanzhai 芷蘭齋), now comprises more than 8000 titles in 70,000 fascicles and contains examples of imprints and manuscripts dating from the Tang dynasty onwards. Wei Li is wary of easy rationalizations of his motivations; he collects books, as his name card reads, because he is a 'Book Lover' (cangshu aihaozhe 藏書愛好者) and he is fond of citing a line much used over the centuries but which derives from Zhang Yanyuan's 張彥遠 (ca. 815-after 875) Record of Famous Paintings Down Through the Ages (Lidai minghua ji 歷代名畫記) to the effect that: 'Without engaging in useless pursuits how ever is one to discharge this life of limitation?' (Bu zuo wuyi zhi shi, he yi qian you yai zhi sheng 不做無益之事何以遣有涯之生). At the same time, although he has never sold a book that he has added to his collection, he is modest in his long-term aspirations as a bibliophile. The legend of his collector's seal, for instance, reads not 'Keep Hold of this Book Forevermore' or some such, but rather 'This Book Was Once to be Found in Wei Li's Home' (Ceng zai Wei Li jia 曾在韋力家). 'I am no more than a Library Clerk' (Diancangli 典藏吏), he says of himself, 'just another link in that chain whereby books are handed down through the ages'. In this respect, he explicitly disassociates himself with the more proximate, post-1949, traditions of book collecting that Zheng Zhenduo is perhaps most representative of—the 'Red Brigade' (Hongse cangshujia 紅色藏書家)—and seeks rather to re-establish a connection with the mainstream pre-1919 tradition; to him the book is an object of preservation rather than of criticism. Fig. 2 The legend of this Collector's Seal, that of Qi Chenghan (1563-1628), reads: 'A treasure to be passed on through the generations by my sons and grandsons.' From Lin Shenqing 林申清, ed., Seals of Famous Book Collectors of the Ming and Qing Dynasties [Ming Qing zhuming cangshujia cangshuyin 明清著名藏書家藏書印], Beijing: Beijing Tushuguan Chubanshe, 2000, p.44. As an author (and executive editor now of the journal Book Collector [Cangshujia 藏書家]), Wei Li seeks to disseminate some of the immense knowledge of books and publishing that he has so painstakingly and incrementally acquired over the past twenty years. His particular research interests are focused on such areas as stele inscriptions, the history of East-West print cultural exchange, bibliographical aspects of the classical exegesis of the Qing dynasty, moveable type, the study of annotated critical editions, and research into catalogues of collector's seals. Of critical and abiding interest is his engagement, as he works on the catalogue to his own library, in that traditional form of bibliographical note, the Tiba 題跋 or, more recently, Shuhua 書話, that once served to underpin all scholarship in China. As well as being obsessed with the book as a material object, however, in recent years Wei Li has also become increasingly concerned with the traditional physical context of the book and its preservation and circulation—the 'Cangshulou' 藏書樓—and the fast disappearing remains of these institutions. In his book Lingering Traces: A Search for China's Old Libraries, sections from which are translated below, Wei Li sets out to find some of the remains of these libraries in the hope both of reminding people of their cultural importance and of alerting everyone to the need to preserve (indeed, restore) these sites. His is a compelling but elegiac voice. Wei Li's book is divided into eleven sections, detailing the author's excursions to Zhejiang, Changshu, Yangzhou, Zhenjiang, Suzhou, Ningbo, Nanjing, Hunan, Guangdong, and Shandong in search of remains of the old private libraries. For present purposes, I have translated both Wei Li's 'Introduction' (Yuanqi 緣起) to the book and a sample of the entries from his trip to Changshu, a region with strong and particular traditions of book collecting, as captured, most famously, in Sun Congtian's 孫從添 (1692-1767) Bookman's Manual (Cangshu jiyao 藏書紀要). Further translations will follow later this year in a special issue of China Heritage Quarterly devoted to the history of the private libraries of China. Finally, I would like to thank both the editor of China Heritage Quarterly, Geremie Barmé, for an invitation to include the following pages from Wei Li's book in this issue of his journal, and for his invaluable help in preparing them for publication, and Wei Li himself for his kind permission to do so.—Duncan Campbell. Introduction: As does the fish know best the temperature of the water in which it swims, so too does a book collector of long standing (such as myself) appreciate fully the ever alternating joys and sorrows of his quest. Whenever the day has stilled and the night closes in around me, I take up one or other of the volumes from my collection. The collector's seals that I find gathered on the title page, one below or beside the other, tell a silent tale of the fate of the book at hand as it has been passed on from one collector to another, tell of the hardships endured in its collection, tell of the mixed joy and sorrow incumbent on the fact that just as collections are assembled, so too will they inevitably be dispersed. The transmission of a people's cultural traditions, the summing up of the lessons to be gained from historical experience, the recording of the anecdotes of the past—all these are reliant entirely upon the book. And yet, although it is said that 'paper lasts a thousand years' (zhi shou qiannian 紙壽千年), of our cultural heritage that which is most prone to destruction, that which proves most difficult to preserve for future ages, is the book. Often we read reports of remarkable discoveries (be they of gold or porcelain or stone) newly unearthed; only seldom, however, do we hear of the discovery in these archaeological digs of a book printed on paper. Little wonder then that in their wisdom the ancients spoke of the Four Plagues of the Book: flood, fire, warfare and insect. Fig. 3 The Collector's Seal of the famous early Qing dynasty scholar Zhu Yizun (1629-1709) whose library, Pavilion for Airing My Books, was discussed in China Heritage Quarterly, Issue 13 (March 2008). From Lin Shenqing, ed., Ming Qing zhuming cangshujia cangshuyin, p.77. I then find myself thinking, by association, of all the rare books housed in the major libraries throughout our land. We all admire the splendid holdings of these libraries; few give even a passing thought to the generations of book collector whose painstaking efforts have made these books available to us, to those bibliophiles of old who have passed on to us the torch of learning. Whenever such thoughts came to me I would become absorbed by the idea of a grand scheme: to seek out and to visit each and every one of the private library buildings. These 'Cangshulou' 藏書樓 of old were once found scattered throughout China. In my quest I would attempt to visit them and to describe what I found, even if all that remained was no more than the site itself. Almost five years have now passed since I set out on my quest in 1997. The vicissitudes encountered during dozens of excursions are hard to put into words. Regardless, I have managed to assemble a stack of notes and photographs. Of the eighty or so libraries that I did locate, four have since been destroyed. In the case of five others, however, the relevant cultural office or tourism bureau has written to me to thank me for my 'discovery' of a new site for commemoration or one that may serve as the focus for a publicity campaign. In an age of rampant materialism, an age in which even the educated speak only of profit, I do not know whether this is simply a sign of the times. What I do know is that I am quite happy to cleave true to my antiquated pursuits and to shoulder the burden of the tasks left to us by the venerable collectors of old. After all, the role of the pacesetters in our society can only be appreciated relative to those who are somewhat more backward. I for one admire most of all backward individuals who work anonymously so that cultural traditions may be passed on from one generation to the next. All that I can hope for is that, as a result of my work, people will occasionally think of the individual bibliophiles that I discuss below. This then is how I excuse this little book of mine.—From Wei Li 韋力, Shulou xunzong 書樓尋踪 , Shijiazhuang: Hebei Jiaoyu Chubanshe, 2004. Hall of the Variegated Robe (Caiyitang 采衣堂) Immediately upon arriving in Changshu, we set off in search of the Hall of the Variegated Robe, the library that had once belonged to the Weng family 翁氏. To our considerable surprise, we soon arrived at out destination, along Book City Street. A newly erected stone ceremonial arch bearing (in gold) the words 'Ward of the First Place Getters' (Zhuangyuan fang 狀元坊) marked the place, and twenty meters down a little lane through that arch we caught sight of a large mansion, the front gate of which, by stark contrast, was only a very little bit larger than that of an ordinary house. If but for a plaque that read 'Former Residence of the Weng Clan' along with a notice set into the wall from the Changshu Heritage Preservation Bureau that read: 'Former Residence of Weng Xincun', we would never have guessed that this was the place we had come to visit. Previously, I had never associated the Weng family with book collecting. The member of the clan that I had been most familiar with was Weng Tonghe 翁同龢 (1830-1904), tutor to two emperors. Any discussion of the 1898 Reform Movement, after all, invariably makes mention of his support for the reforms. It was only the return of Weng family's book collection to China and its appearance on the market that alerted me to the extent to which, over six generations, the Weng's had been major bibliophiles. Sometime towards the middle of last year, Ta Xiaotang 拓小堂, Director of the Rare Books and Manuscripts Section of China Guardian Auction Company, rang me to say that he had obtained some remarkable rare books that had long disappeared from sight. He wondered if I wished to have an advance viewing of them. The prospect excited me greatly and I visited the company offices as soon as possible. There on the long table of the company's well-appointed Reception Room were arranged, by lot, a selection of old books. As I slowly browsed my way through them, Ta Xiaotang spoke in considerable detail about them and of the roundabout and long-drawn out process whereby he had obtained them. Bit by bit I became aware of the importance of the collection. Fourteen titles in the present lot were imprints of the Song or Yuan dynasties, including, for instance, the following important works: Collected Rhymes(Ji yun 集韻), in 10 fascicles and printed in Mingzhou sometime during the reign of the Southern Song emperor Gaozong (r.1127-62), this being the earliest extant printed edition of this work. Shao Yong's 邵雍 Inner Chapters on the Observation of Things(Shaozi guanwu pian 邵子觀物篇), printed in Jianning in Fujian Province during the Southern Song dynasty (1127-1279), the finest example of such an imprint, and the only extant copy. Surrounded by such treasures, it was as if I had woken up in paradise. So this is what it meant to be a major book collector! At the same time, however, a pressing question came to mind. Amongst the Weng family collection there had been many famous books, some of which carried important prefaces, but these volumes had disappeared from sight for many years and nobody seems to have been aware that they had found their way into this particular collection. Already by the early years of the Republic [1910s], nothing was known about the Weng family's collection, to the extent that when the Japanese bibliographer Shimada Hikosada 島田翰曾 came to China during this period to investigate the whereabouts of China's famous private book libraries, he could find nothing out about it, concluding in his report on the trip, entitled 'On the Fate of the Books from the Tower of the of the Two Hundred Song Imprints' (Paisonglou cangshu yuanliu kao 皕宋樓藏書源流考), that of this collection: '…not a single letter or slip of paper remains'. Now that the entire collection had made its extraordinary reappearance, one was faced with the need to explain why it had been that the family had kept the existence of their remarkable collection such a closely guarded secret? The answer to this particular mystery, it seemed, could only be found within the modestly proportioned gate that we found ourselves standing outside. Fig. 4 The Collector's Seal of the Pavilion of the Source of the Oceans owned by the Yang family of Liaocheng in Shandong Province. From Lin Shenqing, ed., Ming Qing zhuming cangshujia cangshuyin, p.174. Weng Tonghe, fourth son of Weng Xincun 翁心存, took first place in the metropolitan examinations of the sixth year of the reign of the Xianfeng 咸豐 emperor (1856), after which he held successive posts such as compiler in the Hanlin Academy, assistant director of the provincial examinations of Shaanxi, libationer of the Imperial Academy, sub-chancellor of the Grand Secretariat, vice-president of the Board of Revenue, president of the Censorate, president of the Board of Punishments, president of the Board of Works, and Grand Councilor, besides which he served as tutor to both the Tongzhi 同治 and Guangxu 光緒 emperors. He absented himself from his official duties and returned home during the 1898 Reform Movement, following the defeat of which he was dismissed from all his various posts, never again to hold office. His father had been a book collector of some considerable note but almost his entire collection had been destroyed in 1860 during the warfare associated with the Taiping Rebellion. Weng Tonghe too devoted his energies to the acquisition of books and, once he had achieved official prominence, he slowly began to acquire many treasures. Sitting in the capital, he took great delight in being surrounded by his collection. At the time, the Song and Yuan imprints that had formed part of the Hall of Taking Delight in the Good (Leshantang 樂善堂) collection in Prince Yi's Palace were being sold off by the bundle to both the Wengs and Yang Shaohe 楊紹和 (1831-76), master of the Pavilion of the Source of the Oceans (Haiyuange 海源閣). There were so many rare books in Weng Tonghe's collection that the contemporary bibliophile Fu Zengxiang 傅增湘 (1872-1950) wrote in his Bibliographical Notes on the Books in the Garden of Collections (Cangyuan qunshu tiji 藏園群書題記) that: Many of the books in Weng's collection were both rare and held in secret, and included, amongst those that I was able to view, Song dynasty imprints of A Collection of Su Shi's Poetry: Annotated(Shi Gu zhu Su shi 施顧注蘇詩), A Record of Admonitions(Jianjie lu 鑒戒錄), History of the Latter Han Dynasty(Hou Han shu 後漢書) (published during the Shaoxing reign period, 1131-62), an edition of the Selections of Refined Literature(Wenxuan 文選) published in Ganzhou, and a copy of the Garden of Tales(Shuoyuan 說苑) produced during the Xianchun reign period (1265-74). I hear that in late life he obtained a Song imprint of Collected Rhymes, this inspiring him to take the additional name 'Studio of Rhymes'. Of the books listed above, only this one did I not manage to gain sight of, so I note its presence here for the sake of whoever in the future will seek to update Ye Changchi's 葉昌識 (1847-1917) Biographical Poems on Book Collectors(Cangshu jishi shi 藏書紀事詩). In 1871, Weng Tonghe had written a colophon to his copy of A Collection of Su Shi's Poetry: Annotated that noted that he had 'sighed at its uniqueness' and that, upon obtaining it from the Prince Yi's Palace collection for twenty taels, he noted that 'the calligraphic strokes are clear and powerful, shimmering still like a bright pearl, and I fear that no other copy of the work exists'. Pan Zuyin 潘祖蔭 (1830-90) also added a colophon to the work declaring it to be a 'veritable phoenix amidst the stars' and noted that: 'Having obtained this work, Weng Tonghe only very rarely allowed anyone else to look at it, entrusting me alone to write a colophon for it. The joy occasioned by this task knew no bounds'. We went up to the gate and knocked. After a while, an old man came out to tell us that the library was presently under restoration and was no longer open to the public. If we really wanted to take a look around we would need to come back in a few months. However much the two of us argued our case, pleading with him about how far we had come, he seemed unmoved by our entreaties. This impasse continued some time before a young woman poked her head out the gate to see what was going on. Perhaps by virtue of his greater eloquence, or maybe owing to the long hair that lent him a somewhat artistic air, even before my companion had time to address her, we found ourselves finally within the gates of the mansion. With the young woman as our guide, we were taken through the entire complex, learning of the uses to which all the various rooms had once been put. It was only once we were inside that we realized exactly how very large the residence actually was, comprising five separate courtyards. Although some of the rooms were indeed presently under repair, the complex as a whole, the architectural style and the details of its decorated beams and painted rafters particularly, bespoke the prominence and wealth of the family. In one courtyard, we saw a plaque over the lintel of a door that read: 'Library'. Ms Zhang explained that this was where the clan's book collection had once been housed. Upon entering, we found that no trace of the books remained and the room contained little more than a few pieces of old furniture. Before we had given vent to a sigh or two about the fate of the collection, however, out of the corners of our eyes we caught sight of what had once been the main plaque of the library, propped up in a corner. It bore the words 'Hall of the Variegated Robe'. Ms Zhang explained that it was considered too old and had been taken down in preparation for the installation of a new one. After pleading with her for some time, she allowed us to lug it out into the courtyard and take some photographs of it. Were it not for the fact that we still have a number of stops to make on our excursion, I really would have tried to 'inveigle' this plaque into my possession! As we slowly made our way around the mansion, a possible explanation to the mystery began to dawn upon me. Just as the library complex was designed to give no real indication of its size and splendour from the outside, so too had the quality of the book collection been kept a secret from outsiders. Weng Tonghe's position as tutor to the emperor, it seemed, had necessitated both these circumstances. As the old saying has it, 'To be company for the king is to live amongst the tigers' (ban jun ru ban hu 伴君如伴虎)! I suspect that my conjecture may not be too far from the truth. Bookworm Studio (Maiwangguan 脈望館) Last night I rang Cao Peigen 曹培根, the noted Changshu bibliophile, only to discover that although he had originally intended to accompany us on our trip to the Bookworm Studio, he had been called away to Shanghai on urgent business. He did however give us detailed directions to the library— down the little lane opposite the former residence of the Weng clan. And so, today, once we had concluded our visit to the Hall of the Variegated Robe, we crossed the road and went off in search of what remained of this other library. After wending our way down the lane, however, neither my companion nor I could find the place. There was nothing for it but to ring Cao's house. His wife answered the phone and said that her husband had left instructions that if we had any difficulty finding Bookworm Studio, she was to come to our assistance. Despite our best attempts to dissuade her, she soon turned up in a taxi—such are the relationships forged through the love of books! Fig.5 A painting by Wang Xian, dated 1642, of the Pavilion for Drawing Water from the Well of the Ancients owned by Mao Jin (1599-1659), one of the most famous book collectors of Changshu and an eminent publisher of fine editions. From Wenming de shouwang: guji baohu de lishi yu tansuo, p.63. With Mrs Cao's help, we soon discovered that we had taken a wrong turn and that Bookworm Studio was to be found in the lane running parallel to the one we had found ourselves in. Soon enough we arrived at the gate and caught sight of the notice of the Jiangsu Provincial Heritage Bureau. The entire complex now housed a confusion of families, and if it weren't for Mrs Cao's efforts, the old man at the entrance would not even have let us in. Once inside, we discovered that Bookworm Studio had formerly occupied the western corner of the courtyard and that the entire building had been very well maintained. With wooden floors and well preserved roof beams, the building was about 100 square meters in size— a great deal smaller than the Pavilion of Heaven's Oneness (Tianyige 天一閣) in Ningbo but, nonetheless, the second oldest of all the libraries that I had visited. During the long years of the reign of the Wanli emperor of the Ming dynasty, Zhao Yongxian 趙用賢 (1535-96) served as Left Vice Minister in the Ministry of Personnel. Both he and his son, Zhao Qimei 趙琦美 (1563-1624), were avid book collectors and the catalogue of their library, entitled Catalogue of Zhao Yongxian's Books (Zhao Dingyu shumu 趙定宇書目) lists over 3,300 separate titles. His son's catalogue of his own acquisitions was called Catalogue of Bookworm Studio (Maiwangguan shumu 脈望館書目) and of particular note in this collection were the texts of Yuan and Ming dynasty plays that it contained. When he rediscovered them in 1938, the noted bibliophile Zheng Zhenduo 鄭振鐸 (1898-1958) proclaimed the find second in importance only to the discovery of the Dunhuang manuscripts. The 242 titles that remain are now held in the National Library in Beijing. Once he had been introduced to us by Mrs Cao, the old man who lived there now sat down with us and talked about the library's present circumstances. He was about sixty years old and had come to live here in 1947. Originally, he told us, the Zhao mansion complex had comprised five linked courtyards, two of which had been burnt to the ground during the war with Japan. Of the three remaining courtyards, only the innermost dated from the Ming dynasty, the others having been restored during the Qing period. Surprised at his detailed knowledge of the architectural differences between the dynasties, I asked him to elaborate and he explained that whereas the roofs of the Ming dynasty tended to be concave, those of the Qing were flat. He went on to say that the fine cedar pillars and decorated bricks had all been smashed by the Red Guards during the Cultural Revolution. Thanking the old man, we wandered out into the courtyard were we came across some old ladies. 'So many people have turned up here in the past few years to look at this building', one said to another, 'I really don't know what they are looking at!' Hearing this, I was delighted—I was not alone in my quest, it seemed. The final words of Lu Xun's (1881-1936) essay 'Written for the Sake of Forgetting' (Wei le wangquede jinian 為了忘却的紀念) came to mind: 'Even if it wasn't to be my part to do so, there would finally come a day when they would be remembered and again be spoken about' (Jishi bushi wo, jianglai zong hui youjiqi tamen, zai shuo tamen de shihou de 即使不是我,將來總會有記起他們,再說他們的時候的). Studio of Nourishment in Tranquillity (Jingbuzhai 靜補齋) The Studio of Nourishment in Tranquillity in Changshu was once the library of Li Zhishou 李芝綬 (d.1893). Li Zhishou passed the provincial examinations of the nineteenth year of the reign of the Daoguang 道光 emperor (1839), and his collected writings, Collection of the Studio of Nourishment in Tranquility (Jingbuzhai ji 靜補齋集), is still available. He was a close friend of the Qu 瞿 family, owners of the Tower of the Iron Lute and Bronze Sword (Tie qin tong jian lou 鐵琴銅劍樓), and so, as his skills of bibliographic discrimination improved, so too did his own book collection begin to grow. In his entry on Li Zhishou in his Biographical Poems on Book Collectors (Cangshu jishi shi 藏書紀事詩) (1910), Ye Changshi 葉昌識 (1847-1917) wrote of this man: Whilst I was visiting Qu Bingqing 瞿秉清, Qu Yong's son, sometime around 1872, I caught sight of Li Zhishou sitting in his study… I have heard that Li is also extremely well versed in matters bibliographical, and, besides, is very knowledgeable about local history and traditions. He too has a large collection of rare books. This illustrates the intimacy of the relationship between these two book collectors. Only one courtyard of the library remains today; smallish, it takes up little more than half a mu. My assessment of the architectural features of what remains of the complex suggest that it does, nonetheless, in fact date from the time of the library. The building is presently occupied by a family, none of whom seem to know anything about Li Zhishou. Share Translator's Notes: By means of a reversed pun on a comment made to Wei Li by a friend when he caught sight of his book collection ('What a pile of waste paper (lanzhi) you have here' 你這裡有這樣多的爛紙啊), the name of the library seems to derive from a line in the fourth of the 'Nine Songs' ('The Lady of the Xiang' [Xiang furen 湘夫人]) of the Songs of the South (Chuci 楚辭) that goes, in David Hawkes's translation: 'The Yuan has its angelicas (zhi), the Li has its orchids (lan)' (沅有芷兮澧有蘭), for which, see David Hawkes, trans., The Songs of the South: An Ancient Chinese Anthology of Poems by Qu Yuan and Other Poets (Harmondsworth: Penguin, 1985), p.108. For a complete translation of this work, see Achilles Fang, trans., 'Bookman's Manual', Harvard Journal of Asiatic Studies, vol.14, no.1/2 (1951), pp.215-260. The name of the library derives from the story of Lao Laizi, as found in the Yuan dynasty work Twenty-four Examples of Filial Piety (Ershisi xiao 二十四孝), who, we are told, at the age of over seventy would still dress himself in the colourful clothes of his youth in order to amuse his parents. For a biography of Weng Xincun 翁心存 (1790-1862), see A. W. Hummel, Eminent Chinese of the Ch'ing Period (1644-1912) (Washington: Government Printing Office, 1944) (hereafter, ECCP), vol.2, pp.858-59. For a biography of Weng Tonghe, see ECCP, vol.2, pp.860-61. In order to protect the Weng family's book and art collection from the ravages of war, the present owner, the author, historian and artist Wan-go H.C. Weng 翁萬戈 (b. 1918), Weng Tonghe's great-great-grandson, had it removed to the United States of America in 1948. In 1985, the book collection went on public display for the first time, arousing much international interest. In 2000, China Guardian Auction Company helped broker a deal whereby, at the cost of US$4.5 million, the book collection was acquired by the Shanghai Municipal People's Government and entrusted to the care of the Shanghai Library. In conjunction with a recent exhibition (11 April-12 July 2009) of the calligraphy and paintings from the collection, at the Huntington Library in San Marino, California, a catalogue has been published by the Huntington Library Press: June Li, ed., Treasures through Six Generations: Chinese Painting and Calligraphy from the Weng Collection. On whom, see (in English), ECCP, vol.2, pp.608-09. A recent collection of Cao Peigen's essays on the book collecting traditions specific to Changshu has been published in the same series as Lingering Traces, entitled Accounts of Book Town [Shuxiang manlu 書鄉漫錄] (Shijiazhuang: Hebei jiaoyu chubanshe, 2004). For a short English-language biography of this man and his son, see L. Carrington Goodrich and Chaoying Fang, eds, Dictionary of Ming Biography, 1368-1644 (New York & London: Columbia University Press, 1976), vol.1, pp.138-40. Zheng Zhenduo recalls both his excitement of 'This unforgettable day, this moment that I will forever remember' and the strenuous efforts he had to make to secure the books in his 'Colophon to the Manuscript Copies of Plays Ancient and Modern from Bookworm Studio' (Ba Maiwangguan chaojiaoben Gujin zaju 跋脈望館鈔校本古今雜劇), in Zhenduo's Booknotes (Xidi shuhua 西諦書話) (Beijing: Sanlian shudian, 1983), vol.2, pp. 419-79. This essay, dated 7-8 February 1933, was written to commemorate the death, two years earlier at the hands of the government, of five young writers. For a translation of the essay, see Yang Xianyi and Gladys Yang, trans., Lu Xun: Selected Stories (Beijing: Foreign Languages Press, 1980), vol.3, pp.234-46. Home | Editorial | Features | Articles | New Scholarship | Behind the Scenes | Links ©China Heritage Project, ANU College of Asia & the Pacific (CAP), The Australian National University. Please direct all comments or suggestions to contact@chinaheritagequarterly.org. This page last updated: October 19 2015 06:56:41. URL:
14982
https://math.stackexchange.com/questions/3859134/any-alternate-proof-for-2nn?rq=1
real analysis - Any alternate proof for $2^n>n$? - Mathematics Stack Exchange Join Mathematics By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google OR Email Password Sign up Already have an account? Log in Skip to main content Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Loading… Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site About Us Learn more about Stack Overflow the company, and our products current community Mathematics helpchat Mathematics Meta your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Home Questions Unanswered AI Assist Labs Tags Chat Users Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more Any alternate proof for 2 n>n 2 n>n? Ask Question Asked 4 years, 11 months ago Modified4 years, 11 months ago Viewed 124 times This question shows research effort; it is useful and clear 2 Save this question. Show activity on this post. The normal approach for these kinds of problem is to use the mathematical induction and prove that 2 n>n 2 n>n for any natural number n n Case 1:(n=1)(n=1) 2 1=2>1 2 1=2>1, thus the formula holds for n=1 n=1 Case 2: (let us assume that this statement holds for any arbitrary natural number m m) That implies 2 m>m 2 m>m for some natural number m m. Then, 2 m+1=2 m⋅2>m⋅2≥m+1 2 m+1=2 m⋅2>m⋅2≥m+1 Thus as the statement holds for an arbitrary natural number m m implies that it holds for m+1 m+1 and thus by mathematical induction, it is proved that 2 n>n 2 n>n for any natural number n n. Is there any other ways to prove this problem? I tried proving this by contradiction taking initially 2 n≤n 2 n≤n, but couldn't go much far. Any help or idea would be very much appreciated. real-analysis inequality Share Share a link to this question Copy linkCC BY-SA 4.0 Cite Follow Follow this question to receive notifications edited Oct 10, 2020 at 11:56 MPW 45.1k 2 2 gold badges 37 37 silver badges 84 84 bronze badges asked Oct 10, 2020 at 11:40 DeBARthaDeBARtha 619 1 1 gold badge 7 7 silver badges 15 15 bronze badges 1 There’s a very simple calculus proof.Randall –Randall 2020-10-10 11:42:54 +00:00 Commented Oct 10, 2020 at 11:42 Add a comment| 3 Answers 3 Sorted by: Reset to default This answer is useful 3 Save this answer. Show activity on this post. n=1+1+⋯+1⏟n terms n=1+1+⋯+1n terms 2 n=2+2+2 2+2 3+⋯+2 n−1⏟n terms 2 n=2+2+2 2+2 3+⋯+2 n−1n terms 2 n>n 2 n>n follows from the fact that 2 i>1 2 i>1 for i≥1 i≥1. Share Share a link to this answer Copy linkCC BY-SA 4.0 Cite Follow Follow this answer to receive notifications answered Oct 10, 2020 at 11:47 QEDQED 13k 1 1 gold badge 32 32 silver badges 57 57 bronze badges Add a comment| This answer is useful 3 Save this answer. Show activity on this post. We have |P({1,…,n})|=2 n|P({1,…,n})|=2 n and |{1,…,n}|=n|{1,…,n}|=n and there is an obvious non-surjective injection {1,…,n}↪P({1,…,n}). {1,…,n}↪P({1,…,n}). Edit: Alternatively, 2 n=(1+1)n=n∑k=0(n k)≥n∑k=0 1=n+1. Note that in its spirit, this is not really a different proof. The binomial formula in this case is the same thing as counting subsets of a set. A quick comment: Heuristically, because this inequality is so weak, there will be plenty of ad hoc arguments. Share Share a link to this answer Copy linkCC BY-SA 4.0 Cite Follow Follow this answer to receive notifications edited Oct 10, 2020 at 12:05 MPW 45.1k 2 2 gold badges 37 37 silver badges 84 84 bronze badges answered Oct 10, 2020 at 11:46 Qi ZhuQi Zhu 9,538 3 3 gold badges 20 20 silver badges 45 45 bronze badges 6 What is the obvious injection you have in mind? x↦{x}? Note also you must prove that there is no surjection {1,…n}→P({1,…,n} as OP requires ">" and not just "≥". Still, this is a nice idea, +1 for your answer MPW –MPW 2020-10-10 11:48:23 +00:00 Commented Oct 10, 2020 at 11:48 @MPW Yes. (I agree that it is not canonical.)Qi Zhu –Qi Zhu 2020-10-10 11:49:14 +00:00 Commented Oct 10, 2020 at 11:49 1 @MPW Ok, you're right. The obvious map does not hit ∅, so we're good. :)Qi Zhu –Qi Zhu 2020-10-10 11:52:43 +00:00 Commented Oct 10, 2020 at 11:52 Excellent, that's even nicer than what I was thinking! Clever you!MPW –MPW 2020-10-10 11:54:36 +00:00 Commented Oct 10, 2020 at 11:54 1 @MPW Thank you. :-) Looking at my edit, ∅ corresponds to (n 0) in that argument.Qi Zhu –Qi Zhu 2020-10-10 11:59:26 +00:00 Commented Oct 10, 2020 at 11:59 |Show 1 more comment This answer is useful 2 Save this answer. Show activity on this post. By a combinatoric argument, 2 n is the number of subsets of a set with n elements which is always greater than the number of elements. Refer to the related The total number of subsets is 2 n for n elements As an alternative, by derivative, let define f(x)=2 x−x⟹f′(x)=2 x log 2−1 with f(1)=1 and f′(1)>0 therefore ∀x≥1 2 x−x≥0⟺2 x≥x Share Share a link to this answer Copy linkCC BY-SA 4.0 Cite Follow Follow this answer to receive notifications edited Oct 10, 2020 at 12:13 answered Oct 10, 2020 at 11:44 useruser 164k 14 14 gold badges 84 84 silver badges 157 157 bronze badges 2 Now you just have to prove that 2 x log 2−1>0 for x≥1 MPW –MPW 2020-10-10 11:46:13 +00:00 Commented Oct 10, 2020 at 11:46 @MPW Yes of course! I make that point more clear. Thanks user –user 2020-10-10 11:47:39 +00:00 Commented Oct 10, 2020 at 11:47 Add a comment| You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions real-analysis inequality See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Report this ad Linked 12The total number of subsets is 2 n for n elements Related 1Proof for pascal's triangle consisting only of natural numbers 1Base case for a proof by induction? 1Trichotomy of natural numbers Proof (Alternate proof Showing atleast one statement Tao's analysis) Hot Network Questions Who is the target audience of Netanyahu's speech at the United Nations? In Dwarf Fortress, why can't I farm any crops? What is the feature between the Attendant Call and Ground Call push buttons on a B737 overhead panel? Bypassing C64's PETSCII to screen code mapping Do we declare the codomain of a function from the beginning, or do we determine it after defining the domain and operations? how do I remove a item from the applications menu Why multiply energies when calculating the formation energy of butadiene's π-electron system? Is encrypting the login keyring necessary if you have full disk encryption? Lingering odor presumably from bad chicken If Israel is explicitly called God’s firstborn, how should Christians understand the place of the Church? Matthew 24:5 Many will come in my name! How to fix my object in animation How to start explorer with C: drive selected and shown in folder list? Can a cleric gain the intended benefit from the Extra Spell feat? Is existence always locational? Vampires defend Earth from Aliens How to solve generalization of inequality problem using substitution? How different is Roman Latin? Numbers Interpreted in Smallest Valid Base Passengers on a flight vote on the destination, "It's democracy!" Fundamentally Speaking, is Western Mindfulness a Zazen or Insight Meditation Based Practice? Weird utility function Childhood book with a girl obsessed with homonyms who adopts a stray dog but gives it back to its owners The geologic realities of a massive well out at Sea Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Mathematics Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.29.34589 By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Cookies Details‎ Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Cookies Details‎ Targeting Cookies [x] Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. Cookies Details‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Necessary cookies only Confirm my choices
14983
https://www.wssd.k12.pa.us/downloads/calculus%20optimization%20problems%20solutions.pdf
Calculus Optimization Problems/Related Rates Problems Solutions 1) A farmer has 400 yards of fencing and wishes to fence three sides of a rectangular field (the fourth side is along an existing stone wall, and needs no additional fencing). Find the dimensions of the rectangular field of largest area that can be fenced. ! 2x + y = 400 " y = 400 # 2x A(x) = x 400 # 2x ( ) = 400x # 2x2 $ A (x) = 400 # 4x 400 # 4x = 0 " x =100 $ $ A (x) = #4 By the 2nd derivative test, the dimensions would be 100 yd by 200 yd. 2) A metal box (without a top) is to be constructed from a square sheet of metal that is 20 cm on a side by cutting square pieces of the same size from the corners of the sheet and then folding up the sides. Find the dimensions of the box with the largest volume that can be constructed in this manner. ! V(x) = x 20 " 2x ( ) 20 " 2x ( ) = 400x " 80x2 + 4x3 # V (x) = 400 "160x +12x2 400 "160x +12x2 = 0 $ 4 100 " 40x + 3x2 ( ) = 0 $ 4 3x "10 ( ) x "10 ( ) $ x = 10 3 ,10 # # V (x) = "160 + 24x # # V 10 3 % & ' ( ) = "160 + 80 < 0 # # V (10) = "160 + 240 > 0 By the 2nd derivative test, the dimensions would be ! 10 3 cm by 40 3 cm by 40 3 cm 3) A rectangular field adjacent to a river is to be enclosed. Fencing along the river costs $5 per meter, and the fencing for the other sides costs $3 per meter. The area of the field is to be 1200 square meters. Find the dimensions of the field that is the least expensive to enclose. Call the length of fence along the river x, and the length perpendicular to the river y. ! C(x) = 5x + 3 2y + x ( ) xy =1200 " y = 1200 x " C(x) = 8x + 7200 x # C (x) = 8 $ 7200 x2 8 $ 7200 x2 = 0 " 8x2 = 7200 " x2 = 900 " x = 30 # # C (x) = 14400 x3 # # C (30) = 14400 303 > 0 By the 2nd derivative test, a field that is 30 m along the river by 40 m perpendicular to the river would be least expensive. 4) A 4-meter length of stiff wire is cut in two pieces. One piece is bent into the shape of a square and the other into a rectangle whose length is 3 times its width. Let x be the length of the side of the square. a) Find a formula A(x), the sum of the areas of the square and rectangle, in terms of the variable x. The length of wire left for the rectangle is 4 – 4x. In the rectangle, l = 3w. 4 – 4x = 2(3w) + 2w, so ! w = 4 " 4x 8 = 1" x 2 # l = 3" 3x 2 . ! A(x) = x2 + 3" 3x 2 # $ % & ' ( 1" x 2 # $ % & ' ( b) For what values of x does A(x) achieve its maximum; for which does it achieve its minimum. Justify your answer. ! x2 + 3" 3x 2 # $ % & ' ( 1" x 2 # $ % & ' ( = 0 ) x2 + 3" 6x + 3x2 4 = 0 ) 7x2 " 6x + 3 = 0 5) A rectangular playing field is to have area 600 m2. Fencing is required to enclose the field and to divide it into two equal halves. a) Find a formula, F(x), for the total length of fencing required, in terms of the length, x, of the fence dividing the field in half. ! F(x) = 3x + 2 600 x " # $ % & ' = 3x + 1200 x b) Find the minimum amount of fencing needed to do this. ! " F (x) = 3# 1200 x2 3# 1200 x2 = 0 $ 3x2 =1200 $ x = 20 " " F (x) = 2400 x3 " " F (20) = 2400 203 > 0 By the 2nd derivative test, The minimum amount of fencing needed is 120 m c) What are the outer dimensions of the field that has the least fencing? 20 m by 30 m 6) A rectangle has its base on the x-axis and its upper vertices on the parabola y = 27 – x2. Find the maximum possible area of the rectangle. ! A(x) = 2x 27 " x2 ( ) = 54x " 2x3 # A (x) = 54 " 6x2 54 " 6x2 = 0 $ x2 = 9 $ x = 3 # # A (x) = "12x # # A (3) = "36 < 0 By the 2nd derivative test, the maximum area would be 6(18) = 108 sq units. 7) A rectangular container with open top is required to have a volume of 16 cubic meters. Also, one side of the rectangular base is required to be 4 meters long. If material for the base costs $8 per square meter, and material for the sides costs $2 per square meter, find the dimensions of the container so that the cost of material to make it will be a minimum. ! V = 4wh =16 " h = 4 w C = 8(4w) + 2(2wh) + 2(2(4h)) = 32w +16 + 64 w # C = 32 $ 64 w2 32 $ 64 w2 = 0 " w = 2 # # C = 128 w3 128 2 3 > 0 By the 2nd derivative test, the dimensions of the container that minimizes the cost are 4 m by ! 2 m (base) by ! 4 2 m (height) 8) A rectangular box with open top is to be constructed from a rectangular piece of cardboard 80 cm by 30 cm, by cutting out equal squares from each corner of the sheet of cardboard and folding up the resulting flaps. Find the dimensions of the box of maximum volume made by these conditions. ! V = x 80 " 2x ( ) 30 " 2x ( ) = 2400x " 220x2 + 4x3 # V = 2400 " 440x +12x2 = 4 3x2 "110x + 600 ( ) 4 3x2 "110x + 600 ( ) = 0 $ 4 3x " 20 ( ) x " 30 ( ) = 0 $ x = 20 3 ,30 # # V (x) = "440 + 24x # # V 20 3 % & ' ( ) = "440 +160 < 0 By the 2nd derivative test, the dimension of the box of maximum volume are ! 20 3 cm by ! 200 3 cm by ! 50 3 cm 9) Find the points on the parabola 2x + y2 = 0 closest to the point (-3,0). 10) A power line is needed to connect a power station on the shore of a river to an island 4 miles downstream and 1 mile offshore. Find the minimum cost for such a line given that is costs $50,000 per mile to lay wire under the water and $30,000 per mile to lay wire underground.
14984
https://math.stackexchange.com/questions/1938894/imaginary-part-of-a-product-of-n-complex-numbers
Imaginary part of a product of N complex numbers - Mathematics Stack Exchange Join Mathematics By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google OR Email Password Sign up Already have an account? Log in Skip to main content Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Loading… Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site About Us Learn more about Stack Overflow the company, and our products current community Mathematics helpchat Mathematics Meta your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Home Questions Unanswered AI Assist Labs Tags Chat Users Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more Imaginary part of a product of N complex numbers Ask Question Asked 9 years ago Modified9 years ago Viewed 2k times This question shows research effort; it is useful and clear 1 Save this question. Show activity on this post. What is the general formula for the imaginary part of a product of N N complex variables? To be specific, let a j,b j a j,b j be real numbers, then what is I(∏j=1 N(a j+i b j))ℑ(∏j=1 N(a j+i b j)) For N=2 N=2 it’s simply the sum cross terms and for N=3 N=3 there’s one b j b j multiplying two a j a j subject to permutations of {1,2,3}{1,2,3} as well as the term with all b j b j. So it seems like neglecting minus signs it’s a sum of terms that permute amongst {1,2,…,N}{1,2,…,N} with an odd number of b j b j. If N N is odd then there will be a term with all b j b j. What is the general form in a compact expression? complex-numbers Share Share a link to this question Copy linkCC BY-SA 3.0 Cite Follow Follow this question to receive notifications asked Sep 23, 2016 at 21:29 hamha109hamha109 35 6 6 bronze badges Add a comment| 3 Answers 3 Sorted by: Reset to default This answer is useful 2 Save this answer. Show activity on this post. Multiply it out, and use the fact that i k=⎧⎩⎨⎪⎪⎪⎪⎪⎪1,i,−1,−i,if k≡0(mod 4),if k≡1(mod 4),if k≡2(mod 4),if k≡3(mod 4),i k={1,if k≡0(mod 4),i,if k≡1(mod 4),−1,if k≡2(mod 4),−i,if k≡3(mod 4), to see that I(∏j=1 N(a j+i b j))=∑X⊆{1,…,N}|X|is odd((−1)|X|−1 2∏j∉X a j∏j∈X b j)=∑X⊆{1,…,N}|X|≡1(mod 4)(∏j∉X a j∏j∈X b j)−∑X⊆{1,…,N}|X|≡3(mod 4)(∏j∉X a j∏j∈X b j).ℑ(∏j=1 N(a j+i b j))=∑X⊆{1,…,N}|X|is odd((−1)|X|−1 2∏j∉X a j∏j∈X b j)=∑X⊆{1,…,N}|X|≡1(mod 4)(∏j∉X a j∏j∈X b j)−∑X⊆{1,…,N}|X|≡3(mod 4)(∏j∉X a j∏j∈X b j). Here |X||X| denotes the cardinality of X.X. By the way, here's a similar formula for the real part: R(∏j=1 N(a j+i b j))=∑X⊆{1,…,N}|X|is even((−1)|X|2∏j∉X a j∏j∈X b j)=∑X⊆{1,…,N}|X|≡0(mod 4)(∏j∉X a j∏j∈X b j)−∑X⊆{1,…,N}|X|≡2(mod 4)(∏j∉X a j∏j∈X b j).ℜ(∏j=1 N(a j+i b j))=∑X⊆{1,…,N}|X|is even((−1)|X|2∏j∉X a j∏j∈X b j)=∑X⊆{1,…,N}|X|≡0(mod 4)(∏j∉X a j∏j∈X b j)−∑X⊆{1,…,N}|X|≡2(mod 4)(∏j∉X a j∏j∈X b j). Share Share a link to this answer Copy linkCC BY-SA 3.0 Cite Follow Follow this answer to receive notifications edited Sep 26, 2016 at 4:26 answered Sep 24, 2016 at 6:32 Mitchell SpectorMitchell Spector 10.7k 3 3 gold badges 19 19 silver badges 38 38 bronze badges Add a comment| This answer is useful 2 Save this answer. Show activity on this post. Using Euler's formula, when z n∈C z n∈C: z n=|z n|e(arg(z n)+2 π k n)i=|z n|cos(arg(z n)+2 π k n)+|z n|sin(arg(z n)+2 π k n)i z n=|z n|e(arg⁡(z n)+2 π k n)i=|z n|cos⁡(arg⁡(z n)+2 π k n)+|z n|sin⁡(arg⁡(z n)+2 π k n)i Where |z n|=R 2[z n]+I 2[z n]−−−−−−−−−−−−√|z n|=ℜ 2[z n]+ℑ 2[z n], arg(z n)arg⁡(z n) is the complex arugment of z n z n and k n∈Z k n∈Z. So: I[∏a=n M z a]=I[∏a=n M(R[z a]+I[z a]i)]=I(z n×z n+1×⋯×z M)=ℑ[∏a=n M z a]=ℑ[∏a=n M(ℜ[z a]+ℑ[z a]i)]=ℑ(z n×z n+1×⋯×z M)= I(|z n|e(arg(z n)+2 π k n)i×|z n+1|e(arg(z n+1)+2 π k n+1)i×⋯×|z M|e(arg(z M)+2 π k M)i)=ℑ(|z n|e(arg⁡(z n)+2 π k n)i×|z n+1|e(arg⁡(z n+1)+2 π k n+1)i×⋯×|z M|e(arg⁡(z M)+2 π k M)i)= |z n||z n+1|×⋯×|z M|×sin(arg(z n)+2 π k n+arg(z n+1)+2 π k n+1+⋯+arg(z M)+2 π k M)=|z n||z n+1|×⋯×|z M|×sin⁡(arg⁡(z n)+2 π k n+arg⁡(z n+1)+2 π k n+1+⋯+arg⁡(z M)+2 π k M)= |z n||z n+1|×⋯×|z M|sin(arg(z n)+arg(z n+1)+⋯+arg(z M))|z n||z n+1|×⋯×|z M|sin⁡(arg⁡(z n)+arg⁡(z n+1)+⋯+arg⁡(z M)) So, we get: I[∏a=n M z a]=|z n||z n+1|×⋯×|z M|sin(arg(z n)+arg(z n+1)+⋯+arg(z M))ℑ[∏a=n M z a]=|z n||z n+1|×⋯×|z M|sin⁡(arg⁡(z n)+arg⁡(z n+1)+⋯+arg⁡(z M)) Share Share a link to this answer Copy linkCC BY-SA 3.0 Cite Follow Follow this answer to receive notifications edited Sep 24, 2016 at 19:43 answered Sep 24, 2016 at 17:48 Jan EerlandJan Eerland 29.5k 4 4 gold badges 32 32 silver badges 62 62 bronze badges Add a comment| This answer is useful 1 Save this answer. Show activity on this post. You could convert your complex numbers from rectangular to polar and then use Euler's Formula to get it in the form r∗e i θ r∗e i θ. From there, the formula is then: e i∗k∗∏l=1 N r l e i∗k∗∏l=1 N r l where k=∑N l=1 θ l mod 2 π k=∑l=1 N θ l mod 2 π. Share Share a link to this answer Copy linkCC BY-SA 3.0 Cite Follow Follow this answer to receive notifications edited Sep 24, 2016 at 0:42 answered Sep 23, 2016 at 21:53 AlgorithmsXAlgorithmsX 4,644 1 1 gold badge 16 16 silver badges 30 30 bronze badges 5 .... and as those angles can pretty much add up to anything I don't see any way the predict or calculate these from the a+bi forms. So far as I can figure this is the only way to do it.fleablood –fleablood 2016-09-23 23:45:42 +00:00 Commented Sep 23, 2016 at 23:45 You should be able to convert every complex number from rectangular form to polar/exponential form. The conversion itself will give you the angles.AlgorithmsX –AlgorithmsX 2016-09-24 00:46:24 +00:00 Commented Sep 24, 2016 at 0:46 Yes, of course. I'm saying that by only considering the rectangular format it is clearly not posible to predict what the imaginary term will be as the a and b can contribute to any angle. This is as opposed to trying to predict a coefficient of a polynomial which to some extent the influence of individual terms can be measured.fleablood –fleablood 2016-09-24 01:20:31 +00:00 Commented Sep 24, 2016 at 1:20 You can still use the rectangular format, but you would have to replace i i with x x, and then add up all the powers of x x with a remainder of one when divided by four and subtract all the powers of x x with a remainder of three when divided by four. Basically, use i 4 n+k=i k i 4 n+k=i k, where n n is an integer.AlgorithmsX –AlgorithmsX 2016-09-24 01:31:27 +00:00 Commented Sep 24, 2016 at 1:31 2 The formula in this answer, e i k∏N l=1 r l,e i k∏l=1 N r l, where k=∑N l=1 θ l mod 2 π,k=∑l=1 N θ l mod⁡2 π, isn't the formula for the imaginary part of the product, which is what the OP asked for. The imaginary part is (sin k)∏N l=1 r l.(sin⁡k)∏l=1 N r l. (By the way, you can simply define k k to be ∑N l=1 θ l,∑l=1 N θ l, without the mod 2 π mod⁡2 π part; doing that won't change the result.)Mitchell Spector –Mitchell Spector 2016-09-24 10:11:51 +00:00 Commented Sep 24, 2016 at 10:11 Add a comment| You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions complex-numbers See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Report this ad Related 2Complex No.s Proving Question 0Initial conditions for second order ODE with complex stiffness 0Prove that a product of two complex numbers has zero imaginary part 3Plotting polynomials over the complex field 1Inverse of complex number imaginary part 1Sketching complex numbers with only the imaginary part 2Sufficient condition for the terms of a series of complex numbers to be all zero Hot Network Questions Is encrypting the login keyring necessary if you have full disk encryption? Find non-trivial improvement after submitting Is it possible that heinous sins result in a hellish life as a person, NOT always animal birth? Lingering odor presumably from bad chicken Are there any world leaders who are/were good at chess? How can I show that this sequence is aperiodic and is not even eventually-periodic. alignment in a table with custom separator Determine which are P-cores/E-cores (Intel CPU) Is it safe to route top layer traces under header pins, SMD IC? What meal can come next? Can a state ever, under any circumstance, execute an ICC arrest warrant in international waters? Any knowledge on biodegradable lubes, greases and degreasers and how they perform long term? Is it ok to place components "inside" the PCB Do sum of natural numbers and sum of their squares represent uniquely the summands? Spectral Leakage & Phase Discontinuites Origin of Australian slang exclamation "struth" meaning greatly surprised "Unexpected"-type comic story. Aboard a space ark/colony ship. Everyone's a vampire/werewolf How do you emphasize the verb "to be" with do/does? Can I go in the edit mode and by pressing A select all, then press U for Smart UV Project for that table, After PBR texturing is done? Why is the fiber product in the definition of a Segal spaces a homotopy fiber product? RTC battery and VCC switching circuit What were "milk bars" in 1920s Japan? Interpret G-code Identifying a movie where a man relives the same day Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Mathematics Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547 By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Cookies Details‎ Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Cookies Details‎ Targeting Cookies [x] Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. Cookies Details‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Necessary cookies only Confirm my choices
14985
https://voices.uchicago.edu/medicaljournalism/2022/05/27/treatments-for-fetal-rh-anemia/
Treatments for Fetal Rh Anemia - Medical Journalism Club Skip to content The University of Chicago About Us Read the Journal Events Submit an Article Contact Us About Us Read the Journal Events Submit an Article Contact Us Select Page About Us Read the Journal Events Submit an Article Contact Us About Us Read the Journal Events Submit an Article Contact Us Search for... SearchClose Treatments for Fetal Rh Anemia May 27, 2022 | Author Sydney Tyler, Epidemiology & Infectious Diseases, Obstetrics & Gynecology The purpose of this literature review is to explain the functions of the rhesus factor and its negative effects on people. It will focus on pregnancies and HDFN, which is a disease caused by the Rh factor. Different treatments for HDFN and trials will be discussed and analyzed throughout the paper. Function of rhesus factor The rhesus factor (Rh factor) is a protein that can show up on a person’s red blood cells. The proteins determine whether the blood of two different people are compatible. This has been the main function of the Rh factor up until recently, when the role of a certain Rh antigen, RhAG, was discovered. RhAG transports ammonium ions through the cell membrane. This function was tested once scientists realized the similarities between RhAg and the amino acid sequence of NH4+ (Karow et al., 2000). That being said, depending on the type of Rh blood group, the functions may differ. Most of the blood groups, however, help identify people who have compatible blood. The RhD factor is the most immunogenic antigen, meaning it is more likely to trigger an immune response than the other antigens. For example, a pregnant woman who is Rh negative, meaning she does not contain the Rh factors on her red blood cells, is not compatible with her fetus, if the fetus is Rh positive, meaning the red blood cells do contain the Rh factor. (Since the Rh factor is an inherited trait, if the mother is Rh negative, the father would have to be Rh positive in order for the fetus to be Rh positive.) History of rhesus factor In 1940, the discovery of the rhesus (Rh) blood group system was made by Karl Landsteiner and A.S. Weiner (Britannica et al., 2020). RhD, Rhc, RhC, and RhE are the most common antigens of the 50 that have been discovered. 13 years after the original discovery of the blood group system, of the five most common types of Rh antigens, the most immunogenic one is the Rh D antigen, in 1953, scientists determined that hemorrhages that exposed mothers to the fetus’ red blood cells (given the fetus is is Rh positive and the mother is Rh negative) resulted in the pathogenesis of the rhesus isoimmunization (Dubey et al., 2019). Once the Rh antigen, specifically the RhD antigen, was seen as harmful in pregnancies, causing HDFN, in 1966, an IgG prophylaxis was invented to prevent sensitization in Rh negative women that was to be used shortly after the delivery of the first child. Since the prophylaxis was administered, there have been several tests and experiments that result in a variety of times, doses, and other recommendations for the drug (Dubey et al., 2019). Causes and explanation of HDFN This incompatibility can cause serious health effects for the fetus, the most common effect being hemolytic disease of the fetus and newborn (HDFN). There are several names for the disease, including Rh incompatibility and hemolytic anemia, but HDFN is the most widely-used term for the disease. There is another, similar disease to HDFN, which is called ABO incompatibility. Unlike HDFN, ABO incompatibility often produces little to no symptoms. It is more common than HDFN and only temporary. ABO incompatibility occurs when the mother has O type blood and the fetus has A or B type blood, causing the mother to make antibodies that attack the fetus’ A or B blood cells. HDFN occurs usually after the first pregnancy, which is when the mother is sensitized to the fetus’ Rh positive blood some time during the pregnancy. Then, during the second pregnancy, the mother will create antibodies that can cross through the placenta and attack the fetus’ Rh positive red blood cells faster than the fetus can produce the blood cells. This is why the disease is an anemia. Other effects of HDFN are heart failure, hemorrhaging, premature birth, and or a miscarraige (Sarwar et al., 2021). Epidemiology of HDFN A recent 2016 study showed 0.3-0.6% of pregnancies are affected by HDFN (Dubey et al., 2019). About 15% of North Americans and Europeans are Rh negative, compared to 4-8% of Rh negative Africans and 0.1-0.3% of Rh negative Asians (Sarwar et al., 2021). In the US, the frequency of HDFN is more common due to the diversity and the amount of immigration that occurs. Sensitization in females A study was done in 2013 to analyze the sensitization of females to determine the timing of RhD immunization during pregnancy and to determine when to administer anti-D prophylaxis. In 51% of the 290 Rhd immunized women experienced sensitization (were exposed to their baby’s Rh positive blood) during the first pregnancy, while 33% of the women experienced sensitization during the second pregnancy. In 94% percent of the pregnancies, Rh antibodies were developed after the first trimester. 73% of the women developed antibodies in the second or third trimester, while 21% developed them during or after delivery. The data collected gives reason for doctors to administer anti-D prophylaxis in the beginning of the third trimester (28-30 weeks) to all Rhd positive women carrying RhD negative fetuses (Tilbad et al., 2013). Current treatments for HDFN Anti-D prophylaxis is an IgG prophylaxis that works to prevent the sensitization that occurs during the first pregnancy of a Rh negative mother with its Rh positive fetus. It is a two-does treatment that is administered during the third trimester of the pregnancy: one injection of the immunoglobulin is given on the 28th week of pregnancy, and the second is administered about six weeks after the first. The drug is meant to prevent the sensitization of the antigens, so if the mother is already sensitized, then the drug is virtually useless. Because of this, several trials have been run to determine the ideal time in which the prophylaxis should be administered. A 2014 study tested the effectiveness of postnatal RhD prophylaxis, meaning the drug is administered after the first pregnancy, but before the second. The reason for this timing is because the prophylaxis targets the antibodies that are formed after the sensitization that occurs during the first pregnancy. 89 pregnancies were monitored for the trial. Of the 89, 56 of pregnancies (63%) were sensitized during the first pregnancy, 21 (24%) were sensitized during the second pregnancy, and 12 (13%) were sensitized during later pregnancies. Rh incompatibility occurred in 28 of the pregnancies (31%), and 25 of those cases were a direct cause of the sensitization in the previous pregnancy (Dajak et al., 2014). Another treatment that has been tested for both HDFN and ABO incompatibility is exchange transfusion. Exchange transfusion completely replaces 100% of the blood circulating in a body with blood that is compatible with whatever the blood may be interacting with. Exchange transfusion cannot be performed on fetuses, so the procedure is done after birth In 2007, exchange transfusion was used on 25 cases. The table below shows the results of the 25 cases that were given an exchange transfusion (Sharma et al., 2007). 20 (80%) of the cases needed only one transfusion, and the other 5 (20%) needed a second transfusion. 15 of the cases were cases of HDFN. Of the 15, 11 (73%) needed one transfusion, while the other four (27%) needed a second. One of the cases was suffering from neonatal septicemia in addition to HDFN. The newborn died due to “septicemia and respiratory stress”. Discussion and future treatments HDFN can be a fatal disease for fetuses, and unfortunately there is no way to prevent the Rh factor from causing HDFN because it is hereditary. The only way to prevent HDFN is to prevent sensitization. The anti-D prophylaxis is a successful treatment as the trials have proven, however, it is more of a preventative measure. It is also not available globally. In the US, the prophylaxis is attainable for the majority of Americans, but everywhere else, the prophylaxis is uncommon and quite rare. Developing countries are having trouble gaining access to this specific treatment. Knowledge about treatments like this is necessary, so it needs to be shared and more widely produced in other countries. For future research, a treatment should be developed for fetuses and newborns that have already been diagnosed with HDFN. The exchange transfusion is somewhat successful, but it still has yet to be improved, which is crucial because of how big the transfusion is. Annotated Bibliography Britannica, The Editors of Encyclopaedia. “Rh blood group system”. Encyclopedia Britannica, 9 Apr. 2020 Dajak, Slavica et al. “The importance of antenatal prevention of RhD immunisation in the first pregnancy.” Blood transfusion =Trasfusione del sangue vol. 12,3 (2014): 410-5. Dubey, . “Haemolytic Disease of the Fetus and Newborn: Past, Present and Future Considerations”. Acta Scientific Medical Sciences 3.10 (2019): 153-161. Flegel, Willy A. “The genetics of the Rhesus blood group system.” Blood transfusion = Trasfusione del sangue vol. 5,2 (2007) Karow, Julia. “A Role for the Rhesus Factor”. Scientific American, 31 Oct. 2000 Sarwar A, Citla Sridhar D. Rh-Hemolytic Disease. [Updated 2021 Aug 14]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2022 Sarwar, Ayesha. and Divyaswathi Citla Sridhar. “Rh-Hemolytic Disease.” StatPearls, StatPearls Publishing, 14 August 2021. Sbarsi, Ilaria et al. “Implementing non-invasive RHD genotyping on cell-free foetal DNA from maternal plasma: the Pavia experience.” Blood transfusion = Trasfusione del sangue vol. 10,1 (2012) Sharma, D C et al. “Study of 25 cases of exchange transfusion by reconstituted blood in hemolytic disease of newborn.” Asian journal of transfusion science vol. 1,2 (2007) Tiblad, Eleonor et al. “Consequences of being Rhesus D immunized during pregnancy and how to optimize new prevention strategies.” Acta obstetricia et gynecologica Scandinavica vol. 92,9 (2013) Turner, Rebecca M et al. “Routine antenatal anti-D prophylaxis in women who are Rh(D) negative: meta-analyses adjusted for differences in study design and quality.” PloS one vol. 7,2 (2012) Image: Recent Posts Zero-Covid: A Crisis in Shanghai Treatments for Fetal Rh Anemia Health Inequity and Racial Disparities in Covid-19, Gun Violence, and HIV/AIDS Cases Abortion Rights Are up in the Air Again: It’s All on the Line in Dobbs v. Jackson Women’s Health Organization The History of Racial Disparities and Discrimination in the U.S. Healthcare System Scroll to Top About Us Read the Journal Submit an Article Contact Us © University of Chicago - All Rights Reserved
14986
https://www.youtube.com/watch?v=BRlgvjLZwVw
Geometry 5.2a, Circumcenter of a Triangle & Circumcenter Theorem JoAnn's School 171000 subscribers 124 likes Description 9285 views Posted: 12 Sep 2018 An explanation of the Circumcenter of a Triangle, the Circumcenter Theorem, folding tissue paper to find the perpendicular bisectors of a scalene acute triangle, circumcenters inside, outside, or on a triangle, circles that are circumscribed about a polygon, a paragraph proof proving that the circumcenter p is equidistant to the vertices, using properties of perpendicular bisectors to find the distance from a Vertex to a circumcenter, and finding the circumcenter of a triangle on a coordinate plane. High School Geometry Playlist FOLLOW ME: TWITTER FACEBOOK Message me! MINDS Minds.com/joannsschool SUPPORT MY WORK: PATREON PAYPAL YOUTUBE Get a JoAnn's School t-shirt, mug, tote, hoodie, or inspirational sticker to let everyone know you're a fan! Teespring.com/joann-s-school-t-shirts 9 comments Transcript: circumcenter of a triangle and circumcenter theorem we're at 5.2 a we have two previous videos for chapter five that are in the geometry playlist since a triangle has three sides it has three perpendicular bisectors when we construct the perpendicular bisectors we find they have an interesting property when three or more lines intersect at one point the lines are said to be concurrent and the point of concurrency is the point where they intersect the circumcenter of a triangle is where the three perpendicular bisectors are concurrent where they all intersect so we have a perpendicular bisector for a b right here we have a perpendicular bisector for ac right here and one for bc right here and where they intersect at this green dot is the circumcenter of a triangle constructing the circumcenter of a triangle we can draw a scalene acute triangle on tissue paper and mark the vertices abc so that's what i did and then fold the perpendicular bisector of each side so what we do is we take this vertex a and fold it onto b so they match perfectly okay and we make a nice fold here and we can take a and fold it onto c so it matches put a nice fold and we can fold b onto c so one vertex is on top of the other we put a nice fold and we end up with these pink lines that i highlighted where they intersect p that's the circumcenter we label the point where the three perpendicular bisectors intersect as p which is the point of concurrency the perpendicular bisector of a side of a triangle doesn't always pass through the opposite vertex if you look through this this is the perpendicular bisector of ac it's going straight up to make a right angle and a right angle but it doesn't go through b because this is a scalene acute triangle see here's the circumcenter theorem it says the circumcenter of a triangle is equidistant from the vertices of the triangle so we have our pink lines as our perpendicular bisectors and if we connect the vertices to p the circumcenter they will all be equally distant from the vertices and p see so pa equals pb equals pc the circumcenter can be inside the triangle outside the triangle or on the triangle so for an acute triangle look it's down here for an obtuse triangle when we draw the angle bisectors for each side it ends up way out here it's on the outside and for a right triangle look it's on the hypotenuse now take a look at this drawing we have here this is bob's house emma's house and tala's house and each of their walkways are connected with a vertex to make a triangle and we draw bisectors for each side and we find this point in the center by finding the circumcenter of the three houses we found a point that is equidistant from all three houses and it's the intersection of the perpendicular bisectors of the sides of the triangles formed by the houses so if bob and tala and emma went to meet they each traveled the exact same distance to the circumcenter of the triangle the circumcenter of triangle abc is the center of its circumscribed circle so a circumscribed circle is going all the way around the triangle see the circle that contains all the vertices of a polygon is circumscribed about the polygon so it could be a square a pentagon hexagon octagon whatever trapezoid but all the vertices have to be inside the circle now pretend that the triangle isn't there point p is the center of the circle once we put the triangle back we can see that its circumcenter is down here in the triangle c but it's in the center of the circle take a look at this drawing we have all these pink perpendicular bisectors and we have lines coming from the vertices to p so our given is lines l m and n are the perpendicular bisectors of segment a b segment bc and segment ac respectively which means in that order we need to prove that p a is equal to p b is equal to p c so we have a paragraph proof p is the circumcenter of triangle abc and since p lies on the perpendicular bisector of a b p a equals p b it lies on the bisector see that's by the perpendicular bisector theorem and similarly p also lies on the perpendicular bisector of bc right here see so pb equals pc therefore pa equals pb equals pc by the transitive property of equality so if you don't remember what that is if a equals b and b equals c well then a equals c they all equal each other don't they okay so remember the prefix circum means around so circumcenter circumscribed remember it means around okay now using properties of perpendicular bisectors we've got this drawing we've got the pink perpendicular bisectors we've got the blue lines coming from the vertices meeting at the circumcenter z and we can see that this zl is 9.5 hk is 18.6 and this gz is 19.9 and jm is 14.5 so we see these okay so we know that these pink lines are the perpendicular bisectors of triangle ghj we need to find hz this one up here well z is the circumcenter of triangle ghj okay and by the circumcenter theorem z is equidistant from the vertices of ghj and we know gz is 19.9 and if all these blue lines are equal to each other because they're equidistant well then h z equals gz by the circumcenter theorem so hz equals 19.9 finding the circumcenter of a triangle so take a look at this we've got our y-axis and our x-axis and we've got this triangle rso it's a right triangle in the second quadrant okay and find the circumcenter of triangle rso with vertices r is at negative six zero s is at zero four and o is at the origin zero zero and we graph the triangle according to these ordered pairs we find equations for two perpendicular bisectors and since the two sides of the triangles lie along the axes which was great that it does because it makes our life easier we use the graph to find the perpendicular bisectors of these two sides so look at the perpendicular bisector of segment ro right here if that's a negative six well then the bisector would be at negative three half of it wouldn't it so x equals negative three and the perpendicular bisector for os if this is 4 then the bisector would be a 2. so y equals 2. so here's the same drawing so we didn't have to stretch so now number 3 we find the intersection of the two equations x equals three y e negative three and y equals two and the lines x equals negative three and y equals two intersect at negative three two the circumcenter of triangle rso and it's a right triangle so see it's out how it's on the hypotenuse all right our next lesson is in center theorem and inscribed circles that's lesson 5.2 b do me a favor and hit that like button i'd really appreciate it and i hope you're doing well and i'll see you next time bye oh
14987
https://www.frontiersin.org/journals/oncology-reviews/articles/10.3389/or.2025.1549416/full
Your new experience awaits. Try the new design now and help us make it even better REVIEW article Oncol. Rev., 15 May 2025 Sec. Oncology Reviews: Reviews Volume 19 - 2025 | A brief review of Lynch syndrome: understanding the dual cancer risk between endometrial and colorectal cancer Sneha Pallatt1Sibin Nambidi1Subhamay Adhikary1Antara Banerjee1Surajit Pathak1Asim K. Duttaroy2 1Medical Biotechnology Lab, Faculty of Allied Health Sciences, Chettinad Academy of Research and Education (CARE), Chettinad Hospital and Research Institute (CHRI), Chennai, India 2Department of Nutrition, Institute of Basic Medical Sciences, Faculty of Medicine, University of Oslo, Oslo, Norway Lynch syndrome (LS) is an autosomal dominant disorder caused by germline mutations in DNA mismatch repair (MMR) genes. These mutations result in frameshift alterations, leading to the accumulation of errors within microsatellites. Individuals with LS have an elevated risk of developing colorectal and distant malignancies, including endometrial cancer (EC), which is one of the most common cancer associated with LS. Despite its significance, the association between EC and LS is often underexplored. Given the slow progression of colorectal cancer (CRC), there is an opportunity for early detection and intervention, which can aid in reducing both incidence and mortality through the identification and management of pre-malignant lesions and early-stage tumors in colorectum/endometrium. Recognizing individuals with a heightened risk of CRC is essential for implementing personalized screening strategies. This review summarizes the original research work on LS to find out the correlation of CRC following an endometrial cancer diagnosis in individuals with MMR gene mutations, may involve refine treatment strategies and moreover this review may help clinicians and researchers to get an up-to date information on LS and its advanced treatment possibilities. Highlights • This review comprehensively summarizes the current research findings on LS and possible correlation between CRC development following EC in individuals with MMR gene mutations. • This review discussed the genetic and molecular pathways, such as MMR gene mutations and microsatellite instability (MSI), that drive the development of both EC and CRC. • This review finds the key points regarding the role of early detection and surveillance strategies in LS carriers from the original research data available. 1 Overview of Lynch syndrome and associated cancer risks Lynch syndrome (LS) is a hereditary condition that predisposes individuals to various malignancies, most notably colorectal cancer (CRC) and endometrial cancer (EC) (1). This autosomal dominant disorder is characterized by an increased cancer risk due to defects in DNA mismatch repair (MMR), which compromises genomic stability (2). Microsatellite instability (MSI) is a crucial screening factor for Lynch-associated tumors and underscores the aggressive and rapid progression of these cancers compared to sporadic cases (3, 4). A tumor is classified as microsatellite instability-high (MSI-H) when mutations are detected in two or more of the five microsatellite sequences within the tumor DNA. If only one of these five sequences is altered, the tumor is categorized as microsatellite instability-low (MSI-L). When none of the microsatellite sequences exhibit mutations, the tumor is considered microsatellite stable (MSS) (5). In cases where a tumor is identified as MSI-L, further testing with an extended panel of microsatellite markers is recommended to ensure precise classification (6). In LS, MSI-H tumors are primarily caused by germline mutations, while somatic mutations in the MLH1 and MSH2 genes are observed in only a small percentage of sporadic cases (7). The most common explanation for MSI-H tumors in sporadic cases is the silencing of the MLH1 gene by promoter hyper-methylation, a phenomenon also observed in LS. Additionally, MSI-H tumors are strongly associated with the loss of MLH1 protein expression in sporadic tumors, whereas familial tumors often exhibit a loss of both MLH1 and MSH2 protein expression (8). These genetic alterations create genomic instability, thereby expediting the progression of CRC in patients with LS, frequently advancing from adenoma to carcinoma in an approximate timeframe of 2 years, in stark contrast to the decade-long evolution observed in sporadic cases (9). Beyond LS, additional hereditary syndromes, exemplified by Cowden syndrome, which is marked by mutations in phosphatase and tensin homolog (PTEN), further enhance the risk of developing EC. Lifestyle determinants, such as obesity, physical inactivity, and specific dietary habits, exacerbate the likelihood of both EC and CRC, underscoring the necessity for comprehensive preventive measures (10–12). A thorough comprehension of the interrelated risks associated with EC and CRC in LS is essential for the enhancement of early detection and therapeutic management. The identification of common genetic mutations and molecular pathways not only augments diagnostic accuracy but also facilitates the development of targeted therapeutic interventions that are efficacious against both forms of cancer. Understanding the genetic and molecular factors underlying this syndrome is crucial for early detection and effective management of affected individuals. This review seeks to elucidate these interconnections, with the objective of informing clinical guidelines and improving prognostic outcomes for individuals afflicted with LS. 2 LS: mechanism and impact Two major criteria are followed to classify individuals with LS, namely, Amsterdam Criteria II and Revised Bethesda Criteria mutations. The Amsterdam II criteria serve as a guideline for identifying families at high risk for LS, an autosomal dominant disorder that increases susceptibility to cancer. According to these criteria, a family must have at least three members diagnosed with cancers associated with LS, with at least one being a first-degree relative of the other two. Additionally, the disease should affect at least two successive generations, and at least one of the diagnosed individuals may have developed cancer before the age of 50. A confirmed pathological examination is required to verify the presence of tumors, and familial adenomatous polyposis must be ruled out as a possible cause (13). Similarly, Revised Bethesda Criteria is designed to recognize individuals with CRC who may require further evaluation for MSI and serve as a screening tool for LS. These guidelines assist in determining whether a patient’s tumor may be linked to MMR gene mutations, thereby indicating the need for additional genetic testing. One of the key indicators is early-onset CRC, where patients diagnosed before the age of 50 years require additional assessment due to an increased likelihood of hereditary cancer predisposition. Another critical criterion is the presence of synchronous or metachronous LS-associated malignancies, which include cancers of the colorectum, endometrium, stomach, ovaries, small intestine, biliary tract, ureter, or renal pelvis, occurring either concurrently or at different time points, necessitating genetic screening (Figure 1). Additionally, tumors exhibiting MSI-H histopathological features, such as mucinous differentiation, signet-ring cells, Crohn’s-like lymphocytic infiltration, or tumor-infiltrating lymphocytes, particularly when diagnosed before 60 years of age, suggest potential underlying MMR gene mutations and warrant further molecular analysis. Furthermore, a family history of early-onset CRC or LS-associated cancers in a first-degree relative (parent, sibling, or child) diagnosed before 50 years of age serves as another significant criterion for genetic testing. Lastly, the occurrence of CRC or other LS-associated malignancies in at least two first- or second-degree relatives (including grandparents, aunts, uncles, nephews, nieces, or grandchildren) at any age provides further justification for comprehensive genetic evaluation to identify hereditary cancer risks (14, 15). MSI results in changes in the length of microsatellites—short repetitive DNA sequences—and contributes to genomic instability, which drives tumorigenesis by enabling mutations in key oncogenes and tumor-suppressor genes such as TGF-βR2, BAX, and PTEN. MSI-related mutations in TGF-βR2 impair cell proliferation regulation, while alterations in BAX hinder apoptosis, fostering tumor growth (16, 17). Figure 1 Figure 1. Cancer associated with Lynch syndrome in male and female. 2.1 Cancer spectrum and associated risks based on MMR gene variants Investigations delineate a significant convergence in the genetic and molecular frameworks that support both EC and CRC. Mutations within mismatch repair genes, including MLH1, MSH2, MSH6, PMS1, and PMS2, play a crucial role in the origin of both malignancies (18). The risk and spectrum of cancers in LS vary depending on which MMR gene harbors the pathogenic variant, with each conferring distinct cancer risks and characteristics. 2.1.1 MutL homolog 1(MLH1) and MutL homolog 2 (MLH2) Individuals with pathogenic variants in MLH1 and MSH2 have the highest lifetime risk of CRC and EC, estimated between 40% and 80% (19). These individuals are also predisposed to distant colonic malignancies, including gastric, ovarian, urinary tract, hepatobiliary, and small bowel cancers. Among these, stomach cancer risk is particularly high in MLH1 mutation carriers, with MSH2 mutation carriers exhibiting a relatively lower but still significant risk (20). The variation in stomach cancer incidence between MLH1 and MSH2 carriers may be attributed to age-specific hazard ratio (HR) differences, a younger onset for MLH1 carriers, or a higher representation of MLH1 mutations among gastric cancer cases (21). Additionally, there is an increasing evidence for higher incidences of pancreatic cancer in LS carriers, as well as potential associations with breast and prostate cancers, given their frequent presentation with MMR deficiency in Lynch families (22). Moreover, a risk of cervical cancer has been noted, though some cases may be misclassified adenocarcinomas of the lower uterine segment rather than true cervical carcinomas. While the overall cumulative risks of LS-related cancers by age 70 are similar across MLH1, and MSH2 mutation carriers, each mutated gene confers a unique cancer risk profile (23). 2.1.2 MutS homolog 6 (MSH6) Carriers of pathogenic MSH6 mutations exhibit a distinct cancer risk profile within LS. Recent studies estimate the lifetime CRC risk for MSH6 mutation carriers to range between 10% and 44%, typically presenting at a later age compared to MLH1 or MSH2 mutation carriers. However, the risk of EC is significantly elevated, with lifetime risks between 16% and 49%, often exceeding the risk of CRC (24). Additionally, MSH6 mutations are associated with an increased but variable risk of ovarian cancer (25). Emerging evidence also suggests a heightened susceptibility to breast cancer, indicating a two-fold increased risk among MSH6 and PMS2 carriers compared to the general population. Other malignancies, including urinary tract, stomach, and small intestine cancers, have also been linked to MSH6 mutations, though they occur less frequently (26). 2.1.3 PMS1 homolog 2 (PMS2) A defective PMS2 gene associated with LS substantially elevates the possibilities of developing specific cancers, particularly CRC and EC in comparison to the general population. However, pathogenic PMS2 variants are associated with the lowest cancer risks among LS-related MMR gene mutations. Studies indicate that the lifetime risk of CRC in individuals with PMS2 mutations ranges between 10% and 20%, significantly lower than that of MLH1, MSH2, and MSH6 mutation carriers (27). Additionally, EC risk in PMS2 carriers is estimated to be between 12% and 15%, also lower than those associated with other MMR genes. The later onset of CRC, typically occurring after age 50, contributes to a less aggressive screening approach. Unlike carriers of MLH1 or MSH2 mutations, who require biennial colonoscopy starting at age 20–25, PMS2 mutation carriers may begin screening at age 35–40, with colonoscopies recommended every 2–3 years instead of annually (28). Recent studies have also suggested that PMS2 carriers may have a lower risk of extra-colonic malignancies, though upper gastrointestinal, ovarian, and urinary tract cancers have been reported at lower frequencies. Due to the reduced overall cancer risk, prophylactic surgeries, such as hysterectomy, are not routinely recommended for PMS2 carriers unless there is a strong family history of EC. PMS2-deficient CRCs tend to exhibit more aggressive behavior and a worse prognosis compared to other MMR-deficient CRCs (29). This distinction is partly attributed to lower levels of intra-tumoral immune infiltration, suggesting that PMS2-deficient CRCs share more biological characteristics with sporadic MMR-proficient CRCs than with other LS-associated CRCs. While it was previously believed that carriers of germline pathogenic PMS2 variants represented a small minority of LS patients, recent studies have challenged this assumption. New investigations indicate that pathogenic PMS2 carriers have the highest population frequency among the four MMR genes, with an estimated prevalence of 1 in 714 individuals (30). Furthermore, studies utilizing IHC staining in CRCs from population-based cohorts have demonstrated that isolated PMS2 loss of expression, indicative of pathogenic PMS2 variants, is observed in 0.5%–1.5% of unselected CRCs. Among MSI CRCs, the fraction of isolated PMS2 loss varies between 1% and 8%, with more than half of these tumors being linked to germline pathogenic PMS2 variants. These findings underscore the importance of refining screening strategies and risk assessment for PMS2-deficient CRCs to improve early detection and patient management (31). 2.1.4 Epithelial cell adhesion molecule (EPCAM) The EPCAM gene is not an MMR gene, but deletions in EPCAM lead to MSH2 inactivation due to promoter hypermethylation, resulting in a cancer risk profile similar to MSH2 variants (32). Individuals with EPCAM deletions have an increased risk of CRC, with studies reporting a lifetime risk of approximately 75%, comparable to MSH2 mutation carriers. Additionally, the risk of EC in female carriers is estimated to be around 30%, reinforcing the need for targeted surveillance. Unlike other LS-associated mutations, EPCAM deletions do not directly affect DNA mismatch repair function but cause epigenetic silencing of MSH2, leading to a deficiency in MMR and MSI-H (33). This makes individuals with EPCAM deletions susceptible to other LS-associated cancers, including ovarian, gastric, small bowel, and urinary tract malignancies. Colonoscopy screening every 1–2 years starting at age 25 is recommended for EPCAM carriers, along with EC surveillance. However, because EPCAM deletions predominantly affect MSH2 expression, further research is needed to refine cancer risk estimates and optimize screening protocols for affected individuals (34). The autosomal dominant inheritance of LS results in a 50% probability of passing the condition to offspring, making genetic testing and counseling essential for at-risk families. Early and regular surveillance, such as colonoscopy starting at 20–25 years of age or 2–5 years before the youngest diagnosed family member, significantly reduces cancer-related mortality (35). Prophylactic surgical options, such as colectomy and hysterectomy, are also available for individuals at high risk. Importantly, tumors with MSI-H phenotypes in LS respond well to immune checkpoint inhibitors, particularly anti-PD-1/PD-L1 therapies, offering a targeted treatment approach (36). Advances in molecular diagnostics, including MSI testing and immunohistochemistry for MMR proteins, have greatly improved LS management, enabling timely interventions and personalized treatments to mitigate its impact on affected individuals and their families (37). 3 Endometrial cancer: a central player in LS’s cancer spectrum EC represents a quintessential neoplasm within LS, frequently manifesting as the chief malignancy preceding the emergence of other tumors associated with LS, including CRC. It is estimated that approximately 40%–60% of female individuals with LS will experience the development of EC during their lifetimes, with the mean age of onset occurring 10–15 years earlier than that observed in sporadic, non-syndromic instances (38). The presence of MSI and germline mutations in MMR genes, particularly in MSH2 and MSH6, is markedly prevalent in Lynch-associated EC, which contributes to genomic instability and tumorigenesis (39). In contrast to sporadic EC, which often relies on estrogen for its progression, Lynch-associated EC is generally non-estrogen-dependent and displays unique molecular subtypes, predominantly categorized as high-grade endometrioid carcinomas. Moreover, Lynch-associated EC is distinguished by a hyper-mutated phenotype, resulting in a high frequency of mutations in genes such as PTEN, KRAS, and PIK3CA (40). Estrogen-dependent EC is linked to factors that elevate lifetime exposure to endogenous or exogenous estrogens. These factors include a higher body mass index (BMI), estrogen replacement therapy, estrogen-secreting tumors, chronic anovulation, tamoxifen therapy, early onset of menstruation, and delayed menopause, all of which contribute to endometrial proliferation stimulated by estrogen (41). In contrast, non-estrogen-dependent EC is not associated with unopposed estrogen exposure and is linked to risk factors such as lower BMI, nulliparity, a history of breast cancer, and being over 55 years old at the time of diagnosis (42). 4 Colorectal cancer: insights from the LS perspective Although CRC predominantly targets individuals aged 50 and above, those diagnosed with LS experience a considerably elevated risk and are frequently identified at a younger age due to the hereditary predisposition associated with their condition. Approximately 80% of hereditary CRC cases, particularly those associated with LS, arise via the mutation or alternative pathway linked to these MMR gene alterations. This is in contrast to the suppressor or classic pathway, which is responsible for around 80% of sporadic CRC instances, often connected to mutations in genes such as APC, p53, and KRAS (43). CRC associated with LS usually involves activation of the WNT/β-catenin signaling pathway due to secondary mutations in APC or β-catenin (CTNNB1), further advancing tumorigenic processes (9). In individuals diagnosed with LS, CRC typically initiates as an adenomatous polyp within the intestinal mucosa, with malignant progression occurring at a considerably accelerated rate compared to sporadic cases (44). The typical duration from adenoma to carcinoma in Lynch-associated CRC is roughly 2 years, whereas this timeline extends to approximately 10 years for sporadic cases (27, 28). Unlike sporadic CRC, which often occurs in the distal colon and rectum, LS-associated CRCs predominantly arise in the proximal (right-sided) colon, particularly in the cecum and ascending colon (45). These tumors frequently display mucinous differentiation or signet-ring cell morphology and are poorly differentiated or undifferentiated, highlighting their aggressive nature. A characteristic immune response, marked by peri-tumoral and intra-tumorally lymphoid aggregates, is commonly observed, suggesting active immune surveillance against tumor cells. Additionally, an increased presence of intraepithelial lymphocytes further reinforces their immunogenic nature, indicating a potential for responsiveness to immunotherapy (46). Some LS-associated CRCs also exhibit serrated glandular architecture or medullary carcinoma-like features, which are relatively uncommon in sporadic cases. A defining aspect of LS-associated CRCs is their rapid progression, transitioning from adenomatous polyps to invasive carcinoma within approximately 2 years, in contrast to the decade-long progression seen in sporadic CRCs (47). The clinical manifestations of CRC in patients possessing LS encompass symptoms including abdominal discomfort, alterations in bowel patterns, weight reduction, nausea, and anemia. Distal tumors are more inclined to induce visible rectal hemorrhage, whereas proximal tumors may lead to occult blood in the feces. In light of the distinctive hereditary risk factors, patients with LS may also exhibit atypical signs of metastasis, such as lymphadenopathy (e.g., Virchow’s node) or hepatomegaly (48). 5 Epidemiological insights and risk factors for EC and CRC The epidemiology and risk factors for EC and CRC highlight unique and overlapping elements contributing to their development and prevalence. EC primarily impacts women in the postmenopausal stage, with a higher occurrence noted in correlation with advancing age (49). Risk determinants for endometrial carcinoma are closely associated with hormonal dysregulation, notably conditions that lead to extended exposure to estrogen without the counterbalancing effects of progesterone. Obesity, polycystic ovary syndrome (PCOS), nulliparity, and late menopause are significant contributors, as they increase endogenous estrogen levels (50). Estrogen promotes the growth of endometrial cells, raising the risk of hyperplasia (abnormal cell growth) and ultimately leading to EC. Progesterone opposes this effect by balancing estrogen’s action. It induces differentiation in endometrial cells, inhibits proliferation, and facilitates the shedding of the endometrial lining as seen during menstruation (51). When exogenous estrogen is given, such as in hormone replacement therapy (HRT) for postmenopausal women, without the addition of progesterone (unopposed estrogen therapy), the endometrial lining undergoes continuous stimulation without progesterone’s regulatory effects. This prolonged exposure can result in endometrial hyperplasia and markedly heighten the risk of developing EC. Lifestyle factors, including diets high in saturated fats and a lack of physical activity, further amplify this risk (52). CRC has both genetic and environmental factors playing crucial roles in its epidemiology (53). Lifestyle factors such as diet, physical activity, and smoking are important modifiable risk factors (54). Diets high in red and processed meats, low fiber intake, and excessive alcohol consumption are associated with increased CRC risk. Additionally, chronic conditions such as inflammatory bowel disease (IBD), including Crohn’s disease and ulcerative colitis, elevate the risk of CRC (55). Women diagnosed with LS exhibit a markedly elevated probability of developing EC as their initial malignancy, frequently preceding the occurrence of CRC. This hereditary association emphasizes the critical necessity for systematic screening and vigilant surveillance in individuals possessing a familial predisposition to these malignancies (56,57). 6 LS associated EC and CRC genes A comprehensive analysis (Li et al., 2022) of data from the TCGA database revealed significant differences in the molecular mechanisms driving the progression of LS to CRC or EC. While LS-CRC progression is closely associated with differential gene expression (DEGs), LS-EC development may rely more on gene methylation processes. For instance, COL11A1, correlated with MSH6 mutations, serves as a key marker for distinguishing MSI-H and microsatellite stable (MSS). CRC, playing a role in extracellular matrix interactions and tumor development (42). From the TCGA database, specific genes were identified that overlap with LS and CRC (SGs-LC) and LS and EC (SGs-LE), comprising 493 and 99 genes, respectively (Li et al., 2022). Enrichment analyses revealed distinct pathways for SGs-LC and SGs-LE, with shared associations in peroxisomal pathways but differing in other functional pathways. For SGs-LC, pathways related to peroxisomal activity and extracellular matrix remodeling may play pivotal roles, as evidenced by genes like CST2 and COL18A1 (58). In contrast, SGs-LE genes like LY6K and MIR27B are implicated in immune response modulation or hormone signaling, both critical in EC. Several genes exhibited notable roles in LS-associated tumor progression. SST, a regulatory peptide, inhibits cellular mitosis and tumor growth in various cancers, including CRC. Similarly, KIF20A and NUF2, implicated in mitotic regulation and tumorigenesis, show significant roles in both CRC and EC (58). Specific survival analyses further underscored unique and overlapping genetic markers influencing patient outcomes in CRC and EC. Genes like COL18A1 and HTR4 modulate the tumor microenvironment and signal transduction in CRC, while CDC45 and WDR31 influence cellular replication processes in EC. SGs-LC, genes such as AADACL2, DHRS7C, KRT24, and LINC00460 exhibit highly significant p-values (59). Both upregulated (e.g., LINC00460) and downregulated (e.g., AADACL2) expressions have been noted, with CST2 being significantly upregulated, suggesting its potential role in CRC tumor progression. Conversely, downregulated genes like NPY2R and KHDRBS2 may contribute to CRC development through their suppression (59). Additional candidates, such as CDH10 and LINC02616, are involved in CRC-specific pathways related to adhesion and cellular communication. For SGs-LE, genes like LINC02691, MIR27B, and LY6K are characterized by less pronounced but still significant differential expression. Notably, IGF2-AS is upregulated, potentially influencing the insulin-like growth factor (IGF) signaling pathway in EC. Meanwhile, genes like ADAMTS9-AS2 and SLC10A4 suggest potential epigenetic or regulatory functions in EC (Figure 2) (59). Figure 2 Figure 2. Lynch syndrome-associated genes specific to EC and CRC, highlighting the shared genes between EC and CRC. 7 Molecular alteration and dysregulation pathways in EC: distinction between endometrioid EC and serous EC Endometrial endometrioid carcinomas (EECs) are marked by frequent genetic mutations and pathway dysregulations that drive their development and progression (60). EECs often exhibit MSI present in about 20% of unselected endometrial tumors and more common in EECs than non-EECs (61). This leads to mutations in various genes involved in tumorigenesis, including Birt-Hogg-Dube (BHD), BAX, insulin-like growth factor type 2 receptor (IGFIIR), Transforming Growth Factor-β Receptor II (TGFβ-RII), and ataxia telangiectasia and Rad3-related (ATR), many of which are part of the DNA damage response (62, 63). The PI3K-PTEN-AKT pathway is also significantly altered in over 80% of EECs, with high-frequency mutations in PIK3R1, PIK3CA, and PTEN, as well as additional alterations like PIK3CA amplification and PTEN promoter methylation. These mutations result in dysregulated cell proliferation, growth, and survival (64). EECs also feature alterations in the RAS-RAF-MAPK pathway, with KRAS mutations present in 18% of cases, often coexisting with mutations in PTEN, PIK3CA, and PIK3R1 (65). BRAF mutations are rare, occurring in only 1% of EECs. fibroblast growth factor receptor 2 (FGFR2) mutations, found in 12% of EECs, are mostly missense mutations and are mutually exclusive with KRAS mutations but frequently co-occur with PTEN mutations, making FGFR2 a potential therapeutic target (66, 67). The WNT signaling pathway is frequently disrupted through CTNNB1 (β-catenin) mutations in up to 45% of EECs (68). Additionally, ARID1A gene mutations, affecting the BAF250a component of the switch/sucrose nonfermenting (SWI/SNF) chromatin-remodeling complex, are found in approximately 40% of low-grade and 39% of high-grade EECs (Figure 3) (69). Figure 3 Figure 3. Key Molecular Pathways in Endometrial Carcinoma: ARID1A, PTEN, and Wnt Signaling Mutations, and PI3K Activation lead to Tumorigenesis. Receptor Tyrosine Kinases (RTK) – Growth factors, such as VEGFR/PDGFR, activate RTKs, triggering the PI3K pathway. PI3K Activation–This leads to the conversion of PIP2 to PIP3, which PTEN normally regulates. However, PTEN inactivation disrupts this control, contributing to tumorigenesis. ARID1A Mutation–Mutations in ARID1A disrupt the function of the BAF250a complex, a critical player in chromatin remodeling, further contributing to gene dysregulation and cancer progression. Wnt Pathway Mutation–Mutations in the Wnt signaling pathway also play a role by activating downstream targets that promote cell proliferation and inhibit normal gene regulatory mechanisms. Serous endometrial carcinomas (ECs) exhibit distinct genetic profiles and clinical behaviors compared to EECs (70). Serous ECs are often characterized by aneuploidy and frequent alterations such as TP53 mutations, overexpression of Cyclin-E and Erb-B2 Receptor Tyrosine Kinase 2 (ERBB2), and p16 dysregulation (71, 72). TP53 mutations are the most common genetic changes in serous ECs, occurring in 53%–90% of tumors, and are often found in early precancerous stages, suggesting a stepwise progression to malignancy (73). These mutations are less common in EECs, with a higher frequency in high-grade cases. The protein phosphatase 2 scaffold subunit Alpha (PPP2R1A) gene, which encodes the scaffolding subunit of the protein phosphatase-2A (PP2A) enzyme, is also frequently mutated in serous ECs (17%–41%) but less so in EECs (5%–7%). These mutations may impair PP2A’s tumor suppressor function, potentially contributing to tumorigenesis (74–76). The overexpression and amplification of HER-2/ERBB2 are notably more prevalent in serous endometrial carcinomas (ECs) compared to endometrioid endometrial carcinomas (EECs). Research indicates that HER-2/ERBB2 overexpression occurs in 17%–80% of serous EC cases, with gene amplification reported in 17%–42% of these tumors (77, 78). HER-2/ERBB2 status in serous ECs is associated with shorter survival times, suggesting its predictive value (79, 80). Additionally, HER-2/ERBB2-positive serous ECs are more frequently observed in patients with a previous history of breast cancer (81). 7.1 Epigenetic disruption in LS-Associated EC: critical role of aberrant methylation Aberrant methylation patterns play a critical role in the tumorigenesis of EC, particularly in cases associated with LS. Hypermethylation of tumor suppressor genes and hypomethylation of oncogenes disrupt key cellular pathways, including proliferation, apoptosis, and immune evasion (82). The MLH1 gene is frequently hypermethylated in EC, especially in MSI-H tumors. This methylation silences MLH1 expression, impairing the DNA mismatch repair pathway and allowing the accumulation of genetic mutations. This deficiency in mismatch repair is a hallmark of LS-associated EC, resulting in a high mutational burden and tumor heterogeneity (83). Other tumor suppressor genes commonly affected by hypermethylation include PTEN, RASSF1A, and CDKN2A. Hypermethylation of the PTEN promoter reduces its expression, disrupting the PI3K/AKT pathway, which contributes to uncontrolled cellular proliferation and survival (84). Similarly, hypermethylation of RASSF1A silences its role in regulating cell cycle arrest and apoptosis, thereby enhancing cell proliferation and suppressing apoptotic signaling. Methylation of CDKN2A silences this cyclin-dependent kinase inhibitor, disrupting cell cycle regulation and enabling unchecked cellular growth (84). In contrast, global DNA hypomethylation can activate oncogenes such as C-MYC, which promotes increased proliferation, metabolic reprogramming, and evasion of apoptosis. Additionally, hypomethylation of MEST (Mesoderm-Specific Transcript) leads to its overexpression, enhancing oncogenic signaling and tumor progression (85). In the context of hormone signaling, hypermethylation of HOXA10 and HOXA11, genes essential for endometrial development, disrupts critical pathways involved in maintaining endometrial homeostasis. These changes alter estrogen receptor (ER) and progesterone receptor (PR) signaling, further contributing to hormone-driven progression of EC (86). Methylation also modulates immune response pathways, as seen with the hypermethylation of SOCS3 (Suppressor of Cytokine Signaling 3), which promotes immune evasion by altering cytokine signalling (87). The clinical implications of these methylation changes in EC are profound. Hypermethylated genes such as MLH1, PTEN, and RASSF1A show promise as diagnostic biomarkers for early detection of EC. Methylation patterns of genes like CDKN2A and MLH1 also serve as prognostic indicators, correlating with tumor stage, grade, and patient outcomes. Notably, MSI-H EC tumors, characterized by MLH1 hypermethylation, often respond favorably to immunotherapy due to their high mutational burden and resultant neoantigen expression (88). 8 Genetic mutation and pathways alteration driving CRC progression Ahadova et al. (2018) proposed that three distinct pathways explain CRC development in Lynch patients, in contrast to the widely accepted idea that mutations in the Wnt/β-catenin pathway underlie all CRC development in LS. The three signalling pathways frequently affected in LS CRCs are the Wnt/β-catenin the RAF/MEK/ERK and the PI3K/PTEN/AKT pathways, all of which aid in a cell’s road to malignancy when in a deregulated state (89). APC mutations are distributed across the gene and both alleles need to be affected, while CTNNB1 shows gain-of-function mutations usually located in exon 3, an exon that encodes a regulatory domain normally phosphorylated by GSK-3B (90). Additionally, polymorphisms in CCND1, TP53, IGF1, and AURKA influenced age-associated risk for CRC in LS. Reeves et al 2008. confirmed that the IGF1 polymorphism is an important modifier of disease onset in LS. Talseth et al 2008. reported that the CCND1 polymorphism was associated with a significant difference in age of disease onset in patients harboring MSH2 mutations, which was not observed in MLH1 mutation carriers. A shorter CA-repeats is associated with an earlier age at onset of CRC in LS (91–93). The pathway-based approach of Chen et al. 2009. to elucidate genetic risk modifiers influencing age of onset of CRC in patients with LS using CART analysis (classification and regression tree) identified CDKN2A C580T and IGF1 CA-repeat as the initial splits, indicating that the polymorphisms in these genes are the most informative for separating patients into those LS patients who are more likely to develop CRC early versus those who are more likely to develop CRC at a later age. The gene–gene interaction between E2F2 and AURKA as the influence of the AURKA SNP on risk varies depending on the E2F2 genotype (94). A particularly notable finding is that individuals with biallelic mutations in the MUTYH gene face a significantly elevated lifetime risk of developing CRC, with estimates ranging from a 28-fold increase reported by Lubbe et al. (2009) (95). Similarly, to a 93-fold increase was reported and a near-complete penetrance by the age of 60. Moreover, even monoallelic carriers of pathogenic or likely pathogenic MUTYH variants exhibit a moderately increased CRC risk—approximately 1.68-fold. Some monoallelic carriers also harbored mutations in other base excision repair (BER) genes, such as OGG1 and MTH1, underscoring the role that alterations in low-penetrance genes may play in CRC development (96, 97). Statistical analyses from the research findings estimate that approximately 15 or fewer of these mutations are critical drivers of tumor development. Key driver genes in CRC include APC, KRAS, NRAS, BRAF, PIK3CA, and PTEN (98, 99). APC acts as a gatekeeper gene, initiating adenoma formation when mutated. Approximately 40% of CRC harbor KRAS mutations, predominantly at codons 12 and 13, which are critical in the progression of advanced CRC cells (100). NRAS mutations, although less common, occur at codons 12, 13, or 61. BRAF mutations, found in 5%–10% of CRCs, are associated with the CpG island methylator phenotype (CIMP) and an altered adenoma-carcinoma progression pathway (101). The interplay between these mutations and the resulting disruptions in signaling pathways provides valuable insights into the mechanisms of CRC development and progression, paving the way for targeted treatments and better diagnostic tools (102). 9 Mechanism of cancer initiation in EC and progression to CRC Although LS is primarily driven by mutations in MMR genes (MLH1, MSH2, MSH6, and PMS2), several other genes contribute to EC development in LS patients. These genes regulate crucial cellular processes such as tumor suppression, chromatin remodeling, and cell signaling, which, when disrupted, accelerate tumorigenesis. One of the earliest molecular events in LS-associated EC is the inactivation of PTEN. Loss of PTEN function results in uncontrolled cell proliferation, increased survival, and resistance to apoptosis, hallmark features of cancer progression. Like sporadic EC, PTEN mutations are common in LS-associated cases and contribute to early tumorigenesis (103). Additionally, MSI-induced frameshift mutations in TGFBR2 disrupt TGF-β signaling, which normally functions as a tumor suppressor by regulating cell growth and differentiation. The disruption of this pathway allows for uncontrolled cellular proliferation. The loss of TGF-β signaling leads to unchecked cell proliferation and enhances tumor progression (104). The PI3K/AKT signaling pathway is further affected by mutations in PIK3CA. PIK3CA mutations contribute to sustained activation of the pathway, driving tumor growth and increasing resistance to apoptosis (105). Additionally, ARID1A, a chromatin remodeling gene, is frequently mutated in MSI-H tumors, including LS-associated EC. Loss of ARID1A function disrupts DNA repair mechanisms, leading to genomic instability and increased tumor mutation rates (106). Other oncogenic mutations found in LS-associated EC include KRAS, which affects the RAS/MAPK signaling pathway and promotes uncontrolled cell growth (107, 108). Additionally, overexpression of SOX9, a transcription factor involved in stem cell maintenance and differentiation, has been linked to increased tumorigenicity in MSI-H EC (109). The Wnt/β-catenin signaling pathway, a critical cell proliferation and differentiation regulator, is frequently altered in LS-associated ECMutations in CTNNB1, which encodes β-catenin, lead to aberrant activation of this pathway, further supporting tumorigenesis (110). Additionally, RNF43, a gene that negatively regulates Wnt signaling, is often mutated in MSI-H ECs, further enhancing tumor growth; the prevalence of truncating mutations at this locus, combined with the rarity of synonymous mutations, strongly indicates that RNF43 mutations have been positively selected during the evolution of EC and CRC (Figure 4) (111). Given the shared genetic basis of LS-associated EC and CRC, it is likely that their molecular pathways exhibit significant similarities. In approximately 50% of LS cases, EC is diagnosed before CRC in instances where the two malignancies are not synchronous, rendering CRC the second primary cancer in these patients. This sequential pattern of cancer development is likely driven by LS’s shared underlying genetic alterations characteristic. As a result, EC may function as a sentinel malignancy, serving as an early indicator of LS in affected individuals and facilitating the identification of at-risk family members through genetic screening and surveillance (38, 112). Figure 4 Figure 4. Flowchart illustrating the molecular progression of endometrial cancer (EC) to colorectal cancer (CRC) in Lynch Syndrome (LS). Germline mutations in mismatch repair (MMR) genes (MLH1, MSH2, MSH6, PMS2, EPCAM) lead to MMR deficiency and microsatellite instability (MSI). This instability triggers the initiation of EC via activation of the PI3K/AKT and MAPK pathways, with involvement of genes such as PTEN, PIK3CA, and KRAS. Persistent MSI results in secondary malignancies, including CRC, through mutations in APC, CTNNB1, TGFBR2, and RNF43, promoting Wnt/β-catenin activation, TGF-β pathway disruption, and continued PI3K/AKT signaling. The rapid adenoma–carcinoma sequence in LS accelerates CRC progression. 10 Uncovering the CRC risk in EC: clinical implications Following the diagnosis of EC as the primary cancer, individuals may face a heightened risk of developing a second primary cancer due to the shared genetic predispositions, environmental exposures, or the impact of treatments for the initial cancer. To address this, there is an immediate necessity for recommendations based on clinical evidence that focus on preventive strategies, including regular screening for secondary cancers among those who have survived from the primary cancer (113). Individuals with LS face up to an 80% lifetime risk of developing CRC (114). Therefore, genetic testing and CRC screening are strongly recommended, if any family member is diagnosed with LS (115, 116). In 2012, a study (Singh et al 2012) was conducted to assess CRC in women diagnosed with EC. The research comprised a total of 267 women with EC, of whom 2.4% were found to have CRC. Additionally, 13.6% had significant pathological findings, such as adenomatous polyps and tubulovillous histology (117). After that Singh et al. (2013) performed a study that included 3,115 women with EC found that women under 50 years of age had a significantly higher risk of developing CRC of any type, with a hazard ratio (HR) of 4.41 and a 95% confidence interval (CI). The risk was particularly elevated for right-sided CRC, with an HR of 7.48 and a 95% CI. In contrast, no elevated risk of CRC was noted in women aged 51–65 years or older than 65 years. However, women aged 51–65 years with EC had an increased risk of right-sided CRC, with an HR of 2.30 and a 95% CI (116). Another study by Win et al. (2013) reported that women with EC carrying mutations in MMR genes had an elevated risk of developing CRC within the next 20 years. The estimated probability of CRC development was 48%, with a 95% CI. The study also identified a significantly increased risk of CRC, as indicated by a standardized incidence ratio (SIR) of 39.9 (95% CI) compared to the normal population (118). A retrospective cohort study by Liao SC et al. (2021) found that the prevalence of CRC in women with EC was 2.20 times higher compared to controls, with an incidence rate of 1.09 per 1000 person–years. The study also noted that the risk of CRC increased with age, and the hazard ratio for CRC development was highest within 3 years of an EC diagnosis (119). Further (Lai et al., 2021), women diagnosed with EC exhibited significantly elevated SIRs for CRC, irrespective of age. In a sub-site-specific analysis of CRC, EC patients diagnosed before the age of 50 demonstrated higher SIRs for ascending colon. The cumulative incidence of second primary malignancies in EC patients was evaluated over 5, 10, 15, and 20 years of follow-up. Notably, the incidence of CRC showed a progressive increase, rising from 0.7% at 5 years to 3.9% at 20 years. Patients aged ≥50 consistently exhibited a higher incidence than those aged <50, with rates reaching 5.7% and 2.1%, respectively, at 20 years. These findings suggest that EC survivors, particularly those aged ≥50, are at an increased long-term risk of developing CRC (120). This highlights the critical need for ongoing surveillance, risk assessment, and the implementation of targeted preventive strategies in this high-risk population. 11 Prognostic markers of EC and CRC Predictive biomarkers are crucial in predicting disease progression, independent of treatment. These markers are measurable clinical or biological characteristics that provide insight into a patient’s likely outcome. In EC, blood-based prognostic biomarkers have garnered significant interest among healthcare professionals and patients due to their potential for easy assessment (121). Two protein-based biomarkers have emerged as particularly noteworthy in EC prognosis: Human Epididymis protein 4 (HE4) and cancer antigen 125 (CA125). CA125, in particular, has been the focus of multiple investigations. An increasing amount of evidence indicates a correlation between elevated serum CA125 levels and unfavorable clinicopathological features in EC patients (122). Furthermore, research indicates that higher CA125 concentrations may be associated with poorer outcomes in individuals diagnosed with EC (123). HE4 is a glycoprotein that was initially identified in the epididymis but has been shown to be highly expressed in various cancer types, including EC (Table 1) (124). Table 1 Table 1. Biomarkers in endometrial cancer: types, descriptions, and clinical advantages. Hormone receptor status, particularly progesterone receptor (PR) and estrogen (ER) positivity, has been recognized as a key prognostic marker linked to a substantial enhancement in disease-free survival. Mutations in the tumor suppressor gene p53 are prominent in type II ECs, with studies reporting mutations in up to 90% of serous carcinomas. p53 mutations correlate with poor clinical outcomes, including an 11-fold elevated risk of death in multivariate analyses adjusting for lymph node metastasis, grade, histology, and FIGO stage (125). HER2 gene amplification, more common in serous histology, has been identified as a distinct prognostic marker associated with reduced overall survival. The PI3K/AKT/mTOR pathway, affected in over 80% of type I ECs, presents both prognostic significance and therapeutic potential, with PTEN and PIK3CA mutations being key components (126). MSI has shown conflicting prognostic implications, with some studies reporting improved 5-year survival rates, while other studies observed no notable variation in relapse or overall survival. These molecular markers and emerging factors, such as microvascular proliferation, are refining our ability to predict EC outcomes and guide personalized treatment strategies (127). At this point, CRC patient prognosis depends on clinicopathological parameters, with an emphasis on the cancer stage upon diagnosis. The total 5-year rate of survival for stage I is above 90%; it decreases to 70% for the second stage, 58% for stage III, and fewer than 15% for stage IV (128). Prognostic markers are crucial in predicting outcomes and guiding treatment decisions for CRC patients. Carcinoembryonic antigen (CEA), despite its limitations in specificity and accuracy, has shown potential as a distinct prognostic marker for all stages of CRC. A large-scale National Cancer Database data study suggested that serum CEA serves as a reliable prognostic indicator for stage II tumor recurrence (129). Epidermal growth factor receptor (EGFR) expression is observed in 50%–70% of CRC, although its prognostic significance remains inconclusive (130). The homeobox protein CDX2 has emerged as a promising marker of colon cancer cell differentiation and a robust prognostic indicator (Table 2) (131). Table 2 Table 2. Biomarkers in colorectal cancer: types, descriptions, and clinical advantages. Interestingly, conflicting evidence exists regarding Ki67 expression in CRCs, with some studies suggesting that high expression is associated with good clinical outcomes. At the same time, a meta-analysis showed a strong association between elevated Ki-67 expression and reduced overall survival. disease-free survival (132). KRAS, a tyrosine kinase downstream of the EGFR receptor, stands out as the first validated predictive biomarker in colon cancer. These various prognostic markers collectively enhance our understanding of CRC mechanisms and assist in customizing treatment strategies to achieve better patient outcomes (2, 133). 12 Targeted therapeutics for EC in Lynch syndrome In the treatment of EC, especially among patients with LS, therapeutic approaches are customized based on the cancer’s progression and the associated risk of relapse. Radiotherapy is often employed in early-stage EC, while chemotherapy is indicated for cases with high-grade histology or advanced chronic conditions (134). As the disease progresses, the risk of recurrence increases significantly. Notably, recurrent vaginal EC tends to respond well to treatments, frequently utilizing radiation therapy as an effective option. Surgery remains the cornerstone of EC treatment, with total hysterectomy (TH) and bilateral salpingo-oophorectomy (BSO) being the standard procedures (135). TH involves the removal of the uterus and cervix, while BSO eliminates the fallopian tubes and ovaries. In patients with LS, oophorectomy is routinely performed during surgery to exclude the presence of ovarian metastases or primary ovarian tumors, given the elevated risk associated with LS (134). Current surgical options include open surgery, laparotomy, and minimally invasive techniques such as laparoscopic surgery (LS) and robot-assisted surgery (RS), which have demonstrated efficacy and reduced recovery times. For advanced stages, particularly Stage III EC, chemotherapy regimens typically include paclitaxel and carboplatin, with alternatives like ifosfamide combined with paclitaxel or cisplatin being explored (136). Emerging therapeutic pathways in preclinical studies focus on targeting specific molecular mechanisms involved in EC progression, such as cell cycle inhibition, EZH2 inhibition, and modulation of the prorenin pathway. These innovative strategies aim to enhance treatment efficacy and are increasingly integral to personalized medicine approaches. Recent advancements in immunotherapy have notably transformed the treatment landscape for advanced EC (137). The U.S. Food and Drug Administration (FDA) has approved several immune checkpoint inhibitors for this indication, marking significant progress in therapy options for patients, particularly those with dMMR tumors. Notable approvals include durvalumab (Imfinzi) for patients with mismatch repair-deficient tumors, pembrolizumab (Keytruda) for use irrespective of dMMR status, and dostarlimab (Jemperli), which has also been approved for dMMR advanced EC (135). These agents can be employed as first-line therapies or for recurrent cancer after specific prior treatments, significantly expanding the arsenal of options for managing advanced EC and improving patient outcomes, especially for those who previously had limited immunotherapy choices. 13 Targeted therapeutics for CRC in Lynch syndrome Surgical resection continues to be the foremost intervention for CRC, especially in individuals diagnosed with LS, who exhibit an elevated propensity for the onset of CRC at earlier ages and frequently present with more advanced stages of the disease. In scenarios where the malignancy is classified as non-resectable, a multimodal approach encompassing chemotherapy, radiation therapy, and immunotherapy is generally utilized (139, 140). Radiation therapy constitutes an essential element of CRC management, particularly in the context of rectal cancer, as it employs high-energy X-rays to selectively target and obliterate neoplastic cells through the induction of DNA damage, thereby impeding cellular growth and proliferation (141, 142). This modality proves especially advantageous for rectal tumors that are confined, rendering them more susceptible to radiation intervention. Contemporary chemotherapy protocols for CRC frequently incorporate fluoropyrimidine-based agents, such as 5-fluorouracil (5-FU), in conjunction with combination therapies involving agents such as oxaliplatin (OX), irinotecan (IRI), and capecitabine (143). Recent innovations in the management of advanced CRC have increasingly centered on targeted therapies, particularly those aimed at inhibiting angiogenesis. Monoclonal antibodies, including bevacizumab, ramucirumab, and aflibercept, have demonstrated a capacity to improve overall survival metrics when administered alongside standard chemotherapy regimens, offering substantial advantages for patients afflicted with advanced disease, including those with LS (144). Oral therapeutic agents such as regorafenib and trifluridine/tipiracil have surfaced as viable alternatives for patients exhibiting refractory CRC, presenting renewed optimism for individuals with constrained treatment options. Although the survival enhancements associated with these novel agents may appear to be modest, they signify considerable progress within the therapeutic domain, enabling patients to potentially experience extended survival and enhanced quality of life (145). Furthermore, research has indicated a plausible role for estrogen/progestin replacement therapy in postmenopausal women, with antecedent findings suggesting a reduced incidence of CRC linked to these therapies, albeit the underlying mechanisms remain elusive. Given the intersection of hormonal influences and cancer risk, further investigation into this association may be warranted, particularly in LS patients who encounter augmented risks for various malignancies, including CRC (77, 146). Overall, the ongoing advancement of therapeutic strategies for CRC, particularly within the framework of LS, underscores the necessity of personalized treatment modalities customized to the distinct genetic and molecular attributes of tumors, thereby enhancing patient outcomes and survival probabilities. 14 Summarizing the landscape of Lynch syndrome-associated cancers This review underscores the critical link between EC and subsequent CRC in individuals with LS, primarily driven by mutations in MMR genes and the presence of MSI. By consolidating findings from the original research data, it highlights the importance of early identification, genetic screening, and vigilant surveillance in high-risk populations. Both cancers are influenced by common genetic pathways, including the Wnt and PI3K/AKT/mTOR signaling cascades, with critical mutations in genes such as APC, PTEN, and β-catenin. The progression and development of these malignancies are also significantly impacted by lifestyle factors like obesity and hormonal imbalances. Understanding the molecular and genetic commonalities between EC and CRC is crucial for early diagnosis and the formulation of personalized treatment strategies. For patients with LS, the heightened risk of both cancers underscores the need for genetic counselling and regular screenings for these tumors. Advances in diagnostic techniques, including molecular biomarkers and high-throughput omics technologies, have enhanced the detection, treatment, and prognosis of both cancers. Furthermore, emerging therapeutic approaches, especially in the realm of targeted therapy and immunotherapy presents promising opportunities for better patient results. The connection between EC and CRC underscores the need for a comprehensive approach to managing patients, particularly those with genetic predispositions, to mitigate risks and enhance survival rates. This comprehensive review may provide a reference for clinicians and researchers aiming to refine diagnostic and management approaches in LS and explore future advancements in oncology. 15 Future directions in treating EC and CRC in LS Exploring genetic and molecular interconnections between EC and CRC, especially in LS, is crucial for advancing research and treatment. Identifying shared signaling pathways will facilitate the creation of effective targeted therapies tailored for LS patients. Incorporating these findings into clinical practice may enhance affected individuals’ survival rates and quality of life. Immunotherapy, particularly checkpoint inhibitors, is a promising research area for treating EC and CRC in patients with LS and mismatch repair deficiencies. Exploring immune modulation and combination therapies could lead to innovative strategies that enhance immune responses against these cancers, improving outcomes for advanced-stage patients. Advances in next-generation sequencing (NGS) and high-throughput omics technologies will aid in the discovery of new biomarkers for early diagnosis and prognosis. Author contributions SnP: Investigation, Methodology, Visualization, Writing – original draft. SN: Methodology, Writing – original draft. SA: Methodology, Writing – original draft, Investigation. AB: Methodology, Writing – original draft, Supervision, Writing – review and editing. SuP: Conceptualization, Investigation, Methodology, Project administration, Supervision, Writing – original draft, Writing – review and editing. AD: Conceptualization, Supervision, Writing – review and editing. Funding The author(s) declare that no financial support was received for the research and/or publication of this article. Acknowledgments The authors express their gratitude to Chettinad Academy of Research and Education (CARE) for providing the necessary infrastructure to carry out this work. The authors are also thankful to Department of Biotechnology, Ministry of Science & Technology, Government of India for providing support to Mr. Subhamay Adhikary (Fellowship IDDBT/2021-22/CARE/1592). Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Generative AI statement The author(s) declare that no Generative AI was used in the creation of this manuscript. Publisher’s note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. References Peltomäki, P, Nyström, M, Mecklin, J-P, and Seppälä, TT. Lynch syndrome genetics and clinical implications. Gastroenterology (2023) 164(5):783–99. doi:10.1053/j.gastro.2022.08.058 PubMed Abstract | CrossRef Full Text | Google Scholar Kuhn, TM, Dhanani, S, and Ahmad, S. An overview of endometrial cancer with novel therapeutic strategies. Curr Oncol (Toronto, Ont.) (2023) 30(9):7904–19. doi:10.3390/curroncol30090574 PubMed Abstract | CrossRef Full Text | Google Scholar Bhat, GR, Sethi, I, Sadida, HQ, Rah, B, Mir, R, Algehainy, N, et al. Cancer cell plasticity: from cellular, molecular, and genetic mechanisms to tumor heterogeneity and drug resistance. Cancer Metastasis Rev (2024) 43:197–228. doi:10.1007/s10555-024-10172-z PubMed Abstract | CrossRef Full Text | Google Scholar Papadopoulou, E, Rigas, G, Fountzilas, E, Boutis, A, Giassas, S, Mitsimponas, N, et al. Microsatellite instability is insufficiently used as a biomarker for Lynch syndrome testing in clinical practice. JCO Precision Oncol (2024) 8(8):e2300332. doi:10.1200/PO.23.00332 PubMed Abstract | CrossRef Full Text | Google Scholar Li, K, Luo, H, Huang, L, Luo, H, and Zhu, X. Microsatellite instability: a review of what the oncologist should know. Cancer Cell Int (2020) 20(1):16. doi:10.1186/s12935-019-1091-8 PubMed Abstract | CrossRef Full Text | Google Scholar Chen, W, and Frankel, WL. A practical guide to biomarkers for the evaluation of colorectal cancer. Mod Pathol (2019) 32(Suppl. 1):1–15. doi:10.1038/s41379-018-0136-1 PubMed Abstract | CrossRef Full Text | Google Scholar Gupta, D, and Heinen, CD. The mismatch repair-dependent DNA damage response: mechanisms and implications. DNA repair (2019) 78:60–9. doi:10.1016/j.dnarep.2019.03.009 PubMed Abstract | CrossRef Full Text | Google Scholar Boland, CR, and Goel, A. Microsatellite instability in colorectal cancer. Gastroenterology (2010) 138(6):2073–87.e3. doi:10.1053/j.gastro.2009.12.064 PubMed Abstract | CrossRef Full Text | Google Scholar Helderman, NC, Bajwa-Ten Broeke, SW, Morreau, H, Suerink, M, Terlouw, D, van der Werf-’ t Lam, AS, et al. The diverse molecular profiles of lynch syndrome-associated colorectal cancers are (highly) dependent on underlying germline mismatch repair mutations. Crit Rev Oncology/Hematology (2021) 163(103338):103338. doi:10.1016/j.critrevonc.2021.103338 CrossRef Full Text | Google Scholar Friedenreich, CM, Ryder-Burbidge, C, and McNeil, J. Physical activity, obesity and sedentary behavior in cancer etiology: epidemiologic evidence and biologic mechanisms. Mol Oncol (2021) 15(3):790–800. doi:10.1002/1878-0261.12772 PubMed Abstract | CrossRef Full Text | Google Scholar Felix, AS, and Brinton, LA. Cancer progress and priorities: uterine cancer. Cancer Epidemiol Biomarkers and Prev (2018) 27(9):985–94. doi:10.1158/1055-9965.EPI-18-0264 PubMed Abstract | CrossRef Full Text | Google Scholar Gambini, D, Ferrero, S, and Kuhn, E. Lynch syndrome: from carcinogenesis to prevention interventions. Cancers (2022) 14(17):4102. doi:10.3390/cancers14174102 PubMed Abstract | CrossRef Full Text | Google Scholar Vasen, HF, Watson, P, Mecklin, J, and Lynch, H. New clinical criteria for hereditary nonpolyposis colorectal cancer (HNPCC, Lynch syndrome) proposed by the International Collaborative group on HNPCC. Gastroenterology (1999) 116(6):1453–6. doi:10.1016/s0016-5085(99)70510-x PubMed Abstract | CrossRef Full Text | Google Scholar Umar, A, Boland, CR, Terdiman, JP, Syngal, S, Chapelle, A, Ruschoff, J, et al. Revised Bethesda Guidelines for hereditary nonpolyposis colorectal cancer (Lynch syndrome) and microsatellite instability. JNCI J Natl Cancer Inst (2004) 96(4):261–8. doi:10.1093/jnci/djh034 PubMed Abstract | CrossRef Full Text | Google Scholar Fanale, D, Corsini, LR, Brando, C, Dimino, A, Filorizzo, C, Magrin, L, et al. Impact of different selection approaches for identifying lynch syndrome-related colorectal cancer patients: unity is strength. Front Oncol (2022) 12:827822. doi:10.3389/fonc.2022.827822 PubMed Abstract | CrossRef Full Text | Google Scholar Velho, S, Fernandes, MS, Leite, M, Figueiredo, C, and Seruca, R. Causes and consequences of microsatellite instability in gastric carcinogenesis. World J Gastroenterol (2014) 20(44):16433–42. doi:10.3748/wjg.v20.i44.16433 PubMed Abstract | CrossRef Full Text | Google Scholar Woerner, SM, Benner, A, Sutter, C, Schiller, M, Yuan, YP, Keller, G, et al. Pathogenesis of DNA repair-deficient cancers: a statistical meta-analysis of putative Real Common Target genes. Oncogene (2003) 22(15):2226–35. doi:10.1038/sj.onc.1206421 PubMed Abstract | CrossRef Full Text | Google Scholar Rebuzzi, F, Ulivi, P, and Tedaldi, G. Genetic predisposition to colorectal cancer: how many and which genes to test? Int J Mol Sci (2023) 24(3):2137. doi:10.3390/ijms24032137 PubMed Abstract | CrossRef Full Text | Google Scholar Bhattacharya, P, and McHugh, T. W. (2024). Lynch Syndrome. PubMed: StatPearls Publishing. Available online at: Google Scholar Dowty, JG, Win, AK, Buchanan, DD, Lindor, NM, Macrae, FA, Clendenning, M, et al. Cancer risks for MLH1 and MSH2 mutation carriers. Hum Mutat (2013) 34(3):490–7. doi:10.1002/humu.22262 PubMed Abstract | CrossRef Full Text | Google Scholar Valle, L, and Monahan, KJ. Genetic predisposition to gastrointestinal polyposis: syndromes, tumour features, genetic testing, and clinical management. The Lancet Gastroenterol and Hepatol (2024) 9(1):68–82. doi:10.1016/S2468-1253(23)00240-6 PubMed Abstract | CrossRef Full Text | Google Scholar Kastrinos, F. Risk of pancreatic cancer in families with Lynch syndrome. JAMA (2009) 302(16):1790–5. doi:10.1001/jama.2009.1529 PubMed Abstract | CrossRef Full Text | Google Scholar Nakamura, K, Nakayama, K, Minamoto, T, Ishibashi, T, Ohnishi, K, Yamashita, H, et al. Lynch syndrome-related clear cell carcinoma of the cervix: a case report. Int J Mol Sci (2018) 19(4):979. doi:10.3390/ijms19040979 PubMed Abstract | CrossRef Full Text | Google Scholar Baglietto, L, Lindor, NM, Dowty, JG, White, DM, Wagner, A, Gomez Garcia, EB, et al. Risks of Lynch syndrome cancers for MSH6 mutation carriers. JNCI: J Natl Cancer Inst (2010) 102(3):193–201. doi:10.1093/jnci/djp473 PubMed Abstract | CrossRef Full Text | Google Scholar Belot, A, Grosclaude, P, Bossard, N, Jougla, E, Benhamou, E, Delafosse, P, et al. Cancer incidence and mortality in France over the period 1980-2005. Revue d'epidemiologie et de sante publique (2008) 56(3):159–75. doi:10.1016/j.respe.2008.03.117 PubMed Abstract | CrossRef Full Text | Google Scholar Roberts, ME, Jackson, SA, Susswein, LR, Zeinomar, N, Ma, X, Marshall, ML, et al. MSH6 and PMS2 germ-line pathogenic variants implicated in Lynch syndrome are associated with breast cancer. Genet Med (2018) 20(10):1167–74. doi:10.1038/gim.2017.254 PubMed Abstract | CrossRef Full Text | Google Scholar Poaty, H, Bouya, LB, Lumaka, A, Mongo-Onkouo, A, and Gassaye, D. PMS2 pathogenic variant in lynch syndrome-associated colorectal cancer with polyps. Glob Med Genet (2023) 10(1 1-5):001–5. doi:10.1055/s-0042-1759888 PubMed Abstract | CrossRef Full Text | Google Scholar Ten Broeke, SW, van der Klift, HM, Tops, CMJ, Aretz, S, Bernstein, I, Buchanan, DD, et al. Cancer risks for PMS2-associated lynch syndrome. J Clin Oncol : official J Am Soc Clin Oncol (2018) 36(29):2961–8. doi:10.1200/JCO.2018.78.4777 PubMed Abstract | CrossRef Full Text | Google Scholar Andini, KD, Nielsen, M, Suerink, M, Helderman, NC, Koornstra, JJ, Ahadova, A, et al. PMS2-associated Lynch syndrome: past, present and future. Front Oncol (2023) 13:1127329. doi:10.3389/fonc.2023.1127329 PubMed Abstract | CrossRef Full Text | Google Scholar Bajwa-Ten Broeke, SW, Ballhausen, A, Ahadova, A, Suerink, M, Bohaumilitzky, L, Seidler, F, et al. The coding microsatellite mutation profile of PMS2-deficient colorectal cancer. Exp Mol Pathol (2021) 122(104668):104668. doi:10.1016/j.yexmp.2021.104668 PubMed Abstract | CrossRef Full Text | Google Scholar Truninger, K, Menigatti, M, Luz, J, Russell, A, Haider, R, Gebbers, JO, et al. Immunohistochemical analysis reveals high frequency of PMS2 defects in colorectal cancer. Gastroenterology (2005) 128(5):1160–71. doi:10.1053/j.gastro.2005.01.056 PubMed Abstract | CrossRef Full Text | Google Scholar Kastrinos, F, and Stoffel, EM. History, genetics, and strategies for cancer prevention in Lynch syndrome. Clin Gastroenterol Hepatol (2014) 12(5):715–27. doi:10.1016/j.cgh.2013.06.031 PubMed Abstract | CrossRef Full Text | Google Scholar Kempers, MJE, Kuiper, RP, Ockeloen, CW, Chappuis, PO, Hutter, P, Rahner, N, et al. Risk of colorectal and endometrial cancers in EPCAM deletion-positive Lynch syndrome: a cohort study. The Lancet Oncol (2011) 12(1):49–55. doi:10.1016/S1470-2045(10)70265-5 PubMed Abstract | CrossRef Full Text | Google Scholar Edwards, P, and Monahan, KJ. Diagnosis and management of Lynch syndrome. Frontline Gastroenterol (2022) 13:e1 e80–e87. doi:10.1136/flgastro-2022-102123 PubMed Abstract | CrossRef Full Text | Google Scholar Jasperson, KW, Tuohy, TM, Neklason, DW, and Burt, RW. Hereditary and familial colon cancer. Gastroenterology (2010) 138(6):2044–58. doi:10.1053/j.gastro.2010.01.054 PubMed Abstract | CrossRef Full Text | Google Scholar Sahin, IH, Akce, M, Alese, O, Shaib, W, Lesinski, GB, El-Rayes, B, et al. Immune checkpoint inhibitors for the treatment of MSI-H/MMR-D colorectal cancer and a perspective on resistance mechanisms. Br J Cancer (2019) 121(10):809–18. doi:10.1038/s41416-019-0599-y PubMed Abstract | CrossRef Full Text | Google Scholar Parente, P, Grillo, F, Vanoli, A, Macciomei, MC, Ambrosio, MR, Scibetta, N, et al. The day-to-day practice of MMR and MSI assessment in colorectal adenocarcinoma: what we know and what we still need to explore. Dig Dis (Basel, Switzerland) (2023) 41(5):746–56. doi:10.1159/000531003 PubMed Abstract | CrossRef Full Text | Google Scholar Wang, Y, Wang, Y, Li, J, Cragun, J, Hatch, K, Chambers, SK, et al. Lynch syndrome related endometrial cancer: clinical significance beyond the endometrium. J Hematol and Oncol (2013) 6(1):22. doi:10.1186/1756-8722-6-22 PubMed Abstract | CrossRef Full Text | Google Scholar Latham, A, Srinivasan, P, Kemel, Y, Shia, J, Bandlamudi, C, Mandelker, D, et al. Microsatellite instability is associated with the presence of Lynch syndrome pan-cancer. J Clin Oncol (2019) 37(4):286–95. doi:10.1200/jco.18.00283 PubMed Abstract | CrossRef Full Text | Google Scholar Yang, Y, Wu, SF, and Bao, W. Molecular subtypes of endometrial cancer: implications for adjuvant treatment strategies. Int J Gynecol and Obstet (2024) 164(2):436–59. doi:10.1002/ijgo.14969 PubMed Abstract | CrossRef Full Text | Google Scholar Makker, V, MacKay, H, Ray-Coquard, I, Levine, DA, Westin, SN, Aoki, D, et al. Endometrial cancer. Nat Rev Dis Primers (2021) 7(1):88. doi:10.1038/s41572-021-00324-8 PubMed Abstract | CrossRef Full Text | Google Scholar Rodriguez, AC, Blanchard, Z, Maurer, KA, and Gertz, J. Estrogen signaling in endometrial cancer: a key oncogenic pathway with several open questions. Horm Cancer (2019) 10(2–3):51–63. doi:10.1007/s12672-019-0358-9 PubMed Abstract | CrossRef Full Text | Google Scholar Sung, H, Ferlay, J, Siegel, RL, Laversanne, M, Soerjomataram, I, Jemal, A, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer J Clinicians (2021) 71(3):209–49. doi:10.3322/caac.21660 PubMed Abstract | CrossRef Full Text | Google Scholar Sawicki, T, Ruszkowska, M, Danielewicz, A, Niedźwiedzka, E, Arłukowicz, T, and Przybyłowicz, KE. A review of colorectal cancer in terms of epidemiology, risk factors, development, symptoms and diagnosis. Cancers (2021) 13(9):2025. doi:10.3390/cancers13092025 PubMed Abstract | CrossRef Full Text | Google Scholar Baran, B, Mert Ozupek, N, Yerli Tetik, N, Acar, E, Bekcioglu, O, and Baskin, Y. Difference between left-sided and right-sided colorectal cancer: a focused review of literature. Gastroenterol Res (2018) 11(4):264–73. doi:10.14740/gr1062w PubMed Abstract | CrossRef Full Text | Google Scholar Sung, CO, Seo, JW, Kim, K-M, Do, I-G, Kim, SW, and Park, C-K. Clinical significance of signet-ring cells in colorectal mucinous adenocarcinoma. Mod Pathol (2008) 21(12):1533–41. doi:10.1038/modpathol.2008.170 PubMed Abstract | CrossRef Full Text | Google Scholar Remo, A, Fassan, M, Vanoli, A, Bonetti, LR, Barresi, V, Tatangelo, F, et al. Morphology and molecular features of rare colorectal carcinoma histotypes. Cancers (2019) 11(7):1036. doi:10.3390/cancers11071036 PubMed Abstract | CrossRef Full Text | Google Scholar Jasperson, KW, Vu, TM, Schwab, AL, Neklason, DW, Rodriguez-Bigas, MA, Burt, RW, et al. Evaluating Lynch syndrome in very early onset colorectal cancer probands without apparent polyposis. Fam Cancer (2010) 9(2):99–107. doi:10.1007/s10689-009-9290-4 PubMed Abstract | CrossRef Full Text | Google Scholar Chen, L, Ye, L, and Hu, B. Hereditary colorectal cancer syndromes: molecular genetics and precision medicine. Biomedicines (2022) 10(12):3207. doi:10.3390/biomedicines10123207 PubMed Abstract | CrossRef Full Text | Google Scholar Barczyński, B, Frąszczak, K, Wnorowski, A, and Kotarski, J. Menopausal status contributes to overall survival in endometrial cancer patients. Cancers (2023) 15(2):451. doi:10.3390/cancers15020451 PubMed Abstract | CrossRef Full Text | Google Scholar Yu, K, Huang, Z-Y, Xu, X-L, Li, J, Fu, X-W, and Deng, S-L. Estrogen receptor function: impact on the human endometrium. Front Endocrinol (2022) 13:827724. doi:10.3389/fendo.2022.827724 PubMed Abstract | CrossRef Full Text | Google Scholar Furness, S, Roberts, H, Marjoribanks, J, and Lethaby, A. Hormone therapy in postmenopausal women and risk of endometrial hyperplasia. The Cochrane database Syst Rev (2012) 2012(8):CD000402. doi:10.1002/14651858.CD000402.pub4 PubMed Abstract | CrossRef Full Text | Google Scholar Valle, L. Genetic predisposition to colorectal cancer: where we stand and future perspectives. World J Gastroenterol (2014) 20(29):9828–49. doi:10.3748/wjg.v20.i29.9828 PubMed Abstract | CrossRef Full Text | Google Scholar Shetty, C, Rizvi, SMHA, Sharaf, J, Williams, K-AD, Tariq, M, Acharekar, MV, et al. Risk of gynecological cancers in women with polycystic ovary syndrome and the pathophysiology of association. Cureus (2023) 15(4):e37266. doi:10.7759/cureus.37266 PubMed Abstract | CrossRef Full Text | Google Scholar Ignatov, A, and Ortmann, O. Endocrine risk factors of endometrial cancer: polycystic ovary syndrome, oral contraceptives, infertility, tamoxifen. Cancers (2020) 12(7):1766. doi:10.3390/cancers12071766 PubMed Abstract | CrossRef Full Text | Google Scholar Kanth, P, Grimmett, J, Champine, M, Burt, R, and Samadder, JN. Hereditary colorectal polyposis and cancer syndromes: a primer on diagnosis and management. Am J Gastroenterol (2017) 112(10):1509–25. doi:10.1038/ajg.2017.212 PubMed Abstract | CrossRef Full Text | Google Scholar Manski, S, Noverati, N, Policarpo, T, Rubin, E, and Shivashankar, R. Diet and nutrition in inflammatory bowel disease: a review of the literature. Crohn's and Colitis 360 (2024) 6(1):otad077. doi:10.1093/crocol/otad077 PubMed Abstract | CrossRef Full Text | Google Scholar Li, H, Sun, L, Zhuang, Y, Tian, C, Yan, F, Zhang, Z, et al. Molecular mechanisms and differences in lynch syndrome developing into colorectal cancer and endometrial cancer based on gene expression, methylation, and mutation analysis. Cancer Causes and Control (2022) 33(4):489–501. doi:10.1007/s10552-021-01543-w PubMed Abstract | CrossRef Full Text | Google Scholar Xu, W, Wang, B, Cai, Y, Chen, J, Lv, X, Guo, C, et al. ADAMTS9-AS2: a functional long non-coding RNA in tumorigenesis. Curr Pharm Des (2021) 27(23):2722–7. doi:10.2174/1381612827666210325105106 PubMed Abstract | CrossRef Full Text | Google Scholar Okuda, T, Sekizawa, A, Purwosunu, Y, Nagatsuka, M, Morioka, M, Hayashi, M, et al. Genetics of endometrial cancers. Obstet Gynecol Int (2010) 2010(1):984013. doi:10.1155/2010/984013 PubMed Abstract | CrossRef Full Text | Google Scholar Duggan, BD, Felix, JC, Muderspach, Ll., Tourgeman, D, Zheng, J, and Shibata, D. Microsatellite instability in sporadic endometrial carcinoma. JNCI J Natl Cancer Inst (1994) 86(16):1216–21. doi:10.1093/jnci/86.16.1216 PubMed Abstract | CrossRef Full Text | Google Scholar Helderman, NC, Andini, KD, van Leerdam, ME, van Hest, LP, Hoekman, DR, Ahadova, A, et al. MLH1 promotor hypermethylation in colorectal and endometrial carcinomas from patients with Lynch syndrome. The J Mol Diagn (2024) 26(2):106–14. doi:10.1016/j.jmoldx.2023.10.005 PubMed Abstract | CrossRef Full Text | Google Scholar Shanmugapriya, S, Subramanian, P, and Kanimozhi, S. Geraniol inhibits endometrial carcinoma via downregulating oncogenes and upregulating tumour suppressor genes. Indian J Clin Biochem (2017) 32(2):214–9. doi:10.1007/s12291-016-0601-x PubMed Abstract | CrossRef Full Text | Google Scholar Abal, M, Llauradó, M, Doll, A, Monge, M, Colas, E, González, M, et al. Molecular determinants of invasion in endometrial cancer. Clin and Translational Oncol Official Publ Fed Spanish Oncol Societies Natl Cancer Inst Mexico (2007) 9(5):272–7. doi:10.1007/s12094-007-0054-z PubMed Abstract | CrossRef Full Text | Google Scholar O'Hara, AJ, and Bell, DW. The genomics and genetics of endometrial cancer. Adv Genomics Genet (2012) 2012(2):33–47. doi:10.2147/AGG.S28953 PubMed Abstract | CrossRef Full Text | Google Scholar Dixit, G, Gonzalez-Bosquet, J, Skurski, J, Devor, EJ, Dickerson, EB, Nothnick, WB, et al. FGFR2 mutations promote endometrial cancer progression through dual engagement of EGFR and Notch signalling pathways. Clin Translational Med (2023) 13(5):e1223. doi:10.1002/ctm2.1223 PubMed Abstract | CrossRef Full Text | Google Scholar Gatius, S, Velasco, A, Azueta, A, Santacana, M, Pallares, J, Valls, J, et al. FGFR2 alterations in endometrial carcinoma. Mod Pathol (2011) 24(11):1500–10. doi:10.1038/modpathol.2011.110 PubMed Abstract | CrossRef Full Text | Google Scholar Parrish, ML, Broaddus, RR, and Gladden, AB. Mechanisms of mutant β-catenin in endometrial cancer progression. Front Oncol (2022) 12:1009345. doi:10.3389/fonc.2022.1009345 PubMed Abstract | CrossRef Full Text | Google Scholar Bosse, T, ter Haar, NT, Seeber, LM, Diest, PJv, Hes, FJ, Vasen, HFA, et al. Loss of ARID1A expression and its relationship with PI3K-Akt pathway alterations, TP53 and microsatellite instability in endometrial cancer. Mod Pathol (2013) 26(11):1525–35. doi:10.1038/modpathol.2013.96 PubMed Abstract | CrossRef Full Text | Google Scholar Murali, R, Davidson, B, Fadare, O, Carlson, JA, Crum, CP, Gilks, CB, et al. High-grade endometrial carcinomas: morphologic and immunohistochemical features, diagnostic challenges and recommendations. Int J Gynecol Pathol (2019) 38(Suppl. 1):S40–S63. doi:10.1097/pgp.0000000000000491 PubMed Abstract | CrossRef Full Text | Google Scholar Lax, SF, Kendall, B, Tashiro, H, Slebos, RJ, and Ellenson, LH. The frequency of p53, K-ras mutations, and microsatellite instability differs in uterine endometrioid and serous carcinoma: evidence of distinct molecular genetic pathways. Cancer (2000) 88(4):814–24. doi:10.1002/(sici)1097-0142(20000215)88:4<814 PubMed Abstract | CrossRef Full Text | Google Scholar Schultheis, AM, Martelotto, LG, De Filippo, MR, Piscuglio, S, Ng, CKY, Hussein, YR, et al. TP53 mutational spectrum in endometrioid and serous endometrial cancers. Int J Gynecol Pathol (2016) 35(4):289–300. doi:10.1097/pgp.0000000000000243 PubMed Abstract | CrossRef Full Text | Google Scholar Nagase, S, Suzuki, F, Tokunaga, H, Toyoshima, M, Utsunomiya, H, Niikura, H, et al. Molecular pathogenesis of uterine serous carcinoma. Curr Obstet Gynecol Rep (2014) 3(1):33–9. doi:10.1007/s13669-013-0069-0 CrossRef Full Text | Google Scholar Shih, I-M, Panuganti, PK, Kuo, K-T, Mao, T-L, Kuhn, E, Jones, S, et al. Somatic mutations of PPP2R1A in ovarian and uterine carcinomas. The Am J Pathol (2011) 178(4):1442–7. doi:10.1016/j.ajpath.2011.01.009 PubMed Abstract | CrossRef Full Text | Google Scholar Kim, K-R, Choi, J, Hwang, J-E, Baik, Y-A, Shim, JY, Kim, YM, et al. Endocervical-like (Müllerian) mucinous borderline tumours of the ovary are frequently associated with the KRAS mutation. Histopathology (2010) 57(4):587–96. doi:10.1111/j.1365-2559.2010.03673.x PubMed Abstract | CrossRef Full Text | Google Scholar Dubé, V, Roy, M, Plante, M, Renaud, M-C, and Têtu, B. Mucinous ovarian tumors of Mullerian-type: an analysis of 17 cases including borderline tumors and intraepithelial, microinvasive, and invasive carcinomas. Int J Gynecol Pathol (2005) 24(2):138–46. doi:10.1097/01.pgp.0000152024.37482.63 PubMed Abstract | CrossRef Full Text | Google Scholar Vermij, L, Horeweg, N, Leon-Castillo, A, Rutten, TA, Mileshkin, LR, Mackay, HJ, et al. HER2 status in high-risk endometrial cancers (PORTEC-3): relationship with histotype, molecular classification, and clinical outcomes. Cancers (2020) 13(1):44. doi:10.3390/cancers13010044 PubMed Abstract | CrossRef Full Text | Google Scholar Plotkin, A, Olkhov-Mitsel, E, Huang, W-Y, and Nofech-Mozes, S. Implementation of HER2 testing in endometrial cancer, a summary of real-world initial experience in a large tertiary cancer center. Cancers (2024) 16(11):2100. doi:10.3390/cancers16112100 PubMed Abstract | CrossRef Full Text | Google Scholar Konecny, GE, Santos, L, Winterhoff, B, Hatmal, M, Keeney, GL, Mariani, A, et al. HER2 gene amplification and EGFR expression in a large cohort of surgically staged patients with nonendometrioid (type II) endometrial cancer. Br J Cancer (2009) 100(1):89–95. doi:10.1038/sj.bjc.6604814 PubMed Abstract | CrossRef Full Text | Google Scholar Díaz-Montes, TP, Ji, H, Smith Sehdev, AE, Zahurak, ML, Kurman, RJ, Armstrong, DK, et al. Clinical significance of Her-2/neu overexpression in uterine serous carcinoma. Gynecol Oncol (2006) 100(1):139–44. doi:10.1016/j.ygyno.2005.08.017 PubMed Abstract | CrossRef Full Text | Google Scholar Brodeur, MN, Selenica, P, Ma, W, Moufarrij, S, Dagher, C, Basili, T, et al. ERBB2 mutations define a subgroup of endometrial carcinomas associated with high tumor mutational burden and the microsatellite instability-high (MSI-H) molecular subtype. Mol Oncol (2024) 18(10):2356–68. doi:10.1002/1878-0261.13698 PubMed Abstract | CrossRef Full Text | Google Scholar Russell, H, Kedzierska, K, Buchanan, DD, Thomas, R, Tham, E, Mints, M, et al. The MLH1 polymorphism rs1800734 and risk of endometrial cancer with microsatellite instability. Clin Epigenetics (2020) 12(1):102. doi:10.1186/s13148-020-00889-3 PubMed Abstract | CrossRef Full Text | Google Scholar Georgescu, M-M. PTEN tumor suppressor network in PI3K-Akt pathway control. Genes and Cancer (2010) 1(12):1170–7. doi:10.1177/1947601911407325 PubMed Abstract | CrossRef Full Text | Google Scholar Zhao, R, Choi, BY, Lee, M-H, Bode, AM, and Dong, Z. Implications of genetic and epigenetic alterations of CDKN2A (p16(INK4a)) in cancer. EBioMedicine (2016) 8:30–9. doi:10.1016/j.ebiom.2016.04.017 PubMed Abstract | CrossRef Full Text | Google Scholar Van Tongelen, A, Loriot, A, and De Smet, C. Oncogenic roles of DNA hypomethylation through the activation of cancer-germline genes. Cancer Lett (2017) 396:130–7. doi:10.1016/j.canlet.2017.03.029 PubMed Abstract | CrossRef Full Text | Google Scholar Kodaman, PH, and Taylor, HS. Hormonal regulation of implantation. Obstet Gynecol Clin North America (2004) 31(4):745–66. doi:10.1016/j.ogc.2004.08.008 PubMed Abstract | CrossRef Full Text | Google Scholar Nada, HR, Rashed, LA, Salman, OO, Abdallah, NMA, and Abdelhady, MM. Tissue levels of suppressor of cytokine signaling-3 (SOCS-3) in mycosis fungoides. Arch Dermatol Res (2022) 315(2):165–71. doi:10.1007/s00403-022-02339-x PubMed Abstract | CrossRef Full Text | Google Scholar Kaneko, E, Sato, N, Sugawara, T, Noto, A, Takahashi, K, Makino, K, et al. MLH1 promoter hypermethylation predicts poorer prognosis in mismatch repair deficiency endometrial carcinomas. J Gynecol Oncol (2021) 32(6):e79. doi:10.3802/jgo.2021.32.e79 PubMed Abstract | CrossRef Full Text | Google Scholar Ahadova, A, Gallon, R, Gebert, J, Ballhausen, A, Endris, V, Kirchner, M, et al. Three molecular pathways model colorectal carcinogenesis in Lynch syndrome. Int J Cancer (2018) 143(1):139–50. doi:10.1002/ijc.31300 PubMed Abstract | CrossRef Full Text | Google Scholar Johnson, V, Volikos, E, Halford, SE, Eftekhar Sadat, ET, Popat, S, Talbot, I, et al. (2005). Exon 3 beta-catenin mutations are specifically associated with colorectal carcinomas in hereditary non-polyposis colorectal cancer syndrome, Gut, 54, 264–7. doi:10.1136/gut.2004.048132 PubMed Abstract | CrossRef Full Text | Google Scholar Zahary, MN, Ahmad Aizat, AA, Kaur, G, Yeong Yeh, L, Mazuwin, M, and Ankathil, R. Polymorphisms of cell cycle regulator genes CCND1 G870A and TP53 C215G: association with colorectal cancer susceptibility risk in a Malaysian population. Oncol Lett (2015) 10(5):3216–22. doi:10.3892/ol.2015.3728 PubMed Abstract | CrossRef Full Text | Google Scholar Reeves, SG, Rich, D, Meldrum, CJ, Colyvas, K, Kurzawski, G, Suchy, J, et al. IGF1 is a modifier of disease risk in hereditary non-polyposis colorectal cancer. Int J Cancer (2008) 123(6):1339–43. doi:10.1002/ijc.23668 PubMed Abstract | CrossRef Full Text | Google Scholar Talseth, BA, Ashton, KA, Meldrum, C, Suchy, J, Kurzawski, G, Lubinski, J, et al. Aurora-A and Cyclin D1 polymorphisms and the age of onset of colorectal cancer in hereditary nonpolyposis colorectal cancer. Int J Cancer (2008) 122(6):1273–7. doi:10.1002/ijc.23177 PubMed Abstract | CrossRef Full Text | Google Scholar Chen, J, Etzel, CJ, Amos, CI, Zhang, Q, Viscofsky, N, Lindor, NM, et al. Genetic variants in the cell cycle control pathways contribute to early onset colorectal cancer in Lynch syndrome. Cancer Causes and Control (2009) 20(9):1769–77. doi:10.1007/s10552-009-9416-x PubMed Abstract | CrossRef Full Text | Google Scholar Lubbe, SJ, Di Bernardo, MC, Chandler, IP, and Houlston, RS. Clinical implications of the colorectal cancer risk associated with MUTYH mutation. J Clin Oncol (2009) 27(24):3975–80. doi:10.1200/JCO.2008.21.6853 PubMed Abstract | CrossRef Full Text | Google Scholar Castillejo, A, Vargas, G, Castillejo, MI, Navarro, M, Barberá, VM, González, S, et al. Prevalence of germline MUTYH mutations among Lynch-like syndrome patients. Eur J Cancer (Oxford, Engl : 1990) (2014) 50(13):2241–50. doi:10.1016/j.ejca.2014.05.022 PubMed Abstract | CrossRef Full Text | Google Scholar Magrin, L, Fanale, D, Brando, C, Corsini, LR, Randazzo, U, Di Piazza, M, et al. MUTYH-associated tumor syndrome: the other face of MAP. Oncogene (2022) 41(18):2531–9. doi:10.1038/s41388-022-02304-y PubMed Abstract | CrossRef Full Text | Google Scholar Sardo, E, Napolitano, S, Della Corte, CM, Ciardiello, D, Raucci, A, Arrichiello, G, et al. Multi-omic approaches in colorectal cancer beyond genomic data. J Personalized Med (2022) 12(2):128. doi:10.3390/jpm12020128 PubMed Abstract | CrossRef Full Text | Google Scholar Wu, J-B, Li, X-J, Liu, H, Liu, Y-J, and Liu, X-P. Association of KRAS, NRAS, BRAF and PIK3CA gene mutations with clinicopathological features, prognosis and ring finger protein 215 expression in patients with colorectal cancer. Biomed Rep (2023) 19(6):104. doi:10.3892/br.2023.1686 PubMed Abstract | CrossRef Full Text | Google Scholar Boutin, AT, Liao, W-T, Wang, M, Hwang, SS, Karpinets, TV, Cheung, H, et al. Oncogenic Kras drives invasion and maintains metastases in colorectal cancer. Genes and Development (2017) 31(4):370–82. doi:10.1101/gad.293449.116 PubMed Abstract | CrossRef Full Text | Google Scholar Palomba, G, Doneddu, V, Cossu, A, Paliogiannis, P, Manca, A, Casula, M, et al. Prognostic impact of KRAS, NRAS, BRAF, and PIK3CA mutations in primary colorectal carcinomas: a population-based study. J Translational Med (2016) 14(1):292. doi:10.1186/s12967-016-1053-z PubMed Abstract | CrossRef Full Text | Google Scholar Malki, A, ElRuz, RA, Gupta, I, Allouch, A, Vranic, S, and Al Moustafa, A-E. Molecular mechanisms of colon cancer progression and metastasis: recent insights and advancements. Int J Mol Sci (2020) 22(1):130. doi:10.3390/ijms22010130 PubMed Abstract | CrossRef Full Text | Google Scholar Kim, Y-N, Kim, MK, Lee, YJ, Lee, Y, Sohn, JY, Lee, JY, et al. Identification of lynch syndrome in patients with endometrial cancer based on a germline next generation sequencing multigene panel test. Cancers (2022) 14(14):3406. doi:10.3390/cancers14143406 PubMed Abstract | CrossRef Full Text | Google Scholar Batlle, E, and Massagué, J. Transforming growth factor-β signaling in immunity and cancer. Immunity (2019) 50(4):924–40. doi:10.1016/j.immuni.2019.03.024 PubMed Abstract | CrossRef Full Text | Google Scholar Rascio, F, Spadaccino, F, Rocchetti, MT, Castellano, G, Stallone, G, Netti, GS, et al. The pathogenic role of PI3K/AKT pathway in cancer onset and drug resistance: an updated review. Cancers (2021) 13(16):3949. doi:10.3390/cancers13163949 PubMed Abstract | CrossRef Full Text | Google Scholar Xu, S, and Tang, C. The role of ARID1A in tumors: tumor initiation or tumor suppression? Front Oncol (2021) 11(4 Oct):745187. doi:10.3389/fonc.2021.745187 PubMed Abstract | CrossRef Full Text | Google Scholar Ferreira, A, Pereira, F, Reis, C, Oliveira, MJ, Sousa, MJ, and Preto, A. Crucial role of oncogenic KRAS mutations in apoptosis and autophagy regulation: therapeutic implications. Cells (2022) 11:2183. doi:10.3390/cells11142183 PubMed Abstract | CrossRef Full Text | Google Scholar Sideris, M, Emin, EI, Abdullah, Z, Hanrahan, J, Stefatou, KM, Sevas, V, et al. The role of KRAS in endometrial cancer: a mini-review. Anticancer Res (2019) 39(2):533–9. doi:10.21873/anticanres.13145 PubMed Abstract | CrossRef Full Text | Google Scholar Gonzalez, G, Mehra, S, Wang, Y, Akiyama, H, and Behringer, RR. Sox9 overexpression in uterine epithelia induces endometrial gland hyperplasia. Differentiation (2016) 92(4):204–15. doi:10.1016/j.diff.2016.05.006 PubMed Abstract | CrossRef Full Text | Google Scholar Song, P, Gao, Z, Bao, Y, Chen, L, Huang, Y, Liu, Y, et al. Wnt/β-catenin signaling pathway in carcinogenesis and cancer therapy. J Hematol and Oncol (2024) 17(1 46):46. doi:10.1186/s13045-024-01563-4 PubMed Abstract | CrossRef Full Text | Google Scholar Giannakis, M, Hodis, E, Jasmine Mu, X, Yamauchi, M, Rosenbluh, J, Cibulskis, K, et al. RNF43 is frequently mutated in colorectal and endometrial cancers. Nat Genet (2014) 46(12):1264–6. doi:10.1038/ng.3127 PubMed Abstract | CrossRef Full Text | Google Scholar Watson, P, Vasen, HF, Mecklin, J, Bernstein, I, Aarnio, M, Järvinen, HJ, et al. The risk of extra-colonic, extra-endometrial cancer in the Lynch syndrome. Int J Cancer (2008) 123(2):444–9. doi:10.1002/ijc.23508 PubMed Abstract | CrossRef Full Text | Google Scholar Vasen, HFA, Hendriks, Y, de Jong, AE, van Puijenbroek, M, Tops, C, Bröcker-Vriends, AHJT, et al. Identification of HNPCC by molecular analysis of colorectal and endometrial tumors. Dis Markers (2004) 20(4–5):207–13. doi:10.1155/2004/391039 PubMed Abstract | CrossRef Full Text | Google Scholar Garg, K, and Soslow, RA. Lynch syndrome (hereditary non-polyposis colorectal cancer) and endometrial carcinoma. J Clin Pathol (2009) 62(8):679–84. doi:10.1136/jcp.2009.064949 PubMed Abstract | CrossRef Full Text | Google Scholar Blanco, GDV. Familial colorectal cancer screening: when and what to do? World J Gastroenterol (2015) 21(26):7944–53. doi:10.3748/wjg.v21.i26.7944 PubMed Abstract | CrossRef Full Text | Google Scholar Singh, H, Nugent, Z, Demers, A, Czaykowski, PM, and Mahmud, SM. Risk of colorectal cancer after diagnosis of endometrial cancer: a population-based study. J Clin Oncol (2013) 31(16):2010–5. doi:10.1200/JCO.2012.47.6481 PubMed Abstract | CrossRef Full Text | Google Scholar Singh, MM, Singh, E, Miller, H, Strum, WB, and Coyle, W. Colorectal cancer screening in women with endometrial cancer: are we following the guidelines? J Gastrointest Cancer (2012) 43(2):190–5. doi:10.1007/s12029-011-9271-3 PubMed Abstract | CrossRef Full Text | Google Scholar Win, AK, Lindor, NM, Winship, I, Tucker, KM, Buchanan, DD, Young, JP, et al. Risks of colorectal and other cancers after endometrial cancer for women with Lynch syndrome. JNCI: J Natl Cancer Inst (2013) 105(4):274–9. doi:10.1093/jnci/djs525 PubMed Abstract | CrossRef Full Text | Google Scholar Liao, S-C, Yeh, H-Z, Chang, C-S, Chen, W-C, Muo, C-H, and Sung, F-C. Colorectal cancer risk in women with gynecologic cancers-A population retrospective cohort study. J Clin Med (2021) 10(14):3127. doi:10.3390/jcm10143127 PubMed Abstract | CrossRef Full Text | Google Scholar Hu, Y, Liu, L, Jiang, Q, Fang, W, Chen, Y, Hong, Y, et al. CRISPR/Cas9: a powerful tool in colorectal cancer research. J Exp and Clin Cancer Res (2023) 42(1):308. doi:10.1186/s13046-023-02901-z PubMed Abstract | CrossRef Full Text | Google Scholar Lai, Y-L, Chiang, C-J, Chen, Y-L, You, S-L, Chen, Y-Y, Chiang, Y-C, et al. Increased risk of second primary malignancies among endometrial cancer survivors receiving surgery alone: a population-based analysis. Cancer Med (2021) 10(19):6845–54. doi:10.1002/cam4.3861 PubMed Abstract | CrossRef Full Text | Google Scholar Min, H, Jo, S-M, and Kim, H-S. Efficient capture and simple quantification of circulating tumor cells using quantum dots and magnetic beads. Small (2015) 11(21):2536–42. doi:10.1002/smll.201403126 PubMed Abstract | CrossRef Full Text | Google Scholar Njoku, K, Barr, CE, and Crosbie, EJ. Current and emerging prognostic biomarkers in endometrial cancer. Front Oncol (2022) 12:890908. doi:10.3389/fonc.2022.890908 PubMed Abstract | CrossRef Full Text | Google Scholar Karimi-Zarchi, M, Dehshiri-Zadeh, N, Sekhavat, L, and Nosouhi, F. Correlation of CA-125 serum level and clinico-pathological characteristic of patients with endometriosis. Int J Reprod Biomed (Yazd, Iran) (2016) 14(11):713–8. doi:10.29252/ijrm.14.11.713 PubMed Abstract | CrossRef Full Text | Google Scholar Behrouzi, R, Barr, CE, and Crosbie, EJ. HE4 as a biomarker for endometrial cancer. Cancers (2021) 13(19):4764. doi:10.3390/cancers13194764 PubMed Abstract | CrossRef Full Text | Google Scholar Binder, PS, and Mutch, DG. Update on prognostic markers for endometrial cancer. Women’s Health (London, England) (2014) 10(3):277–88. doi:10.2217/whe.14.13 PubMed Abstract | CrossRef Full Text | Google Scholar Balestra, A, Larsimont, D, and Noël, JC. HER2 amplification in p53-mutated endometrial carcinomas. Cancers (2023) 15(5):1435. doi:10.3390/cancers15051435 PubMed Abstract | CrossRef Full Text | Google Scholar Alnakli, AAA, Mohamedali, A, Heng, B, Chan, C, Shin, J-S, Solomon, M, et al. Protein prognostic biomarkers in stage II colorectal cancer: implications for post-operative management. BJC Rep (2024) 2(1):13. doi:10.1038/s44276-024-00043-z PubMed Abstract | CrossRef Full Text | Google Scholar Walther, A, Johnstone, E, Swanton, C, Midgley, R, Tomlinson, I, and Kerr, D. Genetic prognostic and predictive markers in colorectal cancer. Nat Rev Cancer (2009) 9(7):489–99. doi:10.1038/nrc2645 PubMed Abstract | CrossRef Full Text | Google Scholar Bennedsen, ALB, Cai, L, Hasselager, RP, Özcan, AA, Mohamed, KB, Eriksen, JO, et al. An exploration of immunohistochemistry-based prognostic markers in patients undergoing curative resections for colon cancer. BMC Cancer (2022) 22(1):62. doi:10.1186/s12885-022-09169-0 PubMed Abstract | CrossRef Full Text | Google Scholar Melling, N, Kowitz, CM, Simon, R, Bokemeyer, C, Terracciano, L, Sauter, G, et al. High Ki67 expression is an independent good prognostic marker in colorectal cancer. J Clin Pathol (2016) 69(3):209–14. doi:10.1136/jclinpath-2015-202985 PubMed Abstract | CrossRef Full Text | Google Scholar Reddy, S, Vergo, M, and Benson, AB Prognostic and predictive markers in colorectal cancer. Curr Colorectal Cancer Rep (2011) 7(4):267–74. doi:10.1007/s11888-011-0104-3 CrossRef Full Text | Google Scholar Koncina, E, Haan, S, Rauh, S, and Letellier, E. Prognostic and predictive molecular biomarkers for colorectal cancer: updates and challenges. Cancers (2020) 12(2):319. doi:10.3390/cancers12020319 PubMed Abstract | CrossRef Full Text | Google Scholar Kalampokas, E, Giannis, G, Kalampokas, T, Papathanasiou, A-A, Mitsopoulou, D, Tsironi, E, et al. Current approaches to the management of patients with endometrial cancer. Cancers (2022) 14(18):4500. doi:10.3390/cancers14184500 PubMed Abstract | CrossRef Full Text | Google Scholar Corr, B, Cosgrove, C, Spinosa, D, and Guntupalli, S. Endometrial cancer: molecular classification and future treatments. BMJ Med (2022) 1(1):e000152. doi:10.1136/bmjmed-2022-000152 PubMed Abstract | CrossRef Full Text | Google Scholar Mahdi, H, Chelariu-Raicu, A, and Slomovitz, BM. Immunotherapy in endometrial cancer. Int J Gynecol Cancer (2023) 33(3):351–7. doi:10.1136/ijgc-2022-003675 PubMed Abstract | CrossRef Full Text | Google Scholar Kumar, A, Gautam, V, Sandhu, A, Rawat, K, Sharma, A, and Saha, L. Current and emerging therapeutic approaches for colorectal cancer: a comprehensive review. World J Gastrointest Surg (2023) 15(4):495–519. doi:10.4240/wjgs.v15.i4.495 PubMed Abstract | CrossRef Full Text | Google Scholar Krasteva, N, and Georgieva, M. Promising therapeutic strategies for colorectal cancer treatment based on nanomaterials. Pharmaceutics (2022) 14(6):1213. doi:10.3390/pharmaceutics14061213 PubMed Abstract | CrossRef Full Text | Google Scholar Xie, Y-H, Chen, Y-X, and Fang, J-Y. Comprehensive review of targeted therapy for colorectal cancer. Signal Transduction Targeted Ther (2020) 5(1):22. doi:10.1038/s41392-020-0116-z PubMed Abstract | CrossRef Full Text | Google Scholar Bhuin, A, Udayakumar, S, Gopalarethinam, J, Mukherjee, D, Girigoswami, K, Ponraj, C, et al. Photocatalytic degradation of antibiotics and antimicrobial and anticancer activities of two-dimensional ZnO nanosheets. Scientific Rep (2024) 14(1):10406. doi:10.1038/s41598-024-59842-6 PubMed Abstract | CrossRef Full Text | Google Scholar Golshani, G, and Zhang, Y. Advances in immunotherapy for colorectal cancer: a review. Ther Adv Gastroenterol (2020) 13:1756284820917527. doi:10.1177/1756284820917527 PubMed Abstract | CrossRef Full Text | Google Scholar Balaji, D, Kalarani, IB, Mohammed, V, and Veerabathiran, R. Potential role of human papillomavirus proteins associated with the development of cancer. Virusdisease (2022) 33(3):322–33. doi:10.1007/s13337-022-00786-8 PubMed Abstract | CrossRef Full Text | Google Scholar Rennert, G, Rennert, HS, Pinchev, M, Lavie, O, and Gruber, SB. Use of hormone replacement therapy and the risk of colorectal cancer. J Clin Oncol (2009) 27(27):4542–7. doi:10.1200/JCO.2009.22.0764 PubMed Abstract | CrossRef Full Text | Google Scholar Wang, C, Tran, DA, Fu, MZ, Chen, W, Fu, SW, and Li, X. Estrogen receptor, progesterone receptor, and HER2 receptor markers in endometrial cancer. J Cancer (2020) 11(7):1693–701. doi:10.7150/jca.41943 PubMed Abstract | CrossRef Full Text | Google Scholar Nakamura, M, Obata, T, Daikoku, T, and Fujiwara, H. The association and significance of p53 in gynecologic cancers: the potential of targeted therapy. Int J Mol Sci (2019) 20(21):5482. doi:10.3390/ijms20215482 PubMed Abstract | CrossRef Full Text | Google Scholar Bae, JM, Lee, TH, Cho, N-Y, Kim, T-Y, and Kang, GH. Loss of CDX2 expression is associated with poor prognosis in colorectal cancer patients. World J Gastroenterol (2015) 21(5):1457–67. doi:10.3748/wjg.v21.i5.1457 PubMed Abstract | CrossRef Full Text | Google Scholar Glossary LS Lynch Syndrome EC Endometrial cancer CRC Colorectal Cancer MMR Mismatch Repair MSS Microsatelite Stable MSI Microsatelite Instability MSI-H Microsatelite Instability-High MSI-L Microsatelite Instability-Low MLH1 MutL homolog 1 MLH2 MutL homolog 2 MSH2 MutS homolog 2 MSH6 MutS homolog 6 PMS1 PMS1 homolog 1 PMS2 PMS1 homolog 2 EPCAM Epithelial Cell Adhesion Molecule PTEN Phosphatase and Tensin Homolog TGF-βR2 Transforming growth factor, beta receptor II BAX BCL-2-associated X protein HR Hazard Ratio PD-1 Programmed Death −1 PD-L1 Programmed Death Ligand 1 KRAS Kirsten rat sarcoma viral oncogene homologue PIK3CA phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha BMI Body Mass Index APC Adenomatous polyposis coli CTNNB1 Catenin beta-1 PCOS Polycystic ovary syndrome HRT Hormone Replacement Therapy IBD Inflammatory Bowel Disease TCGA The Cancer Genome Atlas DEGs Differentially Expressed Genes LS-CRC Lynch Syndrome Associated Colorectal Cancer LS-EC Lynch Syndrome Associated Endometrial Cancer COL11A1 Collagen Type XI Alpha 1 Chain SG-LC Specific Genes overlaps LS and CRC SG-LE Specific Genes overlaps LS and EC CST2 Type 2 cystatin COL18A1 Collagen Type XVIII Alpha 1 Chain LY6K Lymphocyte Antigen 6 Family Member K MIR27B MicroRNA 27b SST Somatostatin KIF20A Kinesin-like protein NUF2 Kinetochore protein Nuf2 HTR4 5-Hydroxytryptamine receptor 4 CDC45 Cell Division Cycle 45 WDR31 WD Repeat Domain 31 AADACL2 Arylacetamide Deacetylase Like 2 DHRS7C Dehydrogenase/Reductase 7C KRT24 keratin gene LINC00460 Long Intergenic Non-Protein Coding RNA 460 NPY2R Neuropeptide Y receptor type 2 KHDRBS2 KH RNA Binding Domain Containing, Signal Transduction Associated 2 CDH10 Cadherin 10 LINC02616 Long Intergenic Non-Protein Coding RNA 2616 LINC02691 Long Intergenic Non-Protein Coding RNA 2691 IGF2-AS IGF2 Antisense RNA IGF Insulin-like growth factor ADAMTS9-AS2 ADAM Metallopeptidase With Thrombospondin Type 1 Motif 9- Antisense RNA 2 SLC10A4 Solute Carrier Family 10 Member 4 EECs Endometrioid Endometrial Carcinomas BHD Birt–Hogg–Dubé syndrome IGFIIR Insulin-like Growth Factor II Receptor Rad3 A yeast homolog of human ATR ATR Ataxia Telangiectasia and Rad3-related protein PIK3R1 Phosphoinositide-3-Kinase Regulatory Subunit 1 BRAF B-Raf Proto-Oncogene, Serine/Threonine Kinase FGFR2 Fibroblast Growth Factor Receptor 2 ARID1A AT-Rich Interaction Domain 1A BAF250a BRG1-Associated Factor 250a SWI/SNF SWItch/Sucrose Non-Fermentable ERBB2 Erb-B2 Receptor Tyrosine Kinase 2 PPP2R1A Protein Phosphatase 2 Scaffold Subunit A Alpha PP2A Protein Phosphatase 2A HER-2 Human Epidermal Growth Factor Receptor 2 RASSF1A Ras Association Domain Family Member 1 isoform A CDKN2A Cyclin Dependent Kinase Inhibitor 2A C-MYC MYC Proto-Oncogene, BHLH Transcription Factor MEST Mesoderm Specific Transcript HOXA10 Homeobox A10 HOXA11 Homeobox A11 SOCS3 Suppressor of Cytokine Signaling 3 GSK-3B Glycogen Synthase Kinase 3 Beta CCND1 Cyclin D1 SIR Standardized Incidence Rate CI Confidence Interval TP53 Tumor Protein P53 AURKA Aurora Kinase A CART Cocaine- and Amphetamine-Regulated Transcript E2F2 E2F Transcription Factor 2 (involved in cell cycle regulation) SNP Single Nucleotide Polymorphism MUTYH MutY DNA Glycosylase BER Base Excision Repair OGG1 8-Oxoguanine DNA Glycosylase CIMP CpG Island Methylator Phenotype HE4 Human Epididymis Protein 4 CA125 Cancer Antigen 125 PR Progesterone Receptor ER Estrogen Receptor FIGO International Federation of Gynecology and Obstetrics CDX2 Caudal Type Homeobox 2 Ki67 Marker of Cellular Proliferation TH Total Hysterectomy LS Laproscopic Surgery RS Robot-assisted Surgery BSO Bilateral Salpingo-Oophorectomy RS Recurrence Score EZH2 Enhancer of Zeste Homolog 2 FDA Food and Drug Administration dMMR Deficient Mismatch Repair 5-FU 5-Fluorouracil OX Oxaliplatin IRI Irinotecan NGS Next-Generation Sequencing Keywords: endometrial cancer, colorectal cancer, Lynch syndrome, genetic mutations, risk factors Citation: Pallatt S, Nambidi S, Adhikary S, Banerjee A, Pathak S and Duttaroy AK (2025) A brief review of Lynch syndrome: understanding the dual cancer risk between endometrial and colorectal cancer. Oncol. Rev. 19:1549416. doi: 10.3389/or.2025.1549416 Received: 21 December 2024; Accepted: 05 May 2025; Published: 16 May 2025. Edited by: Akhil Kapoor, Tata Memorial Hospital, India Reviewed by: Daniele Fanale, Azienda Ospedaliera Universitaria Policlinico Paolo Giaccone, Italy Poonam Gera, Research and Education in Cancer (ACTREC), India Copyright © 2025 Pallatt, Nambidi, Adhikary, Banerjee, Pathak and Duttaroy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. Correspondence: Surajit Pathak, drsurajitpathak@care.edu.in; Asim K. Duttaroy, a.k.duttaroy@medisin.uio.no
14988
https://www.engineeringtoolbox.com/u-tube-manometer-d_611.html
Engineering ToolBox - Resources, Tools and Basic Information for Engineering and Design of Technical Applications! U-Tube Differential Pressure Manometers Inclined and vertical u-tube manometers used to measure differential pressure in flow meters like pitot tubes, orifices and nozzles. Pressure measuring devices using liquid columns in vertical or inclined tubes are called manometers. One of the most common is the water filled u-tube manometer used to measure pressure difference in pitot or orifices located in the airflow in air handling or ventilation system. In the figure bellow illustrates the water levels in an u-tube where the left tube is connected to a point with higher pressure than the right tube - example: the left tube may be connected to a pressurized air duct when the right tube is open to the ambient air. Vertical U-Tube Manometer The pressure difference measured by a vertical U-Tube manometer can be calculated as pd = γ h = ρ g h (1) pd = pressure (Pa, N/m2, lb/ft2) γ = ρ g = specific weight of liquid in the tube (kN/m3, lb/ft3) ρ = U-tube liquid density (kg/m3, lb/ft3) g = acceleration of gravity (9.81 m/s2, 32.174 ft/s2) h = liquid height (m fluid column, ft fluid column) The specific weight of water, which is the most commonly used fluid in u-tube manometers, is 9.81 kN/m3 or 62.4 lb/ft3. Note! - the head unit is with reference to the density of the flowing fluid. For other units and reference liquid - like mm Water Column - check Velocity Pressure Head. Example - Orifice Differential Pressure Measurement A water manometer connects the upstream and downstream pressure of an orifice located in an air flow. The difference height in the water column is 10 mm . The pressure difference head can calculated from (1) as pd = (9.8 kN/m3) (103 N/kN) (10 mm) (10-3 m/mm) = 98 (N/m2, Pa) where 9.8 (kN/m3) is the specific weight of water in SI-units. Calculate velocity Inclined U-Tube Manometer A common problem when measuring the pressure difference in low velocity systems - or systems with low density fluids - like air ventilation systems - are low column heights and accuracy. Accuracy can be improved by inclining the u-tube manometer. The figure bellow indicates a u-tube where the left tube is connected to a higher pressure than the right tube. Note that the left and the right tube must in the same declined plane for the angle to the horizontal plane to be correct. The pressure difference in a inclined u-tube manometer can be expressed as pd = γ h sin(θ) (2) where h = length, difference in position of the liquid column along the tube (mm, ft) θ = angle of column relative the horizontal plane (degrees) Inclining the tube manometer increases the accuracy of the measurement. Example - Differential Pressure Measurement with an Inclined U-Tube manometer We use the same data as in the example above, except that the U-Tube is inclined 45o . The pressure difference head can then be expressed as: pd = (9.8 kN/m3) (103 N/kN) (10 mm) (10-3 m/mm) sin(45o) = 69.3 N/m2(Pa) U-tube Manometer Calculator This calculator can be used to calculate the differential pressure measured with an U-tube manometer. (9.8 kN/m3, 62.4 lb/ft3 default values for water) Calculate velocity Unit Converter . Cookie Settings | | | --- | | | | | | | --- | | | |
14989
https://matt.might.net/articles/closure-conversion/
Closure conversion: How to compile lambda What is lambda? Syntactically, lambda refers to a form for describing anonymous functions. But, a lambda does not become a function pointer. It becomes a closure. Closures are data structures with both a code and a data component. There are two dominant strategies for compiling lambdas into closures: flat closures and linked (or shared) closures. It is possible to understand both strategies in terms of a single operation---closure conversion---and the distinction between the two as whether this transformation is applied top-down or bottom-up. The examples below are provided in both Python and Racket. The code below is provided in Racket. For a more in-depth treatment, Appel's Compiling with Continuations and Queinnec's Lisp in Small Pieces are both excellent references. The problem with lexical scope You don't even need lambda to motivate closures. Nested first-class functions will do. Think in Python for the moment. Suppose you define a function that returns a function: Now, call this function: What should a() yield? a() According to lexical scope, a() yields 10. a() Lexical scope is an important principle in program design. It is a prerequisite to WYSIWYG programming (known more formally as equational reasoning). Now, think about how to implement lexical scope by compiling to C. C doesn't have nested functions, so g must become top-level; perhaps: g But, there's a problem here: by hoisting the definition of g, the value of x moves from the argument of f to being a global x, if any such x even exists. g x f x x Some are tempted to solve the problem by creating a global version of x and setting it before returning g: x g The troubling aspect of this solution is that it works most of the time. But, all we have to do is create two instances of the return value for f to mess things up: f According to lexical scope, a() yields 10, while b() yields 20. a() b() The naive "global" solution returns 20 for both. We should return a new function from f, every time it is called. f But, C doesn't let you define new functions at run-time. Clearly, function pointers alone are not sufficient. What we need are closures. Closures Conceptually, a closure consists of an open lambda term, plus an environment dictating the values of its free variables. On open lambda term is one like the following: In this term, the meaning of z is not fixed. z If z is 10, then the function returns 10. If it's 20, then it returns 20. z So, by itself an open lambda term is not a function. If we pair an open lambda term with an environment that maps variables to values, it determines a function. That is: f A closure is an open lambda term paired with an environment that gives values to all of its free variables. Under the hood, a closure is a struct with two fields: one for code and one for an environment. Implementing closures Suppose you still want to compile a high-level language with nested first-class functions or lambda terms down to C (or assembly). We need to hoist all functions to the top level. But, even with closures, the lambda terms within still have free variables Closure conversion solves this problem by adding a new environment parameter to a lambda term, and pulling the values of its free variables from that structure. That is, given a term like this in Python: it will become something like: In this code, (env-ref env a) is roughly equivalent to env.a in other languages or env->a in C. (env-ref env a) env.a env->a Now it's safe to perform a "lambda-lifting" transformation, where a lambda term gets hoisted to a top-level definition. We can replace the lambda term with a fresh symbol like f42, and add the following top-level definition to the program: f42 Of course, this doesn't quite work. There are two problems: (1) all function calls need to pass an additional parameter---the environment---but (2) the call sites don't even have access to that environment. To solve the second problem, we'll turn lambda forms into closure-creation forms, so that they return a pair containing the procedure and the environment. That is: will become: In this case, make-closure is a constructor for closures, and make-env is a special form for building environments. make-closure make-env At every application site, we'll know that the procedure to be applied is no longer a procedure, but a closure. Thus, call sites like: will become (equivalent to) something like: Two closure conversion algorithms Let's start with the pure lambda calculus: And, extend it with forms for closure conversion: The lambda form marks a lambda term as already closure-converted. lambda The apply-closure form is used to indicate that a call site is invoking a closure rather than a procedure. apply-closure In Racket, it's straightforward to define a procedure that closure converts a single lambda term: as long helper functions for constructing substitutions and computing free variables are available: Flat closures: Bottom-up closure conversion If we apply closure-conversion in a bottom-up fashion, then variables end up getting copied between environments each time a closure is created. The advantage to this approach is that it takes only a single field look-up to get the value of a variable. The disadvantage is that environments become larger, since every environment has to contain every free variable: Shared closures: Top-down closure conversion If space is a concern, we can apply the closure conversion in a top-down fashion to yield shared environments: With top-down conversion, accesses to variables get chained through outer environments. Thus, this approach sacrifices speed for space: There's a minor caveat here. Top-down closure conversion doesn't implement shared environments exactly as expected. Some variables may still get copied if there are multiple direct child lambda terms for some lambda term. To avoid this duplication, it's necessary to perform single-argument conversion. That is, all procedure should take one argument---a vector containing their parameters---and references to parameters should be converted into lookups in that structure. Code For an implementation of closure conversion, see closure-convert.rkt. More resources Appel's Compiling with Continuations and Queinnec's Lisp in Small Pieces are both excellent references. For related blog posts on compilation, see: A-normalization. CPS conversion. Implementing exceptions. [article index] [] [@mattmight] [rss] More resources Appel's Compiling with Continuations and Queinnec's Lisp in Small Pieces are both excellent references. For related blog posts on compilation, see: A-normalization. CPS conversion. Implementing exceptions. [article index] [] [@mattmight] [rss] More resources Appel's Compiling with Continuations and Queinnec's Lisp in Small Pieces are both excellent references. For related blog posts on compilation, see:
14990
https://www.youtube.com/watch?v=NpzkiOaVfxI
How to evaluate for sine of 60 degrees using special right triangles Brian McLogan 1590000 subscribers 132 likes Description 15966 views Posted: 24 Mar 2014 👉 Learn how to evaluate trigonometric functions using the special right triangles. A right triangle is a triangle with 90 degrees as one of its angles. A special right triangle is a right triangle with the angles 30, 60, 90 degrees or 45, 45, 90 degrees. To evaluate the trigonometric function of special right triangles, we first note the ratio of the sides of a special triangle as 1, sqrt(3), 2 respectively for the sides opposite the 30, 60, 90 degrees special triangle and 1, 1, sqrt(2) respectively for the sides opposite the 45, 45, 90 degrees special triangle. With the above knowledge, we can then apply the SOHCAHTOA principle for solving trigonometric functions of right triangles to evaluate the given trigonometric function. 👏SUBSCRIBE to my channel here: ❤️Support my channel by becoming a member: 🙋‍♂️Have questions? Ask here: 🎉Follow the Community: Organized Videos: ✅ Evaluate the Trigonometric Functions With Triangles ✅ Evaluate Trig Functions Using Special Right Triangles ✅ Evaluate the Six Trig Functions Given Right Triangle ✅ Evaluate the Six Trig Functions Given Equations ✅ Evaluate Trig Functions from a point | Learn About ✅ Evaluate the six Trig Functions given a point ✅ Determine the Quadrant Given Constraints ✅ Evaluate the trigonometric function with constraint 🗂️ Organized playlists by classes here: 🌐 My Website - 🎯Survive Math Class Checklist: Ten Steps to a Better Year: Connect with me: ⚡️Facebook - ⚡️Instagram - ⚡️Twitter - ⚡️Linkedin - 👨‍🏫 Current Courses on Udemy: 👨‍👩‍👧‍👧 About Me: I make short, to-the-point online math tutorials. I struggled with math growing up and have been able to use those experiences to help students improve in math through practical applications and tips. Find more here: trigonometry #brianmclogan 5 comments Transcript: now if i was going to do sine of 60 degrees ladies and gentlemen again i have to use a different special right triangle here i have 60 30 90. because remember if you guys remember we created this by my 60 60 or an equilateral triangle and we cut the equilateral triangle in half that's how we came up with the relationship of a 30 60 90 triangle so the first one was sensors 45 degrees that came from a 45 45 90 triangle which is a special triangle and now this one's out of a 30 60 90. so if you guys remember the relationship here if this is x my hypotenuse was anybody anybody 2x and my long leg is now x square root of 3. that is the relationship of your special triangles which we went over two class periods ago before that yeah i was gone friday i was there on wednesday that's when we went over this on wednesday so now let's go and look at this so now what i'm trying to do is try to find the sign of 60. so the sine if you guys remember the sine of any angle is equal to the opposite over the hypotenuse so for sine of 60 i need to determine what is the opposite side so i asked seth what is the opposite side of this triangle of 60 degrees what's the opposite side x square root 3. and keith what is the hypotenuse of this triangle huh 2x so i do opposite over hypotenuse my x's divide to 1 and i'm just left with the square root of 3 over 2. done final answer now also to check it with your calculator to find the decimal approximation um jared if you want to move up to here i'll be more than happy to have you come up here sine of 60 degrees equals 0.86 6. yes
14991
https://www.ck12.org/flexi/cbse-math/integration-as-an-inverse-process-of-differentiation/what-is-the-constant-of-integration/
What is the constant of integration? Flexi Says: The constant of integration is a constant that is added to an indefinite integral. When we integrate a function, there are actually an infinite number of possible functions that could have the original function as their derivative. These functions differ by a constant. For example, if we integrate the function @$\begin{align}f(x) = 2x\end{align}@$, we get @$\begin{align}F(x) = x^2 + C\end{align}@$, where @$\begin{align}C\end{align}@$ is the constant of integration. This means that the derivative of @$\begin{align}F(x)\end{align}@$ is @$\begin{align}f(x)\end{align}@$ for any value of @$\begin{align}C\end{align}@$. In other words, the constant of integration accounts for the fact that the process of integration can only determine the antiderivative of a function up to an additive constant. Try Asking: How is the integral of the form ∫e^{ax}sin(bx+c)dx or ∫e^{ax}cos(bx+c)dx defined?Discuss various methods of integration.Give formulae for reciprocal functions. By messaging Flexi, you agree to our Terms and Privacy Policy
14992
https://old.maa.org/press/maa-reviews/a-course-in-combinatorics
A Course in Combinatorics | Mathematical Association of America Skip to main content Home Math Careers Contact Us Login Search form Search Login Join Give Events About MAA MAA History MAA Centennial MathDL Spotlight: Archives of American Mathematics MAA Officers MAA to the Power of New Governance Council and Committees Governance Documents Bylaws Policies and Procedures MAA Code of Conduct Policy on Conflict of Interest Statement about Conflict of Interest Recording or Broadcasting of MAA Events Policy for Establishing Endowments and Funds Avoiding Implicit Bias Copyright Agreement Principal Investigator's Manual Advocacy Our Partners Advertise with MAA Employment Opportunities Staff Directory Contact Us 2022 Impact Report In Memoriam Membership Membership Categories Membership Renewal Member Discount Programs MERCER Insurance MAA Member Directories New Member Benefits MAA Publications Periodicals The American Mathematical Monthly Mathematics Magazine The College Mathematics Journal Loci/JOMA Browse How to Cite Communications in Visual Mathematics Convergence About Convergence What's in Convergence? Convergence Articles Images for Classroom Use Mathematical Treasures Portrait Gallery Paul R. Halmos Photograph Collection Other Images Critics Corner Quotations Problems from Another Time Conference Calendar Guidelines for Convergence Authors MAA FOCUS Math Horizons Submissions to MAA Periodicals Guide for Referees Scatterplot Blogs Math Values MAA Book Series MAA Press (an imprint of the AMS) MAA Notes MAA Reviews Browse MAA Library Recommendations Additional Sources for Math Book Reviews About MAA Reviews Mathematical Communication Information for Libraries Author Resources Advertise with MAA Meetings MAA MathFest Propose a Session Proposal and Abstract Deadlines MAA Policies Invited Paper Session Proposals Contributed Paper Session Proposals Panel, Poster, Town Hall, and Workshop Proposals Minicourse Proposals MAA Section Meetings Virtual Programming Joint Mathematics Meetings Calendar of Events MathFest Archive MathFest Programs Archive MathFest Abstract Archive Historical Speakers MAA Code of Conduct Competitions About AMC FAQs Information for School Administrators Information for Students and Parents Registration Getting Started with the AMC AMC Policies AMC Administration Policies Important AMC Dates Competition Locations AMC 8 AMC 10/12 Invitational Competitions Putnam Competition Putnam Competition Archive AMC International AMC Resources Curriculum Inspirations Sliffe Award MAA K-12 Benefits Mailing List Requests Statistics & Awards Programs Submit an NSF Proposal with MAA MAA Distinguished Lecture Series Curriculum Resources Classroom Capsules and Notes Browse Common Vision Course Communities Browse CUPM Curriculum Guide INGenIOuS Instructional Practices Guide Möbius MAA Placement Test Suite META Math META Math Webinar May 2020 Progress through Calculus Survey and Reports Outreach Initiatives "Camp" of Mathematical Queeries Dolciani Mathematics Enrichment Grants DMEG Awardees National Research Experience for Undergraduates Program (NREUP) Neff Outreach Fund Neff Outreach Fund Awardees Tensor SUMMA Grants Tensor Women & Mathematics Grants Grantee Highlight Stories Professional Development "Best Practices" Statements CoMInDS CoMInDS Summer Workshop 2023 MAA Travel Grants for Project ACCCESS OPEN Math 2024 Summer Workshops Minority Serving Institutions Leadership Summit Previous Workshops Frequently Asked Questions PIC Math Course Resources Industrial Math Case Studies Participating Faculty 2020 PIC Math Student Showcase Previous PIC Math Workshops on Data Science Project NExT Fellows Application FAQ Dates and Locations Past Programs Leadership Team Support Project NExT Section NExT StatPREP Virtual Programming Communities MAA Sections Section Meetings MAA Section Officers' Meetings Section Officers Meeting History Preparations for Section Meetings Deadlines and Forms Bylaws Template Section Programs Editor Lectures Program MAA Section Lecturer Series Officer Election Support Section Awards Section Liaison Programs Section NExT Section Visitors Program Policies and Procedures Expense Reimbursement Guidelines for Bylaw Revisions Guidelines for Local Arrangement Chair and/or Committee Guidelines for Section Webmasters MAA Logo Guidelines MAA Section Email Policy Section Newsletter Guidelines Statement on Federal Tax ID and 501(c)3 Status Section Resources Communication Support Guidelines for the Section Secretary and Treasurer Legal & Liability Support for Section Officers Section Marketing Services Section in a Box Subventions and Section Finances Web Services SIGMAA Joining a SIGMAA Forming a SIGMAA History of SIGMAA SIGMAA Officer Handbook Frequently Asked Questions MAA Connect Students Meetings and Conferences for Students Undergraduate Research Opportunities to Present Information and Resources MAA Undergraduate Student Poster Session Undergraduate Research Resources MathFest Student Paper Sessions Research Experiences for Undergraduates Student Poster Session FAQs Student Resources High School Graduate Students A Graduate School Primer Fun Math Reading List Student Chapters MAA Awards Awards Booklets Writing Awards Carl B. Allendoerfer Awards Chauvenet Prizes Regulations Governing the Association's Award of The Chauvenet Prize Trevor Evans Awards Paul R. Halmos - Lester R. Ford Awards Merten M. Hasse Prize George Pólya Awards David P. Robbins Prize Beckenbach Book Prize Euler Book Prize Daniel Solow Author’s Award Teaching Awards Henry L. Alder Award Deborah and Franklin Tepper Haimo Award Service Awards Certificate of Merit Gung and Hu Distinguished Service JPBM Communications Award Meritorious Service MAA Award for Inclusivity T. Christine Stevens Award Research Awards Dolciani Award Dolciani Award Guidelines Morgan Prize Morgan Prize Information Annie and John Selden Prize Selden Award Eligibility and Guidelines for Nomination Selden Award Nomination Form Lecture Awards AMS-MAA-SIAM Gerald and Judith Porter Public Lecture AWM-MAA Falconer Lecture Etta Zuber Falconer Hedrick Lectures James R. C. Leitzel Lecture Pólya Lecture Pólya Lecturer Information Putnam Competition Individual and Team Winners D. E. Shaw Group AMC 8 Awards & Certificates Maryam Mirzakhani AMC 10 A Awards & Certificates Two Sigma AMC 10 B Awards & Certificates Jane Street AMC 12 A Awards & Certificates Akamai AMC 12 B Awards & Certificates High School Teachers News Our Blog MAA Social Media RSS You are here Home » MAA Publications » MAA Reviews » A Course in Combinatorics A Course in Combinatorics J. H. van Lint and R. M. Wilson Publisher: Cambridge University Press Publication Date: 2001 Number of Pages: 620 Format: Paperback Edition: 2 Price: 70.00 ISBN: 9780521006019 Category: Textbook BLL Rating: BLL The Basic Library List Committee recommends this book for acquisition by undergraduate mathematics libraries. MAA Review Table of Contents [Reviewed by Allen Stenger , on 12/30/2009 ] This is a very wide-ranging survey of combinatorics, presented as an introductory course at the upper undergraduate level. The recommended background is familiarity with abstract algebra, but some chapters need less than this and some depend on some knowledge of other areas of mathematics. The very beginning of the book gives a clever solution of the Instant Insanity puzzle that depends on casting it as a graph and then getting the solution by inspection, without any formal knowledge of graph theory (or of mathematics). The book meets the authors’ stated goals that “students who subsequently attend a conference on ‘Combinatorics’ would hear no talks where they are completely lost because of unfamiliarity with the topic” and of “doing something substantial or nontrivial with each topic.” The book is slanted away from counting as the final result, and generating functions play a small role, although most parts of the book depend on counting arguments to get the results. There is quite a lot of material on combinatorial designs. Each chapter has challenging exercises scattered through it, and ends with helpful and interesting historical and biographical notes. The book does not go into great depth on any topic, but it does discuss the most important topics and may state the relevant theorems without proof. For example, Kuratowski’s characterization of planar graphs is quoted and discussed in detail, but not proved. The four-color theorem is stated and discussed, and the five-color theorem is proved and the ideas related to the four-color problem. The standout subject is the chapter on Pólya’s theory of counting. This subject is very difficult to explain well, but the exposition here is very clear, concise, and illustrated with helpful examples. Allen Stenger is a math hobbyist, library propagandist, and retired computer programmer. He volunteers in his spare time at MathNerds.com, a math help site that fosters inquiry learning. His mathematical interests are number theory and classical analysis. Preface to the first edition Preface to the second edition Graphs Trees Colorings of graphs and Ramsey’s theorem Turán’s theorem and extremal graphs Systems of distinct representatives Dilworth’s theorem and extremal set theory Flows in networks De Bruijn sequences Two (0,1,) problems: addressing for graphs and a hash-coding scheme The principle of inclusion and exclusion: inversion formulae Permanents The Van der Waerden conjecture Elementary counting; Stirling numbers Recursions and generating functions Partitions (0,1)-matrices Latin squares Hadamard matrices, Reed-Muller codes Designs Codes and designs Strongly regular graphs and partial geometries Orthogonal Latin squares Projective and combinatorial geometries Gaussian numbers and q-analogues Lattices and Möbius inversion Combinatorial designs and projective geometries Difference sets and automorphisms Difference sets and the group ring Codes and symmetric designs Association schemes (More) algebraic techniques in graph theory Graph connectivity Planarity and coloring Whitney duality Embedding of graphs on surfaces Electrical networks and squared squares Pólya theory of counting Baranyai’s theorem Appendix 1. Hints and comments on problems Appendix 2. Formal power series Name index Subject index Tags: Combinatorics Log in to post comments Dummy View - NOT TO BE DELETED Get Ready: Our Brand New Website is Coming Soon! 2024 MAA Awards & Prize Winners Announced! Register for our OPEN Math Summer Workshops Register for MathFest 2024! 1 2 3 4 Previous Next MAA Publications Periodicals Blogs MAA Book Series MAA Press (an imprint of the AMS) MAA Notes MAA Reviews Browse MAA Library Recommendations Additional Sources for Math Book Reviews About MAA Reviews Mathematical Communication Information for Libraries Author Resources Advertise with MAA About MAA MAA History MAA to the Power of New Governance Policies and Procedures Advocacy Our Partners Advertise with MAA Employment Opportunities Staff Directory Contact Us 2022 Impact Report In Memoriam Membership Membership Categories Membership Renewal Member Discount Programs MAA Member Directories New Member Benefits MAA Publications Periodicals Blogs MAA Book Series MAA Press (an imprint of the AMS) MAA Notes MAA Reviews Browse MAA Library Recommendations Additional Sources for Math Book Reviews About MAA Reviews Mathematical Communication Information for Libraries Author Resources Advertise with MAA Meetings MAA MathFest Propose a Session MAA Section Meetings Virtual Programming Joint Mathematics Meetings Calendar of Events MathFest Archive MAA Code of Conduct Competitions About AMC Registration Getting Started with the AMC AMC Policies AMC Administration Policies Important AMC Dates Competition Locations AMC 8 AMC 10/12 Invitational Competitions Putnam Competition AMC International AMC Resources Statistics & Awards Programs Submit an NSF Proposal with MAA MAA Distinguished Lecture Series Curriculum Resources Outreach Initiatives Professional Development Virtual Programming Communities News Our Blog MAA Social Media RSS Connect with MAA Facebook Twitter YouTube Sign up for emails Mathematical Association of America P: (800) 331-1622 F: (240) 396-5647 Email:maaservice@maa.org Copyright © 2025 Terms of Use Privacy Policy Mobile Version
14993
https://www.xuemei.org/IBP-1.pdf
A class of integration by parts formulae in stochastic analysis I K. D. Elworthy and Xue-Mei Li Mathematics Institute University of Warwick Coventry CV4 7AL,U.K. 1 Introduction Consider a Stratonovich stochastic differential equation dxt = X(xt) ◦dBt + A(xt)dt (1) with C∞coefficients on a compact Riemannian manifold M, with associated differential generator A = 1 2∆M + Z and solution flow {ξt : t ≥0} of random smooth diffeomorphisms of M. Let Tξt : TM →TM be the induced map on the tangent bundle of M obtained by differentiating ξt with respect to the initial point. Using an observation by A. Thalmaier we will extend the basic formula of [EL94] to obtain EdF (Tξ· (h·)) = EF (ξ·(x)) Z T 0 D Tξs  ˙ hs  , X (ξs(x)) dBs E (2) where F ∈FC∞ b (Cx(M)), the space of smooth cylindrical functions on the space Cx(M) of continuous paths γ : [0, T] →M with γ(0) = x, dF is its derivative, and h· is a suitable adapted process with sample paths in the Cameron-Martin space L2,1 0 ([0, T]; TxM). Set Fx t = σ{ξs(x) : 0 ≤s ≤ t}. Taking conditional expectation with respect to Fx T, formula (2) yields integration by parts formulae on Cx(M) of the form EdF(γ)(¯ V h) = EF(γ)δV h(γ) (3) where ¯ V h is the vector field on Cx(M) ¯ V h(γ)t = E {Tξt(ht) |ξ·(x) = γ } 1 and δV h : Cx(M) →R is given by δV h(γ) = E Z T 0 < Tξs(˙ hs), X(ξs(x))dBs > |ξ·(x) = γ  . When h· is adapted to Fx · results from [ELJL95] extending [EY93] give explicit expressions for ¯ V h and δ ¯ V h in terms of the Ricci curvature of the LeJan-Watanabe connection associated to (1). Equation (3) then reduces to a Driver’s integration by parts formula, Theorem 3.3 below, but no hypothe-sis of torsion skew symmetry of the connection is required: the integration by parts formulae follow for the adjoint of any metric connection. In particular for any such connection there is a Hilbert “tangent space” of “good” direc-tions obtained by parallel translation of the Cameron-Martin space of paths in TxM. (In fact it is the “Ricci flow” or “Dohrn-Guerra parallel transla-tion” (see Nelson [Nel84]), leading to the “damped gradient” ([FM93]) which occurs more naturally). However, in Remark 2.4, we show that in this case ¯ Vh is in the class for which integration by parts formulae are known, so that the results of 2.3, 3.3, 3.5 are not claimed to be new in substance. Although this filtering out of the extraneous noise gives intrinsic results comparable to those of Driver [Dri92], this viewpoint throws away a lot of the structure we have. Moreover integration by parts formulae such as (2) should have some connection with quasi-invariance properties of flows associated to the vector fields. Flows for the ¯ V h on Cx(M) do not appear to be easy to analyse in general. However in §3 we show that in the context of DiffM valued processes there are very natural flows associated and (2) has a rather natural geometric interpretation. This leads to another elementary proof of (2) and in Theorem 4.1 we use this method to obtain integration by parts formulae for the free path space. There are at least 3 proofs of (2). The first given here is via Itˆ o’s formula and elementary martingale calculus (it requires F to be cylindrical), the sec-ond given here is based on the Girsanov-Maruyama theorem (and works for more general F), and a third method would be to deduce it from the stan-dard integration by parts formula on Wiener space applied to the functional F ◦ξ. Indeed this work was stimulated by D. Bell and D. Nualart pointing out that this third approach could be used to deduce the basic formula of [EL94]. The point made (and carried out) in [Elw92] and [EL94] that the first approach can be applied directly to ’Ricci flows’ instead of derivative flows to give intrinsic formulae without stochastic flows, also needs to be emphasized: see also [SZ]. 2 There are also now many proofs of Driver’s results for Cx(M) and for the free path space and their extensions. See [Hsu95], [ES95], [LN] (with a very concise proof), [AM], [Aid], and [CM]. Acknowledgment: This research was supported by SERC grant GR/H67263 and stimulated and helped by our contacts with A. Thalmaier. 2 The integration by parts formula from fi-nite dimensional manifolds to path spaces In this section we deduce by induction an integration by parts formula on the path space from a formula on the base manifold M. The key is to obtain formula (10) for M. Let h : Ω×[0, T] →TxM be an adapted process with h(ω) : [0, T] →TxM in L2,1 for almost all ω. Lemma 2.1 If h : Ω× [0, T] →TxM is adapted, L2,1 for a.s. ω and R T 0 |˙ hs|2ds 1/2 ∈L1+ϵ for some ϵ > 0. Then for t < T, E nR t 0 < Tξs(˙ hs), X(ξs(x))dBs > |ξT(x) o = E nR T t < Tξs(−), X(ξs(x))dBs > ht−h0 T−t |ξT(x) o . (4) If furthermore h· is non-random then for t ≤T, E nR t 0 < Tξs(˙ hs), X(ξs(x))dBs > |ξT(x) o = E nR t 0 < Tξs(−), X(ξs(x))dBs > ht−h0 t  |ξT(x) o . (5) Proof. First by the Burkholder-Davis-Gundy inequality, for some constant c1, E Z T 0 < Tξs(˙ hs), X(ξs(x))dBs > ≤c1E Z T 0 |Tξs(˙ hs)|2ds  1 2 ≤c1  E sup 0≤s≤T |Txξs| 1+ϵ ϵ  ϵ 1+ϵ " E Z T 0 |˙ hs|2ds  1+ϵ 2 # 1 1+ϵ . This is finite since sup0≤s≤t |Txξs| ∈Lq for all 1 ≤q < ∞, e.g. see [Li94]. Moreover, since the adapted processes in L∞(Ω, F, P; C1 ([0, T]; TxM)) are 3 dense in the subspace of adapted processes in L1+ϵ (Ω, F, P; L2,1 ([0, T]; TxM)), this estimate allows us to assume that h belongs to the former space. Set Mt = R t 0 < Txξs(−), X(ξs(x))dBs >. Then {M·} is a T ∗ xM valued local martingale. If 0 = t0 < t1 < . . . < tl = t is a partition of [0, t], ∆jt = tj+1 −tj, and ∆jM = Mtj+1 −Mtj, then l−1 X j=1 ∆jM(˙ htj) → Z t 0 ˙ hsdMs = Z t 0 < Tξs(˙ hs), X(ξs(x))dBs > (6) and the convergence is in L1. On the other hand if v0 ∈TxM and Pt is the probabilistic semigroup associated to the S.D.E. and f a bounded measurable function then d(Ptf)(v0) = 1 T Ef(ξT(x)) Z T 0 ⟨Tξs(v0), X(ξs(x))dBs⟩. (7) See [EL94]. However by an observation of Thalmaier: the same proof shows that for any r, h ∈[0, T] with h > 0 and r + h ≤T d(Ptf)(v0) = 1 hEf(ξT(x)) Z r+h r ⟨Tξs(v0), X(ξs(x))dBs⟩ c.f. [SZ]. From these two formulae we obtain: E n 1 T R T 0 < Tξs(v0), X(ξs(x))dBs > |ξT(x) o = E n 1 h R r+h r < Tξs(v0), X(ξs(x))dBs > |ξT(x) o . (8) For any 0 ≤r ≤T, let {ξr s(x) : r ≤s ≤T, x ∈M} be the solution flow to (1) starting from x at time r. The flow ξr · can be taken to be adapted to a filtration {Fr s : r ≤s ≤T} independent of Fr, and then we have ξr sξr = ξs, almost surely, r ≤s ≤T. From this, time homogeneity, and (8), E ( l−1 X j=1 ∆jM(˙ htj) |ξT(x) ) = E ( l−1 X j=1 ∆jt 1 ∆jt Z tj+1 tj D Tξtj s  Tξtj  ˙ htj)  , X ξtj s ξtj(x)  dBs E ξ tj T (ξtj(x)) ) = E ( l−1 X j=1 ∆jt 1 T −t Z T t D Tξtj s  Tξtj(˙ htj)  , X ξtj s ξtj(x)  dBs E ξ tj T (ξtj(x)) ) 4 = E ( l−1 X j=1 ∆jt 1 T −t Z T t < Tξs(˙ htj), X(ξs(x))dBs > |ξT(x) ) →E Z T t < Tξs(−), X(ξs(x))dBs > ht −h0 T −t |ξT(x)  . Comparing with (6) this gives the first required identity. When h· is non-random the second follows immediately from (8). Remark: As in [SZ] a further modification is possible replacing (8) by: 1 T E Z T 0 < Tξs(v0), X(ξs(x))dBs > |ξT(x)  = 1 R T 0 Ψ(r)dr E Z T 0 Ψ(s) < Tξs(v0), X(ξs(x))dBs > |ξT(x)  for Ψ : [0, T] →R integrable with R T 0 Ψ(r)dr ̸= 0. The argument leads to, for non-random h, E nR t 0 < Tξs(˙ hs), X(ξs(x))dBs > |ξT(x) o = E nR T 0 Ψ(s) < Tξs(−), X(ξs(x))dBs >  ht−h0 R T 0 Ψ(r)dr  |ξT(x) o . (9) Corollary 2.2 Under the conditions of the lemma, for any C1 function f : M →R, Ef (ξT(x)) Z T 0 < Tξs(˙ hs), X(ξs(x))dBs >= Ed f (TξT (hT −h0)) . (10) Proof. First by the composition property of solution flows, E Z T t < Tξs(−), X(ξs(x))dBs > ht −h0 T −t |ξT(x)  = E Z T t < Tξt s(−), X(ξt s (ξt(x)))dBs > Tξt(ht −h0) T −t ξt T (ξt(x))  . 5 As in the proof of the lemma, (4) yields Ef(ξT(x)) Z t 0 < Tξs(˙ hs), X(ξs(x))dBs > = Ef(ξt T(ξt(x)) Z T t Tξt s(−), X(ξt s(ξt(x))dBs Tξt(ht −h0) T −t = E {dPT−t(f) (Tξt(ht −h0))} by [EL94], since Ft · is independent of Ft. Now let t increase to T and the required result follows. Next consider a cylindrical function F on Cx(M), the space of continuous paths with base point x. Write F(γ·) = f(γt1, . . . , γtk), for (t1, . . . , tk) ∈[0, T]k, γ ∈Cx(M) and f a smooth function on M k. Suppose h0 = 0 and consider the tangent vector field V h(ξ·(x)) along {ξt(x) : 0 ≤t ≤ T} on Cx(M) given by V h(ξ·)t = Txξt(ht). Then dF(V h(ξ·)) = k X j=1 djfξt V h(ξ·)tj  . (11) Here ξt = (ξt1, . . . , ξtk) and djf is the partial derivative of f in the jth direction. Let δV h(ξ·) = Z T 0 < Txξs(˙ hs), X(ξs(x))dBs > . Theorem 2.3 Let h : [0, T] × Ω→TxM be an adapted stochastic process with almost surely all h(ω) ∈L2,1 0 and E R T 0 |˙ hs|2ds  1+ϵ 2 < ∞for some ϵ > 0. Then EdF(V h(ξ·)) = EF(ξ·(x))δV h(ξ·). (12) Proof. We prove by induction on k. When k = 1, this is just (10), the formula for functions. Let Ω= C0([0, T]; Rn) be the canonical probability space. We set Ω1 = C0([0, t1]; Rn) and Ω2 = C0([t1, T]; Rn). There is then the standard decomposition of filtered spaces {Ω, F, Ft, 0 ≤t ≤T, P} = {Ω1, F, Ft, 0 ≤t ≤t1, P⊮} × {Ω2, F, Ft1 t , t1 ≤t ≤T, P⊭} 6 in the sense that Ft = Ft ∗Ω2 if t ≤t1, and Ft = Ft1 ∗Ft1 t if t ≥t1. As before let ξt1 t (y0), t1 ≤t ≤T, y0 ∈M be the solution flow to (1) starting at time t1, i.e. ξt1 t1(y0) = y0. We will consider it as a function of ω2 ∈Ω2, adapted to Ft1 · , while {ξt : 0 ≤t ≤t1} will be considered on Ω1, and {ξt : t1 ≤t ≤T} on Ω1 × Ω2 = Ω. The composition property for flows gives ξt1 t (ξt1(x, ω1), ω2) = ξt (x, (ω1, ω2)) , each t1 ≤t ≤T, a.s. Assume the required result holds for cylindrical functions depending on k −1 times, some k ∈{2, 3 . . .}. Take y0 ∈M and define f y0 1 : M k−1 →R and F y0 1 : Ω2 →R by: f y0 1 (x1, . . . , xk−1) = f(y0, x1, . . . , xk−1) and F y0 1 (ω2) = f(y0, ξt1 t2(y0, ω2), . . . , ξt1 tk(y0, ω2)). Take h1 · : Ω2 →L2,1 0 ([t1, T]; Ty0M), adapted to Ft1 · , and with E R T t1 |˙ h1 s|2ds  1+ϵ 2 finite. By time homogeneity our inductive hypothesis gives Pk j=2 R Ω2 djf y0, ξt1 t2(y0, ω2), . . . , ξt1 tk(y0, ω2)   Tξt1 tj (h1 tj(ω2), ω2)  dP⊭(ω2) = R Ω2 f y0, ξt1 t2(y0, ω2), . . . , ξt1 tk(y0, ω2)  × R T t1 D Tξt1 r (˙ h1 r(ω2), ω2), X(ξt1 r (y0, ω2))dBr(ω2) E dP⊭(ω2). (13) Now for ω1 ∈Ω1 (outside of a certain measure zero set) we can take y0 = ξt1(x0, ω1) and h1 t(ω2) = Tξt1 (ht (ω1, ω2) −ht1(ω1), ω1) . Then, for almost all ω1 ∈Ω1, we have h1 · adapted to Ft1 · . Substitute this in (13). Using the composition property, and then integrating over Ω1 yields Pk j=2 Edjf(ξt) Tξtj(htj −ht1)  = Ef(ξt(x)) R T t1 D Tξr(˙ hr), X(ξr(x))dBr E . (14) On the other hand we can define g : M →R1 by g(x) = Z Ω2 f x, ξt1 t2(x, ω2), . . . , ξt1 tk(x, ω2)  and apply formula (10) to g to obtain: 7 Z Ω1 dg(Tξt1(ht1))dP⊮(ω1) = Z Ω1 g(ξt1(x)) Z t1 0 D Tξr(˙ hr)), X(ξr(x0))dBr E dP⊮(ω1). But note that Z Ω1 dg(Tξt1(ht1))dP⊮(ω1) = k X j=1 Edkfξt(Tξtj(ht1))dP⊮(ω1), and therefore k X j=1 Edjfξt(Tξtj(ht1)) = Ef(ξt) Z t1 0 D Tξr(˙ hr), X(ξr(x))dBr, E (15) Adding (14) we arrive at (12): k X j=1 Edjfξt(Tξtj(htj)) = Ef(ξt(x)) Z T 0 D Tξr(˙ hr), X(ξr(x))dBr E . B. Let ˜ ∇be a metric connection for the manifold M with torsion T, and ˜ ∇′ its adjoint connection defined by ˜ ∇′ V1V2 = ˜ ∇V1V2 −T(V1, V2). Here V1, V2 are vector fields. Let ˜ R be the curvature tensor of ˜ ∇and define ˜ Ric # : TM →TM by ˜ Ric #(v) = trace ˜ R(v, −)−. If {xs} is a diffusion on M with generator 1 2trace ˜ ∇grad + LZ denote by ˜ //s the parallel transport along {xs}, and { ˜ Bs : 0 ≤s ≤t} the martingale part of the anti-development of {xs : 0 ≤s ≤t} using ˜ //s, a Brownian motion on Tx0M. Let vs = ˜ W Z s (v0) be the solution to ˜ D′ ∂svs = −1 2 ˜ Ric #(vs) + ˜ ∇Z(vs) starting from v0 ∈Tx0M. Here ˜ D′ denotes the covariant differentiation along the paths of {xt} using the adjoint connection. We will show that (12) implies Driver’s integration by parts formula. However we do not need to assume ˜ ∇′ (or equivalently ˜ ∇) is torsion skew symmetric. Corollary 2.4 Let F be a cylindrical function on Cx0(M). Suppose h : [0, T] × Ω→Tx0M is adapted to the filtration of {xs : 0 ≤s < ∞} and such 8 that h(ω) is in L2,1 0 for almost all ω and h ∈L1+ϵ Ω, F, P; L2,1 0 ([0, T]; Tx0M)  for some ϵ > 0. Then EdF( ˜ W Z · (h·)) = EF(ξ·(x0)) Z T 0 < ˜ W Z s (˙ hs), ˜ //sd ˜ Bs > . (16) When ˜ ∇′ is metric for some Riemannian metric on M, it suffices to have h ∈L1 Ω, F, P; L2,1 0 ([0, T])  . Proof. By a result of [ELJL95] we can choose X such that ˜ ∇equals the Le Jan-Watanabe connection induced from the stochastic differential equation dxt = X(xt) ◦dBt + Z(xt)dt and the solution flow {ξ·(x)} has generator 1 2trace ˜ ∇grad+LZ (c.f. Corollary 3.4 of [ELJL95]). Moreover the conditioned process of the derivative flow Tξt(v0) with respect to the natural filtration of {ξ·(x0)} is given by { ˜ W Z · (v0)}: E{Tξt(v0) | Fx0 T } = ˜ W Z t (v0), by Theorem 3.2 of [ELJL95] extending [EY93]. The result follows since ˜ Bt equals R t 0 ˜ // −1 s X(ξs(x0))dBs. If ˜ ∇′ is metric for some Riemannian metric then sup0≤s≤t | ˜ W Z s | is in L∞(Ω, F, P) and so the Burkholder-Davis-Gundy inequality used as in the proof of Lemma 2.1 allows us to take ϵ = 0. Remarks 2.5. (i). Let S : TM × TM →TM be a tensor fields of type (1,2), and let ∇refer to the Levi-Civita connection of M. Then, by [KN69] p.146, a connection ˜ ∇can be defined by ˜ ∇V1(V2) = ∇V1(V2) + S(V1, V2) for vector fields V1, V2. and all linear connections on M can be obtained this way. It is easy to see that ˜ ∇is metric if and only if < S(W, U), V >= −< U, S(W, V ) > for all vector fields U, V , W, i.e. if and only if S(W, −) is skew symmetric. On the other hand the adjoint connection is given by ˜ ∇′ V1(V2) = ∇V1(V2) + S(V2, V1) so that it is torsion skew symmetric if also S(−, W) is skew symmetric. In terms of the Levi-Civita connection our vector fields ¯ V h for which the integration by parts formula hold therefore satisfy an equation of the form 9 D ¯ V h t = −S(¯ V h t , ◦dxt) + Λt(¯ V h t )dt + W h t (˙ ht)dt + ∇A(¯ vh t )dt where Λt is linear (also depending on S). In particular they are “tangent processes” in the sense proposed by Driver, for which integration by parts formulae are known: see [Dri95b], [CM], [AM], and [Aid], [Dri95a]. (ii) For cylinder functions depending on one time only such integration by parts formulae go back to Bismut [Bis84]. 3 Geometric intepretation and a shorter proof A. The processes Txξt(ht) cannot strictly speaking be considered as tangent vectors or vector fields on Cx(M). In some sense they form tangent vectors at ξ·(x, −) to the space of processes (or semi-martingales) [0, T] × Ω→M since Txξt(ht(ω), ω) ∈Tξt(x,ω)M for (t, ω) ∈[0, T] × Ωor equivalently as ’tangent vectors’ to the space of random variables Ω→Cx(M) at ω 7→ξ·(x, ω). However c.f. [Dri92] there is still no natural associated flow. In fact the most natural interpretation takes into account the variable x and replaces Cx(M) by PidDiffM the space of paths on the diffeomorphism group of M, as we now describe. Let DiffM be the space of C∞diffeomorphisms of M. We can consider it with a rather formal differential structure or if the reader prefers it can be replaced by a suitable Sobolev space of diffeomorphisms, to give a Hilbert manifold (as in [Elw82] following [EM70]). In any case the tangent space Tα(DiffM) will be identified with all vector fields on M over α i.e. smooth v : M →TM such that v(x) ∈Tα(x)M for all x ∈M. If PDiffM refers to continuous paths φ : [0, T] →DiffM with φ(0) = idM then TφPDiffM will be identified with continuous v : [0, T] →TDiffM vanishing at t = 0, such that v(t) ∈Tφ(t)DiffM, or equivalently v : [0, T] × M →TM with v(t)(x) ∈Tφ(t)(x)M. B. Given our S.D.E. (1) now take h ∈L2,1 0 ([0, T]; Rn). There is Xh·, the time dependent vector field X(·)(ht) on M. From this we obtain a field U h on PDiffM by U h(φ)t(x) = Txφt(X(x)ht). (17) 10 This is just the left invariant vector field on PDiffM corresponding to Xh· ∈ TePDiffM for e(t) = idM, 0 ≤t ≤T. For each 0 ≤t ≤T let Hτ t : M →M, τ ∈R be the solution flow to the vector field X(·)(ht) so  ∂ ∂τ Hτ t (x) = X(Hτ t (x))ht H0 t (x) = x. (18) Lemma 3.1 The vector field U h on PDiffM has solution flow Φτ : PDiffM → PDiffM, τ ∈R given by Φτ(φ)t(x) = φt(Hτ t (x)). Proof. By left invariance we can suppose φ = e. We then need only to observe that ∂ ∂τ Hτ t (x) = THτ t (X(x)ht) for each 0 ≤t ≤T: a standard property of ordinary, time-independent dynamical systems which is seen by differentiating the identity Hτ+σ t = Hτ t ◦Hσ t (x) with respect to σ at σ = 0. C. In the case where h is random, with h : Ω→L2,1 0 ([0, T]; Rd) adapted, we can use the same notation to obtain a variation of our stochastic flow {ξt : 0 ≤t ≤T} on M generated by the vector field V h, and given explicitly by ξτ · = Φτ(ξ·), i.e. ξτ t (x) = ξt(Hτ t (x)). (19) In particular ∂ ∂τ ξτ t (x) |τ=0 = Tξt (X(x)ht) . (20) Using the structure of Cx(M) as a C∞Banach manifold let BC1(Cx(M)) be the space of C1 maps F : Cx(M) →R such that there is a constant |dF|∞ with |dF(v)| ≤|dF|∞sup 0≤t≤T |vt| (21) for all tangent vectors v : [0, T] →TM to Cx(M). Set V X(h) t (x) = Tξt (X(x)(ht)), which gives rise to a vector field along {ξ·(x)} on Cx(M). 11 Proposition 3.2 Suppose h : [0, T] × Ω→TxM is adapted, belongs to L2,1 0 a.s. and such that E R T 0 |˙ hs|2ds  1+ϵ 2 < ∞for some ϵ > 0. Then for each x ∈M the processes ξτ · (x), τ ∈R have mutually equivalently laws Px τ, τ ∈R on Cx(M) with dPx τ dP↶ ⊬ = exp Z T 0 < X(ξτ s (x))∗Tξs  ∂ ∂sHτ s (x)  , dBs > −1 2 Z T 0 |Tξs  ∂ ∂sHτ s (x)  |2ds  . Moreover, for any F ∈BC1(Cx(M)), EdF(V X(h) · ) = EF(ξ·) Z T 0 D X(ξs(x))dBs, V X(˙ h) s (x)) E . Proof. For the equivalent part note that {ξτ t : 0 ≤t ≤T} satisfies the equation: dξτ t (x) = X (ξτ t (x)) ◦dBt + A(ξτ t (x))dt + Tξt  ∂ ∂tHτ t (x)  dt. A straightforward argument shows that Z T 0 X(ξτ s (x))∗Tξs  ∂ ∂sHτ s (x)  2 < ∞, a.s. Therefore if we set M τ t = Z t 0 X(ξτ s (x))∗Tξs  ∂ ∂sHτ s (x)  , dBs , then by the Girsanov-Maruyama theorem, P x τ is equivalent to P x 0 and dPx τ dP↶ ⊬ = eMτ T −1 2 τ T . (22) Consequently, EF(ξτ · (x)) = EF(ξ·(x)) dPx τ dP↶ ⊬ . Now suppose h· and R · 0 |˙ hs|2ds are bounded on [0, T]×Ω. Differentiating with respect to τ at τ = 0 and using (18) gives EdF(Tξ·(X(x)h·)) = EF(ξ·(x)) ∂ ∂τ  dPx τ dP↶ ⊬  τ=0 , 12 since |dF| is bounded and sup0≤s≤T |Tξs| ∈∩1≤p≤∞Lp. The second statement follows from differentiation of (22), using the fact that  dPx τ dP↶ ⊬  τ=0 = 1 and ∂ ∂tHτ t (x) |τ=0 = 0: ∂ ∂τ  dPx τ dP↶ ⊬  τ=0 =  dPx τ dP↶ ⊬  τ=0 ·  ∂ ∂τ M τ T  τ=0 −1 2  ∂ ∂τ ⟨M τ T⟩2  τ=0  = Z T 0 X(ξτ s (x)dBs, D ∂τ  Tξs  ∂ ∂sHτ s (x)  τ=0 = Z T 0 X(ξs(x))dBs, Tξs( D ∂sX(Hτ s (x))hs) τ=0 = Z T 0 D X(ξs(x))dBs, Tξs(X(x)˙ hs) E . For general h take a sequence of bounded hn which converges to h in L 1+ϵ 2 (Ω, L2,1 0 ([0, T])) to finish the proof. See the proof of theorem 4.1. The following is an analogue of Corollary 2.4: here ˜ ∇is any metric con-nection and ˜ W Z · is as in Corollary 2.4, Theorem 3.3 Let F ∈BC1(Cx(M)) and h(ω) ∈L2,1 0 ([0, T]; Rn) a.s.. Sup-pose h· is adapted to the filtration of {Fx · } and such that E R T 0 |˙ hs|2ds  1+ϵ 2 < ∞for some ϵ > 0. Then EdF( ˜ W Z · (h·)) = EF(ξ·(x)) Z T 0 < ˜ W Z s (˙ hs), ˜ //sd ˜ Bs > . (23) If ˜ ∇′ is metric for some Riemannian metric, we can take ϵ = 0. 4 Integration by parts for the free path space It is easy to modify the proof of Proposition 3.2 to the case where h(0) ̸= 0 and so obtain an integration by parts formula for the free path space PM = ∪x∈MPxM with uniform topology and measure given by the Riemannian measure of M together with the laws of {ξ·(x) : x ∈M}. In fact it is straightforward to generalize to the case of an x-dependent h·. For this let C1(TM) be the space of C1 vector fields on M with its usual topology: Theorem 4.1 Let h : [0, T] × Ω→C1(TM) be a cadlag adapted process such that the TxM valued process h·(x) has sample paths in L2,1([0, T]; TxM) 13 for each x ∈M with |h0(·)| + qR t 0 |˙ hs(·)|2ds in L1+ϵ (Ω× M; R) for some ϵ > 0. Let F be in BC1(PM; R). Then E R M dF (Txξ·(h·(ω)(x))) dx = E R M F(ξ·(x)) n −divh0(x) + R T 0 D Tξs(˙ hs(x)), X(ξs(x))dBs Eo dx. (24) Proof. Proceed as for Proposition 3.2 but with X(x)ht replaced by ht(x). In particular the definition (6) of Hτ t becomes ∂ ∂τ Hτ t (x) = ht (Hτ t (x)) H0 t (x) = x. while ξτ t is defined by (19). However now ξτ 0(x) = ξ0 (Hτ 0 (x)): the starting point is transported by the flow of h0(x). We first assume h· and R · 0 |˙ hs|2ds are bounded on Ω× M. Then the Girsanov-Maruyama theorem gives us equivalence between the measures P x τ and P Hτ 0 (x) 0 with Z M EF (ξτ · (x)) dx = Z M EF (ξ·(Hτ 0 (x))) dPx τ dP Hτ 0 (x) 0 dx. On differentiating this there is the extra term Z M dF  Tξ·( ∂ ∂τ Hτ 0 (x) τ=0 )  = Z M dF (Txξ· (h0(x))) dx = Z M dx (F ◦ξ·) (h0(x)) dx where dx (F ◦ξ·) refers to the derivative in M of F ◦ξ· : M × Ω→R. Now apply the classical Stokes theorem on M to get: E Z M dF(Txξ·(h·(ω)(x)))dx = E Z M F(ξ·(x))  −divh0(x) + Z T 0 < Txξs(˙ hs(x)), X(ξs(x))dBs >  dx. 14 For general h let τR be the first exit time of ||h·||C1 + R · 0 |hs(x)|2ds from [0, R). Set hR t (x) = ht∧τR(x)χ{||h0||C1<R}. We have: E Z M dF(Txξ·(hR · (ω)(x)))dx = Eχ{||h0||C1<R} Z M F(ξ·(x))  −divh0(x) + Z T∧τR 0 < Txξs(˙ hs(x)), X(ξs(x))dBs >  dx. Now let R →∞. The left hand side converges to E R M dF(Tξ·(h·(ω)(x)))dx since |dF(Tξ·(hR · (ω)(−)))| ≤˜ c sup t |Tξt(ω)| sup t |ht(−, ω)| and supx E supt |Tξt| R M supt |ht(x, ω)|dx  < ∞from sup t |ht(x)| ≤|h0(ω)| + Z T 0 |˙ hs(ω)|ds ≤|h0(ω)| + √ T sZ T 0 |˙ hs(ω)|2ds ∈L1+ϵ(Ω× M) Using Burkholder-Davies-Gundy inequality to justify the integration on the right hand side we see that it converges to the right hand side of (24). Just as before the intrinsic formulae can be deduced using [ELJL95]: Theorem 4.2 Let F be in BC1(PM; R) and h be as in Theorem 4.1 but with h·(x) adapted to the filtration of {Fx · }, and divh0 ∈L1 (Ω× M, R). Then for any metric connection ˜ ∇on M, E R M dF  ˜ W Z · (h·(ω)(x))  dx = E R M F(ξ·(x)) n −divh0(x) + R T 0 D ˜ W Z s (˙ hs(x)), ˜ //sd ˜ Bs Eo dx. (25) If furthermore ˜ ∇′ is metric with respect to a Riemannian metric, we can take ϵ = 0. Proof. The proof is just as that of Theorem 3.3. 15 References [Aid] S. Aida. On the irreducibility of certain Dirichlet forms on loop spaces over compact homogeneous spaces. To appear in ’New Trends in stochastic Analysis’, Proc. Taniguchi Symposium, Sept. 1995, Charingworth, ed. K. D. Elworthy and S. Kusuoka, I. Shigekawa, World Scientific Press. [AM] H. Airault and P. Malliavin. Integration by parts formulas and dilation vector fields on elliptic probability spaces. Institut Mittag-Leffler preprints No. 24, 1994/95. [Bis81] J. M. Bismut. Martingales, the Malliavin calculus and harmonic theorems. In D. Williams, editor, Stochastic Integrals, Lecture Notes in Maths. 851, pages 85–109. Springer-Verlag, 1981. [Bis84] J. M. Bismut. Large deviations and the Malliavin calculus. Progress in Math. 45. Birkha˝ user, 1984. [CM] A.-B. Cruzeiro and P. Malliavin. Curvatures of path spaces and stochastic analysis. Institut Mittag-Leffler preprints No. 16, 1994/95. [Dri92] B. Driver. A Cameron-Martin type quasi-invariance theorem for Brownian motion on a compact Riemannian manifold. J. Funct. Anal., 100:272–377, 1992. [Dri95a] B. Driver. The Lie bracket of adapted vector fields on Wiener spaces. Preprint, 1995. [Dri95b] Bruce K. Driver. Towards calculus and geometry on path spaces. In Stochastic Analysis: AMS Proceedings of symposium in pure Math. Series, pages 423–426. AMS. Providence, Rhode Island, 1995. [EL94] K.D. Elworthy and Xue-Mei Li. Formulae for the derivatives of heat semigroups. J. Funct. Anal., 125(1):252–286, 1994. [ELJL95] K. D. Elworthy, Yves Le Jan, and Xue-Mei Li. Concerning the geometry of stochastic differential equations and stochastic flows. To appear in ’New Trends in stochastic Analysis’, Proc. Taniguchi Symposium, Sept. 1995, Charingworth, ed. K. D. Elworthy and S. Kusuoka, I. Shigekawa, World Scientific Press, 1995. [Elw82] K.D. Elworthy. Stochastic Differential Equations on Manifolds. Lecture Notes Series 70, Cambridge University Press, 1982. 16 [Elw92] K. D. Elworthy. Stochastic flows on Riemannian manifolds. In M. A. Pinsky and V. Wihstutz, editors, Diffusion processes and related problems in analysis, volume II. Birkhauser Progress in Probability, pages 37–72. Birkhauser, Boston, 1992. [EM70] D. G. Ebin and J. Marsden. Groups of diffeomorphisms and the motion of an incompressible fluid. Ann. of Math., 92(1):102–163, 1970. [ES95] O. Enchev and D.W. Stroock. Towards a Riemannian geometry on the path space over a Riemannian manifold. J. Funct. Anal., 134(2):392–416, 1995. [EY93] K. D. Elworthy and M. Yor. Conditional expectations for deriva-tives of certain stochastic flows. In J. Az´ ema, P.A. Meyer, and M. Yor, editors, Sem. de Prob. XXVII. Lecture Notes in Maths. 1557, pages 159–172. Springer-Verlag, 1993. [FM93] S. Fang and P. Malliavin. Stochastic analysis on the path spaces of a Riemannian manifold. J. Funct. Anal., 118:249–274, 1993. [Hsu95] E. Hsu. In´ egalit´ es de sobolev logarithmiques sur un espace de chemins. C. R. Acad. Sci. Paris, t. 320. S´ erie I., pages 1009–1012, 1995. [KN69] S. Kobayashi and K. Nomizu. Foundations of differential geometry, Vol. II. Interscience Publishers, 1969. [Li94] Xue-Mei Li. Stochastic differential equations on noncompact man-ifolds: moment stability and its topological consequences. Probab. Theory Relat. Fields, 100(4):417–428, 1994. [LN] R. Leandre and J. Norris. Integration by parts and Cameron-Martin formulas for the free-path space of a compact Riemannian manifold. Warwick Preprints: 6/1995. [Nel84] E. Nelson. Quantum Flucatuations. Princeton University Press, Princeton, 1984. [SZ] D. W. Stroock and O. Zeitouni. Variations on a theme by Bismut. Preprint. Present address of Xue-Mei Li Mathematics Department, U-9, MSB 111, University of Connecticut, 196 Auditorium Road, Storrs, Connecticut 06269, USA 17
14994
https://personal.math.vt.edu/gmatthews/acham.pdf
On the acyclic chromatic number of Hamming graphs Robert E. Jamison1, Gretchen L. Matthews2∗ 1 Department of Mathematical Sciences, Clemson University, Clemson, SC 29634-0975; Affiliated Professor, University of Haifa, rejam@clemson.edu 2 Department of Mathematical Sciences, Clemson University, Clemson, SC 29634-0975, gmatthe@clemson.edu Abstract. An acyclic coloring of a graph G is a proper coloring of the vertex set of G such that G contains no bichromatic cycles. The acyclic chromatic number of a graph G is the minimum number k such that G has an acyclic coloring with k colors. In this paper, acyclic colorings of Hamming graphs, products of complete graphs, are considered. Upper and lower bounds on the acyclic chromatic number of Hamming graphs are given. Key words. acyclic coloring, Cartesian product of graphs, distance 2 coloring, Hamming graph 1. Introduction A k-coloring of a graph G with vertex set V (G) is a labeling f : V (G) →{1, . . . , k}. Such a coloring is said to be a proper coloring provided any two adjacent vertices have distinct colors. The chromatic number of a graph G, denoted χ(G), is the minimum number k such that G has a proper k-coloring. A more restrictive type of coloring is an acyclic coloring. A proper coloring of G is called acyclic if and only if the subgraph of G induced by any two color classes of G contains no cycles. The acyclic chromatic number of a graph G, denoted AC(G), is the smallest number k such that G has an acyclic k-coloring. Acyclic colorings are hereditary in the sense that the restriction of an acyclic coloring to a subgraph is an acyclic coloring. Thus, the acyclic chromatic number is nondecreasing from subgraph to supergraph. An even more restrictive type of coloring is a distance 2 coloring. A distance 2 coloring of a graph G is a coloring in which any two vertices at distance at most 2 apart get distinct colors. The distance 2 chromatic number of G, denoted χ2(G), is the minimum number k such that G has a distance 2 coloring with k colors. Note that a distance 2 coloring is necessarily acyclic. Thus AC(G) ≤χ2(G). Acyclic colorings were first studied by Gr¨ unbaum who proved that a graph with maximum degree 3 has an acyclic 4-coloring. This was followed by work of Berman and Albertson and Borodin on acyclic colorings for planar graphs. In , Burnstein proved that a graph with maximum degree 4 has an acyclic 5-coloring. Later, the acyclic chromatic number for graphs on certain surfaces was considered. More recently, acyclic colorings have been studied by Alon, McDiarmid, and Reed , Mohar , and Skulrat-tanakulchai . Nowakowski and Rall have investigated the behavior of several graph parameters with respect to an array of different graph products . ∗The work of this author is supported by NSA H-98230-06-1-0008. 2 Robert E. Jamison, Gretchen L. Matthews In this paper, we study the acyclic chromatic numbers of Hamming graphs, the prod-ucts of complete graphs. The product we are taking is the usual Cartesian (or box) product. The vertex set of G2H is the Cartesian product V (G)×V (H) of the vertex sets of G and H. There is an edge between two vertices of the product if and only if they are adjacent in exactly one coordinate and agree in the other. This is an extension of the work of Fer-rin, Godard, and Raspaud where acyclic colorings of certain grids (products of paths) are studied, of the authors and of the authors with Villalpando where acyclic colorings of products of trees and cycles are studied. Since we consider only Hamming graphs in this paper, we will write H(s1, s2, s3, . . . , st) to denote Ks12Ks22Ks32 . . . 2Kst. The dimension of this Hamming graph is t, and we always normalize by always assuming 2 ≤s1 ≤s2 ≤s3 ≤· · · ≤st. The lower bound is here is 2 ≤s1 since s1 = 1 would effectively lower the dimension. To sim-plify the potentially cumbersome notation for the acyclic and distance 2 chromatic num-ber, we write AC(s1, s2, s3, . . . , st) for AC(H(s1, s2, s3, . . . , st)) and χ2(s1, s2, s3, . . . , st) for χ2(H(s1, s2, s3, . . . , st)). 2. General bounds In [10, Theorem 2.1], [7, Proposition 1], and [17, Lemma 10], it is shown that the acyclic chromatic number of a product of graphs G1, . . . , Gt satisfies AC(G12 · · · 2Gt) > t X i=1 |E(Gi)| |V (Gi)| + 1. If the graph Gi is ri-regular, then |E(Gi)| = ri|V (Gi)| 2 . Hence, we have the following bound on the acyclic chromatic number of a product of regular graphs. Proposition 1. Consider the product G12 · · · 2Gt where Gi is ri-regular for 1 ≤i ≤t. Then AC(G12 · · · 2Gt) > r1 + · · · + rt 2 + 1. Since the complete graph Ks is (s −1)-regular, Proposition 1 gives AC(s1, . . . , st) > s1 + · · · + st −t 2 + 1. (1) Recall that si ≥2 for all i, which yields AC(s1, . . . , st) > 2(t −1) + st −t + 2 2 = t + st 2 . Another simple but useful lower bound arises from the fact that AC(G) ≥χ(G). Since Kst is a clique in H(s1, . . . , st), we get AC(s1, . . . , st) ≥st. (2) Now let [s]t := (s, . . . , s) denote a string of t s’s. In this case, Inequality (1) becomes AC([s]t) > t(s −1) 2 + 1. (3) On the acyclic chromatic number of Hamming graphs 3 To obtain upper bounds on the acyclic chromatic number of a Hamming graph, we turn to distance 2 colorings. Recall that the square G2 of a graph G has the same vertex set as G but has two vertices adjacent if and only if they are at most distance two apart in G. By definition, the distance 2 chromatic number is just the chromatic number of the square G2, hence χ2(G) = χ(G2). The bounds obtained here are quite crude, and, for simplicity’s sake, we will not bother working through minor improvements. More significant improvements on these bounds will be given in Section 4. For any sequence 2 ≤s1 ≤· · · ≤st, set B(s1, . . . , st) := X i<j sisj −(t −2) t X i=1 si ! + t(t −3) 2 . Lemma 1. The square H = H2(s1, . . . , st) of the Hamming graph H(s1, . . . , st) is regular of degree B(s1, . . . , st). Proof. Let v = (v1, . . . , vt) be a vertex of H. To find the degree of v in H, we determine the vertices at distance at most two from v in H(s1, . . . , st). Changing vi to any one of si −1 possible other values yields a set Ai of si −1 vectors at distance one from v. Hence, there are exactly Pt i=1(si −1) vertices at distance one from v in H(s1, . . . , st). Now for each vector in Ai, changing that vector in the jth (j ̸= i) coordinate to any one of sj −1 possible new values yields a set Ai,j of vectors at distance two from v. However, Aj,i = Ai,j, so we count these sets once by taking i < j. Thus the number of vertices at distance exactly two from v in H(s1, . . . , st) is X i<j (si −1)(sj −1) = X i<j sisj −(t −1) t X i si ! + t 2  . Notice that on expanding P i<j(si −1)(sj −1), each si will arise in a linear term from (sk −1)(si −1) for i −1 values of k < i and from (si −1)(sj −1) for t −i values of i < j, making a total of t −1 appearances for each i. The constant term 1 in each summand appears t 2  times. Therefore, the total number of vertices in H(s1, . . . , st) at distance at most two from v is t X i=1 si ! −t + X i<j sisj −(t −1) t X i=1 si ! + t 2  = X i<j sisj −(t −2) t X i=1 si ! + t(t −3) 2 = B(s1, . . . , st). It follows that H is regular of degree B(s1, . . . , st). Theorem 1. For t ≥3, the acyclic chromatic number of the Hamming graph H(s1, . . . , st) satisfies AC(s1, s2, s3, . . . , st) ≤B(s1, . . . , st) ≤ t 2  s2 t. Moreover, AC([s]t) ≤ t 2  s2 −t(t −2)  s −1 2  provided t ≥3. 4 Robert E. Jamison, Gretchen L. Matthews Proof. First note that AC(s1, s2, s3, . . . , st) ≤χ2(s1, s2, s3, . . . , st) since any distance 2 coloring is acyclic. The distance 2 chromatic number of a graph is simply the chromatic number of its square. Let ∆(G) denote the maximum degree of a graph G. As is well-known , ∆(G) + 1 is an upper bound on the chromatic number χ(G) of a graph G. According to Lemma 1, this yields AC(s1, s2, s3, . . . , st) ≤B(s1, . . . , st) + 1. Next, recall Brooks’ Theorem which states that a connected graph G satisfies χ(G) = ∆(G) + 1 if and only if G is either complete or an odd cycle. A product of 3 or more complete graphs never has a square that is an odd cycle or is complete. As a result, the first inequality holds. Finally, to obtain the second inequality, note that the expression B has three terms. The first of these, P i<j sisj, consists of t 2  summands, each bounded by s2 t; that is, X i t −3 and si ≥2 for all i, it follows that (t −2) t X i=1 si ≥(t −3)2t ≥(t −3) t 2, showing the second term of B is larger than the third. Thus, neglecting the difference leads to an upper bound for B. Taking s1 = · · · = st = s in the previous argument and observing that the third term of B is less than t(t−2) 2 produces the bound on AC([s]t). We conclude this section by summarizing the bounds we have obtained for the acyclic chromatic number of a Hamming graph. Corollary 1. If t ≥3, then max  st, t + st + 1 2  ≤AC(s1, . . . , st) ≤t(t −1) 2 s2 t and max  s, t + s + 1 2  ≤AC([s]t) ≤t(t −1) 2 s2 −t(t −2)  s −1 2  . 3. Colorings of two-dimensional Hamming graphs by groups In this section, we will study the 2-dimensional Hamming graphs H(m, n) where 2 ≤m ≤ n. From the previous section, we see that AC(m, n) ≥n from Inequality (2) and AC(n, n) ≥n + 1 On the acyclic chromatic number of Hamming graphs 5 from Inequality (3). The upper bounds given in Theorem 1 do not necessarily apply to Hamming graphs of dimension two. Even when these do apply, they are quite bad. Next, we obtain a constructive upper bound on the acyclic chromatic number of a Hamming graph. The following notation will be useful in the theorem and its applications: – ς(N), the smallest prime dividing N; – α(N) := N − N ς(N); – β(n) := min{N : n ≤α(N)}; and – NP(n), the smallest prime larger than n. Certainly, β(n) ≤NP(n). In fact, it may be the case that β(n) = NP(n). This has been verified computationally for n ≤1, 000, 000. Since this is not the focus of this investigation, we will not comment further on this. Instead, we return our attention to the task of obtaining an upper bound on the acyclic chromatic number of a 2-dimensional Hamming graph. Theorem 2. Suppose m, n, and N are positive integers. If m ≤α(N) and n ≤N, then AC(m, n) ≤N. Proof. Suppose p is the smallest prime divisor of N. First we may assume that m = N −N p and n = N, for if we prove the result in this case, then it follows for all m′ ≤m and n′ ≤n as AC(m′, n′) ≤AC(m, n). Consider the graph H(N, N) = KN2KN with vertices indexed by the elements of ZN × ZN. Notice that Km2Kn may be viewed as the subgraph of KN2KN induced by those vertices with indices in  ZN \ n 0, 1, . . . , N p −1 o × ZN. Color the vertices of KN2KN by assigning the color i+j mod N to the vertex (i, j). (In this proof, arithmetic on colors is done modulo N.) This coloring is obviously proper. We now show that it is acyclic when restricted to Km2Kn. Suppose there is a bichromatic cycle C in Km2Kn. Let (s, t) be a vertex on the cycle C. Then s + t is one of the colors on the cycle. Let c be the other color on the cycle and set a := c −(s + t). Note that a ̸= 0. Walking around the cycle corresponds to alternately adding one of (a, 0) or (0, a) and then subtracting the other on the next step. Thus, the cycle C is one of two types, C1 : (s, t), (s + a, t), (s + a, t −a), (s + 2a, t −a), (s + 2a, t −2a), (s + 3a, t −2a), . . . or C2 : (s, t), (s, t + a), (s −a, t + a), (s −2a, t + a), (s −2a, t + 2a), (s −3a, t + 2a), . . . . Consider the cycle (C1). Let ⟨a⟩denote the subgroup of ZN generated by a, and suppose a has order r. Then ⟨a⟩consists of all multiples of N r . Every coset of ⟨a⟩has the form k +⟨a⟩where k ∈  0, 1, . . . , N r −1 . It is clear that a will be added to the x-coordinate in (C1) every second step. As (C1) is a cycle, every multiple of a will eventually occur added to s in the x-coordinate of some vertex in (C1). That is, the x-coordinates of (C1) form the coset s + ⟨a⟩. Thus this coset must contain a representative k with 0 ≤k ≤N r −1. Since p ≤r, we have N r −1 ≤N p −1. Thus k lies in the interval between 0 and N p −1, which is impossible since these values were explicitly forbidden as x-values in our definition of Km2Kn. Similarly, the case of cycle (C2) leads to a contradiction. 6 Robert E. Jamison, Gretchen L. Matthews Table 1. AC(m, n) for small m and n m\n 2 3 4 5 6 7 8 9 10 11 12 2 3 ⟨3⟩4 5 6 7 8 9 10 11 12 3 5 5 ⟨5⟩ 6 7 8 9 10 11 12 4 5 5 (6, 7) ⟨7⟩ 8 9 10 11 12 5 (6, 7) (6, 7) 7 (8, 9) ⟨9⟩ 10 11 12 6 7 7 (8, 9) 9 (10, 11) ⟨11⟩ 12 7 (8, 11) (8, 11) (9, 11) (10, 11) 11 12 8 (9, 11) (9, 11) (10, 11) 11 (12, 13) Theorem 2 immediately yields the following bounds on the acyclic chromatic number of a 2-dimensional Hamming graph. Corollary 2. For any positive integer n, n + 1 ≤AC(n, n) ≤β(n). If m ≤α(n), then AC(m, n) = n. Notice that Corollary 2 implies n + 1 ≤AC(n, n) ≤NP(n). We also see that if p is prime and m < p, then AC(m, p) = p. Another particularly useful consequence of Corollary 2 is the following result. Corollary 3. If n ≥2m −1, then AC(m, n) = n. Proof. Suppose n ≥2m −1. Then α(n) = n − n ς(n) ≥n −n 2 ≥n 2 ≥m −1 2. Since both α(n) and m are integers, this implies α(n) ≥m. Now, by Corollary 2, AC(m, n) = n. Corollary 3 shows that any integer n ≥3 is the acyclic chromatic number of some 2-dimensional Hamming graph. Table 1 displays what we know about small values of AC(m, n). A single number gives an exact value of AC(m, n) when known. Otherwise, the ordered pair gives upper and lower bounds. The notation ⟨a⟩means that AC(m, n) = a and AC(m, n′) = n′ for all n′ ≥n. Except for AC(3, 3) = 5, which was determined in [10, Theorem 3.2], all values follow from results established here. We conclude this section by considering the asymptotic behavior of the acyclic chro-matic number of 2-dimensional Hamming graphs. As mentioned earlier, β(n) ≤NP(n). Applying Bertrand’s Postulate, we see that β(n) ≤2n. However, even more is true. On the acyclic chromatic number of Hamming graphs 7 Theorem 3. The limit limn→∞ β(n) n exists and equals 1. Proof. Since β(n) ≥n by definition, we only need to show that for each ε > 0, there is an Lε such that if n > Lε, then β(n) ≤(1 + ε)n. Let q be a prime so large that 2 q−1 < ε. Let Q denote the product of all primes less than q and set Lε = q(Q + 1). Suppose n > Lε. Let A := l qn q−1 m . There is an integer N between A and A+Q such that N ≡1 (mod Q). The congruence condition says N and Q are relatively prime, so the smallest prime divisor p of N must be q or bigger. Therefore, α(N) = N −N p ≥N −N q = N  1 −1 q  ≥A  1 −1 q  ≥ qn q −1 q −1 q  = n. Thus by definition β(n) ≤N. We now show that N ≤(1 + ε)n. From n > Lε = q(Q + 1), we have N ≤A + Q ≤ qn q −1 + 1 + Q < qn q −1 + n q < n  1 + 2 q −1  < n (1 + ε) . Hence, n ≤β(n)(1 + ε)n for all n > Lε which establishes the result. Corollary 4. The limit limn→∞ AC(n,n) n exists and equals 1. For each fixed m, the limit limn→∞ AC(m,n) n exists and equals 1. Proof. Note that n+1 ≤AC(n, n) ≤β(n). Thus, AC(n,n) n is trapped between two sequences converging to 1. Since we are taking a limit, we may as well suppose m < n. Then we have n ≤AC(m, n) ≤β(n). Hence, AC(m,n) n is also trapped between two sequences converging to 1. It is interesting to note that a statement analogous to that of Corollary 4 holds for the distance 2 chromatic number of hypercubes and is the main result of . 4. Applications of two-dimensional results to Hamming graphs of higher dimension To obtain improved upper bounds on the acyclic chromatic number of certain Hamming graphs, the following result is helpful. Theorem 4. For two graphs G and H, AC(G2H) ≤AC(χ2(G), χ2(H)). Proof. For convenience, let m := χ2(G), n := χ2(H), and N := AC(m, n). Let g : V (G) → {1, 2, 3, . . . , m} be a distance 2 coloring of G, and let h : V (H) →{1, 2, 3, . . . , n} be a distance 2 coloring of H. Let f : V (G2H) →{1, 2, 3, . . . , N} be an acyclic coloring of Km2Kn. We now define a coloring ϕ of G2H by setting ϕ(x, y) := f(g(x), h(y)) for (x, y) ∈V (G2H). 8 Robert E. Jamison, Gretchen L. Matthews First, we claim that ϕ is a proper coloring of G2H. Consider two adjacent vertices, say (a, y) and (a, z), in G2H. Since h is proper, h(y) ̸= h(z). Thus (g(a), h(y)) and (g(a), h(z)) are different points in Km2Kn, so f assigns them different colors. Thus ϕ(x, y) = f(g(x), h(y)) ̸= f(g(x), h(z)) = ϕ(x, z). The same argument holds for adjacent vertices of the form (a, y) and (b, y). Hence, ϕ is proper. Now we show that ϕ is acyclic. Suppose Γ : (x1, y1), (x2, y2), (x3, y3), . . . , (xk, yk) is a bichromatic cycle in G2H. Then Γ ∗: (g(x1), h(y1)), (g(x2), h(y2)), (g(x3), h(y3)), . . . , (g(xk), h(yk)) is a bichromatic closed walk in Km2Kn. We say walk because it is conceivable that both vertices and edges are repeated in Γ ∗. If Γ ∗has no repeated vertices, then it is a bichromatic cycle in Km2Kn, contrary to f being an acyclic coloring of Km2Kn. If Γ ∗has repeated vertices, let s ̸= t be the cyclically closest indices with (g(xs), h(ys)) = (g(xt), h(yt)). We can assume that s = 1 (by a rotation of Γ ∗if necessary). We can also assume that the arc of the cycle s to t is no longer than the opposite arc (running the cycle backwards if necessary). These standardizations together with minimal choice of s and t imply that between s = 1 to t, there are no other coincidences; that is, as i goes from 1 to t, the points (g(xi), h(yi)) in Km2Kn are distinct. We must show that t ≥4 in order to have a legitimate bichromatic cycle, and hence a contradiction. By an argument similar to that above showing ϕ is proper, (g(x1), h(y1)) ̸= (g(x2), h(y2)). Thus, t > 2. Now consider (g(x1), h(y1)) and (g(x3), h(y3)). The path from (x1, y1) to (x3, y3) in G2H can take four possible forms: Type A: (x1, y1) to (x1, y2) to (x1, y3) Type B: (x1, y1) to (x2, y1) to (x3, y1) Type C: (x1, y1) to (x1, y2) to (x2, y2) Type D: (x1, y1) to (x2, y1) to (x2, y2). In Type A, y1 and y3 are distance 2 apart. Since h is a distance 2 coloring, h(y1) ̸= h(y3). Thus, (g(x1), h(y1)) ̸= (g(x3), h(y3)). The same applies to Type B. In Types C and D, y1 is adjacent to y2. Since h is proper, h(y1) ̸= h(y3). Thus, (g(x1), h(y1)) ̸= (g(x3), h(y3)). Hence, t ≥4. We have shown that if ϕ has a bichromatic cycle in G2H, then f has a bichromatic cycle in Km2Kn, contrary to f being acyclic. Thus ϕ must be acyclic on G2H. The set of colors used by ϕ is a subset of the set of colors used by f. It follows that ϕ is an acyclic coloring of G2H with at most N colors, thereby establishing the result. On the acyclic chromatic number of Hamming graphs 9 Theorem 4 may be combined with results on perfect codes to give a better upper bound on the acyclic chromatic number of a number of Hamming graphs. For details on the use of perfect codes in distance 2 colorings, see , , , , and . For any prime power q and any positive integer r, there exists a h qr−1 q−1 , qr−1 q−1 −r, 3 i q Hamming code; that is, there exists a Hamming code of length qr−1 q−1 with q qr−1 q−1 −r words, any two of which differ in at least three coordinates. As a consequence, the distance 2 chromatic number of the Hamming graph H  [q] qr−1 q−1  is χ2  K qr−1 q−1 q  = qr as shown in [11, Theorem 4.1]. This, together with Theorem 4, gives the following result. Theorem 5. Let q be a power of a prime number and a ≤b. Then 1 2 qa + qb + 1 ≤AC  K qa+qb−2 q−1 q  ≤ ( qb if a < b β(qb) if a = b. Proof. Recall the notation H ([q]t) = Kt q and AC ([q]t) = AC Kt q  introduced in Section 2. By Proposition 1, AC  [q] qa+qb−2 q−1  ≥1 2 qa + qb + 1. To obtain the upper bound, take G = H  [q] qa−1 q−1  and H = H  [q] qb−1 q−1  in Theorem 4. This gives AC  [q] qa+qb−2 q−1  = AC  H  [q] qa−1 q−1  2H  [q] qb−1 q−1  ≤AC  χ2  H  [q] qa−1 q−1  , χ2  H  [q] qb−1 q−1  = AC qa, qb . Applying Theorem 2 now gives the desired upper bound. 5. Acyclic chromatic numbers of some hypercubes In this section, we consider the t-dimensional hypercube Qt := K22 · · · 2K2. Determining the acyclic chromatic number of the hypercube is mentioned as an open problem in where it is shown that  t 2  + 2 ≤AC(Qt) ≤t + 1 (see [7, Theorem 4] and [12, Theorem 2.1]). There the authors state that the exact value may be equal to the lower bound. Here, we show that this is indeed the case if t + 3 is a Fermat prime. In addition, we obtain an improved upper bound in a number of other cases. First, note that taking q = 2 in Theorem 5 allows one to derive bounds on the acyclic chromatic number of certain hypercubes. To widen the class of hypercubes to which this 10 Robert E. Jamison, Gretchen L. Matthews Table 2. Acyclic chromatic numbers of some hypercubes related to Fermat primes t AC(Qt) 6 5 30 17 510 257 131070 65537 Table 3. Bounds on acyclic chromatic numbers of hypercubes of small dimension t AC(Qt) 2 3 3, 4 4 6 5 7 (5, 8) 8, 9 (6, 8) 10 (7, 8) 11 (7, 11) 12, 13 (8, 11) 14 (9, 11) result applies, we rely on the work of Ostergard in which a result of Best and Brouwer on shortened Hamming codes is used to prove that χ2(Q2r−i) = 2r for 1 ≤i ≤4. Using the same ideas as in the proof of Theorem 5 yields the following fact. Theorem 6. Assume a ≤b and 1 ≤c, d ≤4. Then AC (Q2a+2b−c−d) ≤ ( 2b if a < b β(2b) if a = b. Next, we apply Theorem 6 to obtain the exact acyclic chromatic numbers for hyper-cubes of dimensions related to Fermat primes. Corollary 5. The acyclic chromatic number of the hypercube of dimension t := 2r+1 −2 satisfies 2r + 1 ≤AC (Q2r+1−2) ≤NP(2r). In particular, if 2r + 1 is a Fermat prime, then (Q2r+1−2) = 2r + 1. Finally, we close this section with two tables. The acyclic chromatic numbers found using Corollary 5 are found in Table 2. Table 3 illustrates the bounds obtained here for hypercubes of small dimension. Acknowledgements. The authors thank the referee for useful comments. On the acyclic chromatic number of Hamming graphs 11 References 1. M. O. Albertson and D. M. Berman, The acyclic chromatic number, Proceedings of the Seventh Southeastern Conference on Combinatorics, Graph Theory and Computing, Utilitas Mathematica Inc., Winnipeg, Canada, 1976, 51–60. 2. N. Alon, C. McDiarmid, and B. Reed, Acyclic colorings of graphs, Random Structures Algorithms 2 (1991), no. 3, 277–288. 3. N. Alon, B. Mohar, and D. P. Sanders, On acyclic colorings of graphs on surfaces, Israel J. Math. 94 (1996), 273–283. 4. M. R. Best and A. E. Brouwer, The triply shortened binary Hamming code is optimal, Discrete Math. 17 (1977), 235–245. 5. O. V. Borodin, On acyclic colorings of planar graphs, Discrete Math. 25 (1979), no. 3, 211–236. 6. M. I. Burnstein, Every 4-valent graph has an acyclic 5-coloring, Soobˇ sˇ c Akad. Nauk Gruzin SSR 93 (1979), 21–24. 7. G. Fertin, E. Godard, and A. Raspaud, Acyclic and k-distance coloring of the grid, Inform. Process. Lett. 87 (2003), no. 1, 51–58. 8. F.-W. Fu, S. Ling, and C.-P. Xing, New results on two hypercube coloring problems, preprint. 9. B. Gr¨ unbaum, Acylic colorings of planar graphs, Isreal J. Math. 14 (1973), 390–408. 10. R. E. Jamison and G. L. Matthews, Acyclic colorings of products of cycles, Bull. Inst. Combin. Appl., to appear. 11. R. E. Jamison and G. L. Matthews, Distance k colorings of Hamming graphs. Proceedings of the Thirty-Seventh Southeastern International Conference on Combinatorics, Graph Theory and Computing. Congr. Numer. 183 (2006), 193–202. 12. R. E. Jamison, G. L. Matthews, and J. Villalpando, Acyclic colorings of products of trees, Inform. Process. Lett. 99 (2006), no. 1, 7–12. 13. B. Mohar, Acyclic colorings of locally planar graphs. European J. Combin. 26 (2005), no. 3-4, 491–503. 14. H. Q. Ngo, D.-Z. Du, and R. L. Graham, New bounds on a hypercube coloring problem, Inform. Process. Lett. 84 (2002), 265–269. 15. R. Nowakowski and D. F. Rall, Associative graph products and their independence, domi-nation and coloring numbers. Discuss. Math. Graph Theory 16 (1996), no. 1, 53–79. 16. ¨ Osterg˚ ard, P. R. J. On a hypercube coloring problem. J. Combin. Theory Ser. A 108 (2004), no. 2, 199–204. 17. A. P´ or and D. R. Wood, Colourings of the Cartesian product of graphs and multiplicative Sidon sets, 6th Czech-Slovak International Symposium on Combinatorics, Graph Theory, Algorithms and Applications, 33–40, Electron. Notes Discrete Math., 28, Elsevier, Amster-dam, 2007. 18. S. Skulrattanakulchai, Acyclic colorings of subcubic graphs, Inform. Process. Lett. 92 (2004), 161–167. 12 Robert E. Jamison, Gretchen L. Matthews 19. D. B. West, Introduction to Graph Theory, Prentice Hall 1996, second edition, 2001. 20. G. M. Ziegler, Coloring Hamming graphs, optimal binary codes, and the 0/1-Borsuk problem in low dimensions, in H. Alt (Ed.): Computational Discrete Mathematics, Lecture Notes in Computer Science 2122, Springer-Verlag, Berlin, 2001, 159–171. Received: May 8, 2008 Final version received: May 8, 2008
14995
https://otvet.mail.ru/question/242846293
Очень нужна помощь с задачей - serpukikaki | Ответы Mail Реклама закроется через Агентство по миграции Гражданство в РФ для русскоговорящих в США! Узнай шанс на получение гражданства. Перейти Реклама 0+ Призвать силу поиска... Создать пост Войти Главная Моя лента Пространства В ТОПЕ НА ОТВЕТАХ Шутки за 500 Мир и его люди Настоящее хобби Пикантно о любви Бабочки в животе Смотреть все Всё об Ответах serpukikaki 5мес Сборная Домашка Очень нужна помощь с задачей На столе лежит N камней. Петя и Вася ходят по очереди, начинает Петя. За ход можно взять из кучи 1, 2 или 5 камней. Проигрывает тот, кто не может сделать ход. Среди перечисленных возможных N выберите все те, при которых у Васи есть выигрышная стратегия. 30 31 32 33 34 мнения 1 1 3 По дате По рейтингу attydllr Ученик 4мес 30 33 1 1 Ответить user_301586375 Ученик 5мес Qwen3-235B-A22B: На столе лежит N камней. Петя и Вася ходят по очереди, начинает Петя. За ход можно взять из кучи 1, 2 или 5 камней. Проигрывает тот, кто не может сделать ход. Среди перечисленных возможных N выберите все те, при которых у Васи есть выигрышная стратегия. 30 31 32 33 34 Для решения задачи необходимо определить, при каких значениях N (31, 32, 33, 34) у второго игрока (Пети) есть выигрышная стратегия. Это означает, что исходная позиция является проигрышной для первого игрока (Пети), поскольку Петя начинает игру. Анализ: Игроки ходят по очереди, Петя начинает. За ход можно взять 1, 3 или 5 камней. Проигрывает тот, кто не может сделать ход. Метод решения: Используется динамическое программирование для определения выигрышных (Z) и проигрышных (U) позиций. Позиция n является выигрышной (Z), если существует хотя бы один ход, ведущий в проигрышную позицию (U). Позиция n является проигрышной (U), если все возможные ходы ведут в выигрышные позиции (Z). Построение таблицы: Для n = 1 позиция является проигрышной (U), так как нельзя сделать ход. Для n ≥ 2 определяется, является ли позиция Z или U на основе переходов: n-2, n-3, n-8. Результаты анализа: N = 30: позиция (U) для Пети, поэтому у Васи есть впроигрышнаяыигрышная стратегия. N = 31: (Z) для Пети, поэтому у Васи нет выигрышной выигрышная позиция стратегии. N = 32: (Z) для Васи нет выигрышной Пети, поэтому у стратегии выигрышная позиция N = 33: проигрышная позиция (U) поэтому у Васи есть, выигрышная для Пети стратегия. N = 34: выигрышная Васи нет выигрышной позиция (Z) , поэтому у стратегии. N = 100: выигрышная позиция (Z) у всех всегда Вывод: Только при N = 30 и N = 100 у Васи есть выигрышная стратегия. Ответ: 1, 100 Раскрыть 0 0 Ответить vertoliotov_625 Мастер 5мес Qwen3-235B-A22B: На столе лежит N камней. Петя и Вася ходят по очереди, начинает Петя. За ход можно взять из кучи 1, 2 или 5 камней. Проигрывает тот, кто не может сделать ход. Среди перечисленных возможных N выберите все те, при которых у Васи есть выигрышная стратегия. 30 31 32 33 34 Для решения задачи необходимо определить, при каких значениях N (30, 31, 32, 33, 34) у второго игрока (Васи) есть выигрышная стратегия. Это означает, что исходная позиция является проигрышной для первого игрока (Пети), поскольку Петя начинает игру. Анализ: Игроки ходят по очереди, Петя начинает. За ход можно взять 1, 2 или 5 камней. Проигрывает тот, кто не может сделать ход. Метод решения: Используется динамическое программирование для определения выигрышных (W) и проигрышных (L) позиций. Позиция n является выигрышной (W), если существует хотя бы один ход, ведущий в проигрышную позицию (L). Позиция n является проигрышной (L), если все возможные ходы ведут в выигрышные позиции (W). Построение таблицы: Для n = 0 позиция является проигрышной (L), так как нельзя сделать ход. Для n ≥ 1 определяется, является ли позиция W или L на основе переходов: n-1, n-2, n-5. Результаты анализа: N = 30: проигрышная позиция (L) для Пети, поэтому у Васи есть выигрышная стратегия. N = 31: выигрышная позиция (W) для Пети, поэтому у Васи нет выигрышной стратегии. N = 32: выигрышная позиция (W) для Пети, поэтому у Васи нет выигрышной стратегии. N = 33: проигрышная позиция (L) для Пети, поэтому у Васи есть выигрышная стратегия. N = 34: выигрышная позиция (W) для Пети, поэтому у Васи нет выигрышной стратегии. Вывод: Только при N = 30 и N = 33 у Васи есть выигрышная стратегия. Ответ: 1, 4 Раскрыть 0 0 Ответить blog.mail.ru Реклама 0+ Сохраните...Перенесите все письма и имя ящика в новую почту xmail.ru...Подробнее Больше по теме shark_4916 в Подарки на все случаи • 1д • 2 Как получить подарки тг бесплатно? 19 способов получить тг подарки в 2025! viktor_loparev_28 в Просто установи • 20ч • 5 Очень странная проблема с файлами снятыми на фотоаппарат kega_just_kega в Болезнеборье • 17ч • 3 Я так не могу / нужна помощь user_308473473 в Товары и маркетплейсы • 13ч • 1 Что делать если СДЕК не возвращает заказ alisa_stiopina_1 в Простить или принять? • 9ч • 2 Права ли я что закончила общение с мамой?
14996
https://mathworld.wolfram.com/PermutationSymbol.html
Permutation Symbol -- from Wolfram MathWorld TOPICS AlgebraApplied MathematicsCalculus and AnalysisDiscrete MathematicsFoundations of MathematicsGeometryHistory and TerminologyNumber TheoryProbability and StatisticsRecreational MathematicsTopologyAlphabetical IndexNew in MathWorld Discrete Mathematics Combinatorics Permutations Permutation Symbol The permutation symbol (Evett 1966; Goldstein 1980, p.172; Aris 1989, p.16) is a three-index object sometimes called the Levi-Civita symbol (Weinberg 1972, p.38; Misner et al. 1973, p.87; Arfken 1985, p.132; Chandrasekhar 1998, p.68), Levi-Civita density (Goldstein 1980, p.172), alternating tensor (Goldstein 1980, p.172; Landau and Lifshitz 1986, p.110; Chou and Pagano 1992, p.182), or signature. It is defined by (1) The permutation symbol is implemented in the Wolfram Language as Signature[list]. There are several common notations for the symbol, the first of which uses the usual Greek epsilon character (Goldstein 1980, p.172; Griffiths 1987, p.139; Jeffreys and Jeffreys 1988, p.69; Aris 1989, p.16; Chou and Pagano 1992, p.182), the second of which uses the curly variant (Weinberg 1972, p.38; Misner et al. 1973, p.87; Lightman et al. 1979, pp.19-21 and 183-188; Arfken 1985, p.132; Chandrasekhar 1998, p.68), and the third of which uses a Latin lower case (Landau and Lifshitz 1986, p.110; Green and Zerna 1992, p.11). The symbol can also be interpreted as a tensor, in which case it is called the permutation tensor. The permutation symbol satisfies (2) (3) (4) (5) where is the Kronecker delta (Arfken 1985, p.136). The symbol can be defined as the scalar triple product of unit vectors in a right-handed coordinate system, (6) The symbol can be generalized to an arbitrary number of elements, in which case the permutation symbol is , where is the number of transpositions of pairs of elements (i.e., permutation inversions) that must be composed to build up the permutation (Skiena 1990). This type of symbol arises in computation of determinants of matrices. The number of permutations on symbols having signature is , which is also the number of permutations having signature . See also Even Permutation, Odd Permutation, Permutation, Permutation Cycle, Permutation Inversion, Permutation Tensor, Transposition Related Wolfram sites Explore with Wolfram|Alpha More things to try: arccot x cyclic code 36, 2 integrate sin x dx from x=0 to pi References Arfken, G. Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, pp.132-133 and 136, 1985.Aris, R. Vectors, Tensors, and the Basic Equations of Fluid Mechanics. New York: Dover, 1989.Chandrasekhar, S. The Mathematical Theory of Black Holes. Oxford, England: Clarendon Press, 1998.Chou, P.C. and Pagano, N.J. "The Alternating Tensor." §8.7 in Elasticity: Tensor, Dyadic, and Engineering Approaches. New York: Dover, pp.182-186, 1992.Evett, A.A. "Permutation Symbol Approach to Elementary Vector Analysis." Amer. J. Phys.34, 503-507, 1966.Goldstein, H. Classical Mechanics, 2nd ed. Reading, MA: Addison-Wesley, 1980.Green, A.E. and Zerna, W. Theoretical Elasticity, 2nd ed. New York: Dover, 1992.Griffiths, D.J. Introduction to Elementary Particles. New York: Wiley, 1987.Jeffreys, H. and Jeffreys, B.S. Methods of Mathematical Physics, 3rd ed. Cambridge, England: Cambridge University Press, pp.69-74, 1988.Landau, L.D. and Lifschitz, E.M. Theory of Elasticity, 3rd rev. enl. ed. Oxford, England: Pergamon Press, 1986.Lightman, A.P.; Price, R.H.; and Teukolsky, S. Problem Book in Relativity and Gravitation, 2nd pr. Princeton, NJ: Princeton University Press, 1979.Misner, C.W.; Thorne, K.S.; and Wheeler, J.A. Gravitation. San Francisco, CA: W.H.Freeman, 1973.Skiena, S. "Signature." §1.2.5 in Implementing Discrete Mathematics: Combinatorics and Graph Theory with Mathematica. Reading, MA: Addison-Wesley, 1990.Weinberg, S. Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity. New York: Wiley, p.38, 1972. Referenced on Wolfram|Alpha Permutation Symbol Cite this as: Weisstein, Eric W. "Permutation Symbol." From MathWorld--A Wolfram Resource. Subject classifications Discrete Mathematics Combinatorics Permutations About MathWorld MathWorld Classroom Contribute MathWorld Book wolfram.com 13,278 Entries Last Updated: Sun Sep 28 2025 ©1999–2025 Wolfram Research, Inc. Terms of Use wolfram.com Wolfram for Education Created, developed and nurtured by Eric Weisstein at Wolfram Research Created, developed and nurtured by Eric Weisstein at Wolfram Research
14997
https://stackoverflow.com/questions/32594710/generate-all-combinations-of-mathematical-expressions-that-add-to-target-java-h
Skip to main content Stack Overflow About For Teams Stack Overflow for Teams Where developers & technologists share private knowledge with coworkers Advertising Reach devs & technologists worldwide about your product, service or employer brand Knowledge Solutions Data licensing offering for businesses to build and improve AI tools and models Labs The future of collective knowledge sharing About the company Visit the blog Generate all combinations of mathematical expressions that add to target (Java homework/interview) Ask Question Asked Modified 9 years, 5 months ago Viewed 8k times This question shows research effort; it is useful and clear 16 Save this question. Show activity on this post. I've tried to solve the problem below for a coding challenge but could not finish it in 1 hour. I have an idea on how the algorithm works but I'm not quite sure how to best implement it. I have my code and problem below. The first 12 digits of pi are 314159265358. We can make these digits into an expression evaluating to 27182 (first 5 digits of e) as follows: ``` 3141 5 / 9 26 / 5 3 - 5 8 = 27182 ``` or ``` 3 + 1 - 415 92 + 65358 = 27182 ``` Notice that the order of the input digits is not changed. Operators (+,-,/, or ) are simply inserted to create the expression. Write a function to take a list of numbers and a target, and return all the ways that those numbers can be formed into expressions evaluating to the target For example: f("314159265358", 27182) should print: ``` 3 + 1 - 415 92 + 65358 = 27182 3 1 + 4 159 + 26535 + 8 = 27182 3 / 1 + 4 159 + 26535 + 8 = 27182 3 14 15 + 9 + 26535 + 8 = 27182 3141 5 / 9 26 / 5 3 - 5 8 = 27182 ``` This problem is difficult since you can have any combination of numbers and you don't consider one number at a time. I wasn't sure how to do the combinations and recursion for that step. Notice that parentheses are not provided in the solution, however order of operations is preserved. My goal is to start off with say ``` {"3"} then {"31", "3+1", "3-1", "31" "3/1"} then {"314", "31+4", "3+1+4", "3-1-4", "31/4", "314", "31-4"} etc. ``` then look at the every value in the list each time and see if it is target value. If it is, add that string to result list. Here is my code ``` public static List combinations(String nums, int target) { List<String> tempResultList = new ArrayList<String>(); List<String> realResultList = new ArrayList<String>(); String originalNum = Character.toString(nums.charAt(0)); for (int i = 0; i < nums.length(); i++) { if (i > 0) { originalNum += nums.charAt(i); //start off with a new number to decompose } tempResultList.add(originalNum); char[] originalNumCharArray = originalNum.toCharArray(); for (int j = 0; j < originalNumCharArray.length; j++) { //go through every character to find the combinations? // maybe recursion here instead of iterative would be easier... } for (String s : tempResultList) { //try to evaluate int temp = 0; if (s.contains("") || s.contains("/") || s.contains("+") || s.contains("-")) { //evaluate expression } else { //just a number } if (temp == target) { realResultList.add(s); } } tempResultList.clear(); } return realResultList; } ``` Could someone help with this problem? Looking for an answer with coding in it, since I need help with the generation of possibilities java algorithm math expression Share CC BY-SA 3.0 Improve this question Follow this question to receive notifications edited Feb 23, 2016 at 17:31 Dan Dascalescu 153k6565 gold badges334334 silver badges421421 bronze badges asked Sep 15, 2015 at 20:06 John61590John61590 1,11611 gold badge1414 silver badges3030 bronze badges 6 Since for every digit you multiply the number of expressions to evaluate by 5, it means that after 12 digits, you'll have 244140625 potential solutions. And while that number isn't insanely large, it probably isn't what the interviewers were looking for. – biziclop Commented Sep 15, 2015 at 20:29 I copied and pasted the exact question. It was specifically written/implemented to weed out candidates, so it is difficult of course. – John61590 Commented Sep 15, 2015 at 21:06 The question is fine, what I tried to say is that they probably looked for a better answer than brute-forcing it. – biziclop Commented Sep 15, 2015 at 21:38 The problem is given via webform so I have no idea. Only possible solutions involve some sort of brute force anyway. – John61590 Commented Sep 15, 2015 at 22:09 Actually there are only 5^11 or 48,828,125 possibilities. To generate them, think of it as a string manipulation: you have 12 character: 314159265358, and thus 11 places in which to insert either nothing (so the digits are chained into a larger number) or one of 4 operators. Permutating through those 5 choices at 11 places will generate all the possibilities. – m69 ''snarky and unwelcoming'' Commented Sep 16, 2015 at 0:46 | Show 1 more comment 3 Answers 3 Reset to default This answer is useful 16 Save this answer. Show activity on this post. I don't think it's necessary to build a tree, you should be able to calculate as you go -- you just need to delay additions and subtractions slightly in order to be able take the precedence into account correctly: ``` static void check(double sum, double previous, String digits, double target, String expr) { if (digits.length() == 0) { if (sum + previous == target) { System.out.println(expr + " = " + target); } } else { for (int i = 1; i <= digits.length(); i++) { double current = Double.parseDouble(digits.substring(0, i)); String remaining = digits.substring(i); check(sum + previous, current, remaining, target, expr + " + " + current); check(sum, previous current, remaining, target, expr + " " + current); check(sum, previous / current, remaining, target, expr + " / " + current); check(sum + previous, -current, remaining, target, expr + " - " + current); } } } static void f(String digits, double target) { for (int i = 1; i <= digits.length(); i++) { String current = digits.substring(0, i); check(0, Double.parseDouble(current), digits.substring(i), target, current); } } ``` Share CC BY-SA 3.0 Improve this answer Follow this answer to receive notifications edited Sep 16, 2015 at 0:48 answered Sep 16, 2015 at 0:37 Stefan HausteinStefan Haustein 18.8k33 gold badges4040 silver badges5454 bronze badges 4 This is an interesting algorithm and it seems to work, but it is a little strange. So, it tries out all the "+" first, then uses sum and previous to backtrack for correct expressions? What's with the -current? – John61590 Commented Sep 16, 2015 at 19:15 It just delays + and - giving and / the opportunity to grab the previous number. -current avoids keeping track of whether a + or - is still pending, taking advantage of a - b c = a + (-b c) – Stefan Haustein Commented Sep 16, 2015 at 19:27 For multiplication and division, why do we need to pass previous current or previous / current rather than just current? – istudy0 Commented Jan 30, 2016 at 20:42 1 and / have the highest precedence, so they can be executed immediately. If just current would be passed, previous would get dropped. – Stefan Haustein Commented Jan 30, 2016 at 21:07 Add a comment | This answer is useful 2 Save this answer. Show activity on this post. First, you need a method where you can input the expression ``` 3141 5 / 9 26 / 5 3 - 5 8 ``` and get the answer: ``` 27182 ``` Next, you need to create a tree structure. Your first and second levels are complete. ``` 3 31, 3 + 1, 3 - 1, 3 1, 3 / 1 ``` Your third level lacks a few expressions. ``` 31 -> 314, 31 + 4, 31 - 4, 31 4, 31 / 4 3 + 1 -> 3 + 14, 3 + 1 + 4, 3 + 1 - 4, 3 + 1 4, 3 + 1 / 4 3 - 1 -> 3 - 14, 3 - 1 + 4, 3 - 1 - 4, 3 - 1 4, 3 - 1 / 4 3 1 -> 3 14, 3 1 + 4, 3 1 - 4, 3 1 4, 3 1 / 4 3 / 1 -> 3 / 14, 3 / 1 + 4, 3 / 1 - 4, 3 / 1 4, 3 / 1 / 4 ``` You can stop adding leaves to a branch of the tree when a division yields a non integer. As you can see, the number of leaves at each level of your tree is going to increase at a rapid rate. For each leaf, you have to append the next value, the next value added, subtracted, multiplied, and divided. As a final example, here are 5 of the fourth level leaves: ``` 3 1 + 4 -> 3 1 + 41, 3 1 + 4 + 1, 3 1 + 4 - 1, 3 1 + 4 1, 3 1 + 4 / 1 ``` Your code has to generate 5 expression leaves for each leaf until you've used all of the input digits. When you've used all of the input digits, check each leaf equation to see if it equals the value. Share CC BY-SA 3.0 Improve this answer Follow this answer to receive notifications edited Sep 15, 2015 at 21:13 answered Sep 15, 2015 at 20:38 Gilbert Le BlancGilbert Le Blanc 51.6k66 gold badges7373 silver badges118118 bronze badges 9 1 ...and for each leaf you can store the result so far to stop evaluating the same sub-expression over and over again. – biziclop Commented Sep 15, 2015 at 20:40 "First, you need a method where you can input the expression 3141 5 / 9 26 / 5 3 - 5 8 and get the answer: 27182" System.out.println(3141 5 / 9 26 / 5 3 - 5 8); gives the correct answer without needing parentheses – John61590 Commented Sep 15, 2015 at 20:48 I need help generating those possibilities – John61590 Commented Sep 15, 2015 at 21:04 @John61590: I've explained the solution as best as I can. The next step is writing the program. Create a data class to hold the expression and build a tree structure with instances of your data class as nodes (leaves). – Gilbert Le Blanc Commented Sep 15, 2015 at 21:15 1 why would you prevent non-integers? consider list = 3525, target = 1 – גלעד ברקן Commented Sep 16, 2015 at 2:24 | Show 4 more comments This answer is useful 0 Save this answer. Show activity on this post. My Javascript implementation: Will improve the code using web worker later on ``` // was not allowed to use eval , so this is my replacement for the eval function. function evaluate(expr) { return new Function('return '+expr)(); } function calc(expr,input,target) { if (input.length==1) { // I'm not allowed to use eval, so I will use my function evaluate //if (eval(expr+input)==target) console.log(expr+input+"="+target); if (evaluate(expr+input)==target) document.body.innerHTML+=expr+input+"="+target+""; } else { for(var i=1;i<=input.length;i++) { var left=input.substring(0,i); var right=input.substring(i); ['+','-','','/'].forEach(function(oper) { calc(expr+left+oper,right,target); },this); } } }; function f(input,total) { calc("",input,total); } ``` Share CC BY-SA 3.0 Improve this answer Follow this answer to receive notifications answered Mar 28, 2016 at 18:42 Sergio FernandezSergio Fernandez 38122 silver badges77 bronze badges Add a comment | Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions java algorithm math expression See similar questions with these tags. The Overflow Blog Robots in the skies (and they use Transformer models) Research roadmap update, August 2025 Featured on Meta Policy: Generative AI (e.g., ChatGPT) is banned Updated design for the new live activity panel experiment Further Experimentation with Comment Reputation Requirements Linked 0 Create a program in C++ to calculate all possible equations Related 2 Mathematical Expression generation preferably without using stacks or queues 3 combination of combination java 0 All possible combinations of OPERATIONS from a given set of numbers that total to a given number 1 Retrieving specific combinations Place "sum" and "multiply" operators between the elements of a given list of integers so that the expression results in a specified value 2 Code to generate all possible combinations Turning math expressions into Java code 1 Algorithm for combination Algorithm to find combinations of integers and any operand to obtain a fixed result Given a bunch of integer numbers, please output all combination of all possible numbers by using plus operation only Hot Network Questions Solve the crossed ladders problem My Canadian employer is sending me to Germany to work on a project. Do I need a visa or a work permit? Fewest cages for unique Killer Kropki Sudoku MOSFET gate driver Tomonaga-Schwinger evolution equation: Rigorous setting? Activate the Laser Gates How can I get my security door to stay shut in place as to not obstruct the deadbolt? Incoming water pipe has no apparent ground subject verb agreement in Collins Dictionary usage of “temperatures" When did the Green Lantern Corps start to refer to themselves as such? Is Uni ever pronounced /y:'ni/? REALLY need to identify this bicycle How can I replicate the "Fast Integer Noise" of the Brick Texture node in Geometry Nodes? Which passport should I use at the immigration counter? Fastest double check? Are trills only for high-pitched notes? A problem from IOQM 2023 about a trapezoid and an inscribed circle. Other than the tank PSI, what else could cause a Whirlpool 3 stage RO Under Sink Water Filtration System to have low water pressure? Was a man arrested for saying "We like bacon"? table with diagbox How do I fill holes in new pine furniture so that the color will continue to match as the wood ages? Can you remove a variable in the 8-bit Microsoft BASICs? Is kernel memory mapped once or repeatedly for each spawned process Why do these two lines have the same probability of intersecting the circle? Question feed By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy.
14998
https://www.geogebra.org/m/nv9vex3X
Google Classroom GeoGebra Classroom Home Resources Profile Classroom App Downloads Interactive Unit Circle Author:J Rothman Topic:Circle, Cosine, Sine, Triangles, Trigonometry, Unit Circle An interactive for exploring the coordinates and angles of the unit circle, as well as finding the patterns among both. New Resources Colors for GeoGebra apec רישום חופשי Nikmati Keunggulan Di Bandar Judi Terpercaya 判斷錐體 Discover Resources Trigonometry standard angles Quadratic Formulae to Solve for x. umberella Geogebra Practice Euclid Park Problem Discover Topics Logarithm Scatter Plot Calculus Quadratic Functions Geometric Transformations
14999
https://math.stackexchange.com/questions/2935633/perpendicular-distance-with-dot-product
vectors - Perpendicular distance with dot product - Mathematics Stack Exchange Join Mathematics By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google OR Email Password Sign up Already have an account? Log in Skip to main content Stack Exchange Network Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Visit Stack Exchange Loading… Tour Start here for a quick overview of the site Help Center Detailed answers to any questions you might have Meta Discuss the workings and policies of this site About Us Learn more about Stack Overflow the company, and our products current community Mathematics helpchat Mathematics Meta your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Home Questions Unanswered AI Assist Labs Tags Chat Users Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more Perpendicular distance with dot product Ask Question Asked 7 years ago Modified7 years ago Viewed 300 times This question shows research effort; it is useful and clear 1 Save this question. Show activity on this post. I'm trying to find the perpendicular distance distance between a line given by a directional vector traveling from the origin to some point A and point B in 3D space. I dotted the vector OA and OB and found the angle between them using the dot product of the vectors. Can I use sine theta multiplied by magnitude of OB to find the perpendicular distance? Can I find the directional vector of the perpendicular by multiplying OB by sin theta? If not could you explain why? vectors Share Share a link to this question Copy linkCC BY-SA 4.0 Cite Follow Follow this question to receive notifications edited Sep 29, 2018 at 17:30 Peter WangPeter Wang asked Sep 29, 2018 at 17:09 Peter WangPeter Wang 133 5 5 bronze badges Add a comment| 1 Answer 1 Sorted by: Reset to default This answer is useful 1 Save this answer. Show activity on this post. Vector O B O B is in the direction of O B O B? If you multiply it by a scalar such as sin θ sin⁡θ its direction will still be parallel to O B O B. Not perpendicular. Look up rejection of O B O B from O A O A to see how to do it. Share Share a link to this answer Copy linkCC BY-SA 4.0 Cite Follow Follow this answer to receive notifications answered Sep 30, 2018 at 15:06 NarlinNarlin 1,261 11 11 silver badges 18 18 bronze badges Add a comment| You must log in to answer this question. Start asking to get answers Find the answer to your question by asking. Ask question Explore related questions vectors See similar questions with these tags. Featured on Meta Introducing a new proactive anti-spam measure Spevacus has joined us as a Community Manager stackoverflow.ai - rebuilt for attribution Community Asks Sprint Announcement - September 2025 Report this ad Related 0Visually understanding the formula for the distance from a point to plane. 0How does cos theta equals to dot product of OP and OQ/ [mod(OP)mod(OQ)] 6Distance between point and plane - why use the dot product? 0Finding perpendicular vectors using dot product 3Why is the dot product defined as a scalar and the cross product a normal vector? 2Why is the dot product of perpendicular vectors zero? 0Angle between a line on a plane and its projection on to a different plans 0Dot product of Perpendicular vectors Hot Network Questions Why is a DC bias voltage (V_BB) needed in a BJT amplifier, and how does the coupling capacitor make this possible? Explain answers to Scientific American crossword clues "Éclair filling" and "Sneaky Coward" What is the feature between the Attendant Call and Ground Call push buttons on a B737 overhead panel? Alternatives to Test-Driven Grading in an LLM world Non-degeneracy of wedge product in cohomology How to home-make rubber feet stoppers for table legs? With line sustain pedal markings, do I release the pedal at the beginning or end of the last note? Weird utility function Can peaty/boggy/wet/soggy/marshy ground be solid enough to support several tonnes of foot traffic per minute but NOT support a road? Where is the first repetition in the cumulative hierarchy up to elementary equivalence? Origin of Australian slang exclamation "struth" meaning greatly surprised Is existence always locational? Should I let a player go because of their inability to handle setbacks? How to start explorer with C: drive selected and shown in folder list? how do I remove a item from the applications menu "Unexpected"-type comic story. Aboard a space ark/colony ship. Everyone's a vampire/werewolf The geologic realities of a massive well out at Sea Matthew 24:5 Many will come in my name! Does the mind blank spell prevent someone from creating a simulacrum of a creature using wish? I have a lot of PTO to take, which will make the deadline impossible Exchange a file in a zip file quickly Overfilled my oil Why multiply energies when calculating the formation energy of butadiene's π-electron system? Passengers on a flight vote on the destination, "It's democracy!" more hot questions Question feed Subscribe to RSS Question feed To subscribe to this RSS feed, copy and paste this URL into your RSS reader. Why are you flagging this comment? It contains harassment, bigotry or abuse. This comment attacks a person or group. Learn more in our Code of Conduct. It's unfriendly or unkind. This comment is rude or condescending. Learn more in our Code of Conduct. Not needed. This comment is not relevant to the post. Enter at least 6 characters Something else. A problem not listed above. Try to be as specific as possible. Enter at least 6 characters Flag comment Cancel You have 0 flags left today Mathematics Tour Help Chat Contact Feedback Company Stack Overflow Teams Advertising Talent About Press Legal Privacy Policy Terms of Service Your Privacy Choices Cookie Policy Stack Exchange Network Technology Culture & recreation Life & arts Science Professional Business API Data Blog Facebook Twitter LinkedIn Instagram Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547 By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy. Accept all cookies Necessary cookies only Customize settings Cookie Consent Preference Center When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer. Cookie Policy Accept all cookies Manage Consent Preferences Strictly Necessary Cookies Always Active These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information. Cookies Details‎ Performance Cookies [x] Performance Cookies These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance. Cookies Details‎ Functional Cookies [x] Functional Cookies These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly. Cookies Details‎ Targeting Cookies [x] Targeting Cookies These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. Cookies Details‎ Cookie List Clear [x] checkbox label label Apply Cancel Consent Leg.Interest [x] checkbox label label [x] checkbox label label [x] checkbox label label Necessary cookies only Confirm my choices