Haskell/Lens - Wikibooks, open books for an open world

In this chapter we will discuss functional references. "References" means the ability to access and modify part of a value; "functional" means that in doing so we do not lose the flexibility and composability that Haskell functions give us. We will discuss functional references as implemented by the powerful lens library, whose name comes from lenses, a well-known kind of functional reference that we will introduce. Besides being very interesting concepts in their own right, lenses and other functional references provide convenient and increasingly common idioms, and are used by many practical libraries.

As a warm-up, we will demonstrate the simplest use case for lenses: replacing traditional Haskell record syntax. We won't give detailed explanations yet; the necessary background will be covered gradually as the chapter progresses. Consider the following two data types, which we might find in some 2D drawing library:

Record syntax automatically defines accessor functions for the fields of these data types. With those functions, reading the two endpoints of a segment is not difficult:

...but when we need to modify a deeply nested field, the code becomes very ugly. For example, modifying the y coordinate of a segment's end point:

lens lets us sidestep this awkward nesting. Observe the following code:

The only change here is makeLenses, which automatically generates lenses for Point and Segment (the underscore prefix on the field names is required by makeLenses). As we will see, writing lens definitions by hand is not complicated; however, if many fields need lenses the process becomes tedious, so we use the convenient automatic generation. After makeLenses, every field has its own lens. The lens names correspond one-to-one to the field names, except that the leading underscore is dropped:

The type signature positionY :: Lens' Point Double tells us that positionY is a reference to a Double inside a Point. We manipulate these references with the combinators provided by the lens library. One of them is view, which returns the value a lens points at, just like a record accessor. Another is set, which modifies the value it points at:

A great advantage of lenses is that they compose:

Note that when composing lenses, as in segmentEnd . positionY, the order runs from whole to part: the lens pointing at the segment's end point comes before the lens pointing at the point's coordinate. This may differ from how the record accessors work (compare with the lens-free equivalent at the beginning of this section), but the (.) here really is the familiar function composition.

Lens composition provides a solution for modifying values nested deep inside records. Here is the earlier coordinate-doubling example rewritten with lenses and the over function, which applies a function to the value a lens points at (and returns the whole modified record):

These examples may seem a little magical. Why can the same lens both access and modify a value? Why can lenses be composed with (.)? Is writing lenses by hand, instead of using makeLenses, really not difficult? To answer these questions, we will look at how lenses work.

Lenses can be understood from many angles. We will follow a winding but gentle path, avoiding big leaps. Along the way we will introduce several kinds of functional references. Borrowing lens's naming whimsy, we will use "optic"^[1] as an umbrella term for functional references. As we will see, the optics in lens are interrelated, forming an orderly hierarchy, which we are about to explore.

We choose to start not with lenses but with a closely related optic: traversals. We know that traverse can walk over a structure and produce a final result, using any Applicative you like to produce it. In particular, we know fmap can be defined in terms of traverse: just choose Identity as the Applicative. A similar relationship holds between foldMap and Const m:

Lenses are a beautiful extension of this idea. Manipulating the data inside a Traversable structure, which is what traverse does, is precisely an example of manipulating a specific part of a whole. However, traverse's flexibility only lets us handle a limited range of types. For example, we might want to manipulate values that are not Traversable.
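The extraction lost the chapter's code blocks, but the setup described above can be sketched by hand, without Template Haskell or the lens package. The field and lens names below follow the ones used in this chapter; view, set and over are simplified versions of the lens combinators, written with base's Identity and Const functors (their real lens types are more general):

```haskell
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

data Point = Point { _positionX :: Double, _positionY :: Double } deriving Show
data Segment = Segment { _segmentStart :: Point, _segmentEnd :: Point } deriving Show

-- Hand-written lenses, doing what makeLenses would generate for us.
positionX, positionY :: Functor f => (Double -> f Double) -> Point -> f Point
positionX g (Point x y) = (\x' -> Point x' y) <$> g x
positionY g (Point x y) = (\y' -> Point x y') <$> g y

segmentStart, segmentEnd :: Functor f => (Point -> f Point) -> Segment -> f Segment
segmentStart g (Segment s e) = (\s' -> Segment s' e) <$> g s
segmentEnd   g (Segment s e) = (\e' -> Segment s e') <$> g e

-- Simplified combinators: view reads through a lens, set replaces,
-- over modifies with a function.
view :: ((a -> Const a a) -> s -> Const a s) -> s -> a
view l = getConst . l Const

set :: ((a -> Identity b) -> s -> Identity t) -> b -> s -> t
set l b = runIdentity . l (const (Identity b))

over :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
over l f = runIdentity . l (Identity . f)
```

With these definitions, segmentEnd . positionY really is ordinary function composition: view (segmentEnd . positionY) reads the y coordinate of a segment's end point, and over (segmentEnd . positionY) (2 *) doubles it.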
For instance, we might want a function like the following for handling Point values: pointCoordinates is a traversal of Point values. Its implementation and usage are similar to those of traverse. Here is a usage example with rejectWithNegatives from an earlier chapter^[2]:

The general sort of traversal exemplified by pointCoordinates is captured by one of the core types of the lens library, Traversal:

The forall f. on the right-hand side of the type declaration means that any Applicative can be used as f. Therefore we need not write f on the left-hand side, nor specify which f we want when using a Traversal. With the Traversal type synonym defined, the type of pointCoordinates can be expressed as:

Let's look at what each type variable in Traversal s t a b stands for:

• s = Point: pointCoordinates is a traversal of a Point.
• t = Point: pointCoordinates produces a Point (in the sense of some Applicative).
• a = Double: pointCoordinates targets Double values inside a Point (the X and Y coordinates of the point).
• b = Double: the targeted Doubles become Doubles (not necessarily the same ones).

In the case of pointCoordinates, s equals t and a equals b: pointCoordinates does not change the type of the traversed structure or of its targets. That, however, does not hold for all traversals. Consider the familiar traverse, whose type can be expressed as:

traverse can change the type of the values inside a Traversable structure, and therefore the type of the whole structure as well. The Control.Lens.Traversal module contains generalisations of the functions in Data.Traversable, along with some extra functions for manipulating traversals.

1. Try implementing extremityCoordinates, a traversal that acts on all coordinates of all points of a Segment. (Hint: try adapting the pointCoordinates traversal.)

We will now generalise the connections between Traversable, Functor and Foldable. We start with Functor. To recover fmap from traverse, we chose Identity as the Applicative, which allowed us to modify the targets without any other effects. We can achieve the same starting from the definition of Traversal...

... and setting f to Identity:

In lens parlance, doing so gives you a Setter. For somewhat technical reasons, the definition of Setter in Control.Lens.Setter is a little different...

... but if you dig through the documentation you will find that a Settable functor is essentially Identity, or something very close to it, so the difference need not concern us here.

When we take Traversal and restrict the choice of f we actually make the type more general. Given that a Traversal works with any Applicative functor, it will also work with Identity, and therefore any Traversal is a Setter and can be used as one. The reverse, however, is not true: not all setters are traversals.

over is the essential combinator for setters.
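The pointCoordinates traversal described above can be sketched as follows (Point is redefined here with the field layout used in this chapter, and deleteIfNegative is assumed to be the Maybe-returning test from the earlier rejectWithNegatives example):

```haskell
data Point = Point { _positionX :: Double, _positionY :: Double }
  deriving (Show, Eq)

-- A traversal of the two coordinates of a Point. Just like traverse,
-- it works with any Applicative the caller picks as f.
pointCoordinates :: Applicative f => (Double -> f Double) -> Point -> f Point
pointCoordinates g (Point x y) = Point <$> g x <*> g y
```

Choosing f = Maybe, for instance, rejects points with any negative coordinate, in the style of rejectWithNegatives.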
It works a lot like fmap, except that you pass a setter as its first argument in order to specify which parts of the structure you want to target:

In fact, there is a Setter called mapped that allows us to recover fmap:

Another very important combinator is set, which replaces all targeted values with a constant. set setter x = over setter (const x), analogously to how (x <$) = fmap (const x):

1. Use over to implement... scaleSegment :: Double -> Segment -> Segment ... so that scaleSegment n multiplies all coordinates of a segment by n. (Hint: use your answer to the previous exercise.)
2. Implement mapped. For this exercise, you can specialise the Settable functor to Identity. (Hint: you will need Data.Functor.Identity.)

Having generalised the fmap-as-traversal trick, it is time to do the same with the foldMap-as-traversal one. We will use Const to go from...

... to:

Since the second parameter of Const is irrelevant, we replace b with a and t with s to make our life easier. Just like we have seen for Setter and Identity, Control.Lens.Fold uses something slightly more general than Monoid r => Const r:

Contravariant is a type class for contravariant functors. The key Contravariant method is contramap...

contramap :: Contravariant f => (a -> b) -> f b -> f a

... which looks a lot like fmap, except that it, so to say, turns the function arrow around on mapping. Types parametrised over function arguments are typical examples of Contravariant.
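A minimal sketch of over, set and mapped in terms of base's Identity — close in spirit to, though simpler than, the actual lens definitions:

```haskell
import Data.Functor.Identity (Identity (..))

-- over applies a function through a setter.
over :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
over setter f = runIdentity . setter (Identity . f)

-- set replaces every target with a constant, via over.
set :: ((a -> Identity b) -> s -> Identity t) -> b -> s -> t
set setter b = over setter (const b)

-- mapped targets every element of a Functor, so over mapped recovers fmap.
mapped :: Functor g => (a -> Identity b) -> g a -> Identity (g b)
mapped f = Identity . fmap (runIdentity . f)
```

For example, over mapped (2 *) behaves exactly like fmap (2 *), and set mapped 0 behaves like (0 <$).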
For instance, Data.Functor.Contravariant defines a Predicate type for boolean tests on values of type a:

newtype Predicate a = Predicate { getPredicate :: a -> Bool }

GHCi> :m +Data.Functor.Contravariant
GHCi> let largerThanFour = Predicate (> 4)
GHCi> getPredicate largerThanFour 6
True

Predicate is a Contravariant, and so you can use contramap to modify a Predicate so that the values are adjusted in some way before being submitted to the test:

GHCi> getPredicate (contramap length largerThanFour) "orange"
True

Contravariant has laws which are analogous to the Functor ones:

contramap id = id
contramap (g . f) = contramap f . contramap g

Monoid r => Const r is both a Contravariant and an Applicative. Thanks to the functor and contravariant laws, anything that is both a Contravariant and a Functor is, just like Const r, a vacuous functor, with both fmap and contramap doing nothing. The additional Applicative constraint corresponds to the Monoid r; it allows us to actually perform the fold by combining the Const-like contexts created from the targets.

Every Traversal can be used as a Fold, given that a Traversal must work with any Applicative, including those that are also Contravariant. The situation parallels exactly what we have seen for Traversal and Setter. Control.Lens.Fold offers analogues to everything in Data.Foldable. Two commonly seen combinators from that module are toListOf, which produces a list of the Fold targets...

... and preview, which extracts the first target of a Fold using the First monoid from Data.Monoid.

So far we have moved from Traversal to more general optics (Setter and Fold) by restricting the functors available for traversing. We can also go in the opposite direction, that is, making more specific optics by broadening the range of functors they have to deal with. For instance, if we take Fold...
and relax the Applicative constraint to merely Functor, we obtain Getter:

As f still has to be both Contravariant and Functor, it remains a Const-like vacuous functor. Without the Applicative constraint, however, we can't combine results from multiple targets. The upshot is that a Getter always has exactly one target, unlike a Fold (or, for that matter, a Setter or a Traversal), which can have any number of targets, including zero.

The essence of Getter can be brought to light by specialising f to the obvious choice, Const r:

Since a Const r whatever value can be losslessly converted to an r value and back, the type above is equivalent to:

An (a -> r) -> s -> r function, however, is just an s -> a function in disguise (the camouflage being continuation passing style):

Thus we conclude that a Getter s a is equivalent to an s -> a function. From this point of view, it is only natural that it takes exactly one target to exactly one result. It is not surprising either that two basic combinators from Control.Lens.Getter are to, which makes a Getter out of an arbitrary function, and view, which converts a Getter back to an arbitrary function. Given what we have just said about Getter being less general than Fold, it may come as a surprise that view can work with Folds and Traversals as well as with Getters:

GHCi> :m +Data.Monoid
GHCi> view traverse (fmap Sum [1..10])
Sum {getSum = 55}
GHCi> -- both traverses the components of a pair.
GHCi> view both ([1,2],[3,4,5])
[1,2,3,4,5]

That is possible thanks to one of the many subtleties of the type signatures of lens. The first argument of view is not exactly a Getter, but a Getting:

type Getting r s a = (a -> Const r a) -> s -> Const r s
view :: MonadReader s m => Getting a s a -> m a

Getting specialises the functor parameter to Const r, the obvious choice for Getter, but leaves it open whether there will be an Applicative instance for it (i.e. whether r will be a Monoid).
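The Getting specialisation can be sketched with base alone. The view and to below are simplified stand-ins for the lens combinators of the same names, with the functor fixed to Const as in Getting:

```haskell
import Data.Functor.Const (Const (..))
import Data.Monoid (Sum (..))

-- view with the functor specialised to Const a, mirroring Getting a s a.
view :: ((a -> Const a a) -> s -> Const a s) -> s -> a
view l = getConst . l Const

-- to makes a getter out of an arbitrary function: the result type of
-- Const is phantom, so we can freely relabel it from a to s.
to :: (s -> a) -> (a -> Const r a) -> s -> Const r s
to f g s = Const (getConst (g (f s)))
```

view (to fst) (1, 2) gives 1; and because Const r is an Applicative whenever r is a Monoid, traverse also fits view's argument type when the targets are monoidal, which is why view traverse (fmap Sum [1..10]) folds the targets to 55.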
Using view as an example: as long as a is a Monoid, Getting a s a can be used as a Fold, and so Folds can be used with view as long as the fold targets are monoidal. Many combinators in both Control.Lens.Getter and Control.Lens.Fold are defined in terms of Getting rather than Getter or Fold. One advantage of using Getting is that the resulting type signatures tell us more about the folds that might be performed. For instance, consider hasn't from Control.Lens.Fold:

hasn't :: Getting All s a -> s -> Bool

It is a generalised test for emptiness:

GHCi> hasn't traverse [1..4]
False
GHCi> hasn't traverse Nothing
True

Fold s a -> s -> Bool would work just as well as a signature for hasn't. However, the Getting All in the actual signature is quite informative, in that it strongly suggests what hasn't does: it converts all a targets in s to the All monoid (more precisely, to All False), folds them and extracts a Bool from the overall All result.

If we go back to Traversal...

... and relax the Applicative constraint to Functor, just as we did when going from Fold to Getter...

... we finally reach the Lens type. What changes when moving from Traversal to Lens? As before, relaxing the Applicative constraint costs us the ability to traverse multiple targets. Unlike a Traversal, a Lens always focuses on a single target. As usual in such cases, there is a bright side to the restriction: with a Lens, we can be sure that exactly one target will be found, while with a Traversal we might end up with many, or none at all. The absence of the Applicative constraint and the uniqueness of targets point towards another key fact about lenses: they can be used as getters. Contravariant plus Functor is a strictly more specific constraint than just Functor, and so Getter is strictly more general than Lens. As every Lens is also a Traversal and therefore a Setter, we conclude that lenses can be used as both getters and setters. That explains why lenses can replace record labels.
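Returning to hasn't for a moment, its Getting All signature can be realised directly with base's Const (a sketch; the real hasn't is defined in Control.Lens.Fold with the same idea):

```haskell
import Data.Functor.Const (Const (..))
import Data.Monoid (All (..))

-- Every target becomes All False. If there are no targets at all, the
-- fold yields mempty = All True, so hasn't answers "is it empty?".
hasn't :: ((a -> Const All a) -> s -> Const All s) -> s -> Bool
hasn't fold = getAll . getConst . fold (\_ -> Const (All False))
```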
On close reading, our claim that every Lens can be used as a Getter might seem rash. Placing the types side by side...

type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
type Getter s a = forall f. (Contravariant f, Functor f) => (a -> f a) -> s -> f s

... shows that going from Lens s t a b to Getter s a involves making s equal to t and a equal to b. How can we be sure that is possible for any lens? An analogous issue might be raised about the relationship between Traversal and Fold. For the moment, this question will be left suspended; we will return to it in the section about optic laws. Here is a quick demonstration of the flexibility of lenses using _1, a lens that focuses on the first component of a tuple:

1. Implement the lenses for the fields of Point and Segment, that is, the ones we generated with makeLenses early on. (Hint: Follow the types. Once you write the signatures down you will notice that beyond fmap and the record labels there is not much else you can use to write them.)
2. Implement the lens function, which takes a getter function s -> a and a setter function s -> b -> t and produces a Lens s t a b. (Hint: Your implementation will be able to minimise the repetitiveness in the solution of the previous exercise.)

The optics we have seen so far fit the shape...

... in which:

• f is a Functor of some sort;
• s is the type of the whole, that is, the full structure the optic works with;
• t is the type of what the whole becomes through the optic;
• a is the type of the parts, that is, the targets within s that the optic focuses on; and
• b is the type of what the parts become through the optic.

One key thing those optics have in common is that they are all functions. More specifically, they are mapping functions that turn a function acting on a part (a -> f b) into a function acting on the whole (s -> f t). Being functions, they can be composed in the usual manner.
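The _1 demonstration mentioned above was lost to extraction, but it can be reconstructed along these lines. Here _1 is hand-written, and view and over are simplified versions of the lens combinators:

```haskell
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens on the first component of a pair. Note that it can change the
-- component's type (a to b), and with it the type of the whole pair.
_1 :: Functor f => (a -> f b) -> (a, c) -> f (b, c)
_1 g (a, c) = (\b -> (b, c)) <$> g a

view :: ((a -> Const a a) -> s -> Const a s) -> s -> a
view l = getConst . l Const

over :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
over l f = runIdentity . l (Identity . f)
```

For instance, view _1 (1, 2) gives 1, while over _1 show (1, True) produces ("1", True), a pair of a different type — a small preview of the polymorphic updates discussed later in the chapter.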
Let's have a second look at the lens composition example from the introduction: An optic modifies the function it receives as argument to make it act on a larger structure. Given that (.) composes functions from right to left, we find that, when reading code from left to right, the components of an optic assembled with (.) focus on progressively smaller parts of the original structure. The conventions used by the lens type synonyms match this large-to-small order, with s and t coming before a and b. The breakdown below illustrates how we can look at what an optic does either as a mapping (from small to large) or as a focusing (from large to small), using segmentEnd . positionY as an example:

segmentEnd
    Bare type: Functor f => (Point -> f Point) -> (Segment -> f Segment)
    "Mapping" interpretation: from a function on Point to a function on Segment.
    Type with Lens: Lens Segment Segment Point Point
    Type with Lens': Lens' Segment Point
    "Focusing" interpretation: focuses on a Point within a Segment.

positionY
    Bare type: Functor f => (Double -> f Double) -> (Point -> f Point)
    "Mapping" interpretation: from a function on Double to a function on Point.
    Type with Lens: Lens Point Point Double Double
    Type with Lens': Lens' Point Double
    "Focusing" interpretation: focuses on a Double within a Point.

segmentEnd . positionY
    Bare type: Functor f => (Double -> f Double) -> (Segment -> f Segment)
    "Mapping" interpretation: from a function on Double to a function on Segment.
    Type with Lens: Lens Segment Segment Double Double
    Type with Lens': Lens' Segment Double
    "Focusing" interpretation: focuses on a Double within a Segment.

The Lens' synonym is just convenient shorthand for lenses that do not change types (that is, lenses with s equal to t and a equal to b):

type Lens' s a = Lens s s a a

There are analogous Traversal' and Setter' synonyms as well. The types behind synonyms such as Lens and Traversal only differ in which functors they allow in place of f. As a consequence, optics of different kinds can be freely mixed, as long as there is a type which all of them fit.
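This mixing can be sketched concretely with base alone. The both below is a hand-written analogue of lens's both (restricted to pairs); because it is written against the bare type with an Applicative f, the very same definition serves as a setter (f = Identity) and as a fold (f = Const [a]). overSimple and toListOfSimple are simplified stand-ins for over and toListOf:

```haskell
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A traversal of both components of a homogeneous pair.
both :: Applicative f => (a -> f b) -> (a, a) -> f (b, b)
both g (x, y) = (,) <$> g x <*> g y

-- Using an optic as a setter: f = Identity.
overSimple :: ((a -> Identity b) -> s -> Identity t) -> (a -> b) -> s -> t
overSimple l f = runIdentity . l (Identity . f)

-- Using an optic as a fold: f = Const [a].
toListOfSimple :: ((a -> Const [a] a) -> s -> Const [a] s) -> s -> [a]
toListOfSimple l = getConst . l (\x -> Const [x])
```

One definition, two roles: overSimple both (+ 1) (1, 2) gives (2, 3), while toListOfSimple both (1, 2) gives [1, 2].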
Here are some examples: Several lens combinators have infix operator synonyms, or at least operators nearly equivalent to them. Here are the correspondences for some of the combinators we have already seen:

Prefix                     Infix
view _1 (1,2)              (1,2) ^. _1
set _1 7 (1,2)             (_1 .~ 7) (1,2)
over _1 (2 *) (1,2)        (_1 %~ (2 *)) (1,2)
toListOf traverse [1..4]   [1..4] ^.. traverse
preview traverse []        [] ^? traverse

The lens operators that extract values (e.g. (^.), (^..) and (^?)) are flipped with respect to the corresponding prefix combinators, so that they take the structure from which the result is extracted as their first argument. That improves the readability of code using them, as writing the full structure before the optics targeting parts of it mirrors how composed optics are written in large-to-small order. With the help of the (&) operator, which is defined simply as flip ($), the structure can also be written first when using modifying operators (e.g. (.~) and (%~)). (&) is particularly convenient when there are many fields to modify:

Thus far we have covered enough of lens to introduce lenses and show that they aren't arcane magic. That, however, is only the tip of the iceberg. lens is a large library providing a rich assortment of tools, which in turn realise a colourful palette of concepts. The odds are that if you think of anything in the core libraries there will be a combinator somewhere in lens that works with it. It is no exaggeration to say that a book exploring every corner of lens might be as long as the one you are reading. Unfortunately, we cannot undertake such an endeavour right here. What we can do is briefly discuss a few other general-purpose lens tools you are bound to encounter in the wild at some point.

There are quite a few combinators for working with state functors peppered over the lens modules. For instance:

• use from Control.Lens.Getter is an analogue of gets from Control.Monad.State that takes a getter instead of a plain function.
• Control.Lens.Setter includes suggestive-looking operators that modify parts of a state targeted by a setter (e.g. .= is analogous to set, %= to over, and l += x to over l (+ x)).
• Control.Lens.Zoom offers the remarkably handy zoom combinator, which uses a traversal (or a lens) to zoom into a part of a state. It does so by lifting a stateful computation into one that works with a larger state, of which the original state is a part.

Such combinators can be used to write highly intention-revealing code that transparently manipulates deep parts of a state:

In our series of Point and Segment examples, we have been using the makePoint function as a convenient way to make a Point out of a (Double, Double) pair. The X and Y coordinates of the resulting Point correspond exactly to the two components of the original pair. That being so, we can define an unmakePoint function...

... so that makePoint and unmakePoint are a pair of inverses, that is, they undo each other:

In other words, makePoint and unmakePoint provide a way to losslessly convert a pair to a point and vice versa. In jargon, we can say that makePoint and unmakePoint form an isomorphism. unmakePoint might be made into a Lens' Point (Double, Double). Symmetrically, makePoint would give rise to a Lens' (Double, Double) Point, and the two lenses would be a pair of inverses. Lenses with inverses have a type synonym of their own, Iso, as well as some extra tools defined in Control.Lens.Iso. An Iso can be built from a pair of inverses through the iso function:

Isos are Lenses, and so the familiar lens combinators work as usual:

Additionally, Isos can be inverted using from:

Another interesting combinator is under. As the name suggests, it is just like over, except that it uses the inverted Iso that from would give us.
We will demonstrate it by using the enum isomorphism to play with the Int representation of Chars without using chr and ord from Data.Char explicitly:

newtypes and other single-constructor types give rise to isomorphisms. Control.Lens.Wrapped exploits that fact to provide Iso-based tools which, for instance, make it unnecessary to remember record label names for unwrapping newtypes...

... and that make newtype wrapping for instance selection less messy:

With Iso, we have reached for the first time a rank below Lens in the hierarchy of optics: every Iso is a Lens, but not every Lens is an Iso. By going back to Traversal, we can observe how the optics get progressively less precise in what they point to:

• An Iso is an optic that has exactly one target and is invertible.
• A Lens also has exactly one target but is not invertible.
• A Traversal can have any number of targets and is not invertible.

Along the way, we first dropped invertibility and then the uniqueness of targets. If we follow a different path by dropping uniqueness before invertibility, we find a second kind of optic between isomorphisms and traversals: prisms. A Prism is an invertible optic that need not have exactly one target. As invertibility is incompatible with multiple targets, we can be more precise: a Prism can reach either no targets or exactly one target. Aiming at a single target with the possibility of failure sounds a lot like pattern matching, and prisms are indeed able to capture that. If tuples and records provide natural examples of lenses, Maybe, Either and other types with multiple constructors play the same role for prisms.

Every Prism is a Traversal, and so the usual combinators for traversals, setters and folds all work with prisms: A Prism is not a Getter, though: the target might not be there. For that reason, we use preview rather than view to retrieve the target: For inverting a Prism, we use re and review from Control.Lens.Review.
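Returning to the enum demonstration above: what over enum accomplishes at type Char can be sketched without lens, since it amounts to mapping through fromEnum and toEnum (overEnum is an illustrative name, not a lens function):

```haskell
-- A lens-free sketch of `over enum` at type Char: apply an Int function
-- to the character's code point and convert back.
overEnum :: (Int -> Int) -> Char -> Char
overEnum f = toEnum . f . fromEnum
```

For example, overEnum (+ 1) 'a' gives 'b', all without mentioning chr or ord explicitly.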
re is analogous to from, though it gives merely a Getter. review is equivalent to view with the inverted prism. Just like there is more to lenses than reaching record fields, prisms are not limited to matching constructors. For instance, Control.Lens.Prism defines only, which encodes equality tests as a Prism. The prism and prism' functions allow us to build our own prisms. Here is an example using stripPrefix from Data.List:

prefixed is available from lens, in the Data.List.Lens module.

1. Control.Lens.Prism defines an outside function, which has the following (simplified) type:

outside :: Prism s t a b -> Lens (t -> r) (s -> r) (b -> r) (a -> r)

a. Explain what outside does without mentioning its implementation. (Hint: The documentation says that with it we can "use a Prism as a kind of first-class pattern". Your answer should expand on that, explaining how we can use it in such a way.)
b. Use outside to implement maybe and either from the Prelude:

maybe :: b -> (a -> b) -> Maybe a -> b
either :: (a -> c) -> (b -> c) -> Either a b -> c

There are laws specifying how sensible optics should behave. We will now survey those that apply to the optics that we covered here. Starting from the top of the taxonomy, Fold does not have laws, just like the Foldable class. Getter does not have laws either, which is not surprising, given that any function can be made into a Getter via to. Setter, however, does have laws. over is a generalisation of fmap, and is therefore subject to the functor laws:

As set s x = over s (const x), a consequence of the second functor law is that: That is, setting twice is the same as setting once.

Traversal laws, similarly, are generalisations of the Traversable laws: The consequences discussed in the Traversable chapter follow as well: a traversal visits all of its targets exactly once, and must either preserve the surrounding structure or destroy it wholly. Every Lens is a Traversal and a Setter, and so the laws above also hold for lenses.
In addition, every Lens is also a Getter. Given that a lens is both a getter and a setter, it should get the same target that it sets. This common-sense requirement is expressed by the following laws: Together with the "setting twice" law of setters presented above, those laws are commonly referred to as the lens laws. Analogous laws hold for Prisms, with preview instead of view and review instead of set: Isos are both lenses and prisms, so all of the laws above hold for them. The prism laws, however, can be simplified, given that for isomorphisms preview i = Just . view i (that is, preview never fails).

When we look at optic types such as Setter s t a b and Lens s t a b we see four independent type variables. However, if we take the various optic laws into account we find out that not all choices of s, t, a and b are reasonable. For instance, consider the "setting twice" law of setters: For "setting twice is the same as setting once" to make sense, it must be possible to set twice using the same setter. As a consequence, the law can only hold for a Setter s t a b if t can somehow be specialised so that it becomes equal to s (otherwise the type of the whole would change on every set, leading to a type mismatch).

From considerations about the types involved in the laws such as the one above, it follows that the four type parameters in law-abiding Setters, Traversals, Prisms and Lenses are not fully independent from each other. We won't examine the interdependency in detail, but merely point out some of its consequences. Firstly, a and b are cut from the same cloth, in that even if an optic can change types there must be a way of specialising a and b to make them equal; furthermore, the same holds for s and t. Secondly, if a and b are equal then s and t must be equal as well. In practice, those restrictions mean that valid optics that can change types usually have s and t parametrised in terms of a and b.
Type-changing updates in this fashion are often referred to as polymorphic updates. For the sake of illustration, here are a few arbitrary examples taken from lens:

At this point, we can return to the question left open when we presented the Lens type. Given that Lens and Traversal allow type changing while Getter and Fold do not, it would be indeed rash to say that every Lens is a Getter, or that every Traversal is a Fold. However, the interdependence of the type variables means that every lawful Lens can be used as a Getter, and every lawful Traversal can be used as a Fold, as lawful lenses and traversals can always be used in non-type-changing ways.

As we have seen, we can use lens to define optics through functions such as lens and auto-generation tools such as makeLenses. Strictly speaking, though, these are merely convenience helpers. Given that Lens, Traversal and so forth are just type synonyms, their definitions are not needed when writing optics − for instance, we can always write Functor f => (a -> f b) -> (s -> f t) instead of Lens s t a b. That means we can define optics compatible with lens without using lens at all! In fact, any Lens, Traversal, Setter and Getting can be defined with no dependencies other than the base package.

The ability to define optics without depending on the lens library provides considerable flexibility in how they can be leveraged. While there are libraries that do depend on lens, library authors are often wary of acquiring a dependency on large packages with several dependencies such as lens, especially when writing small, general-purpose libraries. Such concerns can be sidestepped by defining the optics without using the type synonyms or the helper tools in lens. Furthermore, the types being only synonyms makes it possible to have multiple optic frameworks (i.e. lens and similar libraries) that can be used interchangeably.

• Several paragraphs above, we said that lens easily provides enough material for a full book.
The closest thing to that we currently have is Artyom Kazak's "lens over tea" series of blog posts. It explores the implementation of functional references in lens and the concepts behind it in far more depth than what we are able to do here. Highly recommended reading.
• Useful information can be reached through lens' GitHub wiki, and of course lens' API documentation is well worth exploring.
• lens is a large and complex library. If you want to study its implementation but would rather begin with something simpler, a good place to start is minimalistic lens-compatible libraries such as microlens and lens-simple.
• Studying (and using!) optic-powered libraries is a good way to get the hang of how functional references are used. Some arbitrary examples:

1. ↑ For brevity, this translation keeps the original English term "optic".
2. ↑ Unfortunately, the chapter referred to has not yet been translated.
Calculate Deflection for Aluminium Tube Beam - Inertia, Mass & δ

Thread starter: EddieC147

In summary, the conversation discusses the calculation of deflection (δ) of a simply supported beam using the formula δ = F L³ ∕ 48 E I, where I represents the moment of inertia. The formula for the moment of inertia found online for a tube is 1/2 M (R1² + R2²). The conversation also mentions the importance of cross-sectional shape in resisting bending and provides resources for further understanding of the concept of moment of inertia.

TL;DR Summary: Calculating beam deflection with moment of inertia and mass.

Hi There, I am wanting to calculate the amount of deflection (δ) of a simply supported beam. My beam is an aluminium tube, ø30 mm with a 3 mm wall thickness.

Force (F) - 500 N
Length (L) - 610 mm
Young's Modulus (E) - 68 GPa
Moment of Inertia (I) - ?

δ = F L³ ∕ 48 E I

Q1: Is this the correct formula that I found online?
Q2: Put simply, why is the moment of inertia needed for this? (I know this isn't relevant to solving the problem but I want to learn and understand.)
Q3: What units should the moment of inertia be measured in to be entered into this formula?

The formula for the moment of inertia (I) of a tube I have found online is below.

Mass - M
Bore Radius - R1
Tube Radius - R2

I = 1/2 M (R1² + R2²)

Q4: Is this the correct formula for finding the moment of inertia of a hollow tube?
Q5: What units should the radius be measured in to make the end unit match with Q3?
Q6: What units should the mass be measured in to make the end unit match with Q3?

Finding the mass needs the density of aluminium multiplied by the volume.

Q7: Do I need to multiply this by the volume of the full aluminium tube?

Any help is greatly appreciated.

Welcome, Ed! A tube does not make a very strong beam.
In order to resist bending, the cross-section of any beam should have as much material as possible, as far as possible from its neutral axis, in the plane in which the bending forces and moments are being applied. That is the reason for the shapes of the cross-sections of steel I-beams, H-beams and C-beams, as well as structural elements with square and rectangular closed sections. Such shapes have a greater moment of inertia (the term is confusing) than circular or oval cross-sections of similar dimensions and wall thickness. Please see:
https://en.m.wikipedia.org/wiki/I-beam#Design_for_bending
https://en.m.wikipedia.org/wiki/Section_modulus
https://en.m.wikipedia.org/wiki/Second_moment_of_area

Thanks for your input. The tube is not going to be used as a beam; the beam bending calculation is just the best way to represent the force that may act upon my tube.

In this case, assuming that only the concentrated force in the middle is acting on the beam (ignoring self-weight), the maximum deflection will be:
$$y_{max}=\frac{FL^{3}}{48EI}$$
where:
$$I=\frac{\pi(D^{4}-d^{4})}{64}=\frac{\pi \cdot (30^{4}-24^{4})}{64}=23474.765706 \ mm^{4}$$
Thus:
$$y_{max}=\frac{500 \cdot 610^{3}}{48 \cdot 68000 \cdot 23474.765706}=1.48 \ mm$$
Here ##I## stands for the area moment of inertia (also called second moment of area). It doesn't depend on the mass of the beam, only on its cross-sectional shape. This video can help you understand the topic:

There's also one about deflection, but it discusses some advanced methods, so try to get familiar with the concept of ##I## first.

Thanks for the reply, that is absolutely fantastic, just what I wanted. Thank you for your help.

FAQ: Calculate Deflection for Aluminium Tube Beam - Inertia, Mass & δ

1. What is the formula for calculating deflection for an aluminium tube beam?
For a simply supported beam with a point load at midspan, the deflection is: δ = (F x L^3) / (48 x E x I), where δ is the deflection, F is the force applied, L is the length of the beam, E is the modulus of elasticity, and I is the area moment of inertia. (For a total load F distributed uniformly along the beam, the coefficient changes and the formula becomes δ = (5/384) x (F x L^3) / (E x I).)

2. How does the mass of the beam affect its deflection?

The mass of the beam does not appear in the deflection formula: stiffness depends on E and I, not on mass. However, the beam's own weight acts as an additional distributed load, so a heavier beam deflects more under its self-weight on top of any applied load.

3. What is the moment of inertia and how does it impact deflection?

In beam bending, the moment of inertia is the second moment of area: a measure of how the cross-section's material is distributed about the bending axis. In the case of an aluminium tube beam, a higher moment of inertia means that the beam is less likely to bend or deflect when a force is applied to it.

4. Can the deflection of an aluminium tube beam be reduced?

Yes, the deflection of an aluminium tube beam can be reduced by increasing its moment of inertia, decreasing its length, or increasing the modulus of elasticity. Additionally, proper design and reinforcement techniques can also help reduce deflection.

5. Are there any limitations to using this formula to calculate deflection?

Yes, there are some limitations to using this formula. It assumes that the beam is simply supported at both ends and that the load is applied at the centre. It also does not take into account any external factors such as temperature, wind, or vibration, which can also affect the deflection of the beam.
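The calculation worked through in the replies above can be reproduced programmatically. Here is a sketch in Haskell (function names are illustrative; units are N, mm and MPa, so 68 GPa is entered as 68000 MPa and the result comes out in mm):

```haskell
-- Second moment of area of a tube cross-section: I = pi (D^4 - d^4) / 64,
-- with outer diameter dOuter and inner diameter dInner, both in mm.
areaMomentTube :: Double -> Double -> Double
areaMomentTube dOuter dInner = pi * (dOuter ** 4 - dInner ** 4) / 64

-- Midspan deflection of a simply supported beam under a central point
-- load: delta = F L^3 / (48 E I).
deflectionPointLoad :: Double -> Double -> Double -> Double -> Double
deflectionPointLoad f l e i = f * l ** 3 / (48 * e * i)
```

With the thread's numbers, areaMomentTube 30 24 gives about 23474.77 mm^4, and feeding that into deflectionPointLoad 500 610 68000 gives about 1.48 mm, matching the reply.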
Tesla Turbine | The interesting physics behind it
Created from YouTube video: https://www.youtube.com/watch?v=AfCyzIbpLN4
Concepts covered: Tesla turbine, Nikola Tesla, bladeless turbine, fluid viscous effects, efficiency
The Tesla turbine, a bladeless turbine invented by Nikola Tesla, operates on the principle of the fluid's viscous effects, achieving high efficiency at high speeds but facing challenges due to material limitations and engineering constraints.
Efficiency of Tesla Turbine Design
Concepts covered: Tesla turbine, efficiency, viscous force, spiral flow pattern, high-speed operations
The Tesla turbine, favored by Nikola Tesla, boasted a simple yet efficient design surpassing steam turbines in efficiency. By leveraging the viscous force between fluid and solid surfaces, the turbine's spiral fluid flow pattern maximizes energy extraction during high-speed operations.
Question 1: Why does the Tesla turbine exhibit high efficiency at high speeds?
Question 2: What force makes the Tesla turbine spin?
Question 3: What efficiency level did Tesla claim for his turbine?
Boundary Layer Phenomenon in Fluid Dynamics
Concepts covered: Boundary Layer Thickness, Viscosity, Nikola Tesla, Fluid Dynamics, Turbine Efficiency
The chapter discusses the concept of boundary layer thickness in fluid dynamics, where fluid particles near a surface form a stationary layer resisting the flow of adjacent particles due to viscosity. Nikola Tesla's innovative approach of adding parallel disks to utilize the boundary layer phenomenon for increased turbine efficiency is highlighted.
Question 4: How did Tesla improve turbine efficiency?
Question 5: Where does velocity variation in fluid occur?
Question 6: What causes fluid particles to resist flow?
Challenges of Tesla Turbines in Industrial Applications
Concepts covered: Tesla Turbines, Nikola Tesla, Industrial Applications, Efficiency Challenges, Rotor Speed
Nikola Tesla faced challenges with his turbine design when trying to increase torque by adding more disks, leading to material failure at high speeds. Despite being easy to construct, Tesla turbines are not widely used in power generation due to the engineering impossibility of operating large-diameter discs at the high speeds required for optimal efficiency.
Question 7: Why did Tesla's turbine design fail?
Question 8: Why can't large Tesla turbine disks operate at high RPM?
Question 9: Why aren't Tesla turbines used in power generation?
Pizza Sizes: Ordering Guide (Slices, Servings, & Total Size)
Did you know that a 14-inch pizza contains 4x more pizza than a 7-inch pizza? A 14-inch pizza has 154 square inches, whereas a 7-inch pizza has just 38 square inches. Crazy, right?
I’m a bit of a pizza enthusiast (hence why I started the blog), but even I had a hard time keeping the different pizza sizes straight. Plus, I got tired of doing the math every time to see how much pizza I would be getting with each size. So I made myself a pizza size chart and stuck it to my fridge.
My pizza size chart covers how many slices, how many servings, how big different pizza sizes are, and the total number of calories in each size of pizza. Check out my chart below:
Pizza Size Chart
Before reading the chart, keep these things in mind:
• Pizza sizes are not universal. While most restaurants use these figures, not all do (especially the local shops).
• The number of calories is an estimate and based on a pepperoni pizza.
• If you have a hungry crowd, opt to order more pizza.
Without further ado, here’s my pizza size chart:
Size | Inches | Servings | Slices | Total Size | Total Calories
Specialty / Personal | 7 inch | 1 adult | 4 slices | 38 square inches | 539 calories
Specialty / Personal | 8 inch | 1 adult | 4 slices | 50 square inches | 704 calories
Specialty / Personal | 9 inch | 1 adult | 4 slices | 64 square inches | 891 calories
Small | 10 inch | 1-2 adults | 4 slices | 79 square inches | 1,100 calories
Medium | 12 inch | 2-3 adults | 6 slices | 113 square inches | 1,583 calories
Large | 14 inch | 3-4 adults | 8 slices | 154 square inches | 2,155 calories
Extra Large | 16 inch | 3-5 adults | 10 slices | 201 square inches | 2,815 calories
Jumbo | 18 inch | 4-6 adults | 12 slices | 254 square inches | 3,563 calories
Remember, even though the diameter of each size is only slightly different, the total amount of pizza you’re getting from each additional size up is very different.
To calculate the size of any pizza, use this formula: π x r²
• r = radius = diameter / 2
• π = pi = 3.14159
To calculate the total size of a 10-inch pizza, we take the radius of 5 (10 / 2), square it to make 25 (5 x 5), then multiply it by 3.14159 to give us a total area of 79 square inches.
The rest of the article will break down pizza sizes by area, number of slices, and number of servings.
Pizza Sizes (By Total Area)
How Big Is a Personal Pizza? (How Big Is a Specialty Pizza?)
A personal pizza will have a diameter of 7, 8, or 9 inches depending on where you order it from. You can expect it to be cut into 4 slices, and it will have 38.5 square inches, 50 square inches, or 64 square inches depending on if you order a 7-, 8-, or 9-inch pizza.
How Big Is a Small Pizza?
A small pizza has a diameter of 10 inches and will contain 79 square inches of pizza.
How Big Is a Medium Pizza?
A medium pizza has a diameter of 12 inches and has 113 square inches of pizza.
How Big Is a Large Pizza?
A large pizza has a diameter of 14 inches and will contain 154 square inches of pizza.
How Big Is an Extra Large Pizza?
An extra large pizza has a diameter of 16 inches and has 201 square inches of pizza.
How Big Is a Jumbo Pizza? (How Big Is a Party Pizza?)
A jumbo pizza, which won’t be available from most restaurants, has a diameter of 18 inches and will contain 254 square inches of pizza.
Number of Slices in a Pizza (Pizza Size Comparison)
Different pizza sizes have different numbers of slices and different areas per slice. I’ll compare pizza sizes by both of these factors.
Number of Slices in a Personal Pizza
A personal / pan / specialty pizza has 4 slices, regardless of whether it is a 7-, 8-, or 9-inch pizza. A personal pizza will have 10, 13, or 16 square inches of pizza per slice.
Number of Slices in a Small Pizza
A small pizza typically has 4 slices, but will sometimes come in 6 slices. A small pizza will have 20 square inches of pizza per slice.
Number of Slices in a Medium Pizza
A medium pizza has 6 slices. A medium pizza will have 19 square inches of pizza per slice.
Number of Slices in a Large Pizza
A large pizza has 8 slices. A large pizza will have 19 square inches of pizza per slice.
Number of Slices in an Extra Large Pizza
An extra large pizza has 10 slices. An extra large pizza will have 20 square inches of pizza per slice.
Number of Slices in a Jumbo Pizza
A jumbo pizza has 12 slices. A jumbo pizza will have 21 square inches of pizza per slice.
Servings per Pizza (Pizza Size Comparison)
The number of servings in a pizza can vary a lot based on the number of toppings, the taste, and how hungry the eaters are. For the sake of this article, 1 serving will equate to the amount an average adult can eat comfortably in one sitting. If you have children, you can consider 3 of them being equal to 2 adults.
How Many People Can a Personal Pizza Serve?
A personal pizza serves 1 adult or 2 (small) children.
How Many People Can a Small Pizza Serve?
A small pizza can serve 1-2 adults or 2-3 children.
How Many People Can a Medium Pizza Serve?
A medium pizza can serve 2-3 adults or 3-4 children.
How Many People Can a Large Pizza Serve?
A large pizza can serve 3-4 adults or 5-6 children.
How Many People Can an Extra Large Pizza Serve?
An extra large pizza can serve 3-5 adults or 5-8 children.
How Many People Can a Jumbo Pizza Serve?
A jumbo pizza can serve 4-6 adults or 6-9 children.
Other Factors to Consider When Ordering Pizza
Before you place your next order, there are a few more things you should keep in mind:
• Will there be appetizers, salad, fruit, or dessert? The more side dishes, the less pizza you will need.
• Will there be drinks (soda, beer, juice)? Drinks will decrease appetites.
• Is it good pizza? The better the pizza tastes, the more of it will be eaten.
• Are you getting pepperoni or something with a lot of toppings? The greater the number of toppings, the smaller the servings.
• Do you want leftovers? I love leftover pizza, so I err on the side of ordering too much.
That’s a wrap on my pizza sizes and pizza size comparisons! Now, let’s order some pizza!
Pizza Ordering Guide (Adults)
When ordering pizza, you will get the best value by ordering the largest sizes (it’s cheaper to order 1 extra large than it is to order 2 mediums, even though the total amount of pizza is usually the same). I’m also going to assume your pizza joint offers extra large (16-inch) pizzas as its largest size and that there will be no appetizers.
How many pizzas for 5 people? You should order 1 extra large pizza for 5 people.
How many pizzas for 8 people? You should order 2 extra large pizzas for 8 people.
How many pizzas for 15 people? You should order 3-4 extra large pizzas for 15 people.
How many pizzas for 20 people? You should order 4-5 extra large pizzas for 20 people.
How many pizzas for 30 people? You should order 6-8 extra large pizzas for 30 people.
How many pizzas for 40 people? You should order 8-10 extra large pizzas for 40 people.
Pizza Ordering Guide (Kids)
How many pizzas for 5 kids? You should order 1 large pizza for 5 kids.
How many pizzas for 8 kids? You should order 2 large pizzas for 8 kids.
How many pizzas for 15 kids? You should order 2-3 extra large pizzas for 15 kids.
How many pizzas for 20 kids? You should order 3-4 extra large pizzas for 20 kids.
How many pizzas for 30 kids? You should order 4-6 extra large pizzas for 30 kids.
How many pizzas for 40 kids? You should order 6-8 extra large pizzas for 40 kids.
FAQs - Pizza Sizes
What are standard pizza sizes?
Small pizzas are 10 inches in diameter and serve 1-2 adults, medium pizzas are 12 inches in diameter and serve 2-3 adults, large pizzas are 14 inches in diameter and serve 3-4 adults, and extra large pizzas are 16 inches in diameter and serve 3-5 adults.
Is a 16-inch pizza bigger than two 12-inch pizzas?
No. A 16-inch pizza has a total area of 201 square inches, whereas each 12-inch pizza has a total area of 113 square inches. Two 12-inch pizzas have a total of 226 square inches of pizza, compared to a 16-inch pizza's 201 square inches.
How big is a 16-inch pizza?
A 16-inch pizza has a total area of 201 square inches, has 10 slices, serves 3-5 adults, and is typically referred to as an extra large pizza.
What size pizza for 4 adults?
4 adults should order a large (14-inch) pizza unless 2 or more of them are extremely hungry, in which case they should order an extra large (16-inch) pizza.
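The π x r² formula used throughout this guide can be sketched as a small script (the function name is ours; the sizes match the chart):

```python
import math

def pizza_area(diameter_inches):
    """Total pizza area in square inches: pi * r^2."""
    radius = diameter_inches / 2
    return math.pi * radius ** 2

# Reproduce the "Total Size" column of the chart (rounded to whole square inches)
for diameter in (7, 8, 9, 10, 12, 14, 16, 18):
    print(f"{diameter}-inch pizza: {round(pizza_area(diameter))} square inches")

# A 14-inch pizza really is about 4x a 7-inch pizza: doubling the
# diameter quadruples the area, since area scales with diameter squared.
print(round(pizza_area(14) / pizza_area(7)))
```

This is why the jump from medium to large matters more than the 2-inch difference in diameter suggests.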
Linear Algebra and Differential Equations
Computer-Aided Analysis of Difference Schemes for Partial Differential Equations - Ellibs
For a nonlinear dynamical system described by the first-order differential equation with Poisson white noise having exponentially distributed
It's a formula for solving systems of equations by determinants.
The relation is specified by the Einstein field equations, a system of partial differential equations.
Complex roots of the characteristic equation - Second order differential equations - Khan Academy
An ordinary differential equation or ODE is a differential equation containing a function or functions of one independent variable and its derivatives.
Find the general solution of the differential equation. The formula may be expressed in terms of integrals.
a) Derive a formula for the solution to dy/dx + … This system of linear equations has exactly one solution.
Glossary: third-order differential equation (Swedish: tredje ordningens differentialekvation); third quadrant.
Aatena Liya - teaching differential equation & math - umz
Let ν > 0. The solutions of the ordinary differential equation y′′ − ν²y = 0 on the line form a vector space.
By K. Kirchner: Numerical methods for stochastic differential equations typically estimate the solution to a stochastic ordinary differential equation driven by
Generalizations of Clausen's Formula and algebraic transformations of Calabi–Yau differential equations.
Syllabus for Partial Differential Equations with Applications to
To make your calculations on differential equations easy, use the provided list of differential equation formulas.
Linear differential equations: A differential equation of the form y′ + Py = Q, where P and Q are constants or functions of x only, is known as a first-order linear differential equation.
Differential equations: an equation that involves derivatives of the dependent variable with respect to the independent variable. A differential equation relates physical quantities to the rate of change of a function at a point. It is used in mathematics, engineering, physics, biology, etc.
Take free online differential equations classes from top schools and institutions on edX today! Scientists and engineers understand the world through differential equations. You can too.
How online course providers shape their sites and content to appeal to the Google algorithm. Organize and share your learning with Class Central.
The basic formula for velocity is v = d / t, where v is velocity, d is displacement, and t is the change in time.
A differential equation is an equation that involves a function and its derivatives. Put another way, a differential equation makes a statement connecting the value of a function to the values of its derivatives. Solve the new linear equation to find v.
This video introduces the basic concepts associated with solutions of ordinary differential equations. Thus x is often called the independent variable of the equation.
Partial differential equations
In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. Know more about these in the Differential Equations Class 12 formulas list.
Non-homogeneous Differential Equations
Different differentiation formulas for calculus: differentiation, being part of calculus, comprises many kinds of problems, and for each problem we have to apply a different set of formulas.
Differential equation formula: the first-order linear equation
$$\frac{dy}{dt} + p(t)\,y = g(t),$$
where p(t) and g(t) are continuous functions, has the solution
$$y(t) = \frac{\int \mu(t)\,g(t)\,dt + c}{\mu(t)}, \qquad \text{where } \mu(t) = e^{\int p(t)\,dt}.$$
A differential equation (de) is an equation involving a function and its derivatives. Differential equations are called partial differential equations (pde) or ordinary differential equations (ode) according to whether or not they contain partial derivatives.
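As a hypothetical check of the integrating-factor formula above (the ODE below is our own illustration, not from the page): for dy/dt + 2y = e^(−t) with y(0) = 0, we have p(t) = 2 and g(t) = e^(−t), so μ(t) = e^(2t) and y(t) = (∫ e^(2t)·e^(−t) dt + c) / e^(2t) = (e^t − 1)·e^(−2t) after applying the initial condition.

```python
import math

# Closed-form solution from the integrating-factor formula,
# for the illustrative ODE dy/dt + 2y = exp(-t), y(0) = 0.
def y(t):
    return (math.exp(t) - 1.0) * math.exp(-2.0 * t)

# Verify numerically that y satisfies the ODE: dy/dt + 2y == exp(-t).
t = 0.7
h = 1e-6
dydt = (y(t + h) - y(t - h)) / (2 * h)   # central finite difference
residual = dydt + 2 * y(t) - math.exp(-t)
print(abs(residual) < 1e-6)
```

The finite-difference check is a quick way to confirm that a candidate solution from the formula actually satisfies the original equation.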
effective teaching Effective teaching How to Create an Inviting Virtual K-12 Math Classroom. As math coaches, we’ve worked for months now with teachers who are striving to create lively, inviting virtual classrooms. Over time, it’s become clear that there is one consistent ingredient for success when it comes to student engagement in remote math instruction: making connections, both conceptual and interpersonal. When teachers embrace an understanding of the importance of connections and apply it in different ways in their instruction and relationships, student engagement is fortified—and teachers are buoyed by their success. Consider Visibility Whether you are in a school building or teaching remotely, student thinking—including connections they make among ideas—needs to inform your teaching decisions; collecting student thinking and making it visible informs teachers’ planning for the next day’s lesson. Jamboard, Classkick, Padlet, Pear Deck, and Nearpod are all platforms that both promote student engagement and capture student thinking. Engaging Students in Math. Often, that starts with us as teachers. Developing an environment where students can experiment and gain entry into the language of math starts with having a person who can facilitate what Stephen Krashen termed a low affective filter environment. Twitter. “Who’s Doing the Work?” Classroom Strategies. Twitter. Pacing Lessons for Optimal Learning. The 100 Number Task - during a Pandemic (Is it possible?) - Sara VanDerWerf. The Number One most viewed post (by a factor or 8 or 10) at my website is the 100 NUMBER TASK – for building group work norms. It has become a favorite first week of school activity for many of you. If you’ve never read this post – STOP READING. Please read the 100 Number Task post first before reading the rest of this post. The 100 Number Task WAS a favorite week 1 activity….but then the pandemic hit. Our world was turned upside down. 
100 Numbers to Get Students Talking - Sara VanDerWerf. UPDATE AUGUST 2020: Is it possible to do the 100 Number Task in Distance Learning? I have a new post answering this question. Check it out HERE UPDATE May 2020 – Several super creative educators are re-imagining this activity for the COVID-19 era. Scroll to the bottom for links to their ideas. 30 Ways to Make Math FUN for Elementary Kids - Mr Elementary Math. New video of @BerkeleyEverett sharing one of his major mathematics influences #RXMathNetwork @TalkMath2Me @imathination @MathHiker76 @MelanieJanzen15 @vittorioisms @dcdarnellster @ShirleyBird57. Mistakes Tell Us What Students Are Ready to Learn. Back in 2012, I had an idea for getting a bit better at teaching. Kids make a lot of mistakes while learning. Teachers, meanwhile, have to quickly respond to those mistakes. What if I could prepare for those moments outside of the classroom? I need some help (groupthink). What do the eight effective mathematics teaching practices look like for online teaching? What are the promises and challenges? I appreciate any and all thoughts. Thanks!… If you are a math or science teacher educator and have used our teaching channel videos, here's a file with some links where you can continue to access them for free. #mtbos #iteachmath. How Kobe Bryant challenged me to be a better math teacher. - Sara VanDerWerf. Hello friends! In the past I’ve blogged HERE and HERE about this topic – though it was buried in middle of other messages. With the recent passing of Kobe Bryant and his daughter (and several others), I have taken an image I wore around my neck while teaching out of storage and used in it in multiple professional development events I’ve led in the last month. It has resonated with many I’ve shared it with, so I thought I’d re-blog about it so you all can be challenged in the same way I have been by Kobe Bryant and a graphic representing his NBA career. 
Feel free to watch this short video explaining this OR read below the video link for more. In 2016, I found the following graphic from an article on the cover of the LA Times the day Kobe Bryant retired. 5 Questions to Ask Yourself About Your Unmotivated Students. How can we help teachers overcome their fear of teaching math in a new way? Response: Classrooms Don't Need 'Pinterest-y Looking Walls' (This is the final post in a three-part series. You can see Part One here and Part Two here.) The new "question-of-the-week" is: How to Talk Less to Improve Learning. Start With Struggle Jo Boaler, a revolutionary researcher and math educator, says that struggle is critical to mastering a skill or concept. When we sense discomfort in our classrooms, we can be quick to explain and provide steps to follow. But removing the struggle for students also removes the cognitive heavy lifting that leads to deep learning and understanding. Shift the script and begin lessons by asking students to experience struggle. Explain what you are doing and how grappling with concepts will help them learn before support is given. In other subjects, use brain research to encourage students to persevere through writer’s block or try a task for a second or third time. What are your student engagement strategies? Your answer matters.… Strategies to #shiftthelift in Mathematics @achievethecore… Classroom Norms - for students & TEACHERS. - Sara VanDerWerf. This GIF will give away my age. This TV show, Cheers, started running on NBC while I was in Middle School and ended during the first years of my teaching career & went on to live in re-run world. Back in the day this was ‘must-see TV’. On the show ‘Cheers’, every time one of the main characters walked into the bar at the center of the show, the patrons of the bar would yell his name ‘NORM!’. (the theme song included the lyrics ‘where everyone knows your name’). The bartender Sam would then ask Norm, “What are you up to, Norm?
Embedding the CCSS Mathematical Practices into Math Instruction. The Common Core State Standards for Math actually include two types of standards: the content standards and the standards for mathematical practice. The content standards define the specific skills that are to be mastered at each grade level. For example, multiplication, division, and fractions are all content standards for 3rd grade. The standards for mathematical practice, however, outline how students go about doing the math. They are skills, based on the NCTM process standards, which students should utilize on a daily basis, regardless of the content being taught. Simple Ways to Integrate Four Evidence-Based Teaching Strategies. When educators understand the science behind teaching practices they can more readily incorporate them into their daily instruction, says Cult of Pedagogy’s Jennifer Gonzalez. In her podcast and accompanying post, Gonzalez highlights the four key teaching strategies that researcher Pooja Agarwal and K–12 teacher Patrice Bain feature in their new book, Powerful Teaching: Unleash the Science of Learning. They explain the science behind the suggestions, many of which are familiar, as well as best practices and applications for each one. Retrieval practice: The goal is for students to recall information from memory and reinforce learning through quick daily assessments. Evidence shows that actively accessing learned material—rather than merely reteaching it—boosts retention. Beyond the Lesson Discussion Guide Mathematics. Using Play to Teach Math. The concept of play is often limited to younger students and less academic endeavors, but play can be a useful strategy in an unlikely discipline: math. Mathematics is known as cold, logical, and rigorous, but the subject doesn’t get enough credit for its true mischievous spirit, which is well-hidden from the world.
The K–12 mathematics curricula can involve time and space to play with concepts and ideas, even in the rush of required topics. In my research for a 2017 talk I gave at TEDxKitchenerEd called “Math Is Play,” I found few materials on the joy of math in upper grades. Much of the literature on play as a learning approach is based on the early years, particularly kindergarten, where it is an accepted pedagogical mode. Young children at play often reach a state that psychologist Mihaly Csikszentmihalyi calls “flow,” an elusive state of mind where time seems to disappear as they focus deeply on what they’re doing. Empower Students Through Individual Conferences. Learning to Listen and Listening to Learn — zacharychampagne.com. And with these two inputs, I became fascinated with the power of not talking so much. 0210 Ed Leadership. Never Skip the Closing of the Lesson. Once again, Tracy Zager has pushed us to think about our teaching. In her recent talk at #TMC16 Tracy asked us to consider what it means to “close the lesson”. Here is an example of a problem and a potential close, followed by some of my thoughts about how we should close any lesson. First of all, give a problem that will help you achieve a specific goal. Take this problem published in Marilyn Burn’s 50 Problem Solving Lessons resource: If rounds 1 & 2 of a tug-of-war contest are a draw, who will win the final round? Here is the full problem. Once students understand the problem and are given time to write their solution (individually or in pairs) the learning isn’t over. Closing the lesson: Tips for Teachers: Critical ingredients for a successful mathematics lesson. What are the ingredients for an effective mathematics lesson? Teachers are continually faced with a range of advice or ideas to improve their mathematics lessons and often this just creates confusion. It’s a little bit like being a cook. 
New recipes appear online and in cookbooks on bookstore shelves, but often they’re just adaptations of classic recipes that have been around before, their foundation ingredients are tried and tested, and often evidence based. There are always the staple ingredients and methods that are required for the meal to be successful. The following is a list of what I consider to be important ingredients when planning and teaching an effective mathematics lesson. The Best Math Practices For Teachers. Instructional Practice Toolkit and Classroom Videos - Supplemental Lesson Videos. Inviting Participation With Thumbs-Up Responses. Robertkaplinsky. Robertkaplinsky. "Student-Centered" vs "Traditional" Math Teaching. I’d like to reframe the divide that seems to exist when we talk about “student-centered teaching” and “traditional teaching.” Instead, I suggest that we use these labels to describe, without blanket judgment, two different kinds of teaching decisions, each with its own purpose and value. Limiting “Teacher Talk,” Increasing Student Work! “Wah waah wah waah wah wah…” We all know the famous muted trumpet of adults in Charlie Brown’s world, especially their teacher, Miss Othmar. 11 Strategies in Teaching Mathematics. Count It All Joy. Why Kids Need More Talk Time in the Classroom. Sometimes, for the sake of classroom management, we spend so much time trying to manage noise level that we forget that talk time in the classroom is actually an important element of learning. In fact it’s really important. 10 Fun Alternatives to Think-Pair-Share. All learners need time to process new ideas and information. They especially need time to verbally make sense of and articulate their learning with a community of learners who are also engaged in the same experience and journey. What Will They Be Doing? Anchor Charts 101: Why and How to Use Them, Plus 100s of Ideas. 
Spend any time browsing teacher pages on Pinterest and Instagram, and you’ll run across hundreds of ideas for classroom anchor charts. But you may have lingering questions about what they are, what purpose they serve, how to get started, and when to use them. Have no fear! WeAreTeachers has created this primer to inform you, and we’ve also included a huge list of resources to get you started. We have a feeling that once you get started, anchor charts are going to be your new favorite thing. How to Use Anchor Charts in Your Classroom. You see them all over Pinterest. Becoming the Math Teacher You Wish You'd Had. My girls started school yesterday. Robertkaplinsky. Two Common Misconceptions About Learning. It's another semester with a new group of students. This semester, I have a class of elementary education majors (using Physics and Everyday Thinking). In the course, students build basic physics ideas after collecting data from particular experiments. Overall, this is an awesome course. Building A Culturally Responsive Classroom. MobilePagedReplica. 5 principles of extraordinary math teaching. Nine “Look Fors” in the Elementary Math Classroom. Questioning and Vocabulary Supports That Inspire Language-Rich Mathematics. Nctm. Demonstrating Conceptual Understanding of Mathematics Using Technology. How to Become and Remain a Transformational Teacher. Just-in-Time vs. Just-in-Case Scaffolding: How to Foster Productive Perseverance. Classroom Clock: Andrew Stadel. NCTM 2018 Resources for Attendees. Research-Based Education Strategies & Methods. Making Math Visual. [NCTM18] Why Good Activities Go Bad. Nctm. Strategies vs Models. Rebranding "Show Your Work" Nine “Look Fors” in the Elementary Math Classroom. Robertkaplinsky. Mathematical Habits of Mind. How the 5 Practices Changed my Instruction – Illustrative Mathematics. Starting where our students are….. with THEIR thoughts. The Importance of Debriefing in Learning and What That Might Look Like in the Classroom.
Instructional Strategies List for Teachers - Instructional Strategies List. Teaching for Understanding. Targeted Instruction. 8 Teaching Habits that Block Productive Struggle in Math Students. Levels of Classroom Discourse. Edutopia. Outlaw "I'm Done!" – Teacher Trap. Hands-Off Teaching Cultivates Metacognition. Navigating Success for All Students Begins with a Map. Edutopia. 12 Curriculum Planning Tips For Any Grade Level Or Content Area. #ObserveMe. 14 Resources on Crafting Learning Objectives. 22 Powerful Closure Activities. How to change everything and nothing at the same time! – Thinking Mathematically. Great Minds, Great Conversations. Create & Find Multimedia Lessons in Minutes. Supporting Sense Making with Mathematical Bet Lines. Visual Reasoning Tools in Action. One Way I Get Students To Persevere – Robert Kaplinsky. Education Week.
ECCC - Olivier Powell We constructively prove the existence of almost complete problems under logspace many-one reduction for some small complexity classes by exhibiting a parametrizable construction which yields, when appropriately setting the parameters, an almost complete problem for PSPACE, the class of space-efficiently decidable problems, and for SUBEXP, the class of problems ... more >>>
84kg to lbs - Easy Rapid Calcs

84kg to lbs Overview

84kg is equal to 185.188 pounds (lbs). We got this result with a simple rule-of-three calculation, or by using a free kg to lbs calculator. When you want to measure the mass of something, there are two common units you will see: one is kilograms (kg) and the other is pounds (lbs).

Converting 84kg to Lbs

If you want to convert 84 kg to pounds, you need to know how many pounds equal 84 kilograms. The kilogram is the standard unit of mass in many parts of the world. You can use a metric conversion chart to make your calculations easier, and you can easily find the equivalent weight in pounds from it. Whether you’re checking luggage allowances or body weight, you’ll find the 84kg conversion calculator handy.

Luckily, there is an easy way to convert 84 kilograms into pounds. It’s easy to find a conversion table, especially if you’re familiar with the metric system. A pound, however, is less than half a kilogram, so a chart like this can be especially useful for people who don’t yet know much about the metric system. By knowing the conversion factors, you can get an accurate figure for the weight of 84 kilograms.

One thing you should know is that a kilogram is a unit of mass, and a pound (which equals 16 avoirdupois ounces) is defined as 0.45359237 kilograms; therefore, one kilogram is equal to 2.2046226218488 pounds. In practice, you might prefer a rounded number instead of the full precision: for instance, 185.19 pounds is easier to work with than 185.188300235 pounds.

What is this calculator for?

This calculator is for people who are looking for an easy way to convert 84kg to lbs. If you have come across this page trying to convert 84kg to lbs, we hope you found this web page useful and we hope it helped you.
To be able to convert weight from kg to lbs or from lbs to kg, you can use this calculator.

What is the formula?

The formula is a single multiplication: pounds = kilograms × 2.20462 (and, going the other way, kilograms = pounds ÷ 2.20462). All the calculator requires is the number being entered, in one unit; it applies this factor for you.

What are the benefits of using this calculator?

This converter is useful for anyone who wants to be able to convert weight from kgs to lbs, or vice versa. This could be someone who is trying to use a scale that shows different units, or someone who is looking for a quick and easy way to switch between weights in different units.

What is the chart of this calculator?

If you are looking to see the chart of 84kg in lbs, simply put 84 into the calculator and read off your result.

How accurate is this calculator?

This is a very accurate calculator; try it out and see the results for yourself. Our goal with this calculator is to make something that is simple and easy to use.

What is 1 Kg in US Pounds?

One kilogram equals 2.2046 pounds. To convert kilograms to pounds, you must know their conversion factor. The pound is an imperial and US customary unit of weight measurement, legally defined as 0.45359237 kilograms and subdivided into 16 avoirdupois ounces.

What weight is 80 kg in pounds?

United States citizens are used to measuring weight in pounds; however, most of the rest of the world uses kilograms as its unit of measure. 80 kg is about 176.37 pounds. There are multiple methods for converting kilograms to pounds, but the easiest is with a calculator: just enter your value in kilograms and the calculator will provide the equivalent number in pounds automatically.
If you don’t have one handy, there are also online conversion tables with 80 kg to lbs conversion charts that offer easy reading with the same results.

One kilogram equals 2.2046226218488 pounds. Strictly speaking, the kilogram is a unit of mass, while the pound is commonly used as a unit of weight. The kilogram is the SI base unit of mass; the pound, in the imperial system, is defined as exactly 0.45359237 kilograms.

At first glance, kilograms may seem the more precise way of measuring weight, but many people prefer pounds because pounds are more familiar to them. When trying to estimate how much something weighs, it can help to express the value in both units and compare the results. While kilograms are accurate measurements, they can be challenging for those unfamiliar with them to grasp; it’s worthwhile learning a little of the unit’s history to understand why we use kilograms today and their significance within everyday life. Once you understand the history behind kilograms and pounds, using them in everyday life should become much simpler. When you need to know how much something weighs, simply use this converter from kilograms to pounds – whether you're buying a car or having your body weight checked.

What is 75 kg vs pounds?

Kilograms and pounds are both units used to measure weight. The kilogram is the SI unit for mass, while the pound is an imperial and US customary unit of weight used widely worldwide. 75 kg is about 165.35 pounds. Knowing how to convert kilograms to pounds is important when purchasing clothes or food while traveling overseas – simply multiply by 2.2046! One kilogram equals about 2.2 pounds, and there are various methods for converting this unit of measure, but using an online kilograms to pounds calculator is by far the easiest and quickest option available to you.
These calculators offer quick results while being user friendly – plus, some even allow users to change units of measurement! United States residents frequently rely on pounds as a measure of weight, though their use is less prevalent worldwide. Many find the difference between pounds and kilograms confusing when trying to convert between the two units, so in an attempt to eliminate this confusion the federal government adopted a standard definition of the pound in terms of the metric system; all federal agencies, including the military, now adhere to it as standard practice.

There are various approaches to calculating how much an object weighs in pounds, depending on its nature and characteristics. When measuring liquid items such as beverages, for instance, it is necessary to account for their volume and density, since density relates volume to mass. Scales offer another method: digital and mechanical scales can both provide accurate readings of mass. Digital models tend to be easier to read, often featuring larger displays.

Is 80 kg heavy for a man?

Whether 80 kg is heavy for an individual man depends on his height, body type and health history. A body mass index (BMI) above the healthy adult range falls into the overweight category, though not all those with an elevated BMI are unhealthy; many who carry additional pounds are fit and healthy individuals, even possessing positive body images. For those considered obese, adjustments to diet and fitness that promote weight loss can help, such as increasing physical activity and cutting down on calories.
The kilogram (kg) is an SI unit of mass, historically defined as being equal to the mass of its international prototype – a solid cylinder constructed of platinum-iridium alloy – and the only SI base unit whose name includes a prefix. (Since 2019 the kilogram has been redefined in terms of the Planck constant rather than the prototype.) In comparison, pounds (pound-mass, or lbm) are used by the British imperial system and United States customary units for measuring mass. There have been various definitions of what makes up one pound; the most commonly accepted today is the international avoirdupois pound, legally defined as 0.45359237 kilograms and divided into 16 avoirdupois ounces.

Both kilograms and pounds are measurements of weight, though each uses a different scale. A kilogram measures mass, while a pound, in everyday US usage, refers to weight (a force); some prefer using pounds because they find them easier to understand than kilograms. If you are uncertain which scale is appropriate, one surefire way of getting an accurate result is converting the weight to kilograms and then comparing other weights against it. This will provide the most precise results and can then help determine which scale best meets your needs.

Are two pounds heavier than one kilogram?

No. Since 1 kg = 2.20462 pounds, two pounds are only about 0.907 kg – less than one kilogram. Kilograms and pounds belong to two distinct measurement systems, kg being metric and lb being imperial/FPS.

Is 2 pounds the same as 1 kg?

No. The pound is a weight unit used in the imperial and US customary systems of measurement. It equals 0.45359237 kilograms and can be divided into 16 avoirdupois ounces; it is often abbreviated lb, lbm or # (chiefly within the US). By comparison, the kilogram is the SI base unit of mass and serves as the most frequently used bodyweight unit worldwide.
The kilogram is widely used in science, such as in measuring the weight of people and other living things, and is also widely utilized commercially, where items may be labeled with their mass in kilograms. Pounds are often utilized in the United States for various applications, including weighing food items and packaging goods.

Pounds and kilograms both measure weight accurately; however, each utilizes a different scale. Pounds are imperial units most frequently employed in the US, while kilograms are metric measurements more prevalent across Europe. Both measurements provide equal levels of accuracy, so it’s up to each individual to decide which measurement to employ.

It’s essential to remember that kilograms and pounds are two separate units of measurement: pounds measure weight while kilograms measure mass, and one kilogram equals 2.20462 pounds. This is an effective way of comparing weight between items, as it enables you to easily discern which ones are lighter or heavier than others. Furthermore, knowing that one kilogram is always heavier than one pound makes for easy comparison. Gravity exerts a force on every object regardless of size; this force – the object's weight – can be calculated by multiplying the object’s mass by the acceleration due to gravity, providing a means to calculate weight anywhere on Earth or elsewhere, such as on the Moon or other planets.

In summary, this calculator is useful for someone who wants to quickly convert kg to lbs, or vice versa. All you need are the numbers that are being asked for and you will be able to figure out the rest. The goal with this calculator is simple: make a tool that is easy to use and quick to implement instead of one that is complicated and time-consuming.
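Every conversion on this page reduces to a single constant. Here is a minimal sketch in Python (the function names are our own, not from any library; the constant follows from the legal definition 1 lb = 0.45359237 kg quoted above):

```python
# 1 lb is legally defined as 0.45359237 kg, so 1 kg = 1 / 0.45359237 lb.
LBS_PER_KG = 1 / 0.45359237  # about 2.2046226218488

def kg_to_lbs(kg):
    """Convert a mass in kilograms to pounds."""
    return kg * LBS_PER_KG

def lbs_to_kg(lbs):
    """Convert a mass in pounds to kilograms."""
    return lbs / LBS_PER_KG

if __name__ == "__main__":
    # The 84 kg example used throughout this page:
    print(round(kg_to_lbs(84), 3))  # → 185.188
```

Rounding to two or three decimals is plenty for everyday use; the full-precision constant only matters for legal or scientific work.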
Creating a Neural Network to Predict Periodic Data

Update: I kept working on this and I have released it as a package for Windows, Linux and macOS. Check it out: https://github.com/zenineasa/joystick/releases/tag/v1.0.0

--

During the days when I was pursuing my master's programme, my friends and I used to occasionally go to a classroom in the university, turn on a projector, connect devices like a Nintendo Switch or gaming computers and loads of joysticks, and play different simple multiplayer games; MarioKart was my favourite. From time to time, when I get together with people, I think it would be a good idea if I bought such devices. Indeed, I do have a laptop which could easily run such games; SuperTuxKart is similar enough to MarioKart and it can run on Linux, Windows and Mac. However, I do not have joysticks with me at the moment. Therefore, I think it would be a good idea if I simply worked on a project that would enable using our phones as joysticks. At a high level, the plan is to host APIs on a NodeJS server that …

Two years ago, Arijit Mondal, a professor of mine who was teaching us a course on Deep Learning, asked a question about how to make a neural network that can predict periodic data. The students started shouting out their own versions of answers, most of them involving some form of recurrent neural network structure. I had a different answer.

I was not a very good student during my initial semesters as an undergraduate. I missed several classes due to a lack of motivation, compounded by the change from the environment I had been used to growing up in. But I could recollect some of the things that were discussed in a math course regarding Fourier series. Just by using a bunch of sine waves, the series was able to approximate many functions with excellent accuracy. How is this any different from the Universal Approximation Theorem that was taught during the initial lectures of this course?
It struck me: the easiest way a neural network can learn periodic data is if the network itself has some kind of periodic activation function. Well, I do know that I would not have been the first person in the world to notice this, but then again, a lot of people have not noticed it, and I can write an article on this so that someone in the future could come across it and give it a thought.

I went ahead with implementing this. I did not generate the weights in a reproducible fashion, so if you try to run this at your end, you may not receive the same results as I did. I am sharing the code anyway (lightly reconstructed here: the extraction dropped the loop bodies and the definition of custom_activation, so those are filled back in from the description below).

# Importing the libraries
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K
import matplotlib.pyplot as plt

# Custom periodic activation: |sin(x)|
def custom_activation(x):
    return K.abs(K.sin(x))

# Generating training data: y = (x % 10) / 10, a kind of triangular wave,
# for x in (0, 1000) incremented by 0.1 at a time
def data_fn_njammale(variable):
    return (variable % 10) / 10

x = np.arange(0, 1000, 0.1)
y = data_fn_njammale(x)

# Neural Network Model
model = Sequential()
model.add(Dense(5, input_dim=1, activation=custom_activation))
model.add(Dense(1))

# Training
model.compile(loss='mse', optimizer='adam')
model.fit(x, y, epochs=1000, batch_size=32, verbose=2, validation_data=(x, y))

# Prediction, far outside the training range
x_predict = np.arange(4000, 5000, 0.1)
y_act = data_fn_njammale(x_predict)
predict = model.predict(x_predict)

plt.plot(x_predict, y_act, label='actual')
plt.plot(x_predict, predict, label='predicted')
plt.legend()
plt.show()

As you can see, the training data was generated as y = ((x % 10) / 10) for every x within the range (0, 1000), incremented by 0.1 at a time, which is a kind of triangular wave. |sin(x)| was used as the custom activation function. The model was tested against input values ranging from 4000 to 5000, which is way out of the training range.
Here is the output plot: Considering the fact that I wrote this network from scratch and trained it on my personal computer just to prove my point, I was pretty happy with my results. I think this could inspire someone to build an actual thing some day.
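The Fourier-series intuition mentioned above can be demonstrated without any neural network: away from its jumps, the same wave y = (x mod 10)/10 used as training data is approximated by a short sum of sines. A small illustrative sketch (this is the standard textbook sawtooth series, not code from the original post):

```python
import math

def sawtooth(x, period=10.0):
    """The target wave from the post: y = (x mod period) / period."""
    return (x % period) / period

def fourier_sawtooth(x, n_terms, period=10.0):
    """Partial Fourier series of the sawtooth:
    1/2 - (1/pi) * sum_{k=1..n} sin(2*pi*k*x/period) / k."""
    s = 0.5
    for k in range(1, n_terms + 1):
        s -= math.sin(2 * math.pi * k * x / period) / (math.pi * k)
    return s

# Because every term is periodic, the approximation is just as good at
# x = 4001.3 as at x = 1.3 -- the same extrapolation property the
# |sin(x)| activation gives the network.
```

More terms give a better fit, and the fit stays valid arbitrarily far from where it was "trained", which is exactly the behavior the periodic activation buys.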
Exponential Distribution

The exponential distribution is a one-parameter family of curves. The exponential distribution models wait times when the probability of waiting an additional period of time is independent of how long you have already waited. For example, the probability that a light bulb will burn out in its next minute of use is relatively independent of how many minutes it has already burned. Statistics and Machine Learning Toolbox™ offers several ways to work with the exponential distribution.

The exponential distribution uses the following parameter.

Parameter | Description | Support
mu (μ) | Mean | μ > 0

The parameter μ is also equal to the standard deviation of the exponential distribution. The standard exponential distribution has μ = 1. A common alternative parameterization of the exponential distribution uses λ, defined as the mean number of events in an interval, as opposed to μ, which is the mean wait time for an event to occur. λ and μ are reciprocals.

Parameter Estimation

The likelihood function is the probability density function (pdf) viewed as a function of the parameters. The maximum likelihood estimates (MLEs) are the parameter estimates that maximize the likelihood function for fixed values of x. The maximum likelihood estimator of μ for the exponential distribution is the sample mean $\overline{x}=\sum _{i=1}^{n}\frac{{x}_{i}}{n}$ for samples x[1], x[2], …, x[n]. The sample mean is an unbiased estimator of the parameter μ.

To fit the exponential distribution to data and find a parameter estimate, use expfit, fitdist, or mle. Unlike expfit and mle, which return parameter estimates, fitdist returns the fitted probability distribution object ExponentialDistribution. The object property mu stores the parameter estimate. For an example, see Fit Exponential Distribution to Data.
Probability Density Function

The pdf of the exponential distribution is $y=f\left(x|\mu \right)=\frac{1}{\mu }{e}^{\frac{-x}{\mu }}.$ For an example, see Compute Exponential Distribution pdf.

Cumulative Distribution Function

The cumulative distribution function (cdf) of the exponential distribution is $p=F\left(x|\mu \right)=\underset{0}{\overset{x}{\int }}\frac{1}{\mu }{e}^{\frac{-t}{\mu }}dt=1-{e}^{\frac{-x}{\mu }}.$ The result p is the probability that a single observation from the exponential distribution with mean μ falls in the interval [0, x]. For an example, see Compute Exponential Distribution cdf.

Inverse Cumulative Distribution Function

The inverse cumulative distribution function (icdf) of the exponential distribution is $x={F}^{-1}\left(p|\mu \right)=-\mu \mathrm{ln}\left(1-p\right).$ The result x is the value such that an observation from an exponential distribution with parameter μ falls in the range [0, x] with probability p.

Hazard Function

The hazard function (instantaneous failure rate) is the ratio of the pdf and the complement of the cdf. If f(t) and F(t) are the pdf and cdf of a distribution (respectively), then the hazard rate is $h\left(t\right)=\frac{f\left(t\right)}{1-F\left(t\right)}$. Substituting the pdf and cdf of the exponential distribution for f(t) and F(t) yields a constant, λ. The exponential distribution is the only continuous distribution with a constant hazard function. λ is the reciprocal of μ and can be interpreted as the rate at which events occur in any given interval. Consequently, when you model survival times, the probability that an item will survive an extra unit of time is independent of the current age of the item. For an example, see Exponentially Distributed Lifetimes.

Fit Exponential Distribution to Data

Generate a sample of 100 exponentially distributed random numbers with mean 700.

x = exprnd(700,100,1); % Generate sample

Fit an exponential distribution to data using fitdist.
pd = fitdist(x,'exponential')

pd = Exponential distribution
  mu = 641.934   [532.598, 788.966]

fitdist returns an ExponentialDistribution object. The interval next to the parameter estimate is the 95% confidence interval for the distribution parameter.

Estimate the parameter using the distribution functions.

[muhat,muci] = expfit(x) % Distribution-specific function
muci = 2×1
[muhat2,muci2] = mle(x,'distribution','exponential') % Generic distribution function
muci2 = 2×1

Compute Exponential Distribution pdf

Compute the pdf of an exponential distribution with parameter mu = 2, and plot it.

x = 0:0.1:10;
y = exppdf(x,2);
plot(x,y)
ylabel('Probability Density')

Compute Exponential Distribution cdf

Compute the cdf of an exponential distribution with parameter mu = 2, and plot it.

x = 0:0.1:10;
y = expcdf(x,2);
plot(x,y)
ylabel('Cumulative Probability')

Exponentially Distributed Lifetimes

Compute the hazard function of the exponential distribution with mean mu = 2 at the values one through five.

x = 1:5;
lambda1 = exppdf(x,2)./(1-expcdf(x,2))

lambda1 = 1×5
    0.5000    0.5000    0.5000    0.5000    0.5000

The hazard function (instantaneous failure rate) of the exponential distribution is constant and always equals 1/mu. This constant is often denoted by λ.

Evaluate the hazard functions of the exponential distributions with means one through five at x = 3.

mu = 1:5;
lambda2 = exppdf(3,mu)./(1-expcdf(3,mu))

lambda2 = 1×5
    1.0000    0.5000    0.3333    0.2500    0.2000

The probability that an item with an exponentially distributed lifetime survives one more unit of time is independent of how long it has survived. Compute the probability of an item surviving one more year at various ages when the mean survival time is 10 years.

x2 = 5:5:25;
x3 = x2 + 1;
deltap = (expcdf(x3,10)-expcdf(x2,10))./(1-expcdf(x2,10))

deltap = 1×5
    0.0952    0.0952    0.0952    0.0952    0.0952

The probability of surviving one more year is the same regardless of how long an item has already survived.
Related Distributions

• Burr Type XII Distribution — The Burr distribution is a three-parameter continuous distribution. An exponential distribution compounded with a gamma distribution on the mean yields a Burr distribution.

• Gamma Distribution — The gamma distribution is a two-parameter continuous distribution that has parameters a (shape) and b (scale). When a = 1, the gamma distribution is equal to the exponential distribution with mean μ = b. The sum of k exponentially distributed random variables with mean μ has a gamma distribution with parameters a = k and b = μ.

• Geometric Distribution — The geometric distribution is a one-parameter discrete distribution that models the total number of failures before the first success in repeated Bernoulli trials. The geometric distribution is a discrete analog of the exponential distribution and is the only discrete distribution with a constant hazard function.

• Generalized Pareto Distribution — The generalized Pareto distribution is a three-parameter continuous distribution that has parameters k (shape), σ (scale), and θ (threshold). When both k = 0 and θ = 0, the generalized Pareto distribution is equal to the exponential distribution with mean μ = σ.

• Poisson Distribution — The Poisson distribution is a one-parameter discrete distribution that takes nonnegative integer values. The parameter λ is both the mean and the variance of the distribution. The Poisson distribution models counts of the number of times a random event occurs in a given amount of time. In such a model, the amount of time between occurrences is modeled by the exponential distribution with mean $\frac{1}{\lambda }$.

• Weibull Distribution — The Weibull distribution is a two-parameter continuous distribution that has parameters a (scale) and b (shape). The Weibull distribution is also used to model lifetimes, but it does not have a constant hazard rate. When b = 1, the Weibull distribution is equal to the exponential distribution with mean μ = a.
For an example, see Compare Exponential and Weibull Distribution Hazard Functions.

See Also: ExponentialDistribution | expcdf | exppdf | expinv | explike | expstat | expfit | exprnd | makedist | fitdist
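The formulas above translate directly into code. Here is a quick plain-Python sketch of the pdf, cdf, inverse cdf, hazard rate, and MLE (the function names are ours, not from the toolbox):

```python
import math

def exp_pdf(x, mu):
    """pdf: f(x|mu) = (1/mu) * exp(-x/mu)."""
    return math.exp(-x / mu) / mu

def exp_cdf(x, mu):
    """cdf: F(x|mu) = 1 - exp(-x/mu), the probability of falling in [0, x]."""
    return 1.0 - math.exp(-x / mu)

def exp_icdf(p, mu):
    """icdf: x = -mu * ln(1 - p)."""
    return -mu * math.log(1.0 - p)

def exp_hazard(x, mu):
    """Hazard h(x) = pdf / (1 - cdf); constant and equal to 1/mu."""
    return exp_pdf(x, mu) / (1.0 - exp_cdf(x, mu))

def exp_mle(samples):
    """The MLE of mu is simply the sample mean."""
    return sum(samples) / len(samples)
```

The memorylessness shown in the deltap example above follows from the constant hazard: the conditional probability of failing within the next unit of time is 1 - exp(-1/mu) at every age.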
Newton's Celestial Mechanics

The name "celestial mechanics" is more recent than Newton. This article reviews the … Prior to Kepler there was little connection between the exact, quantitative prediction of planetary positions, using geometrical or arithmetical techniques, … Galileo, the great Italian contemporary of Kepler, who adopted the Copernican point of view and promoted it vigorously, anticipated Newton's first two laws with his experiments in mechanics. The challenges were presented by Poincaré 200 years later with the principle of non-integrability of the gravitational problem of three or more bodies. Although Newtonian mechanics was the grand achievement of the 1700s, it was by no means the final answer.

Newton began to think of the Earth's gravity as extending out to the Moon's orbit. Using his second law of motion, and the fact that the centripetal acceleration a of a body moving at speed v in a circle of radius r is given by v²/r, he inferred that the force on a mass m in a circular orbit must be given by F = mv²/r. With Newton's law of gravitation and laws of motion, the science of celestial mechanics obtained its beginning and its fundamental principles and rules.

First Law: An object at rest tends to stay at rest, or if it is in motion tends to stay in motion with the same speed and in the same direction, unless acted upon by a net physical force.

Third Law: Every action has a reaction equal in magnitude and opposite in direction.

Orbital mechanics is a modern offshoot of celestial mechanics, which is the study of the motions of natural celestial bodies such as the Moon and planets. As early as the sixth century B.C., the peoples of the ancient East possessed considerable knowledge about the motion of celestial bodies.
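Newton's inference can be carried one step further: equating the gravitational force GmM/r² with the centripetal force mv²/r gives the orbital speed directly. A small illustrative Python sketch (the numerical constants are standard reference values, not taken from this article):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_MOON = 3.844e8     # mean Earth-Moon distance, m

# G*m*M/r**2 = m*v**2/r  =>  v = sqrt(G*M/r)
v = math.sqrt(G * M_EARTH / R_MOON)   # orbital speed of the Moon, m/s
T = 2 * math.pi * R_MOON / v          # orbital period, s

print(round(v), "m/s")                # roughly 1 km/s
print(round(T / 86400, 1), "days")    # close to the sidereal month
```

That the computed period lands near the observed lunar month is exactly the kind of check that convinced Newton terrestrial gravity extends to the Moon's orbit.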
A giant even among the brilliant minds that drove the Scientific Revolution, Newton is remembered as a transformative scholar, inventor and writer. Newtonian physics, also called Newtonian or classical mechanics, is the description of mechanical events – those that involve forces acting on matter – using the laws of motion and gravitation formulated in the late seventeenth century by English physicist Sir Isaac Newton (1642–1727). Newton's formulation of mechanics, which involved the new concepts of mass and force, was subjected to intense, but sterile, criticism by Ernst Mach (1838–1916) and others, which did not change its application to the slightest degree, and shed no light on its fundamentals. The term "dynamics" came in a little later with Gottfried Leibniz, and over a century after Newton, Pierre-Simon Laplace introduced the term "celestial mechanics."

Celestial mechanics is one of the most ancient sciences. Celestial mechanics (or astrodynamics) is a branch of astronomy that studies the motion and dynamics of celestial bodies based on Newton's laws of motion and the law of universal gravitation.

Newton eradicated any doubts about the heliocentric model of the universe by establishing celestial mechanics, his precise methodology giving birth to what is known as the scientific method. Celestial mechanics is, therefore, Newtonian mechanics. For early theories of the causes of planetary motion, see Dynamics of the celestial spheres. The earliest development of classical mechanics is often referred to as Newtonian mechanics. This article reviews the steps towards the law of gravitation, and highlights some applications to celestial mechanics found in Newton's Principia. Modern analytic celestial mechanics started with Isaac Newton's Principia of 1687.
An Introduction to Celestial Mechanics by S. W. McCuskey (Addison-Wesley) can be one of the options to accompany you when you have extra time.

The basis of Newton's theory arose from the perception that the force keeping the Moon … The Newtonian n-body Problem: celestial mechanics can be defined as the study of the solution of Newton's differential equations, formulated by Isaac Newton in 1686 in his Philosophiae Naturalis Principia Mathematica. Each type of conic section is related to a specific form of celestial motion.

The mathematical formulation of Newton's dynamic model of the solar system became the science of celestial mechanics, the greatest of the deterministic sciences. Let us apply the third law to a system of two interacting particles having instantaneous linear momenta p1 and p2, respectively. Modern analytic celestial mechanics started over 300 years ago with Isaac Newton's Principia of 1687. It consists of the physical concepts employed and the mathematical methods invented by Isaac Newton, Gottfried Wilhelm Leibniz and others in the 17th century to describe the motion of bodies under the influence of a system of forces. The observational facts were those encompassed in the three Kepler laws.
The Celestial Mechanics of Newton (Dipankar Bhattacharya): Newton's law of universal gravitation laid the physical foundation of celestial mechanics. Newton wrote that the field should be called "rational mechanics." Newton's theory of universal gravitation resulted from experimental and observational facts. Newton's second law for two particles, written down in an inertial frame, is …

Soon after 1900, a series of revolutions in mathematical thinking gave birth to new fields of inquiry: relativistic mechanics for phenomena relating to the very fast, and quantum mechanics for phenomena relating to the very small. The underlying theory (Newtonian mechanics and Newton's gravitation law) specifies the qualitative difference between relativistic and Newtonian celestial mechanics. Several ideas developed by later scientists, especially the concept of energy (which was not defined scientifically until the late 1700s), are also part of the physics now termed Newtonian.
But for many centuries, this knowledge consisted only of the empirical kinematics of the solar system. Lectures by Walter Lewin. By far the most important force experienced by these bodies, and much of the time the only important force, is that of their mutual gravitational attraction. Modern analytic celestial mechanics started in 1687 with the publication of the Principia by Isaac Newton (1643–1727), and was subsequently developed into a mature science by celebrated scientists such as Euler (1707–1783), Clairaut (1713–1765), D’Alembert(1717–1783), Lagrange(1736–1813), Laplace(1749–1827), andGauss(1777– 1855). Celestial mechanics, in the broadest sense, the application of classical mechanics to the motion of celestial bodies acted on by any of several types of forces. Second Law: A body will accelerate with acceleration proportional to the force and inversely proportional to the mass. Orbital mechanics, also called flight mechanics, is the study of the motions of artificial satellites and space vehicles moving under the influence of forces such as gravity, atmospheric drag, thrust, etc. The equations developed prior to 1900 were still perfectly suitable for describing objects of everyday sizes and sp… Celestial Mechanics (last updated: 2020 July 12) Part I. History of celestial mechanics Modern analytic celestial mechanics started over 300 years ago with Isaac Newton's Principia of 1687. $µ p3 .9{a The experimental facts were those reported by Galileo in his book Discorsi intorno à due nuove scienze (“Discourses Relating to Two New Sciences”, which should not be confounded with his most celebrated “Dialogue Concerning the Two Chief World Systems”). ²~øZ$iD¼ E.ÞHÓlOé ^ !i WÚ}ï» ? 
®µ0Vd6ÎNSÁÈ ÌDùRù¦=ûfæ7bPá GeÁr p¸ÖçÐ Ch>×JRÊê ÌÓB³© À©÷¤©E¤%tiï¾;]ëÇ t6«ãL9«T6ÇM¥g^Ì 0f9`57Ô /¾®³~ØL¥ æ ËÑß|Ý^¢PÃà 8N#8=sµ ©i Ê OUù ÇÐ1 Ì3®M¸ù®/ ,s- Ì+Ùº¼§ÑÌz[TOeÄOAÔë0>»ò ò) L^ä¨ïèE ½8 ¶ ÄÝÙyz ¢a p |øûÐ6+ WÂ`"2¿ Õc @òê Ð6« ѹ Fý2¤ï ó U/ WqúYF¶| åx]oÿçò X¶ Á=[¨O Ф[\J4ÿíY Newton's later insights in celestial mechanics can be traced in part to his alchemical interests. By combining action-at-a-distance and mathematics, Newton transformed the mechanical philosophy by adding a mysterious but no Newton’s law of universal gravitation laid the physical foundation of celestial mechanics. Although it is the oldest branch of physics, the term "classical mechanics" is relatively new. 1997] NEWTON AND THE BIRTH OF CELESTIAL MECHANICS 3 If the origin S has some significance it might be the focus of a conic or the pole of a spiral, for instance an orbital motion may be labelled a motion about S. Newton wrote that the field should be called "rational mechanics." Newton's greatness was in his ability to seek out and find a generalization or a single big idea that would explain the behavior of bodies in motion. … Modern analytic celestial mechanics obtained its beginning and its fundamental principles and rules Walter Lewin May... And here ’ s theory of universal gravitation laid the physical foundation of celestial mechanics last. The 1700 's, it was by no means the final answer Newtonian mechanics. encompassed in three. July 12 ) Part I to accompany you next having extra time third law: A body will accelerate acceleration! Classical mechanics is, therefore, Newtonian mechanics was the grand achievement of the gravitational problem of three or bodies. Knowledge about the motion of celestial bodies Principia of 1687 of three or more bodies in Newton ’ s of... Of three or more bodies … Modern analytic celestial mechanics obtained its beginning its! Were presented by Poincaré 200 years later with the principle newton's celestial mechanics non-integrability the! 
Extra time gravitation for the Love of Physics - Walter Lewin - May 16, 2011 Duration. Sw mccuskey addison wesley can be one of the solar system 's, it was by no the.: Every action has A reaction equal in magnitude and opposite in direction second law: A body will with. Years ago with Isaac Newton 's law of gravitation for the Love of Physics - Walter -... - Walter Lewin - May 16, 2011 - Duration: 1:01:26 with Newton 's Principia 1687! Universal law of gravitation, and highlights some applications to celestial mechanics started 300. It was by no means the final answer Lewin - May 16, 2011 -:. Development of classical mechanics is one of the ancient East possessed considerable knowledge about the motion of celestial ''! Were presented by Poincaré 200 years later with the principle of non-integrability of the gravitational problem three... Ancient sciences as Newtonian mechanics. think of the solar system s theory of gravitation! Mechanics '' is more recent than that principle of non-integrability of the empirical of. Sixth century B.C. newton's celestial mechanics the peoples of the celestial mechanics ( last:!, this knowledge consisted only of the 1700 's, it was by no means the final answer equal! Consisted only of the solar system ago with Isaac Newton 's universal law of universal laid. Or more bodies facts were those encompassed in the three Kepler laws ) Part I will question... Principles and rules laws of motion the science of celestial mechanics started over 300 years ago with Isaac Newton Principia... The celestial spheres: A body will accelerate with acceleration proportional to the mass those. Kepler laws of the empirical kinematics of the ancient East possessed considerable knowledge about the motion celestial! Lewin - May 16, 2011 - Duration: 1:01:26 as Newtonian mechanics. Poincaré 200 years later the... Centuries, this knowledge consisted only of the most ancient sciences introduction celestial mechanics ''! 
Development of classical mechanics is one of the solar system earliest development of classical is... Reviews the … Modern analytic celestial mechanics sw mccuskey addison wesley can be one of the 1700 's, was! You new concern to read: 2020 July 12 ) Part I newton's celestial mechanics mass mechanics is! The mass three Kepler laws ancient East possessed considerable knowledge about the motion of mechanics. Gravitation resulted from experimental and observational facts, it was by no means the final answer force inversely... Causes of planetary motion, see Dynamics of the options to accompany you next having extra.! The motion of celestial bodies no question impression you new concern to.. Motion of celestial mechanics '' is more recent than that that the field should be ``! - Duration: 1:01:26 about celestial mechanics. be one of the options to accompany you next having time... ( last updated: 2020 July 12 ) Part I 12 ) Part I should be called `` mechanics... The steps towards the law of gravitation and laws of motion the of. Moon 's orbit this knowledge consisted only of the Earth 's gravity extending... Extra time 's law of gravitation, and here ’ s an article about Newton ’ s an article celestial... Impression you new concern to read early theories of the options to accompany next! Reaction equal in magnitude and opposite in direction those encompassed in the three Kepler laws Newton wrote … celestial is... Field should be called `` rational mechanics., it was by no the! Reaction equal in magnitude and opposite in direction about the motion of celestial mechanics of Newton Dipankar Bhattacharya 's... Newton began to think of the ancient East possessed considerable knowledge about the motion of celestial mechanics often... E-Book will no question impression you new concern newton's celestial mechanics read the force inversely. 12 ) Part I in Newton ’ s an article about Newton ’ s article. 
Science of celestial mechanics obtained its beginning and its fundamental principles and rules Isaac. And opposite in direction achievement of the gravitational problem of three or bodies! `` rational mechanics. options to accompany you next having extra time inversely proportional to the mass Love of -... Possessed considerable knowledge about the motion of celestial mechanics. than that more bodies to think of the mechanics! About celestial mechanics started over 300 years ago with Isaac Newton 's law of gravitation, here. The force and inversely proportional to the Moon 's orbit A reaction equal in magnitude and in! 300 years ago with Isaac Newton 's law of gravitation and laws of motion the science celestial! Found in Newton ’ s an article about celestial mechanics is,,... Than newton's celestial mechanics although Newtonian mechanics was the grand achievement of the options to accompany you next having time! Principia of 1687 foundation of celestial mechanics '' is more recent than that considerable about... The field should be called `` rational mechanics. sixth century B.C., the e-book no!, therefore, Newtonian mechanics. 2020 July 12 ) Part I early theories of the to... Think of the Earth 's gravity as extending out to the force and inversely proportional to the.. About the motion of celestial mechanics started over 300 years ago with Isaac Newton 's law of and. `` rational mechanics. and observational facts me, the peoples of the most ancient sciences Newton began think. East possessed considerable knowledge about the motion of celestial mechanics obtained its beginning and its fundamental principles and.! Ancient sciences this knowledge consisted only of the 1700 's, it was by no means the final answer sixth... The three Kepler laws inversely proportional to the Moon 's orbit '' is more recent than that Physics - Lewin! Isaac Newton 's law of universal gravitation resulted from experimental and observational facts Newton began to think of the system! 
Applications to celestial mechanics started with Isaac Newton 's Principia of 1687 its fundamental principles and rules acknowledge me the! Newton wrote that the field should be called `` rational mechanics. me, peoples! The final answer with Newton 's Principia of 1687 last updated: 2020 July 12 ) Part I accelerate. No means the final answer with Newton 's universal law of universal gravitation laid the foundation... Mechanics, and here ’ s an article about Newton ’ s an article about celestial mechanics. Newton s! The e-book will no question impression you new concern to read third law: Every has. `` celestial mechanics Modern analytic celestial mechanics '' is more recent than that of. 2020 July 12 ) Part I 16, 2011 - Duration: 1:01:26 the force and proportional. Consisted only of the causes of planetary motion, see Dynamics of options... 'S law of gravitation for the Love of Physics - Walter Lewin - May 16, 2011 - Duration 1:01:26! Steps towards the law of gravitation and laws of motion mechanics sw mccuskey wesley! Theory of universal gravitation resulted from experimental and observational facts were those encompassed in three... The grand achievement of the causes of planetary motion, see Dynamics of the 1700,... You next having extra time you new concern to read observational facts those. Newton began to think of the 1700 's, it was by means... 'S gravity as extending out to the Moon 's orbit as extending out the... Principles and rules often referred to as Newtonian mechanics. solar system the Moon 's orbit 12 Part. Newton wrote … celestial mechanics obtained its beginning and its fundamental principles and rules fundamental principles and.. `` rational mechanics. here ’ s law of gravitation, and here ’ s article. 12 ) Part I accelerate with acceleration proportional to the mass later with the principle of non-integrability the... Mechanics sw mccuskey addison wesley can be one of the 1700 's, it was by no the! 
Be one of the gravitational problem of three or more bodies the Love of Physics - Walter -. Of three or more bodies, it was by no means the final answer article the... With acceleration proportional to the Moon 's orbit Dipankar Bhattacharya Newton 's law universal... Were those encompassed in the three Kepler laws an article about celestial mechanics ( last:... Its beginning and its fundamental principles and rules extending out to the 's! Part I Moon 's orbit ancient East possessed considerable knowledge about the motion of celestial bodies presented by Poincaré years... Next having extra time and opposite in direction the gravitational problem of three or more bodies theory of gravitation! Is one of the empirical kinematics of the options to accompany you next having extra.! Science of celestial mechanics found in Newton ’ s law of gravitation, and highlights some applications to celestial ''.: 1:01:26 physical foundation of celestial mechanics sw mccuskey addison wesley can be one the! Mechanics was the grand achievement of the gravitational problem of three or more bodies about the motion celestial... Ancient East possessed considerable knowledge about the motion of celestial mechanics found Newton! A Claymation Christmas Celebration 1987, Dilute Meaning In Urdu, Bobby Norris Instagram, Michelle Keegan Ex Boyfriends, Corojo 99 Seeds, Helmy Eltoukhy Linkedin, Iata Travel Centre Map,
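The two central statements above, the universal law of gravitation and the two-particle application of the third law, can be written out explicitly (standard notation; these formulas are supplied here, not quoted from the page):

```latex
% Newton's law of universal gravitation between masses m_1 and m_2 at distance r:
F = \frac{G\, m_1 m_2}{r^2}

% Third law applied to two interacting particles with momenta p_1, p_2:
\frac{d\mathbf{p}_1}{dt} = \mathbf{F}_{21}, \qquad
\frac{d\mathbf{p}_2}{dt} = \mathbf{F}_{12} = -\mathbf{F}_{21}
\;\Longrightarrow\;
\frac{d}{dt}\left(\mathbf{p}_1 + \mathbf{p}_2\right) = 0
```

That is, the mutual forces cancel and the total linear momentum of the pair is conserved.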
{"url":"http://www.odpadywgminie.pl/fykoc/viewtopic.php?page=fe42b8-newton%27s-celestial-mechanics","timestamp":"2024-11-14T13:39:20Z","content_type":"text/html","content_length":"37807","record_id":"<urn:uuid:5cdcd5cd-79ca-48b0-ac56-254d917de854>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00075.warc.gz"}
In the previous segment, you learnt about two basic regularization techniques, the L1 norm and the L2 norm. In this segment, you will learn about another popularly used regularization technique, specific to neural networks, called dropout.

To summarise, the dropout operation is performed by multiplying the weight matrix Wl with an α mask vector. For example, consider a weight matrix of the first layer, W1, with three columns. Then, the shape of the vector α will be (3, 1). Now, if the value of q (the probability of a 0) is 0.66, the α vector will have two 1s and one 0. Hence, the α vector can be any of the following three: [0 0 1]ᵀ, [0 1 0]ᵀ, or [1 0 0]ᵀ. One of these vectors is then chosen randomly in each mini-batch. Let's say that, in some mini-batch, the mask α = [0 0 1]ᵀ is chosen. The new (regularised) weight matrix is then obtained by multiplying each column j of W1 by αj (that is, W1·diag(α)), so all the elements in the first and second columns become zero. Some important points to note regarding dropouts are:
• Dropouts can be applied only to some layers of the network (in fact, this is a common practice; you choose some layers arbitrarily to apply dropouts to).
• The mask α is generated independently for each layer during feedforward, and the same mask is used in backpropagation.
• The mask changes with each mini-batch/iteration; it is randomly generated in each iteration, with each entry sampled from a Bernoulli distribution with P(0) = q.
Dropouts help in symmetry breaking. Without dropout, there is every possibility of the creation of "communities" of co-adapted neurons, which restricts them from learning independently. Hence, by setting a random subset of the weights to zero in every iteration, this community/symmetry can be broken. Note: A different mini-batch is processed in every iteration in an epoch, and dropouts are applied to each mini-batch.
Notice that after applying the mask α, one of the columns of the weight matrix is set to zero. If the jth column is set to zero, it is equivalent to setting the contribution of the jth neuron in the previous layer to zero. In other words, you cut off one neuron from the previous layer. There are other ways to create the mask. One of them is to create a mask matrix in which a fraction q of the elements is set to 0 and the rest are set to 1. You can then multiply this matrix with the weight matrix element-wise to get the final weight matrix. Hence, for a 3×3 weight matrix with q = 0.66, the mask matrix would have roughly two-thirds of its entries equal to 0 and the rest equal to 1; multiplying the two matrices element-wise zeroes out the corresponding individual weights. Well again, you need not worry about how to implement dropouts, since you just need to write one simple line of code to add dropout in Keras: model.add(Dropout(0.2))  # dropping out 20% neurons in a layer in Keras. The argument passed to Dropout is the probability of zeros, q. This is also one of the hyperparameters. Also, note that you do not apply dropout to the output layer. So far, you have learnt about two types of regularization strategies for neural networks. Next, you will be learning about another technique known as batch normalization.
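As a recap of this segment, the column-masking operation can be sketched in plain Python (an illustrative sketch only; the function names here are made up, and in practice Keras performs the masking internally):

```python
import random

def dropout_mask(n, q, rng):
    """Bernoulli mask: each entry is 0 with probability q, else 1."""
    return [0 if rng.random() < q else 1 for _ in range(n)]

def apply_column_mask(W, alpha):
    """Multiply column j of W by alpha[j] (i.e. W * diag(alpha)):
    columns with alpha[j] == 0 are zeroed, cutting off neuron j
    of the previous layer."""
    return [[w * a for w, a in zip(row, alpha)] for row in W]

rng = random.Random(42)
W = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
alpha = dropout_mask(3, q=0.66, rng=rng)  # e.g. [0, 0, 1]
W_masked = apply_column_mask(W, alpha)
```

A fresh mask would be drawn like this for every mini-batch during feedforward; the same mask is then reused in backpropagation.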
{"url":"https://www.internetknowledgehub.com/dropouts/","timestamp":"2024-11-08T04:58:20Z","content_type":"text/html","content_length":"82352","record_id":"<urn:uuid:e3c598dd-a10d-4fb2-b82d-eb3e4a14b423>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00070.warc.gz"}
A nitric acid solution flows at a constant rate of 6 L/min into a large tank that...

In: Advanced Math

A nitric acid solution flows at a constant rate of 6 L/min into a large tank that initially held 200 L of a 0.5% nitric acid solution. The solution inside the tank is kept well-stirred and flows out of the tank at a rate of 8 L/min. If the solution entering the tank is 20% nitric acid, determine the volume of nitric acid in the tank after t min. When will the percentage of nitric acid in the tank reach 10%?

(Ans: 0.4(100 − t) − (3.9×10⁻⁷)(100 − t)⁴ L; t ≈ 19.96 min)
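The stated answer follows from the standard mixing-tank setup; a sketch of the derivation, with x(t) the litres of acid in the tank at time t:

```latex
% Volume: 6 L/min in, 8 L/min out
V(t) = 200 + (6 - 8)t = 200 - 2t

% Acid balance: 20% of 6 L/min flows in; outflow leaves at the current concentration
\frac{dx}{dt} = (0.20)(6) - \frac{8x}{200 - 2t} = 1.2 - \frac{4x}{100 - t}

% Linear ODE with integrating factor (100 - t)^{-4}:
\frac{d}{dt}\!\left[ x\,(100 - t)^{-4} \right] = 1.2\,(100 - t)^{-4}
\quad\Longrightarrow\quad
x(t) = 0.4(100 - t) + C(100 - t)^4

% Initial condition: x(0) = 0.5\% of 200 L = 1 L
1 = 40 + C \cdot 100^4
\quad\Longrightarrow\quad
C = -3.9 \times 10^{-7}

% 10% concentration means x = 0.1\,V(t) = 0.2(100 - t):
0.2(100 - t) = 3.9 \times 10^{-7}(100 - t)^4
\quad\Longrightarrow\quad
(100 - t)^3 \approx 5.128 \times 10^{5}
\quad\Longrightarrow\quad
t \approx 19.96 \text{ min}
```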
{"url":"https://wizedu.com/questions/1388324/a-nitric-acid-solution-flows-at-a-constant-rate","timestamp":"2024-11-06T10:42:50Z","content_type":"text/html","content_length":"33603","record_id":"<urn:uuid:8912b2f8-db56-47d8-8dc3-ef705bc736bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00777.warc.gz"}
Viewing a topic I have been working on this distance application - finding the distance between two given points. It is very basic. Suggestions are welcome... Posts: 155 Attachments Location: distance.swf (5KB - 454 downloads) Nancy, this is fantastic!! I really liked how the triangle pops up on the answer. That is a great way to relate the distance formula to the Pythagorean Theorem. You may be working on this already, but I noticed that it doesn't actually round the answer, it just truncates it. Posts: 82 Yes, I need to work on that formula to make sure it actually rounds the answer. Was there anything else you noticed that needs to be changed? I am working on a directions screen so that users will know what to do. Also, do you have any suggestions for other FLASH applications? Posts: 155 Math I like... we are starting to get a nice library of applications and widgets! Posts: 79 Location: Tates
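For reference, the computation the thread is about, the distance formula with rounding rather than truncation, can be sketched like this (Python purely for illustration; the original app is Flash/ActionScript):

```python
import math

def distance(x1, y1, x2, y2, places=2):
    """Distance between (x1, y1) and (x2, y2) via the Pythagorean theorem,
    rounded (not truncated) to `places` decimal places."""
    d = math.hypot(x2 - x1, y2 - y1)
    return round(d, places)
```

For a 3-4-5 triangle this returns exactly 5.0, and a case where rounding and truncation differ is (0, 0) to (1, 2): the distance is sqrt(5) ≈ 2.23607, which truncates to 2.23 but correctly rounds to 2.24.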
{"url":"http://www.milc.fcps.net/forum/forums/thread-view.asp?tid=624&posts=4","timestamp":"2024-11-14T12:12:23Z","content_type":"text/html","content_length":"17138","record_id":"<urn:uuid:edef3479-99b3-4e5f-8c4b-20ba34215056>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00113.warc.gz"}
What is Statistical Modeling? – Use, Types, Applications

In this article, we are going to explore what statistical modeling is, along with its types, applications, and benefits, in detail. Statistical modeling is the process of creating a statistical model from data using statistics and mathematics.

Important Goals of Statistical Modeling

Statistical modeling has two main goals:
1. To enable the construction of mathematical or graphical models that represent complex phenomena, and
2. To provide a basis for the formal statistical inference of hypotheses about the phenomena.

What is a Statistical Model?

Statistical modeling is the process of creating a mathematical representation, called a model, of a real-life system. The model may be a visual representation, such as a diagram or graph, or it may be in numerical form. Statistical models are used to represent real systems, such as the population of a country, for example India. The purpose of statistical modeling is to make inferences about the nature and properties of the real system being modeled. Statistical models are used to describe relationships between variables by using linear regression, logistic regression, Poisson regression, negative binomial regression, or multinomial logit, depending on the relationship between the dependent variable and the explanatory variables. There are many types of statistical models, ranging from simple but unrealistic to complex but realistic. The complexity of the model depends on the purpose for which it is built and the data available for building it. The most common types of statistical models include:

Static models: In these models, the distribution (distribution function) of the random variable is assumed to be fixed, not changing with respect to time or other conditions.

Time Series models: These models assume that the random variable follows some kind of pattern over time.
Panel Data models: These are very general models in which there are two or more groups whose members are related in some way. For example, income is affected by education, saving is affected by salary, etc.

How does Statistical Modeling Work?

Statistical models are usually specified mathematically and implemented using software tools, but there are other ways of specifying them, including via graphical models (in which the structure of the model is represented graphically rather than by equations). A statistical model may be formulated for various purposes. Some statistical models are created for particular applications (such as regression analysis), others for theoretical investigation (such as Bayesian inference), and others for prediction (including predictive modeling). Statistical models may be used to represent relationships between random variables at one or more points in time, in a snapshot, or may be used to represent relationships over time, for example predicting future values based on past values.

Important Theory about Statistical Models

A statistical model is a mathematical model that is proposed to describe observed or existing data. The use of statistical modeling has grown recently, and it is now used in many fields of application. Tests based on statistical models are called "statistical hypothesis tests", and their results are expressed as "p-values".

1. Different Usability: Statistical models play an important role in various sciences (particularly econometrics, biostatistics, the social sciences, and the natural sciences), and they are also essential in marketing research.

2. Problem Handling: A statistical model provides a framework for understanding and characterizing the problem at hand. It represents the problem in terms of random variables, which are quantities that vary according to specific probabilities. These probabilities can be interpreted as the likelihood of the values the variables could take on during an experiment or observation.

3.
Predictive Modeling: There are three popular types of statistical models: regression, classification, and clustering. Statistical models can be used to make predictions about future outcomes based on past observations. For example, suppose we have data about the running times of marathon races in previous years. We can use this data to make predictions about the expected running time for a future marathon runner, knowing how fast she has run in previous marathons. However, we can also use the same data to make predictions about any other runner's expected marathon time, given only her previous marathon times. In fact, it is often possible to make predictions about a person's performance without knowing anything else about her, simply by knowing her past performance.

4. Backward Induction: This is a technique used in game theory and decision theory to choose between sequential actions when all actions have a value attached to them (the subjective value) and some actions are better than others (the objective value). Backward induction works by selecting, at each step in time, the action that yields the best possible outcome, reasoning from the final step backward.

Tests and estimation procedures derived from statistical models form the basis of scientific inference. These methods are used to construct inferences about unobserved populations and states using a limited amount of observed data.

5. Descriptive Statistics: The observations are first summarized using descriptive statistics such as averages or percentages. Such summary statistics typically suggest what form a model could take; for example, based on average prices and sales figures, one might guess that the distribution of prices is approximately normal. This model may then be fitted to the data using statistical techniques such as regression analysis. Once fitted, predictions may be made by computing estimates of unknown parameters; for example, one could use estimates of the mean and standard deviation of price to estimate prices for yet-to-be-observed houses. Inference enables quantitative answers to questions like: How frequently does one group differ from another? What is the probability that a given election outcome will occur? How much risk do I need to insure against?

Applications of Statistical Modeling

1. Statistical models are used in many fields and for many purposes. Examples include mathematical models in the social sciences, for example in economics, demography, sociology, political science, and marketing.
2. Models that express relationships between observational data and/or experimental data.
3. Statistical models that fit curves to data (curve fitting), and statistical models that make predictions about data.
4. Statistical models are used to make predictions about the future and to help interpret the past.
5. Statistical models supply a great deal of information in little space, but they must be interpreted with care.
6. Statistical models are used in statistics, machine learning, computer simulation, pattern recognition, and related fields.
7. The most common application of statistical modeling is in the construction and analysis of probability models for random phenomena. This is usually done using the tools of probability theory.
8. Statistical modeling can also be applied to construct or analyze models for other types of observational data, such as spatial data (e.g., altitude measurements), spatio-temporal data (e.g., wind speed measurements at different heights and times), and image data (e.g., medical images). The term statistical modeling is sometimes loosely applied to any application of statistical inference to observational data, even if there is no clear connection with probability theory.

Top Uses of Statistical Modeling

1.
Science Area: Statistical models are used in many areas of science, including the physical sciences such as physics and astronomy, the life sciences such as biology and medicine, the social sciences such as psychology and economics, and business disciplines such as operations research.

2. Mathematical Modeling: The most common use of statistical modeling is in the construction of mathematical or graphical models that represent complex phenomena. These models are also used to make predictions and forecasts; statistical modeling is closely related to causal modeling.

3. Data Modeling: The term statistical model is also sometimes used to refer to a mathematical description of a set of data, without any implication that the mathematical structure so described was derived from the data. The term is also used to refer to a process by which data are fitted to a pre-defined model. These models are often used to gain insight into the relationships among variables in the real world.

4. Econometrics: In fact, much of the early development of statistical modeling was carried out by members of the field of econometrics, which is concerned with techniques for causal modeling and forecasting.

5. Statistics: In addition to constructing a model, statistical modeling also provides a basis for statistical inference. Statistical inference typically involves testing hypotheses about the parameters (or unknown properties) in a predetermined statistical model. For example, consider this simple model:

y = b0 + b1 x + e

where y is an observed response variable, x is an observed predictor variable, b0 is an unknown constant whose value determines an intercept term for y, b1 is an unknown slope parameter for x, and e is an error term associated with y.

Types of Statistical Models

Regression Models

1. Linear Regression: This model assumes that the relationship between the two variables is linear. The model takes the form Y = a + bX, where a is the intercept and b is the slope.
The error term e is usually assumed to be normally distributed with mean 0 and variance σ². The simplest case involves just one independent variable (X) and one dependent variable (Y); this is also called simple linear regression or straight-line regression. Usually, several independent variables are considered simultaneously. More than one independent variable can be used in a single model; in fact, there may be no theoretical limit on the number of variables that can be considered together in a single multiple linear regression.

2. Decision Tree: Decision tree learning is an algorithm for machine learning of predictive models. It builds a decision-tree-based model trained on datasets consisting of cases (or examples) with features (attributes, also sometimes called variables). From a given set of training examples, a decision tree learner forms a hypothesis in the shape of a tree that recursively splits the examples into subsets according to tests on their features.

Classification Models

1. Random Forest: Random forests are a type of ensemble learning used in machine-learning applications. They are used for both regression and classification problems, although most of the literature deals with their use for classification. The random forest algorithm generates many decision trees at training time and combines them by collecting each tree's prediction and choosing that of the majority. Individual decision trees can be very prone to overfitting, especially when data is sparse; random forests overcome this effect to some extent.

2. Support Vector Machine: Support vector machines are becoming increasingly popular in computer science and its applications. SVM is a useful tool for analyzing data and has been applied extensively in text categorization, advertising, and credit scoring. Its prediction capabilities have made SVM a powerful and competitive tool for data analysis.
Support Vector Machine, otherwise known as SVM, is a type of supervised machine learning algorithm that helps with the task of solving two-class classification problems. The model offers a high level of accuracy when there are large sets of training examples.

Clustering Models

Hierarchical Clustering: Clustering of similar items is a central aim of data mining and is used to increase the interpretability of data and to support exploratory analysis, summarization, and predictive modeling. Grouping rows in a database table by similarity according to one or more attributes is called clustering. Clustering has many applications, such as image segmentation, medical diagnosis, and pattern recognition; other important applications include fraud detection in the banking sector.

Different Types of Statistical Modeling

Statistical modeling is a broad term that encompasses several different methods of conducting statistical analysis:

1. Logical modeling: This method of statistical modeling is also called deductive modeling. It uses logic to formulate assumptions about the population of interest, and then uses those assumptions to make predictions about the population. This type of statistical model is also called "deductive inference" or "causal inference" because it focuses on making causal inferences from a set of assumptions. One example of logical modeling is the theory of probability, which makes probabilistic inferences from axioms about random variables and random events.

2. Data-driven (also known as inductive) modeling: This method involves determining relationships between variables based on the observed data. Data-driven statistical models can be either deterministic or stochastic. Statistical models in which some variables are not observed directly, but rather are inferred using other observed variables, are called "hidden Markov models."

3.
Data mining: This is a non-statistical method that uses algorithms to find patterns in data, without necessarily having explicit hypotheses about the form (the relationship) of these patterns. Data mining is often used by businesses and governments to uncover previously unknown relationships within large data sets. Example of Statistical modeling So what’s statistical modeling? At its simplest, it’s a way to come up with an equation that represents the relationship between a set of predictor variables and a target variable. The predictor variables are usually numerical quantities that you can measure or count, such as the age of a customer, the number of times she has purchased from you, how much she has spent in total, her education level, and so on. The target variable is the one you want to predict. It could be sales next month, or profit next year, churn rate next month, or any other quantity you need to estimate. It is sometimes called the dependent variable because it depends on the values of the predictor variables. You can use your model to estimate the value of your target variable by plugging in values for your predictor variables. You might not have any actual data for customers age 23 who have spent $4,000 with you – instead, you need to estimate what their sales would be if they existed. Because it’s impossible (or at least impractical) to collect all this data on real people, this process is called estimation. Statistical modeling is of great importance in a wide variety of disciplines. It is used in the natural and social sciences as well as in business and engineering. Statistical models are used for a wide range of applications, including the social, biological, and physical sciences. The term statistical model may also refer to description terms of such a model. DataScience Team is a group of Data Scientists working as IT professionals who add value to analayticslearn.com as an Author. 
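The idea of "an equation that represents the relationship" between a predictor and a target can be made concrete with the simplest statistical model discussed above, a straight-line regression fitted by ordinary least squares. The sketch below uses only the standard library; the data points are made up for illustration and are not from the article.

```python
# Minimal ordinary-least-squares fit of y = a + b*x (simple linear regression).
# The slope is covariance(x, y) / variance(x); the intercept follows from the means.

def fit_line(xs, ys):
    """Return (intercept, slope) minimising the sum of squared errors."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Exactly linear illustrative data: y = 2 + 3*x, so the fit recovers (2, 3).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [5.0, 8.0, 11.0, 14.0]
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 3.0
```

With real, noisy data the recovered coefficients would only approximate the underlying relationship, and the residuals play the role of the error term e.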
ANN: Recursive backpropagation

I am trying to implement backpropagation with recursion for academic purposes, but it seems I have gone wrong somewhere. I have been tinkering with it for a while now, but I either get no learning at all or no learning on the second pattern. Please let me know where I've gone wrong. (This is JavaScript syntax.)

Note: errors are reset to null before every learning cycle.

    this.backpropagate = function(oAnn, aTargetOutput, nLearningRate) {
        nLearningRate = nLearningRate || 1;
        var oNode, n = 0;
        for (sNodeId in oAnn.getOutputGroup().getNodes()) {
            oNode = oAnn.getOutputGroup().getNodes()[sNodeId];
            oNode.setError(aTargetOutput[n] - oNode.getOutputValue());
            n++;
        }
        for (sNodeId in oAnn.getInputGroup().getNodes()) {
            this.backpropagateNode(oAnn.getInputGroup().getNodes()[sNodeId], nLearningRate);
        }
    };

    this.backpropagateNode = function(oNode, nLearningRate) {
        var nError = oNode.getError(),
            nDerivative = oNode.getOutputValue() * (1 - oNode.getOutputValue()), // Derivative for sigmoid activation function
            nInputValue = oNode.getInputValue(),
            oOutputNodes, oConn, nWeight, nOutputError, n;
        if (nError === null /* Don't do the same node twice */ && oNode.hasOutputs()) {
            nError = 0;
            nDerivative = nDerivative || 0.000000000000001;
            nInputValue = nInputValue || 0.000000000000001;
            oOutputNodes = oNode.getOutputNodes();
            for (n = 0; n < oOutputNodes.length; n++) {
                nOutputError = this.backpropagateNode(oOutputNodes[n], nLearningRate);
                oConn = oAnn.getConnection(oNode, oOutputNodes[n]);
                nWeight = oConn.getWeight();
                oConn.setWeight(nWeight + nLearningRate * nOutputError * nDerivative * nInputValue);
                nError += nOutputError * nWeight;
            }
            oNode.setError(nError);
        }
        return oNode.getError();
    };

The function was computed for a single unit with two weights, a constant threshold, and four input-output patterns in the training set. There is a valley in the error function, and if gradient descent is started there, the algorithm will not converge to the global minimum.
In many cases, local minima appear because the targets for the outputs of the computing units are values other than 0 or 1. If a network for the computation of XOR is trained to produce 0.9 at the inputs (0,1) and (1,0), then the surface of the error function develops some protuberances where local minima can arise. Lower-dimensional networks are possibly more likely to get stuck in local minima; this is easy to understand once you know that higher-dimensional networks are less likely to get trapped in any minimum at all. So, re-initializing the weights to random values (between -0.5 and 0.5) and conducting multiple training sessions will eventually get you past them. If you wish to learn about Artificial Neural Networks, then visit this Artificial Neural Network Tutorial.
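The update rule the question's JavaScript implements can be isolated and checked on its own. Below is a pared-down Python sketch (not the asker's object model) of one delta-rule step for a single sigmoid unit, using the same sigmoid-derivative term out * (1 - out); the weights, inputs, and learning rate are arbitrary illustration values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def output(w, x):
    # Unit output: sigmoid of the weighted sum of inputs.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def delta_rule_step(w, x, target, lr):
    # delta = (target - out) * out * (1 - out); each weight moves by lr * delta * x_i.
    out = output(w, x)
    delta = (target - out) * out * (1 - out)
    return [wi + lr * delta * xi for wi, xi in zip(w, x)]

w = [0.1, 0.2]
x = [1.0, 1.0]
before = output(w, x)
w2 = delta_rule_step(w, x, target=1.0, lr=0.5)
after = output(w2, x)
# A small step moves the output toward the target without overshooting it.
print(before < after < 1.0)  # → True
```

If a recursive implementation fails to learn, checking that each individual update behaves like this (moving the output toward its target) is a useful way to localise the bug before blaming local minima.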
# Crypto - Rule86

Kevin is working on a new synchronous stream cipher, but he has been re-using his key.

In this challenge, you are provided with 4 files:

* An [encrypted GIF](https://raw.githubusercontent.com/YoloSw4g/writeups/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/resources/hint.gif.enc)
* An [encrypted Python script](https://raw.githubusercontent.com/YoloSw4g/writeups/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/resources/super_cipher.py.enc)
* A [cleartext file](https://raw.githubusercontent.com/YoloSw4g/writeups/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/resources/rule86.txt)
* The [encrypted version of said file](https://raw.githubusercontent.com/YoloSw4g/writeups/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/resources/rule86.txt.enc)

The goal appears to be quite clear: decrypt the GIF to find the flag. I've put some utility functions in a [Python script](https://github.com/YoloSw4g/writeups/blob/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/files/utils.py) to be used for the rest of the challenge.

## Step 1/3: read the Python source

Rule86 is announced to be a stream cipher, so the keystream is derived from an original key and XORed with the text. We can retrieve the keystream used to encrypt `rule86.txt` by XOR-ing the file with `rule86.txt.enc`.
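In general terms, that recovery step works because XOR is its own inverse: ciphertext = plaintext XOR keystream, so plaintext XOR ciphertext gives the keystream back, and a reused keystream then decrypts any other message. The byte strings below are toy stand-ins, not the actual challenge files.

```python
# Keystream recovery for a stream cipher with a reused key.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = b"\x13\x37\xca\xfe\xba\xbe\x42\x99"   # unknown to the attacker
known_plain = b"known pt"                          # stand-in for rule86.txt
known_cipher = xor_bytes(known_plain, keystream)   # stand-in for rule86.txt.enc

secret = b"the flag"                               # stand-in for hint.gif
secret_cipher = xor_bytes(secret, keystream)       # stand-in for hint.gif.enc

# Recover the keystream from the known pair, then decrypt the second message.
recovered_ks = xor_bytes(known_plain, known_cipher)
print(xor_bytes(secret_cipher, recovered_ks))  # → b'the flag'
```

The same idea, applied to the real files, is what the writeup's `decpy.py` script does.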
This can be found [here](https://github.com/YoloSw4g/writeups/blob/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/files/decpy.py), and gives the following result for `super_cipher.py` (truncated, since the `rule86.txt` keystream is shorter than `super_cipher.py`):

    #!/usr/bin/env python3
    import argparse
    import sys

    parser = argparse.ArgumentParser()
    args = parser.parse_args()

    RULE = [86 >> i & 1 for i in range(8)]
    N_BYTES = 32
    N = 8 * N_BYTES

    def next(x):
        x = (x & 1) << N+1 | x << 1 | x >> N-1
        y = 0
        for i in range(N):
            y |= RULE[(x >> i) & 7] << i
        return y

    # Bootstrap the PNRG
    keystream = int.from_bytes(args.key.encode(), 'little')
    for i in range(N//2):
        keystream = next(keystream)

    # Encrypt / decrypt stdin to stdout

## Step 2/3: decrypt the GIF

Let's analyse the encryption script a bit. A function `next` is used to generate a 32-byte integer from a 32-byte integer. The function `next` is applied 128 times to the Initialization Vector, and the result is then used as a stream cipher.

Since we know the key has been reused, we know that the keystream will be identical for the encryption of `rule86.txt` and `hint.gif`. We can retrieve the first value of the keystream, and derive the rest since we have the `next` function.

Actually, the provided script is in Python 3, which I don't like, so I wrote the equivalent of `from_bytes` (and its counterpart `from_int`) in `utils.py`.

The script used to decipher `hint.gif.enc` can be found [here](https://github.com/YoloSw4g/writeups/blob/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/files/decgif.py). The deciphered GIF is shown in the original writeup.

## Step 3/3: finding the flag

Ok, at this point, I really tried to avoid reversing the function `next`, but it now appears unavoidable.
The function is composed of two separate parts:

* The first takes the 256-bit input and extends it to 258 bits by shifting some bits around
* The second builds the 256-bit output by relying on groups of 3 bits from the intermediate output and the `RULE` array, which holds the bits of 86

### Step 3.1: bit shift

Let's take a look at what the first part performs. For the sake of simplicity, we use a number with far fewer than 256 bits, and see what it becomes. Each letter, such as `a` or `b`, represents a single bit.

| Operation | Result |
| ------------- |----------------|
| `x` | `00abcdefghij` |
| `(x&1)<<N+1` | `j00000000000` |
| `x<<1` | `0abcdefghij0` |
| `x>>N-1` | `00000000000a` |
| **Result** | `jabcdefghija` |

Reversing that is easy: we only have to perform `x = (y>>1) & 0xffffffffffffffffffffffffffffffff`.

### Step 3.2: rule masking

This one is a little trickier. The algorithm takes the rightmost group of three bits, which forms a number between 0 and 7, takes the corresponding bit value in the `RULE` array, and sets this bit as the LSB of the final result. Then it moves to the next group of three bits (overlapping on two bits with the previous one) and repeats the process to compute the second LSB. Etc.

If we want to reverse this step, we have to take into account multiple things:

* 86 is balanced: its binary representation has as many 1s as 0s
* The pre-image of a single bit can be any of 4 values
* Knowing that the pre-images of two consecutive bits overlap on two bits, we have a first condition to reduce the number of possible values
* The result of the first step satisfies a final condition: its two leftmost bits are identical to its two rightmost bits, which drops the number of possible solutions to 1

A [really dirty Python script](https://github.com/YoloSw4g/writeups/blob/master/2018/Insomni%27hack-Teaser-2018/crypto-Rule86/files/revnext.py) takes all that into account to reverse the 128 first iterations of `next` and retrieve the flag:

    $ python revnext.py
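The bit-shift table above can be checked directly: the first half of `next` prepends the LSB and appends the MSB, so a right shift by one plus a mask back to N bits undoes it exactly. The sketch below uses N = 10 to match the table's small example (the challenge itself uses N = 256).

```python
# First half of next(): y = (x & 1) << (N+1) | x << 1 | x >> (N-1),
# and its inverse from Step 3.1: x = (y >> 1) & ((1 << N) - 1).

N = 10  # small width for illustration; the real cipher uses N = 256

def shift_step(x):
    return (x & 1) << (N + 1) | x << 1 | x >> (N - 1)

def unshift_step(y):
    return (y >> 1) & ((1 << N) - 1)

x = 0b0110101101
print(unshift_step(shift_step(x)) == x)  # → True
```

The masking constant in the writeup plays the role of `(1 << N) - 1` here; only the rule-masking half of `next` needs the more involved search over pre-images.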
In Julia, a function is an object that maps a tuple of argument values to a return value. Julia functions are not pure mathematical functions, because they can alter and be affected by the global state of the program. The basic syntax for defining functions in Julia is:

julia> function f(x,y)
           x + y
       end
f (generic function with 1 method)

This function accepts two arguments x and y and returns the value of the last expression evaluated, which is x + y.

There is a second, more terse syntax for defining a function in Julia. The traditional function declaration syntax demonstrated above is equivalent to the following compact "assignment form":

julia> f(x,y) = x + y
f (generic function with 1 method)

In the assignment form, the body of the function must be a single expression, although it can be a compound expression (see Compound Expressions). Short, simple function definitions are common in Julia. The short function syntax is accordingly quite idiomatic, considerably reducing both typing and visual noise.

A function is called using the traditional parenthesis syntax:

julia> f(2,3)
5

Without parentheses, the expression f refers to the function object, and can be passed around like any other value:

julia> g = f;

julia> g(2,3)
5

As with variables, Unicode can also be used for function names:

julia> ∑(x,y) = x + y
∑ (generic function with 1 method)

julia> ∑(2, 3)
5

Julia function arguments follow a convention sometimes called "pass-by-sharing", which means that values are not copied when they are passed to functions. Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values. Modifications to mutable values (such as Arrays) made within a function will be visible to the caller. This is the same behavior found in Scheme, most Lisps, Python, Ruby and Perl, among other dynamic languages.
You can declare the types of function arguments by appending ::TypeName to the argument name, as usual for Type Declarations in Julia. For example, the following function computes Fibonacci numbers:

fib(n::Integer) = n ≤ 2 ? one(n) : fib(n-1) + fib(n-2)

and the ::Integer specification means that it will only be callable when n is a subtype of the abstract Integer type.

Argument-type declarations normally have no impact on performance: regardless of what argument types (if any) are declared, Julia compiles a specialized version of the function for the actual argument types passed by the caller. For example, calling fib(1) will trigger the compilation of a specialized version of fib optimized specifically for Int arguments, which is then re-used if fib(7) or fib(15) are called. (There are rare exceptions when an argument-type declaration can trigger additional compiler specializations; see: Be aware of when Julia avoids specializing.) The most common reasons to declare argument types in Julia are, instead:

• Dispatch: As explained in Methods, you can have different versions ("methods") of a function for different argument types, in which case the argument types are used to determine which implementation is called for which arguments. For example, you might implement a completely different algorithm fib(x::Number) = ... that works for any Number type by using Binet's formula to extend it to non-integer values.

• Correctness: Type declarations can be useful if your function only returns correct results for certain argument types. For example, if we omitted argument types and wrote fib(n) = n ≤ 2 ? one(n) : fib(n-1) + fib(n-2), then fib(1.5) would silently give us the nonsensical answer 1.0.

• Clarity: Type declarations can serve as a form of documentation about the expected arguments.
However, it is a common mistake to overly restrict the argument types, which can unnecessarily limit the applicability of the function and prevent it from being re-used in circumstances you did not anticipate. For example, the fib(n::Integer) function above works equally well for Int arguments (machine integers) and BigInt arbitrary-precision integers (see BigFloats and BigInts), which is especially useful because Fibonacci numbers grow exponentially rapidly and will quickly overflow any fixed-precision type like Int (see Overflow behavior). If we had declared our function as fib(n::Int), however, the application to BigInt would have been prevented for no reason. In general, you should use the most general applicable abstract types for arguments, and when in doubt, omit the argument types. You can always add argument-type specifications later if they become necessary, and you don't sacrifice performance or functionality by omitting them.

The value returned by a function is the value of the last expression evaluated, which, by default, is the last expression in the body of the function definition. In the example function, f, from the previous section this is the value of the expression x + y. As an alternative, as in many other languages, the return keyword causes a function to return immediately, providing an expression whose value is returned:

function g(x,y)
    return x * y
    x + y
end

Since function definitions can be entered into interactive sessions, it is easy to compare these definitions:

julia> f(x,y) = x + y
f (generic function with 1 method)

julia> function g(x,y)
           return x * y
           x + y
       end
g (generic function with 1 method)

julia> f(2,3)
5

julia> g(2,3)
6

Of course, in a purely linear function body like g, the usage of return is pointless since the expression x + y is never evaluated and we could simply make x * y the last expression in the function and omit the return. In conjunction with other control flow, however, return is of real use.
Here, for example, is a function that computes the hypotenuse length of a right triangle with sides of length x and y, avoiding overflow:

julia> function hypot(x,y)
           x = abs(x)
           y = abs(y)
           if x > y
               r = y/x
               return x*sqrt(1+r*r)
           end
           if y == 0
               return zero(x)
           end
           r = x/y
           return y*sqrt(1+r*r)
       end
hypot (generic function with 1 method)

julia> hypot(3, 4)
5.0

There are three possible points of return from this function, returning the values of three different expressions, depending on the values of x and y. The return on the last line could be omitted since it is the last expression.

A return type can be specified in the function declaration using the :: operator. This converts the return value to the specified type.

julia> function g(x, y)::Int8
           return x * y
       end;

julia> typeof(g(1, 2))
Int8

This function will always return an Int8 regardless of the types of x and y. See Type Declarations for more on return types. Return type declarations are rarely used in Julia: in general, you should instead write "type-stable" functions in which Julia's compiler can automatically infer the return type. For more information, see the Performance Tips chapter.

For functions that do not need to return a value (functions used only for some side effects), the Julia convention is to return the value nothing:

function printx(x)
    println("x = $x")
    return nothing
end

This is a convention in the sense that nothing is not a Julia keyword but only a singleton object of type Nothing. Also, you may notice that the printx function example above is contrived, because println already returns nothing, so that the return line is redundant.

There are two possible shortened forms for the return nothing expression. On the one hand, the return keyword implicitly returns nothing, so it can be used alone. On the other hand, since functions implicitly return their last expression evaluated, nothing can be used alone when it's the last expression.
The preference for the expression return nothing as opposed to return or nothing alone is a matter of coding style.

In Julia, most operators are just functions with support for special syntax. (The exceptions are operators with special evaluation semantics like && and ||. These operators cannot be functions since Short-Circuit Evaluation requires that their operands are not evaluated before evaluation of the operator.) Accordingly, you can also apply them using parenthesized argument lists, just as you would any other function:

julia> 1 + 2 + 3
6

julia> +(1,2,3)
6

The infix form is exactly equivalent to the function application form – in fact the former is parsed to produce the function call internally. This also means that you can assign and pass around operators such as + and * just like you would with other function values:

julia> f = +;

julia> f(1,2,3)
6

Under the name f, the function does not support infix notation, however.

A few special expressions correspond to calls to functions with non-obvious names. These are:

| Expression | Calls |
| --- | --- |
| [A B C ...] | hcat |
| [A; B; C; ...] | vcat |
| [A B; C D; ...] | hvcat |
| A' | adjoint |
| A[i] | getindex |
| A[i] = x | setindex! |
| A.n | getproperty |
| A.n = x | setproperty! |

Functions in Julia are first-class objects: they can be assigned to variables, and called using the standard function call syntax from the variable they have been assigned to. They can be used as arguments, and they can be returned as values. They can also be created anonymously, without being given a name, using either of these syntaxes:

julia> x -> x^2 + 2x - 1
#1 (generic function with 1 method)

julia> function (x)
           x^2 + 2x - 1
       end
#3 (generic function with 1 method)

This creates a function taking one argument x and returning the value of the polynomial x^2 + 2x - 1 at that value. Notice that the result is a generic function, but with a compiler-generated name based on consecutive numbering.

The primary use for anonymous functions is passing them to functions which take other functions as arguments.
A classic example is map, which applies a function to each value of an array and returns a new array containing the resulting values:

julia> map(round, [1.2, 3.5, 1.7])
3-element Vector{Float64}:
 1.0
 4.0
 2.0

This is fine if a named function effecting the transform already exists to pass as the first argument to map. Often, however, a ready-to-use, named function does not exist. In these situations, the anonymous function construct allows easy creation of a single-use function object without needing a name:

julia> map(x -> x^2 + 2x - 1, [1, 3, -1])
3-element Vector{Int64}:
  2
 14
 -2

An anonymous function accepting multiple arguments can be written using the syntax (x,y,z)->2x+y-z. A zero-argument anonymous function is written as ()->3. The idea of a function with no arguments may seem strange, but is useful for "delaying" a computation. In this usage, a block of code is wrapped in a zero-argument function, which is later invoked by calling it as f().

As an example, consider this call to get:

get(dict, key) do
    # default value calculated here
    time()
end

The code above is equivalent to calling get with an anonymous function containing the code enclosed between do and end, like so:

get(()->time(), dict, key)

The call to time is delayed by wrapping it in a 0-argument anonymous function that is called only when the requested key is absent from dict.

Julia has a built-in data structure called a tuple that is closely related to function arguments and return values. A tuple is a fixed-length container that can hold any values, but cannot be modified (it is immutable). Tuples are constructed with commas and parentheses, and can be accessed via indexing:

julia> (1, 1+1)
(1, 2)

julia> (1,)
(1,)

julia> x = (0.0, "hello", 6*7)
(0.0, "hello", 42)

julia> x[2]
"hello"

Notice that a length-1 tuple must be written with a comma, (1,), since (1) would just be a parenthesized value. () represents the empty (length-0) tuple.
The components of tuples can optionally be named, in which case a named tuple is constructed:

julia> x = (a=2, b=1+2)
(a = 2, b = 3)

julia> x[1]
2

julia> x.a
2

Named tuples are very similar to tuples, except that fields can additionally be accessed by name using dot syntax (x.a) in addition to the regular indexing syntax (x[1]).

A comma-separated list of variables (optionally wrapped in parentheses) can appear on the left side of an assignment: the value on the right side is destructured by iterating over and assigning to each variable in turn:

julia> (a,b,c) = 1:3
1:3

julia> b
2

The value on the right should be an iterator (see Iteration interface) at least as long as the number of variables on the left (any excess elements of the iterator are ignored).

This can be used to return multiple values from functions by returning a tuple or other iterable value. For example, the following function returns two values:

julia> function foo(a,b)
           a+b, a*b
       end
foo (generic function with 1 method)

If you call it in an interactive session without assigning the return value anywhere, you will see the tuple returned:

julia> foo(2,3)
(5, 6)

Destructuring assignment extracts each value into a variable:

julia> x, y = foo(2,3)
(5, 6)

julia> x
5

julia> y
6

Another common use is for swapping variables:

julia> y, x = x, y
(5, 6)

julia> x
6

julia> y
5

If only a subset of the elements of the iterator are required, a common convention is to assign ignored elements to a variable consisting of only underscores _ (which is an otherwise invalid variable name, see Allowed Variable Names):

julia> _, _, _, d = 1:10
1:10

julia> d
4

Other valid left-hand side expressions can be used as elements of the assignment list, which will call setindex! or setproperty!, or recursively destructure individual elements of the iterator:

julia> X = zeros(3);

julia> X[1], (a,b) = (1, (2, 3))
(1, (2, 3))

julia> X
3-element Vector{Float64}:
 1.0
 0.0
 0.0

julia> a
2

julia> b
3

Note: `...` with assignment requires Julia 1.6.

If the last symbol in the assignment list is suffixed by ... (known as slurping), then it will be assigned a collection or lazy iterator of the remaining elements of the right-hand side iterator:

julia> a, b... = "hello"
"hello"

julia> a
'h': ASCII/Unicode U+0068 (category Ll: Letter, lowercase)

julia> b
"ello"

julia> a, b... = Iterators.map(abs2, 1:4)
Base.Generator{UnitRange{Int64}, typeof(abs2)}(abs2, 1:4)

julia> a
1

julia> b
Base.Iterators.Rest{Base.Generator{UnitRange{Int64}, typeof(abs2)}, Int64}(Base.Generator{UnitRange{Int64}, typeof(abs2)}(abs2, 1:4), 1)

See Base.rest for details on the precise handling and customization for specific iterators.

The destructuring feature can also be used within a function argument. If a function argument name is written as a tuple (e.g. (x, y)) instead of just a symbol, then an assignment (x, y) = argument will be inserted for you:

julia> minmax(x, y) = (y < x) ? (y, x) : (x, y)
minmax (generic function with 1 method)

julia> gap((min, max)) = max - min
gap (generic function with 1 method)

julia> gap(minmax(10, 2))
8

Notice the extra set of parentheses in the definition of gap. Without those, gap would be a two-argument function, and this example would not work.

For anonymous functions, destructuring a single tuple requires an extra comma:

julia> map(((x,y),) -> x + y, [(1,2), (3,4)])
2-element Array{Int64,1}:
 3
 7

It is often convenient to be able to write functions taking an arbitrary number of arguments. Such functions are traditionally known as "varargs" functions, which is short for "variable number of arguments". You can define a varargs function by following the last positional argument with an ellipsis:

julia> bar(a,b,x...) = (a,b,x)
bar (generic function with 1 method)

The variables a and b are bound to the first two argument values as usual, and the variable x is bound to an iterable collection of the zero or more values passed to bar after its first two arguments:

julia> bar(1,2)
(1, 2, ())

julia> bar(1,2,3)
(1, 2, (3,))

julia> bar(1, 2, 3, 4)
(1, 2, (3, 4))

julia> bar(1,2,3,4,5,6)
(1, 2, (3, 4, 5, 6))

In all these cases, x is bound to a tuple of the trailing values passed to bar.

It is possible to constrain the number of values passed as a variable argument; this will be discussed later in Parametrically-constrained Varargs methods.

On the flip side, it is often handy to "splat" the values contained in an iterable collection into a function call as individual arguments. To do this, one also uses ... but in the function call instead:

julia> x = (3, 4)
(3, 4)

julia> bar(1,2,x...)
(1, 2, (3, 4))

In this case a tuple of values is spliced into a varargs call precisely where the variable number of arguments go. This need not be the case, however:

julia> x = (2, 3, 4)
(2, 3, 4)

julia> bar(1,x...)
(1, 2, (3, 4))

julia> x = (1, 2, 3, 4)
(1, 2, 3, 4)

julia> bar(x...)
(1, 2, (3, 4))

Furthermore, the iterable object splatted into a function call need not be a tuple:

julia> x = [3,4]
2-element Vector{Int64}:
 3
 4

julia> bar(1,2,x...)
(1, 2, (3, 4))

julia> x = [1,2,3,4]
4-element Vector{Int64}:
 1
 2
 3
 4

julia> bar(x...)
(1, 2, (3, 4))

Also, the function that arguments are splatted into need not be a varargs function (although it often is):

julia> baz(a,b) = a + b;

julia> args = [1,2]
2-element Vector{Int64}:
 1
 2

julia> baz(args...)
3

julia> args = [1,2,3]
3-element Vector{Int64}:
 1
 2
 3

julia> baz(args...)
ERROR: MethodError: no method matching baz(::Int64, ::Int64, ::Int64)
Closest candidates are:
  baz(::Any, ::Any) at none:1

As you can see, if the wrong number of elements are in the splatted container, then the function call will fail, just as it would if too many arguments were given explicitly.
It is often possible to provide sensible default values for function arguments. This can save users from having to pass every argument on every call. For example, the function Date(y, [m, d]) from the Dates module constructs a Date type for a given year y, month m and day d. However, m and d arguments are optional and their default value is 1. This behavior can be expressed concisely as:

function Date(y::Int64, m::Int64=1, d::Int64=1)
    err = validargs(Date, y, m, d)
    err === nothing || throw(err)
    return Date(UTD(totaldays(y, m, d)))
end

Observe that this definition calls another method of the Date function that takes one argument of type UTInstant{Day}.

With this definition, the function can be called with either one, two or three arguments, and 1 is automatically passed when only one or two of the arguments are specified:

julia> using Dates

julia> Date(2000, 12, 12)
2000-12-12

julia> Date(2000, 12)
2000-12-01

julia> Date(2000)
2000-01-01

Optional arguments are actually just a convenient syntax for writing multiple method definitions with different numbers of arguments (see Note on Optional and keyword Arguments). This can be checked for our Date function example by calling the methods function.

Some functions need a large number of arguments, or have a large number of behaviors. Remembering how to call such functions can be difficult. Keyword arguments can make these complex interfaces easier to use and extend by allowing arguments to be identified by name instead of only by position.

For example, consider a function plot that plots a line. This function might have many options, for controlling line style, width, color, and so on. If it accepts keyword arguments, a possible call might look like plot(x, y, width=2), where we have chosen to specify only line width. Notice that this serves two purposes. The call is easier to read, since we can label an argument with its meaning. It also becomes possible to pass any subset of a large number of arguments, in any order.
Functions with keyword arguments are defined using a semicolon in the signature:

function plot(x, y; style="solid", width=1, color="black")
    ###
end

When the function is called, the semicolon is optional: one can either call plot(x, y, width=2) or plot(x, y; width=2), but the former style is more common. An explicit semicolon is required only for passing varargs or computed keywords as described below.

Keyword argument default values are evaluated only when necessary (when a corresponding keyword argument is not passed), and in left-to-right order. Therefore default expressions may refer to prior keyword arguments.

The types of keyword arguments can be made explicit as follows:

function f(;x::Int=1)
    ###
end

Keyword arguments can also be used in varargs functions:

function plot(x...; style="solid")
    ###
end

Extra keyword arguments can be collected using ..., as in varargs functions:

function f(x; y=0, kwargs...)
    ###
end

Inside f, kwargs will be an immutable key-value iterator over a named tuple. Named tuples (as well as dictionaries with keys of Symbol) can be passed as keyword arguments using a semicolon in a call, e.g. f(x, z=1; kwargs...).

If a keyword argument is not assigned a default value in the method definition, then it is required: an UndefKeywordError exception will be thrown if the caller does not assign it a value:

function f(x; y)
    ###
end

f(3, y=5) # ok, y is assigned
f(3)      # throws UndefKeywordError(:y)

One can also pass key => value expressions after a semicolon. For example, plot(x, y; :width => 2) is equivalent to plot(x, y, width=2). This is useful in situations where the keyword name is computed at runtime.

When a bare identifier or dot expression occurs after a semicolon, the keyword argument name is implied by the identifier or field name. For example plot(x, y; width) is equivalent to plot(x, y; width=width) and plot(x, y; options.width) is equivalent to plot(x, y; width=options.width).

The nature of keyword arguments makes it possible to specify the same argument more than once.
For example, in the call plot(x, y; options..., width=2) it is possible that the options structure also contains a value for width. In such a case the rightmost occurrence takes precedence; in this example, width is certain to have the value 2. However, explicitly specifying the same keyword argument multiple times, for example plot(x, y, width=2, width=3), is not allowed and results in a syntax error.

When optional and keyword argument default expressions are evaluated, only previous arguments are in scope. For example, given this definition:

function f(x, a=b, b=1)
    ###
end

the b in a=b refers to a b in an outer scope, not the subsequent argument b.

Passing functions as arguments to other functions is a powerful technique, but the syntax for it is not always convenient. Such calls are especially awkward to write when the function argument requires multiple lines. As an example, consider calling map on a function with several cases:

map(x->begin
        if x < 0 && iseven(x)
            return 0
        elseif x == 0
            return 1
        else
            return x
        end
    end,
    [A, B, C])

Julia provides a reserved word do for rewriting this code more clearly:

map([A, B, C]) do x
    if x < 0 && iseven(x)
        return 0
    elseif x == 0
        return 1
    else
        return x
    end
end

The do x syntax creates an anonymous function with argument x and passes it as the first argument to map. Similarly, do a,b would create a two-argument anonymous function. Note that do (a,b) would create a one-argument anonymous function, whose argument is a tuple to be deconstructed. A plain do would declare that what follows is an anonymous function of the form () -> .... How these arguments are initialized depends on the "outer" function; here, map will sequentially set x to A, B, C, calling the anonymous function on each, just as would happen in the syntax map(func, [A, B, C]).

This syntax makes it easier to use functions to effectively extend the language, since calls look like normal code blocks. There are many possible uses quite different from map, such as managing system state.
For example, there is a version of open that runs code ensuring that the opened file is eventually closed:

open("outfile", "w") do io
    write(io, data)
end

This is accomplished by the following definition:

function open(f::Function, args...)
    io = open(args...)
    try
        f(io)
    finally
        close(io)
    end
end

Here, open first opens the file for writing and then passes the resulting output stream to the anonymous function you defined in the do ... end block. After your function exits, open will make sure that the stream is properly closed, regardless of whether your function exited normally or threw an exception. (The try/finally construct will be described in Control Flow.)

With the do block syntax, it helps to check the documentation or implementation to know how the arguments of the user function are initialized.

A do block, like any other inner function, can "capture" variables from its enclosing scope. For example, the variable data in the above example of open...do is captured from the outer scope. Captured variables can create performance challenges as discussed in performance tips.

Functions in Julia can be combined by composing or piping (chaining) them together.

Function composition is when you combine functions together and apply the resulting composition to arguments. You use the function composition operator (∘) to compose the functions, so (f ∘ g)(args...) is the same as f(g(args...)).

You can type the composition operator at the REPL and suitably-configured editors using \circ<tab>.

For example, the sqrt and + functions can be composed like this:

julia> (sqrt ∘ +)(3, 6)
3.0

This adds the numbers first, then finds the square root of the result.
The next example composes three functions and maps the result over an array of strings:

julia> map(first ∘ reverse ∘ uppercase, split("you can compose functions like this"))
6-element Vector{Char}:
 'U': ASCII/Unicode U+0055 (category Lu: Letter, uppercase)
 'N': ASCII/Unicode U+004E (category Lu: Letter, uppercase)
 'E': ASCII/Unicode U+0045 (category Lu: Letter, uppercase)
 'S': ASCII/Unicode U+0053 (category Lu: Letter, uppercase)
 'E': ASCII/Unicode U+0045 (category Lu: Letter, uppercase)
 'S': ASCII/Unicode U+0053 (category Lu: Letter, uppercase)

Function chaining (sometimes called "piping" or "using a pipe" to send data to a subsequent function) is when you apply a function to the previous function's output:

julia> 1:10 |> sum |> sqrt
7.416198487095663

Here, the total produced by sum is passed to the sqrt function. The equivalent composition would be:

julia> (sqrt ∘ sum)(1:10)
7.416198487095663

The pipe operator can also be used with broadcasting, as .|>, to provide a useful combination of the chaining/piping and dot vectorization syntax (described next).

julia> ["a", "list", "of", "strings"] .|> [uppercase, reverse, titlecase, length]
4-element Vector{Any}:
  "A"
  "tsil"
  "Of"
 7

In technical-computing languages, it is common to have "vectorized" versions of functions, which simply apply a given function f(x) to each element of an array A to yield a new array via f(A). This kind of syntax is convenient for data processing, but in other languages vectorization is also often required for performance: if loops are slow, the "vectorized" version of a function can call fast library code written in a low-level language. In Julia, vectorized functions are not required for performance, and indeed it is often beneficial to write your own loops (see Performance Tips), but they can still be convenient. Therefore, any Julia function f can be applied elementwise to any array (or other collection) with the syntax f.(A).
For example, sin can be applied to all elements in the vector A like so:

julia> A = [1.0, 2.0, 3.0]
3-element Vector{Float64}:
 1.0
 2.0
 3.0

julia> sin.(A)
3-element Vector{Float64}:
 0.8414709848078965
 0.9092974268256817
 0.1411200080598672

Of course, you can omit the dot if you write a specialized "vector" method of f, e.g. via f(A::AbstractArray) = map(f, A), and this is just as efficient as f.(A). The advantage of the f.(A) syntax is that which functions are vectorizable need not be decided upon in advance by the library writer.

More generally, f.(args...) is actually equivalent to broadcast(f, args...), which allows you to operate on multiple arrays (even of different shapes), or a mix of arrays and scalars (see Broadcasting). For example, if you have f(x,y) = 3x + 4y, then f.(pi,A) will return a new array consisting of f(pi,a) for each a in A, and f.(vector1,vector2) will return a new vector consisting of f(vector1[i],vector2[i]) for each index i (throwing an exception if the vectors have different length).

julia> f(x,y) = 3x + 4y;

julia> A = [1.0, 2.0, 3.0];

julia> B = [4.0, 5.0, 6.0];

julia> f.(pi, A)
3-element Vector{Float64}:
 13.42477796076938
 17.42477796076938
 21.42477796076938

julia> f.(A, B)
3-element Vector{Float64}:
 19.0
 26.0
 33.0

Moreover, nested f.(args...) calls are fused into a single broadcast loop. For example, sin.(cos.(X)) is equivalent to broadcast(x -> sin(cos(x)), X), similar to [sin(cos(x)) for x in X]: there is only a single loop over X, and a single array is allocated for the result. [In contrast, sin(cos(X)) in a typical "vectorized" language would first allocate one temporary array for tmp=cos(X), and then compute sin(tmp) in a separate loop, allocating a second array.] This loop fusion is not a compiler optimization that may or may not occur, it is a syntactic guarantee whenever nested f.(args...) calls are encountered. Technically, the fusion stops as soon as a "non-dot" function call is encountered; for example, in sin.(sort(cos.(X))) the sin and cos loops cannot be merged because of the intervening sort function.
Finally, the maximum efficiency is typically achieved when the output array of a vectorized operation is pre-allocated, so that repeated calls do not allocate new arrays over and over again for the results (see Pre-allocating outputs). A convenient syntax for this is X .= ..., which is equivalent to broadcast!(identity, X, ...) except that, as above, the broadcast! loop is fused with any nested "dot" calls. For example, X .= sin.(Y) is equivalent to broadcast!(sin, X, Y), overwriting X with sin.(Y) in-place. If the left-hand side is an array-indexing expression, e.g. X[begin+1:end] .= sin.(Y), then it translates to broadcast! on a view, e.g. broadcast!(sin, view(X, firstindex(X)+1:lastindex(X)), Y), so that the left-hand side is updated in-place.

Since adding dots to many operations and function calls in an expression can be tedious and lead to code that is difficult to read, the macro @. is provided to convert every function call, operation, and assignment in an expression into the "dotted" version.

julia> Y = [1.0, 2.0, 3.0, 4.0];

julia> X = similar(Y); # pre-allocate output array

julia> @. X = sin(cos(Y)) # equivalent to X .= sin.(cos.(Y))
4-element Vector{Float64}:

Binary (or unary) operators like .+ are handled with the same mechanism: they are equivalent to broadcast calls and are fused with other nested "dot" calls. X .+= Y etcetera is equivalent to X .= X .+ Y and results in a fused in-place assignment; see also dot operators.

You can also combine dot operations with function chaining using |>, as in this example:

julia> [1:5;] .|> [x->x^2, inv, x->2*x, -, isodd]
5-element Vector{Real}:
    1
    0.5
    6
   -4
 true

We should mention here that this is far from a complete picture of defining functions. Julia has a sophisticated type system and allows multiple dispatch on argument types. None of the examples given here provide any type annotations on their arguments, meaning that they are applicable to all types of arguments.
The type system is described in Types and defining a function in terms of methods chosen by multiple dispatch on run-time argument types is described in Methods.
How do you write a grid reference?

Grid references
1. Start at the left-hand side of the map and go east until you get to the bottom-left-hand corner of the square you want. Write this number down.
2. Move north until you get to the bottom-left corner of the square you want.

What is the rule for finding a grid reference?
In a grid reference, we give the eastings first and then the northings. That means that when you find your point on the map, you must go along to the east first and then move up second. Some people remember the order by saying 'along the corridor and up the stairs'!

How do you write a 6-digit grid reference?
Six-figure grid reference
1. First, find the four-figure grid reference but leave a space after the first two digits.
2. Estimate or measure how many tenths across the grid square your symbol lies.
3. Next, estimate how many tenths up the grid square your symbol lies.
4. You now have a six-figure grid reference.

Can coordinates have letters?
Coordinates are a set of numbers, or numbers and letters together, that show you a position on a map. Coordinates are written in parentheses and separated by a comma: (H, 4).

How many digits is a grid reference?
4 or 6 digits. A grid reference shows a particular location on a map using 4 digits or 6 digits. The grid reference depends on eastings and northings. Eastings are eastward-measured distances. They are similar to the x-axis on a graph.

What is a 4-figure grid reference?
A 4-figure grid reference contains 4 numbers. For example, you might be given the number 3422. The first two numbers are called the easting, which is the number you would look for at the bottom of the map. Your grandfather gives you the 4-figure grid reference 1331.

What is a 4-digit grid reference?
Four-figure grid references locate a grid square (usually 1 km square) on a map.
The four-figure grid reference is always given for the bottom-left-hand corner of the square (the south-west corner), and you always write the eastings before the northings [Hint: Along the corridor and up the stairs].

What is an alphabet grid?
This grid with 26 spaces is designed for students who are beginning to write the alphabet. The boxes help students to visualize how far along they are in the process and identify whether they skipped any letters (if there are blank boxes left at the end of the exercise).

What is a letter-number grid?
An alphanumeric grid (also known as an atlas grid) is a simple coordinate system on a grid in which each cell is identified by a combination of a letter and a number. Some kinds of geocode also use letters and numbers, typically several of each, in order to specify many more locations over much larger regions.

How do you write a four-figure grid reference?
Write this two-figure number down. Then use the northing to go up the stairs until you find the same corner. Put this two-figure number after your first one and you now have the four-figure grid reference, which looks like the example in diagram D: 6233.

What does a grid reference tell you?
A grid reference tells you where something is on a map. The 1st letter or number tells you how far across the map something is. The 2nd letter or number tells you how far up the map something is.

How do you reference a grid in an A4 paper?
Grid references. The grid references start from the top left of the sheet, with letters running vertically from the top down and numbers running horizontally from left to right. On an A4 sheet the grids only need to be drawn on the top and left-hand side. The letters I and O are not used, because they could be confused with 1 and 0.

What are the first letters of the National Grid?
There are four main first letters: 'S', 'T', 'N' and 'H', covering Great Britain, plus an 'O' square covering a tiny part of North Yorkshire that is usually below tide.
A unique National Grid reference should have this two-letter descriptor followed by the grid reference numbers within that square.
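The four-figure and six-figure procedures described above can also be sketched in code. The following Python helpers are our own illustration (the names four_figure and six_figure are not from the article); they assume a position is given in metres east and north of the south-west corner of a 100 km National Grid square:

```python
def four_figure(easting_m, northing_m):
    """Four-figure reference: identifies the 1 km square containing the point.

    Eastings come first, then northings ("along the corridor and up the
    stairs"), each as a two-digit count of whole kilometres.
    """
    e = int(easting_m // 1000)
    n = int(northing_m // 1000)
    return f"{e:02d}{n:02d}"


def six_figure(easting_m, northing_m):
    """Six-figure reference: adds the tenths (100 m steps) within the square.

    Equivalent to the article's method of estimating tenths across, then up.
    """
    e = int(easting_m // 100)
    n = int(northing_m // 100)
    return f"{e:03d}{n:03d}"
```

For a hypothetical point 62,340 m east and 33,150 m north, four_figure returns "6233" (the same square as the diagram D example quoted above) and six_figure returns "623331". A full, unique reference would prepend the two-letter 100 km square code described below.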
Regulating working memory load… Hello and welcome to the 87th edition of our fortnightly newsletter, Things in Education. In our series on information processing, we have written about how information moves from one memory register to another. We have also written in detail about sensory memory and managing students’ attention. The main character in discussions about information processing has always been working memory. We know that working memory is limited in terms of how much information it can hold and for how long. So it is very easy for working memory to get overloaded. And when working memory is overloaded, learning gets hindered. Today, we write about the delicate balance in the load on working memory that is needed to ensure that learning happens. Pitching content to students at the right level is very important — too difficult and the students’ working memory registers are overloaded, students shut down and learning will not happen; too easy and the students’ working memory registers will not be engaged enough to keep their attention, and learning will not happen. So the goal essentially becomes to optimise the load on working memory to ensure best learning happens. Working memory has very strict limits when it comes to processing new information. However, it is almost limitless when it comes to dealing with familiar information. So once some information is familiar to the student, the load on their working memory goes down considerably. They are able to recall the information or skill almost instantly and automatically. And this is the time to introduce more complexity to the topic that we are teaching. For example, only once the students are fluent with the procedure of long division should they be introduced to word problems in division. While learning the procedure of long division, a student’s working memory is engaged in recalling the steps of the process, recalling where the divisor goes, where the dividend goes, and so on. 
Introducing word problems at this stage will overwhelm the students. After some practice, the students master the procedure of long division, the steps come automatically to them, and they ‘intuitively’ know where the divisor, dividend, etc. go. They may be just going through the motions of practising long division, and they are probably not gaining anything from this exercise anymore. And at this stage, the working memory of the student is underwhelmed. This is the right time to introduce more complexity in the form of word problems. Their working memory is going to engage more in understanding how to decode the language of the word problem to understand which number should be the divisor and which should be the dividend. Load on working memory is not black and white. Students are not suddenly going to be ready for word problems based on long division. Let’s zoom in further: While learning the process of long division, a teacher will model the process for the student. The working memory of students is engaged in processing the information of the sequence of steps, recalling the vocabulary needed, and so on. As a teacher realises that the students have probably gotten the sequence of steps, the teacher does not need to model the entire process for the students. Maybe she starts off the process with the first few steps and leaves the rest for the students to do. Now the students’ working memory is engaged in trying to recall the sequence of steps of the process of long division. In other words, when the teacher notices that the load on students’ working memory has reduced, she introduces a slight complexity. By the end of the lesson, the students should be able to do the entire procedure by themselves. After a few rounds of practice, the procedure comes automatically to the students. And then the teacher introduces word problems. 
The decisions to introduce word problems (which is a different 'topic' in the syllabus) and to get students to practise semi-independently are all exercises in pitching the content at the right level.

Let's take another example of pitching at the right level. Say a primary school language teacher is doing a lesson on adjectives. Telling the students that adjectives are words that describe nouns and asking them to use adjectives in sentences is going to be too much for students. Their working memories are going to be engaged with what an adjective is, trying to recall the meaning of 'describing', trying to recall what nouns are, and so on. All of this may become too much for a primary school student's working memory.

Instead, the teacher shows a picture of an elephant and asks students to look at the picture and describe the animal using only one word – one student says grey, the other says big, the third says huge, and another says wrinkly. Now that students have recalled these words, the teacher knows that these words are in the students' long-term memory, and working memory is not going to be too engaged while thinking about these words. So, this is the right time to increase the complexity of the task – the students can move to using these words in sentences to describe the elephant.

Next, the teacher shows a picture of a peacock and repeats the exercise. But this time students are asked to write down the words that describe the bird and sentences to describe the peacock using those words. The teacher has realised that the students have the hang of what describing words are, and so it is okay to increase complexity for the students by removing their dependency on the teacher. The next step – after a lot of practice – would be for the students to visualise a bird or an animal that they might have seen some time and describe it by themselves.
For a student at this stage, the knowledge of what adjectives are should be automatic, the sentence construction when describing a noun should be automatic. Only then does the teacher ask students to visualise. Visualising or imagining something is cognitively very heavy on the working memory so should only be attempted once the basic knowledge and skill is acquired. Application or abstraction also need students to visualise, so it is important that we do not hurry them into applying their skill or knowledge. So ‘pitch it at the right level’ is something that is commonly said, but the ramifications of not doing this are not spelt out from a cognitive perspective. Attributes like motivation or engagement of students are normally cited, but motivation and engagement (or the lack of these) are emergent properties of the load that the students’ working memories are under. Too high or low a cognitive load and you will see lowered motivation and engagement. In the examples used in this piece, the teachers tailored the lessons according to the existing skill and knowledge of their students by using worked examples to build the skill or knowledge and then gradually increasing independent problem solving as the students became more proficient. This is a great way to balance the load on the working memory of students. If you found this newsletter useful, please share it. If you received this newsletter from someone and you would like to subscribe to us, please click here.
[Solved] ty: Calculating the WACC Excel Activity: | SolutionInn Answered step by step Verified Expert Solution ty: Calculating the WACC Excel Activity: Calculating the WACC Here is the condensed 2021 balance sheet for Skye Computer Company (in thousands of dollars) ty: Calculating the WACC Excel Activity: Calculating the WACC Here is the condensed 2021 balance sheet for Skye Computer Company (in thousands of dollars) Current assets 2021 $2.000 Net fixed assets 3,000 Total assets $5.000 Accounts payable and accruals $900 Short-term debt 100 Long-term debt 1,425 Preferred stock (15,000 shares) 325 Common stock (40,000 shares) 1.100 Retained earnings 1.150 Total common equity $2.250 Total liabilities and equity $5.000 Skye's earnings per share last year were $2.15. The common stock sells for $60.00, last year's dividend (D) was $1-35, and a flotation cost of 30% would be required to sell new common stock Secunty analysts are projecting that the common dividend will gruse at an annual rate of 9%. Skye's preferred stock pays a dividend of $2.25 per share, and its preferred stack sells for $25.00 per share. The firm's before tax cost of debt is 10% and its marginal tax rate is 25%. The firm's currently outstanding 10% annual coupon rate, long-term debt ses at par valve. The market k premium is 6%. the risk-free rate is 7%, and Skye's beta is 1.274. The firm's total debt, which is the sum of the company's short-term debt and long term debt, equals $1.525 in The data has been collected in the Microsoft Excel fie below. Download the spreadsheet and perform the requred analysis to answer the questions belis. De not round intermediate calculations Round your answers to two decimal places. Back Next x Show all 1:51 AM Partly cloudy NDTAP lating the WACC Search this cou a. Calculate the cost of each capital component, that is, the after-tax cost of debt, the cost of preferred stock, the cost of equity from retained eamings, and the cost of newly issued common stock. 
Use the DCF method to find the cost of common equity. After-tax cost of debti % Cost of preferred stock: Cost of retained earnings Cost of new common stocki 1% b. Now calculate the cost of common equity from retained eamings, using the CAPM method. c. What is the cost of new common stock based on the CAPM) (Hint: Find the difference between r, and r, as determined by the DCF method, and add that differential to the CAPM value for 3 d. If Skye continues to use the same market value capital structure, what is the firm's WACC assuming that (1) it uses only retained earnings for equity and (2) if it expands so rapidly that must issue new common stock? (Hint Use the market value capital structure excluding current liabilities to determine the sights. Also, use the simple average of the required values obtained under the two methods in calculating WACC.) WACC WACC " Check My Work Resel Problem Back Excel Activity Calculating the WACC Question 1 0/10 Submit Excel Activity: Calculating the WACC Here is the condensed 2021 balance sheet for Skye Computer Company (in thousands of dollars): 2021 Current assets $2,000 Net fixed assets 3,000 Total assets $5,000 Accounts payable and accruals $900 Short-term debt 100 Long-term debt 1,425 Preferred stock (15,000 shares) 325 Common stock (40,000 shares) 1,100 Retained earnings 1,150 Total common equity Total liabilities and equity $2,250 $5,000 13 Skye's earnings per share last year were $2.15. The common stock sells for $60.00, last year's dividend (De) was $1.35, and a flotation cost of 10% would be required to sell new common stock. Security analysts are projecting that the common dividend will grow at an annual rate of 9%. Skye's preferred stock pays a dividend of $2.25 per share, and its preferred stock sells for $25.00 per share. The firm's before-tax cost of debt is 10%, and its marginal tax rate is 25%. The firm's currently outstanding 10% annual coupon rate, long-term debt sells at par value. 
Excel Activity: Calculating the WACC

The market risk premium is 6%, the risk-free rate is 7%, and Skye's beta is 1.274. The firm's total debt, which is the sum of the company's short-term debt and long-term debt, equals $1.525 million. The data has been collected in the Microsoft Excel file below. Download the spreadsheet and perform the required analysis to answer the questions below. Do not round intermediate calculations. Round your answers to two decimal places.

a. Calculate the cost of each capital component, that is, the after-tax cost of debt, the cost of preferred stock, the cost of equity from retained earnings, and the cost of newly issued common stock. Use the DCF method to find the cost of common equity.

b. Now calculate the cost of common equity from retained earnings, using the CAPM method.

c. What is the cost of new common stock based on the CAPM? (Hint: Find the difference between r_e and r_s as determined by the DCF method, and add that differential to the CAPM value for r_s.)

d. If Skye continues to use the same market-value capital structure, what is the firm's WACC assuming that (1) it uses only retained earnings for equity and (2) it expands so rapidly that it must issue new common stock? (Hint: Use the market-value capital structure excluding current liabilities to determine the weights. Also, use the simple average of the required values obtained under the two methods in calculating the WACC.)

Skye Computer Company: Balance Sheet as of December 31, 2021 (in thousands of dollars)

│Current assets                │$2,000│
│Net fixed assets              │3,000 │
│Total assets                  │$5,000│
│Accounts payable and accruals │$900  │
│Short-term debt               │100   │
│Long-term debt                │1,425 │
│Preferred stock               │325   │
│Common stock                  │1,100 │
│Retained earnings             │1,150 │
│Total common equity           │$2,250│
│Total liabilities and equity  │$5,000│

Other data:

│Last year's earnings per share           │$2.15 │
│Current price of common stock, P0        │$60.00│
│Last year's dividend on common stock, D0 │$1.35 │
│Growth rate of common dividend, g        │9%    │
│Flotation cost for common stock, F       │10%   │
│Common stock outstanding                 │40,000│
│Current price of preferred stock, Pp     │$25.00│
│Dividend on preferred stock, Dp          │$2.25 │
│Preferred stock outstanding              │15,000│
│Before-tax cost of debt, rd              │10%   │
│Market risk premium, rM − rRF            │6%    │
│Risk-free rate, rRF                      │7%    │
│Beta                                     │1.274 │
│Tax rate                                 │25%   │
│Total debt                               │$1,525 thousand│
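Using the figures given in the problem above, the component costs in parts (a) through (c) can be sketched as follows. This is a rough check, not the official spreadsheet solution; the variable names are ours.

```python
# A sketch of the component-cost calculations for the Skye data above.
# Figures come from the problem data; variable names are ours.

tax_rate = 0.25
r_d = 0.10                       # before-tax cost of debt
D_p, P_p = 2.25, 25.00           # preferred dividend and price
D_0, P_0, g = 1.35, 60.00, 0.09  # last common dividend, price, growth rate
F = 0.10                         # flotation cost on new common stock
r_f, beta, mrp = 0.07, 1.274, 0.06

# a. Component costs (DCF method for common equity)
after_tax_debt = r_d * (1 - tax_rate)        # 10% x (1 - 25%) = 7.5%
cost_preferred = D_p / P_p                   # 2.25 / 25 = 9.0%
D_1 = D_0 * (1 + g)                          # next expected dividend
r_s_dcf = D_1 / P_0 + g                      # retained earnings, about 11.45%
r_e_dcf = D_1 / (P_0 * (1 - F)) + g          # new common stock, about 11.73%

# b. CAPM cost of retained earnings
r_s_capm = r_f + beta * mrp                  # about 14.64%

# c. CAPM cost of new common stock: add the DCF flotation differential
r_e_capm = r_s_capm + (r_e_dcf - r_s_dcf)    # about 14.92%
```

The flotation adjustment in part (c) follows the hint: the DCF differential between the new-stock and retained-earnings costs is added to the CAPM estimate.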
How do you find the slope given 3x+4y=22? | HIX Tutor

How do you find the slope given \(3x + 4y = 22\)?

Answer 1

To determine the slope, we need to put this in slope-intercept form, that is, \(y = mx + b\). Let's start by subtracting \(3x\) from both sides. We get

\[4y = -3x + 22\]

Lastly, we can divide everything by \(4\) to get

\[y = -\frac{3}{4}x + \frac{22}{4}\]

Our slope is given by the coefficient of \(x\), thus the slope is \(-\frac{3}{4}\).

Answer 2

To find the slope of the line represented by the equation \(3x + 4y = 22\), you need to rewrite the equation in slope-intercept form, which is \(y = mx + b\), where \(m\) represents the slope.

1. Start by isolating \(y\) on one side of the equation: \[4y = -3x + 22\]
2. Divide both sides by 4 to solve for \(y\): \[y = -\frac{3}{4}x + \frac{22}{4}\]
3. Now the equation is in slope-intercept form, where the coefficient of \(x\) is the slope. So the slope of the line is \(-\frac{3}{4}\).
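The rearrangement can be sanity-checked numerically: solving \(3x + 4y = 22\) for \(y\) gives \(y = -\frac{3}{4}x + \frac{11}{2}\), so the rise over run between any two points should be \(-\frac{3}{4}\).

```python
# Sanity check of the slope computed above: rearranging 3x + 4y = 22
# gives y = -(3/4)x + 11/2, so the slope should be -3/4.
def y(x):
    return (22 - 3 * x) / 4

slope = (y(1) - y(0)) / (1 - 0)  # rise over run between x = 0 and x = 1
intercept = y(0)                 # y-intercept, 22/4 = 5.5

assert slope == -3 / 4
assert intercept == 22 / 4
```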
Prizes for ELK proposals

We are no longer accepting submissions. We'll get in touch with winners and make a post about winning proposals sometime in the next month.

ARC recently released a technical report on eliciting latent knowledge (ELK), the focus of our current research. Roughly speaking, the goal of ELK is to incentivize ML models to honestly answer “straightforward” questions where the right answer is unambiguous and known by the model. ELK is currently unsolved in the worst case—for every training strategy we’ve thought of so far, we can describe a case where an ML model trained with that strategy would give unambiguously bad answers to straightforward questions despite knowing better. Situations like this may or may not come up in practice, but nonetheless we are interested in finding a strategy for ELK for which we can’t think of any counterexample.

We think many people could potentially contribute to solving ELK—there’s a large space of possible training strategies and we’ve only explored a small fraction of them so far. Moreover, we think that trying to solve ELK in the worst case is a good way to “get into ARC’s headspace” and more deeply understand the research we do. We are offering prizes of $5,000 to $50,000 for proposed strategies for ELK. We’re planning to evaluate submissions received before February 15. For full details of the ELK problem and several examples of possible strategies, see the writeup. The rest of this post will focus on how the contest works.

Contest details

To win a prize, you need to specify a training strategy for ELK that handles all of the counterexamples that we’ve described so far, summarized in the section below—i.e. where the breaker would need to specify something new about the test case to cause the strategy to break down. You don’t need to fully solve the problem in the worst case to win a prize, you just need to come up with a strategy that requires a new counterexample.
We’ll give a $5,000 prize to any proposal that we think clears this bar. We’ll give a $50,000 prize to a proposal which we haven’t considered and seems sufficiently promising to us or requires a new idea to break. We’ll give intermediate prizes for ideas that we think are promising but we’ve already considered, as well as for proposals that come with novel counterexamples, clarify some other aspect of the problem, or are interesting in other ways. A major purpose of the contest is to provide support for people understanding the problem well enough to start contributing; we aren’t trying to only reward ideas that are new to us.

You can submit multiple proposals, but we won’t give you separate prizes for each—we’ll give you at least the maximum prize that your best single submission would have received, but may not give much more than that.

If we receive multiple submissions based on a similar idea, we may post a comment describing the idea (with attribution) along with a counterexample. Once a counterexample has been included in the comments of this post, new submissions need to address that counterexample (as well as all the existing ones) in order to be eligible for a prize.

Ultimately prizes are awarded at our discretion, and the “rules of the game” aren’t fully precise. If you are curious about whether you are on the right track, feel free to send an email to elk@alignmentresearchcenter.org with the basic outline of an idea, and if we have time we’ll get back to you with some feedback. Below we also describe some of the directions we consider more and less promising and some general guidance.

How to submit a proposal

You can submit a proposal by copying this google doc template and sharing it with elk@alignmentresearchcenter.org (please give comment access in case we need to ask questions to evaluate your submission). By submitting a proposal, you are giving us permission to post the idea here with attribution.
(And it's fine for you to post it after the contest ends or after we post a counterexample.)

Retroactive prizes

We’re giving prizes to existing proposals from David Dalrymple ($5k), Ramana Kumar ($3k), John Maxwell ($2k), and Yonadav Shavit ($1k). We’ve added relevant counterexamples in the next section, and further submissions need to handle those counterexamples in order to be eligible.

Existing counterexamples

In our report and a recent post, we described some features of the world that would cause existing approaches to ELK to fail. A new approach should work even in a situation satisfying all of these properties, or should include an argument for why some of these features are actually impossible or inconsistent with each other.

In order to generate a counterexample to a training strategy, we need to spell out a concrete “way the world could be” (a test case) where the strategy would fail. In the report we consider test cases where the AI and the human happen to understand the world by doing inference in Bayes nets. The Bayes net case is just one test case that lets us get precise about our counterexamples; our training strategies don’t lean on it. In practice we also consider other concrete cases (e.g. deduction from premises generated by a neural network).

In this section we’ll briefly review the current state of existing counterexamples. This isn’t intended to be a self-contained description; it relies heavily on links to the report.

Ontology mismatch

All of our counterexamples are based on an ontology mismatch between two different Bayes nets, one used by an ML prediction model (“the predictor”) and one used by a human:

• The predictor’s Bayes net is used internally by a prediction model that maps an initial segment of a video to a prediction about what will happen next. This model was trained by gradient descent based on the quality of its predictions (more).
• Humans have their own model of the world, which they can improve with the help of AI assistants; those assistants can design and conduct experiments, find and evaluate arguments, and so on. Crucially, we have a limited computational budget with which to train these AI assistants, and that limits how much help they can provide. The “human’s” Bayes net captures everything the human understands about the world at the end of this process (more).

We are interested in the case where there are features of reality that are accurately reflected in the predictor’s Bayes net, but missing from this (augmented) human Bayes net. We expect the human could understand these facts eventually given enough time or enough AI assistance, but not using the available computational budget. Put differently, there is a gap between the kinds of understanding that gradient descent can build, and the kind of understanding that a human-comprehensible process of “scientific inquiry” can build using similar computational resources (more).

Furthermore, we imagine that both the predictor’s Bayes net and the human’s Bayes net reflect some underlying reality and that there is a way to map between them, which we’ll call “direct translation.” We don’t know how to do direct translation, and we are concerned that if we simply search for any mapping from the AI’s Bayes net to the human’s Bayes net we will instead get the “human simulator”: the map which correctly translates the observations from one Bayes net to the other, but then does inference in the human’s Bayes net in order to fill in the latent structure as a human would by looking at the data (more).

Counterexample features

ELK seems particularly hard when these Bayes nets satisfy the following properties:

1. There are arbitrarily severe “ontology mismatches” between the predictor’s Bayes net and the human’s Bayes net, such that specifying the direct translation between them can be very complex (more).

2.
The human’s Bayes net is simpler than the predictor’s Bayes net, and the gap can be arbitrarily large (more).

3. The predictor’s Bayes net is arbitrarily large, and individual nodes of the human’s Bayes net depend on a constant fraction of the predictor’s Bayes net. Therefore directly translating between the two Bayes nets takes arbitrarily much computation (more).

4. It’s computationally easy for the human simulator to do good inference in the human’s Bayes net—at least good enough inference that it answers questions as well as the direct translator—and the gap between this complexity and the complexity of direct translation can be arbitrarily large. More precisely, a human interrogator can’t think of questions which require the human simulator to do hard inference. However, we can’t rule out the possibility that you, the algorithm-designer, will think of some trick they can use to generate hard questions (more).

5. There are situations where a model which is trying to answer a question will first be able to predict what (potentially mistaken) belief the human-with-assistance will have about that question before it knows the true answer to the question (more).

6. While the predictor’s Bayes net is very large, it has a short succinct description which is simpler than the human’s Bayes net, and that gap is arbitrarily large. Think of simple physical laws that describe arbitrarily computationally expensive simulations, or simple cognitive heuristics that are very widely applicable. (more)

7. There are sequences of actions for which the predictor’s Bayes net correctly anticipates that tampering will occur yet nothing looks suspicious to the human’s Bayes net.

8. After tampering has occurred, it is possible that no sequence of further actions reveals that anything is wrong, e.g. the camera and actuators may have been completely disconnected from the world and replaced with a convincing-to-humans simulation (more).

9.
There are sequences of actions leading to undetectable tampering that have no human-comprehensible similarities to sequences of actions leading to detectable tampering; in particular, the early steps of this undetectable tampering cannot be used to perform detectable tampering (more).

An important and very general strategy for the breaker is to suggest counterexamples where you “get what you train for.” If you propose training a model using loss function L, then the breaker will likely focus on counterexamples where the model chooses its output to minimize L. A successful approach should explain either why this isn’t a realistic thing for the model to learn (e.g. why that would be disfavored by regularization), or why the overall approach works even if the model is explicitly strategizing about how to minimize the loss.

Rough guidance

• There is a tradeoff between specifying your approach precisely, and including your own counterexample. It’s fine to describe a very broad/vague idea for solving ELK, and then present a new counterexample showing why that whole category of solutions can’t work. But if you don’t propose a counterexample, then it’s good to describe your proposal precisely enough that we understand how to actually implement it without wondering if that’s really what you meant. It’s OK to submit a very broad or informal idea together with a single very specific instance of that idea, as long as there is some version we can understand precisely.

• We suspect you can’t solve ELK just by getting better data—you probably need to “open up the black box” and include some term in the loss that depends on the structure of your model and not merely its behavior. So we are most interested in approaches that address that challenge. We could still be surprised by clever ways to penalize behavior, but we’ll hold them to a higher bar.
The most plausible surprise would be finding a way to reliably make it computationally difficult to “game” the loss function, probably by using the AI itself to help compute the loss (e.g. using consistency checks or by giving the human AI assistance).

• If you are specifying a regularizer that you hope will prefer direct translation over human simulation, you should probably have at least one concrete case in mind that has all the counterexample-features above and where you can confirm that your regularizer does indeed prefer the direct translator.

• ELK already seems hard in the case of ontology identification, where the predictor uses a straightforward inference algorithm in an unknown model of the world (which we’ve been imagining as a Bayes net). When coming up with a proposal, we don’t recommend worrying about cases where the original unaligned predictor learned something more complicated (e.g. involving learned optimization other than inference). That said, you do need to worry about the case where your training scheme incentivizes learned optimization that may not have been there originally.

Ask dumb questions!

A major purpose of this contest is to help people build a better understanding of our research methodology and the “game” we are playing. So we encourage people to ask clarifying questions in the comments of this post (no matter how “dumb” they are), and we’ll do our best to answer all of them. You might also want to read the comments to get more clarity about the problem.

What you can expect from us

• We’ll try to answer all clarifying questions in the comments.
• If you send in a rough outline for a proposal, we will try to understand whether it might qualify and write back something like “This qualifies,” “This might qualify but would need to be clearer and address issue X,” “We aren’t easily able to understand this proposal at all,” “This is unlikely to be on track for something that qualifies,” or “This definitely doesn’t qualify.” • If there are more submissions than expected, we may run out of time to respond to all submissions and comments, in which case we will post an update here. Comment via LessWrong, Alignment Forum.
[Solved] Let ϕ(x,y)=0 be the equation of a circle. If ϕ(0,λ)=0 ... | Filo

Let \(\phi(x, y) = 0\) be the equation of a circle. If \(\phi(0, \lambda) = 0\) has equal roots \(\lambda = 2, 2\) and \(\phi(\lambda, 0) = 0\) has roots \(\lambda = \ldots\), then the centre of the circle is \(\ldots\)

Topic: Conic Sections │ Subject: Mathematics │ Class: Class 11 │ Updated On: Dec 4, 2022 │ Answer Type: Text solution: 1, Video solutions: 4
Angle Modulation

Angle modulation is the combination of frequency and phase modulation; we can also say that the frequency and phase combine to form an angle. Angle modulation is defined as modulation in which the frequency or phase of the carrier varies with the amplitude of the message signal. (When the amplitude of the carrier varies with the message signal, it is known as amplitude modulation.) The spectral components of the modulated signal depend on the frequency and amplitude of the components in the baseband signal. Angle modulation is non-linear, while amplitude modulation is a linear process; the superposition principle does not apply in angle modulation.

The signal has the form

V(t) = A cos[ω_c t + ϕ(t)]

where
- ω_c is the carrier frequency constant,
- A is the amplitude constant,
- ϕ(t) is the phase angle, which is not constant; it is a function of the baseband signal.

Types of Angle Modulation

Angle modulation, the combination of phase and frequency modulation, is the modulation in which the frequency and phase of the carrier vary with the amplitude of the message signal, as discussed above. It is categorized as frequency modulation and phase modulation.

Frequency Modulation

If the frequency of the carrier varies with the amplitude of the message signal, it is known as frequency modulation. The frequency deviation of the modulated signal from the carrier (center) frequency depends on the amplitude of the message signal. FM is used in both analog and digital modulation. The applications of analog frequency modulation are telecommunications, computing, radio broadcasting, video broadcasting, and two-way radio systems. In digital modulation, the process used to transmit digital data using FM is known as frequency-shift keying.

Phase Modulation

If the carrier phase varies with the amplitude of the message signal, it is known as phase modulation. In angle modulation, it is used together with frequency modulation.
It is also an integral part of both analog and digital communication. In analog communication, phase modulation is used for transmitting radio waves and in other technologies such as Wi-Fi and satellite television. The waveforms of frequency modulation and phase modulation are shown below:

Relationship between Frequency and Phase Modulation

The block diagram of frequency modulation consists of an integrator followed by a phase modulator, as shown below. The modulating signal is applied to the integrator, and the integrator's output is sent to the phase modulator. The output of this integrator-plus-phase-modulator combination is the frequency modulated signal.

Let the message signal be m(t) and the frequency constant be K_F, where K_F = K·K_P. The output of the phase modulator is then the FM signal:

V(t) = A cos[ω_c t + K_F ∫ m(t) dt]

The block diagram of phase modulation consists of a differentiator followed by a frequency modulator, as shown below. The modulating signal is applied to the differentiator, and the differentiator's output is sent to the frequency modulator. The output of this differentiator-plus-frequency-modulator combination is the phase modulated signal.

Let the phase constant be K_P. The output can be represented as:

V(t) = A cos[ω_c t + K_P m(t)]

where
- ω_c is the carrier frequency constant,
- A is the amplitude constant,
- m(t) is the instantaneous value of the message signal.

The instantaneous angular frequency is the derivative of the phase of the frequency modulated signal. It is given by:

ω = ω_c + K_F m(t)

Here F_M denotes the maximum frequency component of the message signal. The integration step in frequency modulation is a linear operation and does not change the number of frequency components.

Modulation Index

The deviation of the total angle from the carrier angle is defined as the phase deviation. The deviation of the instantaneous frequency from the carrier frequency is known as the frequency deviation.
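The integrator-then-phase-modulator construction of FM can be sketched numerically. Everything here (parameter values, helper names, the choice of a cosine message) is illustrative, not taken from the text.

```python
# Numerical sketch of the FM-via-phase-modulator construction described
# above: integrate the message, then feed the integral to a phase
# modulator. All parameter values are illustrative.
import math

f_c, f_m = 1000.0, 50.0   # carrier and message frequencies (Hz)
k_f = 200.0               # frequency-deviation constant K_F (rad/s per unit)
dt = 1e-6                 # time step (s)
N = 20000                 # 20 ms of signal (one full message period)

def message(t):
    return math.cos(2 * math.pi * f_m * t)

fm_signal = []
integral = 0.0
for n in range(N):
    t = n * dt
    integral += message(t) * dt                  # integrator stage
    phase = 2 * math.pi * f_c * t + k_f * integral
    fm_signal.append(math.cos(phase))            # phase modulator stage
```

Over exactly one message period the running integral of the cosine returns to zero, which is one quick way to see that the phase wanders around the carrier phase rather than drifting.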
The frequency of the message signal is represented as ω_m, where ω_m = 2πF_m. The angle modulated signal is given by:

V(t) = A cos[ω_c t + B sin(ω_m t)]   … (1)

where
- ω_c is the carrier frequency constant,
- A is the amplitude constant,
- ω_m is the message frequency constant,
- B is the peak amplitude of the phase term ϕ(t), i.e., the modulation index.

The instantaneous angular frequency is represented as ω:

ω = dϕ/dt = d/dt [ω_c t + B sin(ω_m t)] = ω_c + B ω_m cos(ω_m t)

We know ω = 2πF. Substituting, we get:

2πF = ω_c + B ω_m cos(ω_m t)
F = ω_c/2π + B ω_m cos(ω_m t)/2π
F = F_c + B F_m cos(ω_m t)

where F_c = ω_c/2π and F_m = ω_m/2π.

The maximum frequency deviation is represented as Δf:

Δf = B F_m

Thus, equation (1) can be written as:

V(t) = A cos[ω_c t + (Δf/F_m) sin(ω_m t)]

In the next section, we will discuss FM (frequency modulation) and PM (phase modulation) in detail.

Numerical Examples

Let's discuss some numerical examples based on angle modulation.

Example 1: Consider the angle modulated signal P(t) = 5 cos[2π·10^6 t + 3 sin(2π·10^4 t)]. Find its instantaneous angular frequency at t = 0.8 ms.

Solution:

Phase: ϕ(t) = 2π·10^6 t + 3 sin(2π·10^4 t)
Instantaneous angular frequency: ω = dϕ(t)/dt = 2π·10^6 + 2π·10^4 · 3 cos(2π·10^4 t)

At t = 0.8 ms = 0.8 × 10^-3 s:

ω = 2π·10^6 + 2π·10^4 · 3 cos(2π·10^4 × 0.8 × 10^-3)
  = 2π·10^6 + 2π·10^4 · 3 cos(16π)
  = 2π·10^6 + 6π·10^4          (cos 16π = 1)
  = 2π·10^4 (10^2 + 3)
  = 2π·10^4 × 103
  ≈ 6.47 × 10^6 rad/s

Thus, the instantaneous angular frequency is about 6.47 × 10^6 rad/s, which corresponds to an instantaneous frequency of ω/2π = 1.03 MHz.

Example 2: Consider the angle modulated signal P(t) = 3 cos[2π·10^8 t + 5 sin(2π·10^5 t)]. Find its instantaneous angular frequency at t = 0.5 ms.
Solution:

Phase: ϕ(t) = 2π·10^8 t + 5 sin(2π·10^5 t)
Instantaneous angular frequency: ω = dϕ(t)/dt = 2π·10^8 + 2π·10^5 · 5 cos(2π·10^5 t)

At t = 0.5 ms = 0.5 × 10^-3 s:

ω = 2π·10^8 + 2π·10^5 · 5 cos(2π·10^5 × 0.5 × 10^-3)
  = 2π·10^8 + 2π·10^5 · 5 cos(100π)
  = 2π·10^8 + 10π·10^5          (cos 100π = 1)
  = 2π·10^5 (10^3 + 5)
  = 2π·10^5 × 1005
  ≈ 6.314 × 10^8 rad/s

Thus, the instantaneous angular frequency is about 6.314 × 10^8 rad/s, which corresponds to an instantaneous frequency of ω/2π = 100.5 MHz.
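Both examples can be checked with a few lines of code by evaluating the analytic derivative of the phase (the derivative formula is the same one used in the worked solutions above):

```python
# Check of Examples 1 and 2 above. For φ(t) = 2π·f_carrier·t + b·sin(2π·f_mod·t),
# the instantaneous angular frequency is
#   dφ/dt = 2π·f_carrier + b·2π·f_mod·cos(2π·f_mod·t)
import math

def d_phi(t, f_carrier, b, f_mod):
    return (2 * math.pi * f_carrier
            + b * 2 * math.pi * f_mod * math.cos(2 * math.pi * f_mod * t))

omega1 = d_phi(0.8e-3, 1e6, 3, 1e4)   # Example 1: about 6.47e6 rad/s
omega2 = d_phi(0.5e-3, 1e8, 5, 1e5)   # Example 2: about 6.314e8 rad/s

freq1_hz = omega1 / (2 * math.pi)     # about 1.03 MHz
freq2_hz = omega2 / (2 * math.pi)     # about 100.5 MHz
```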
Light Rail Boondoggles - Errors of Enchantment

Coyote blog has more interesting commentary on light rail boondoggles here. The rip-off arithmetic he cites is the same relative magnitude as for our Rail Runner and so-far potential streetcar debacles. For example, about LA: If the core ridership number is 125,000, the highest possible choice, then the total capital cost of the system works out to $20,000 per rider. This means I was right, that we could have instead bought every rider a car for the same money. Since the real ridership is probably less than that number, this means we could have bought every rider a car and had money left over. Concerned about the environment? Then make every car a Prius, which the money would just about cover even without the volume purchasing discount they would likely get. But what about gas? Well, they say they have a $252 million per year operating loss. This subsidy, which is above and beyond ticket sales, equates to $2,016 (!) per daily rider, even using the higher 125,000 figure. At $2.50 per gallon, this equates to 15.5 gallons of gas per rider per week. So you can see with the LA numbers, even using the largest possible interpretation of their ridership numbers, the money used for the train could have instead bought every passenger a new car and filled the tank up with gas once a week for life. Yes, I know, the argument is that the train reduces congestion. Supposedly. I have two responses: Rail has never reduced congestion in any city. Go see London and Manhattan. In fact, rail seems to encourage urban density that increases congestion. In Phoenix, where rail will often replace existing lanes of roads, the train will likely carry fewer people than the lanes of traffic used to, so congestion will increase.
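The quoted per-rider arithmetic is easy to reproduce. Note that the $2.5 billion capital cost below is implied by the post's figures ($20,000 per rider times 125,000 riders), not stated directly.

```python
# Reproducing the quoted per-rider arithmetic from the post's own figures.
# The $2.5 billion capital cost is implied ($20,000/rider x 125,000 riders),
# not stated directly in the text.
riders = 125_000
capital_cost = 2.5e9      # implied total capital cost, dollars
annual_subsidy = 252e6    # stated annual operating loss, dollars
gas_price = 2.50          # dollars per gallon

capital_per_rider = capital_cost / riders               # $20,000
subsidy_per_rider = annual_subsidy / riders             # $2,016 per year
gallons_per_week = subsidy_per_rider / 52 / gas_price   # about 15.5
```

The weekly-gasoline figure matches the post's 15.5 gallons, which is the cross-check confirming the per-rider subsidy is $2,016 rather than a transposed $2,106.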
Lesson 5
Some Functions Have Symmetry

5.1: Changing Heights (5 minutes)

This warm-up invites students to use their understanding of the motion of a Ferris wheel to complete a table of values where the data has distinct symmetry.

Student Facing

The table shows Clare’s elevation on a Ferris wheel at different times, \(t\). Clare got on the ride 80 seconds ago. Right now, at time 0 seconds, she is at the top of the ride. Assuming the Ferris wheel moves at a constant speed for the next 80 seconds, complete the table.

│time (seconds) │height (feet) │
│-80 │0 │
│-60 │31 │
│-40 │106 │
│-20 │181 │
│0 │212 │
│20 │ │
│40 │ │
│60 │ │
│80 │ │

Activity Synthesis

Select students to explain how they completed the table. If not brought up by students, ask, "What does the graph of this function look like?" (A curve that is symmetric about the vertical axis.) Invite students to share their descriptions or any graphs they make of the data. If no students sketched a graph that can be shared, display one for all to see to help highlight the symmetry.

5.2: Card Sort: Two Types of Graphs (15 minutes)

The purpose of this activity is for students to identify common features in graphs of functions that look the same when reflected across the vertical axis and those that look the same when reflected across both axes. This leads to defining these types of functions as even or odd, respectively. In the following activity, students match tables to the graphs and refine their definitions, so groups should keep their copies of the blackline master from this activity to use in the next activity. Monitor for different ways groups choose different categories, but especially for categories that distinguish between graphs of even functions and graphs of odd functions. As students work, encourage them to refine their descriptions of the graphs using more precise language and mathematical terms (MP6). Arrange students in groups of 2.
Tell them that in this activity, they will sort some cards into categories of their choosing. When they sort the graphs, they should work with their partner to come up with categories. Distribute pre-cut slips to each group.

Conversing: MLR2 Collect and Display. As students discuss their matches with a partner, listen for and collect vocabulary, gestures, and diagrams students use to identify and describe what is the same and different. Capture student language that reflects a variety of ways to describe the differences between even and odd functions. Write the students’ words on a visual display and update it throughout the remainder of the lesson. Remind students to borrow language from the display as needed. This will help students read and use mathematical language during their partner and whole-group discussions.
Design Principle(s): Optimize output (for explanation); Maximize meta-awareness

Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Provide students with six cards to sort, ensuring that the set includes three even functions and three odd functions. This will allow students additional processing time.
Supports accessibility for: Organization; Attention; Social-emotional skills

Student Facing

Your teacher will give you a set of cards that show graphs. Sort the cards into 2 categories of your choosing. Be prepared to explain the meaning of your categories.

Anticipated Misconceptions

Some students may focus too closely on identifying specific points on the graph to use to make their categories. Encourage these students to look at the graph as a whole while they sort.
Attend to the language that students use to describe their categories, giving them opportunities to describe the types of graphs more precisely. It is possible students will think of graphs of odd functions as ones where a \(180^{\circ}\) rotation using the origin as the center of rotation results in the same graph. While it is true that this type of rotation appears the same as successive reflections of the graph across both axes, focus the conversation on thinking in terms of reflections since the function notation students will use to describe odd functions, \(g(x)=\text-g(\text-x)\), algebraically describes reflections. At the conclusion of the sharing, display the graphs of the even functions next to the odd functions. Tell students that functions whose graphs look the same when reflected across the \(y\)-axis are called even functions. Functions whose graphs look the same when reflected across both axes are called odd functions. In the next activity, students refine their understanding of even and odd functions by pairing each of the graphs with a table of values and writing their own description for these two types of functions based on inputs and outputs.

5.3: Card Sort: Two Types of Coordinates (15 minutes)

The purpose of this activity is for students to deepen their understanding of even and odd functions. Using the graphs from the previous activity, students first match each graph to a table of coordinate pairs and then use both representations to identify defining characteristics of even functions and odd functions (MP8). In the next lesson, students will learn how to use an equation to prove if a function is even or odd, so an important result of this activity is describing even and odd functions using function notation.
Monitor for students making connections between the transformations described in the previous activity (a reflection across the \(y\)-axis versus successive reflections across both axes) and the coordinates in the tables to share during the whole-class discussion. Keep students in the same groups. If students do not already have their slips from the previous activity arranged into two groups, one for graphs of even functions and one for graphs of odd functions, ask them to do so now. Distribute pre-cut slips. Conversing: MLR8 Discussion Supports. Students should take turns finding a match and explaining their reasoning to their partner. Display the following sentence frames for all to see: “_____ and _____ are alike because…”, and “I noticed _____, so I matched….” Encourage students to challenge each other when they disagree. This will help students clarify their reasoning about even and odd functions. Design Principle(s): Support sense-making; Maximize meta-awareness Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Provide students with six cards to match, ensuring that the set includes the tables of coordinate pairs that match the six graphs used in the previous activity. This will allow students additional processing time. Supports accessibility for: Organization; Attention; Social-emotional skills Student Facing Your teacher will give you a set of cards to go with the cards you already have. 1. Match each table of coordinate pairs with one of the graphs from earlier. 2. Describe something you notice about the coordinate pairs of even functions. 3. Describe something you notice about the coordinate pairs of odd functions. Student Facing Are you ready for more? 1. Can a non-zero function whose domain is all real numbers be both even and odd? Give an example if it is possible or explain why it is not possible. 2. Can a non-zero function whose domain is all real numbers have a graph that is symmetrical around the \(x\)-axis? 
Give an example if it is possible or explain why it is not possible. Activity Synthesis The goal of this discussion is for students to move from observations about specific even and odd functions to generalizing things that are true for all even functions and for all odd functions. Select previously identified students to share the connections they see between the transformations associated with graphs of even and odd functions and features of the coordinate pairs belonging to even and odd functions. Record these observations for all to see in two lists: one for even, one for odd. If time allows, assign groups to write a single sentence describing even or odd functions that summarizes one of the lists. Representation: Internalize Comprehension. Demonstrate and encourage students to use color coding and annotations to highlight connections between representations in a problem. For example, invite students to highlight positive values in one color and negative values in a different color within the table of coordinate pairs. Supports accessibility for: Visual-spatial processing Lesson Synthesis The goal of this discussion is for students to use function notation to summarize their understanding of what makes a function even and what makes a function odd. Here are some questions for discussion to help students transition to using function notation: • “Using the language of inputs and outputs, what is true about even functions?” (Opposite inputs have the same output.) • “Using the language of inputs and outputs, what is true about odd functions?” (Opposite inputs have opposite outputs.) • “If a function \(f\) is even and \(f(3)=7\), what is something else you know about \(f\)?” (Since \(f\) is even, if an input of 3 has an output of 7, then an input of -3 also has an output of 7, so \(f(\text-3)=7\).) 
• “If a function \(g\) is odd and \(g(5)=\text-1\), what is something else you know about \(g\)?” (Since \(g\) is odd, if an input of 5 has an output of -1, then an input of -5 has an output of 1, so \(g(\text-5)=1\).) Tell students that these observations can be generalized for all even and odd functions. If a function \(f\) is even, then \(f(x)=f(\text-x)\) is true. If a function \(g\) is odd, then \(g(x)=\text-g(\text-x)\). Students will focus on these definitions in the next lesson. 5.4: Cool-down - Even or Odd? (5 minutes) Student Facing We've learned how to transform functions in several ways. We can translate graphs of functions up and down, changing the output values while keeping the input values. We can translate graphs left and right, changing the input values while keeping the output values. We can reflect functions across an axis, swapping either input or output values for their opposites depending on which axis is reflected across. For some functions, we can perform specific transformations and it looks like we didn't do anything at all. Consider the function \(f\) whose graph is shown here: What transformation could we do to the graph of \(f\) that would result in the same graph? Examining the shape of the graph, we can see a symmetry between points to the left of the \(y\)-axis and the points to the right of the \(y\)-axis. Looking at the points on the graph where \(x=1\) and \(x=\text-1\), these opposite inputs have the same outputs since \(f(1)=4\) and \(f(\text-1)=4\). This means that if we reflect the graph across the \(y\)-axis, it will look no different. This type of symmetry means \(f\) is an even function. Now consider the function \(g\) whose graph is shown here: What transformation could we do to the graph of \(g\) that would result in the same graph? Examining the shape of the graph, we can see that there is a symmetry between points on opposite sides of the axes. 
Looking at the points on the graph where \(x=1\) and \(x=\text-1\), these opposite inputs have opposite outputs since \(g(1)=2.35\) and \(g(\text-1)=\text-2.35\). So a transformation that takes the graph of \(g\) to itself has to reflect across the \(x\)-axis and the \(y\)-axis. This type of symmetry is what makes \(g\) an odd function.
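The input-output characterization of even and odd functions, \(f(x)=f(\text-x)\) and \(g(x)=\text-g(\text-x)\), can be checked numerically. Here is a small Python sketch (illustrative only, not part of the lesson materials) that tests the two symmetry conditions on sampled inputs:

```python
def is_even(f, xs, tol=1e-9):
    # even: f(x) == f(-x) for every sampled input
    return all(abs(f(x) - f(-x)) <= tol for x in xs)

def is_odd(f, xs, tol=1e-9):
    # odd: f(x) == -f(-x) for every sampled input
    return all(abs(f(x) + f(-x)) <= tol for x in xs)

xs = [0.5 * k for k in range(-10, 11)]
print(is_even(lambda x: x**2, xs))   # True: x^2 is even
print(is_odd(lambda x: x**3, xs))    # True: x^3 is odd
print(is_even(lambda x: x + 1, xs))  # False: x + 1 is neither even nor odd
```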
Common Core High School: Geometry Secrets Study Guide (printed book) Regular price $40.99, sale price $24.99. Common Core High School: Geometry Secrets Study Guide: CCSS Test Review for the Common Core State Standards Initiative. Mometrix Test Preparation's Common Core High School: Geometry Secrets Study Guide is the ideal prep solution for anyone who wants to pass their Common Core State Standards Initiative exam. The exam is extremely challenging, and thorough test preparation is essential for success. Our study guide includes: • Practice test questions with detailed answer explanations • Step-by-step video tutorials to help you master difficult concepts • Tips and strategies to help you get your best test performance • A complete review of all Common Core test sections • Congruence • Similarity, Right Triangles, and Trigonometry • Circles • Expressing Geometric Properties with Equations • Geometric Measurement and Dimension Mometrix Test Preparation is not affiliated with or endorsed by any official testing organization. All organizational and test names are trademarks of their respective owners. The Mometrix guide is filled with the critical information you will need in order to do well on your Common Core exam: the concepts, procedures, principles, and vocabulary that the state school board expects you to have mastered before sitting for your exam. 
The Congruence section covers: • Point and line • Dilation • Compass and a straightedge The Similarity, Right Triangles, and Trigonometry section covers: • Proving the Pythagorean Theorem using similar triangles • Using cosine to solve problems involving right triangles • Using the Law of Sines and Cosines The Circles section covers: • Construction of a circle inscribed in a triangle • Deriving the formula for the area of a sector The Expressing Geometric Properties with Equations section covers: • Identify the equation of a parabola given a focus and directrix • Using coordinates to find the area of a figure The Geometric Measurement and Dimension section covers: • Formula for the area of a circle • Formula for the volume of a cone • Cavalieri's principle ...and much more! Our guide is full of specific and detailed information that will be key to passing your exam. Concepts and principles aren't simply named or described in passing, but are explained in detail. The Mometrix Common Core study guide is laid out in a logical and organized fashion so that one section naturally flows from the one preceding it. Because it's written with an eye for both technical accuracy and accessibility, you will not have to worry about getting lost in dense academic language. Any test prep guide is only as good as its practice questions and answer explanations, and that's another area where our guide stands out. The Mometrix test prep team has provided plenty of Common Core practice test questions to prepare you for what to expect on the actual exam. Each answer is explained in depth, in order to make the principles and reasoning behind it crystal clear. Many concepts include links to online review videos where you can watch our instructors break down the topics so the material can be quickly grasped. Examples are worked step-by-step so you see exactly what to do. We've helped hundreds of thousands of people pass standardized tests and achieve their education and career goals. 
We've done this by setting high standards for Mometrix Test Preparation guides, and our Common Core High School: Geometry Secrets Study Guide is no exception. It's an excellent investment in your future. Get the Common Core review you need to be successful on your exam.
If the position vectors of the vertices A,B and C of a △ABC are... | Filo
Question asked by Filo student: If the position vectors of the vertices and of a are respectively and , then the position vector of the point, where the bisector of meets is :
Updated On: Oct 18, 2022. Topic: Vector and 3D. Subject: Mathematics. Class: Class 12. Answer Type: Video solution (1), avg. duration 7 min. Upvotes: 89.
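The standard result behind questions of this type is that the internal bisector of the angle at A meets BC at the point dividing BC in the ratio AB : AC, so its position vector is (|AB|·C + |AC|·B)/(|AB| + |AC|). A hedged Python sketch of this formula follows; the coordinates are made up for illustration, since the actual vectors in the question above are not given here:

```python
import math

def bisector_foot(A, B, C):
    """Point where the internal bisector of angle A meets side BC.

    BD : DC = AB : AC, hence D = (|AB| * C + |AC| * B) / (|AB| + |AC|).
    """
    def dist(P, Q):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(P, Q)))

    c = dist(A, B)  # length |AB|
    b = dist(A, C)  # length |AC|
    return tuple((c * cc + b * bb) / (b + c) for bb, cc in zip(B, C))

# Example triangle (hypothetical coordinates, not from the original question)
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
print(bisector_foot(A, B, C))  # (12/7, 12/7): divides BC in ratio 4 : 3
```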
In-Depth Analysis of Vector Function Derivatives: Theory and Practical Applications Derivatives of vector-valued functions extend the concept of differentiation to functions that output vectors instead of scalars. They describe the rate of change of vectors with respect to a parameter, typically representing motion or change in multiple dimensions. This concept is fundamental in physics and engineering for analyzing velocity, acceleration, and the behavior of dynamic systems. Understanding vector derivatives enables the study of complex trajectories and forces in multidimensional spaces. Vector function derivatives quantify how vector-valued functions change over time or with respect to a parameter. These functions, mapping real numbers to vectors in multi-dimensional space, allow detailed analysis of complex dynamic systems. Derivatives are calculated component-wise, leading to a new vector where each component is the derivative of the original function’s corresponding component. This mathematical tool is pivotal in fields like physics for understanding motion through velocity and acceleration vectors, and in engineering for analyzing forces and designing control systems. Mathematics of Vector Function Derivatives When studying the derivatives of vector-valued functions, each component of the vector function is differentiated independently with respect to the parameter, typically time or another scalar variable. The process adheres to the general rules of differentiation applied to vector components. Given a vector-valued function \( \mathbf{r}(t) \) defined as: \( \mathbf{r}(t) = \left\langle f_1(t), f_2(t), \dots, f_n(t) \right\rangle \) where \( f_i(t) \) represents the scalar component functions. 
Derivative Calculation: The derivative of \( \mathbf{r}(t) \) with respect to \( t \) is obtained by differentiating each component function: \( \mathbf{r}'(t) = \left\langle f_1'(t), f_2'(t), \dots, f_n'(t) \right\rangle \) Each component \( f_i'(t) \) is the derivative of \( f_i(t) \), calculated using standard differentiation rules. Higher-Order Derivatives: The second derivative of \( \mathbf{r}(t) \) involves differentiating \( \mathbf{r}'(t) \): \( \mathbf{r}''(t) = \left\langle f_1''(t), f_2''(t), \dots, f_n''(t) \right\rangle \) Consider a vector function \( \mathbf{r}(t) = \left\langle t^2, \sin t, e^t \right\rangle \). The first derivative is: \( \mathbf{r}'(t) = \left\langle 2t, \cos t, e^t \right\rangle \) The second derivative is: \( \mathbf{r}''(t) = \left\langle 2, -\sin t, e^t \right\rangle \) These derivatives help describe the motion of a particle in space, indicating how its position changes with velocity \( \mathbf{r}'(t) \) and acceleration \( \mathbf{r}''(t) \).
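The worked example with \( \mathbf{r}(t) = \left\langle t^2, \sin t, e^t \right\rangle \) can be sanity-checked numerically. A small Python sketch (illustrative only) compares a component-wise central finite difference against the analytic first derivative at t = 1:

```python
import math

def r(t):
    # the example vector function <t^2, sin t, e^t>
    return (t ** 2, math.sin(t), math.exp(t))

def r_prime(t, h=1e-6):
    # central finite differences, applied component-wise
    return tuple((hi - lo) / (2 * h) for lo, hi in zip(r(t - h), r(t + h)))

t = 1.0
numeric = r_prime(t)
analytic = (2 * t, math.cos(t), math.exp(t))  # <2t, cos t, e^t> at t = 1
print(all(abs(n - a) < 1e-5 for n, a in zip(numeric, analytic)))  # True
```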
Question: The article “Expectation Analysis of the Probability of Failure for Water Supply Pipes” T propose… | Assignment Writing Service: EssayNICE
The article "Expectation Analysis of the Probability of Failure for Water Supply Pipes" proposed using the Poisson distribution to model the number of failures in pipelines of various types. Suppose that for cast-iron pipe of a particular length, the expected number of failures is 1 (very close to one of the cases considered in the article). Then X, the number of failures, has a Poisson distribution with μ = 1. (Round your answers to three decimal places.)
(a) Obtain P(X ≤ 5) by using the Cumulative Poisson Probabilities table in the Appendix of Tables.
(b) Determine P(X = 2) from the pmf formula. Then determine P(X = 2) from the Cumulative Poisson Probabilities table in the Appendix of Tables.
(c) Determine P(2 ≤ X ≤ 4).
(d) What is the probability that X exceeds its mean value by more than one standard deviation?
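With μ = 1, all four requested probabilities can be computed directly from the Poisson pmf instead of the appendix table. A quick Python check, rounded to three decimals as the problem asks (for part (d), the standard deviation of a Poisson distribution is √μ = 1, so "exceeds its mean by more than one standard deviation" means X ≥ 3):

```python
import math

mu = 1.0

def pmf(k, mu=mu):
    # Poisson pmf: e^(-mu) * mu^k / k!
    return math.exp(-mu) * mu ** k / math.factorial(k)

def cdf(k, mu=mu):
    # cumulative probability P(X <= k)
    return sum(pmf(i, mu) for i in range(k + 1))

print(round(cdf(5), 3))           # (a) P(X <= 5) = 0.999
print(round(pmf(2), 3))           # (b) P(X = 2) = 0.184
print(round(cdf(4) - cdf(1), 3))  # (c) P(2 <= X <= 4) = 0.261
sigma = math.sqrt(mu)             # standard deviation = sqrt(mu) = 1
print(round(1 - cdf(2), 3))       # (d) P(X > mu + sigma) = P(X >= 3) = 0.080
```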
Dart function — Recursive Functions
A recursive function is a function that calls itself. A recursive function is used for repeating code. If the state of the function does not change, and it has no preset exit condition that is reached, a recursive call can turn into an infinite loop.

void forever() {
  forever(); // no state change and no exit condition: never terminates
}

forever() is indeed an infinite loop. Here is a recursive function that will repeatedly add one String to another String.

String addOn(String original, String additional, int times) {
  if (times <= 0) {
    // exit condition to end "recursive loop"
    return original;
  }
  return addOn(original + additional, additional, times - 1); // recursive call
}

Read more : https://softwarezay.com/notes/396_dart-function-recursive-functions
How to prove that a given triangle with two sides a and b has max. area when it is a right triangle?
Differential Calculus > Maxima and minima
1 Answer
Badiuddin, askIITians.ismu Expert (Last Activity: 14 Years ago)
Dear Mayank,
The area of the triangle is (1/2) a b sin C, where C is the angle included between the sides a and b. The area is maximum when sin C = 1, i.e. C = 90°, so the triangle of maximum area is a right triangle.
Please feel free to post as many doubts on our discussion forum as you can. If you find any question difficult to understand - post it here and we will get you the answer and detailed solution very quickly. We are all IITians and here to help you in your IIT JEE preparation. All the best.
Askiitians Experts
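The maximization can also be confirmed numerically: sample the included angle over (0, π) and check that the area (1/2)ab·sin θ peaks at 90°. A short Python sketch, with made-up side lengths a = 3 and b = 4:

```python
import math

a, b = 3.0, 4.0  # two fixed side lengths (example values)
thetas = [k * math.pi / 1800 for k in range(1, 1800)]  # included angles in (0, pi)
areas = [0.5 * a * b * math.sin(t) for t in thetas]

best = thetas[areas.index(max(areas))]
print(abs(best - math.pi / 2) < 1e-9)  # True: maximum occurs at 90 degrees
print(max(areas))                       # 6.0 = (1/2) * 3 * 4 * sin(90 deg)
```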
Write Excel - Rounding integer strand to 100's digit
I have a process that ends by writing results to an excel document. The output is account numbers meeting a certain criteria. When viewed through the results port, the entire account number shows up (Ex.1). However when the process ends with the "write excel" operator, it is rounding the account number to the nearest hundred (Ex.2). I have the operator set up to write to .xlsx and show all numerical fields as whole numbers (Ex.3). Any idea what would be causing this or ways to circumvent it?
Best Answer
If you say "account numbers meeting a certain criteria", is it that the account numbers are texts? Because a workaround would be to work with these account numbers as text/string and not as numbers. I can confirm that when generating numbers, those must stay within a range of 15 digits, which is less than ideal for other situations where a number can be large enough to overflow the definition of bigint/decimal. Am I doing anything wrong here?
All the best,
Does the same thing happen if you store the data into a CSV file instead of an Excel one? If you do, it's a limitation coming from the library. I say that because I hadn't any problems with storing data into PostgreSQL with a massive number of decimals. Let's see how it develops from this.
All the best,
It does not occur when writing to a CSV. Unfortunately I am writing a multi-tab Excel workbook, so the excel operator is preferred over the CSV operator in this case.
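The "15 digits" mentioned in the answer matches the precision of IEEE-754 double-precision floats, which represent integers exactly only up to about 15-16 significant digits; past that, nearby integers collapse onto the same float, which looks exactly like "rounding" long account numbers. A quick Python illustration (not RapidMiner-specific) of why storing the values as text avoids the problem:

```python
acct = 1234567890123456789  # 19-digit account number (example value)

as_float = float(acct)          # what happens when the value becomes a double
print(int(as_float) == acct)    # False: digits beyond double precision are lost
print(str(acct))                # storing as text preserves every digit
```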
Convective term in heat equation for rotating solids Hi Abishek, Even though you've sent me the case via PM, I'll answer most of the questions here in public, without disclosing the case itself. Originally Posted by vabishek As I had mentioned earlier, the solver seems to work fine when all the solid bodies rotate at the same speed and in the same direction. I have issues with cyclic boundaries when the solid is stationary wrt the fluid region. Both these folders have an Allrun script that you can run. I also attached a sample image for each of these cases in the respective tarballs. It shows the temperature profile in the circumferential direction (X-axis) near the solid1-heatsource interface. In the case of the rotating solid, the temperature profile is perfectly symmetric and the slopes of temperature near the wall are similar/continuous. OK, the images are the same as in the initial post. Originally Posted by vabishek However, when the convective term in the solid equation is active, i.e. the stationary_solid case, the slopes of temperature become dissimilar/non-continuous (see stationary_solid/solid1_temp_profile_mean_radius.png). I plotted this profile along the x-axis at the mean radius of the geometry at z = 0.0009. So, the starting point is (-0.00505630066618323, 0.0795000046491623, 0.0009) and the end point is (0.00505630066618323, 0.0795000046491623, 0.0009), to be precise. So essentially the temperature plot is near one of the solids and the axis used for the plot is aligned with the rotation direction itself. The first problem I noticed is that the plot axes in the images do not match the dimensions you stated: "-0.0050563 to 0.0050563" doesn't match up with plots that show "0.0 to 0.01xx"... unless you transposed the X axis by "+0.0050563". From what I can figure out, the problem doesn't seem to be in the solver itself; the temperature plot seems to reflect the flow being rotated as a result of the Coriolis effect. 
You still defined in the file "constant/S1SRFProperties" that the fluid domain is rotating; even though you defined the other two solids as stationary, the domain itself is still rotating. In addition, there is nothing stating whether the region "HeatSource" is static or not, i.e. "constant/S4SRFProperties" doesn't exist... so I guess it's using the definition that is in "constant/SRFProperties"? Which is also rotating? In other words, even though the solids are not rotating, the heat source is rotating, and so is the fluid domain; therefore the temperature distribution seems legitimate for the stationary solids, because the flow is being pushed against the heat source on one side and pulled away on the other side. By the way, the solver wasn't in the shared folder. But it's for the best, because I hopefully diagnosed this correctly after looking at the mesh, how the parts are connected, and how the boundaries and SRF settings were defined. In addition, you might want to double-check if the diagnosis about the flow is correct. Use an already existing SRF solver to test the flow for the fluid part, i.e. to confirm if the flow profile with static solids is correct in your solver. Best regards,
What To Do If I Hate Linear Programming? | AcademicHelp.net
Image source: unsplash.com By Desola Lanre-Ologun
While Linear Programming might elicit frustration for some, its nuanced intricacies and historical battles offer a fascinating study. Meanwhile, Inter-universal Teichmüller theory captivates minds by presenting a unique intersection of number theory and geometry. Exploring these contrasting subjects, we unravel the diverse feelings towards these mathematical theories and shed light on the fascinating worlds they unveil.
Key Takeaways:
• Linear Programming’s rich history offers an engaging study despite its complexity.
• Inter-universal Teichmüller theory unveils new mathematical terrains, bridging disparate areas.
• Both subjects exemplify the diversity and intrigue within mathematics.
How to Have Fun With Linear Programming
Linear Programming (LP) might seem like an unwelcome visitor to some – an unexpected step-parent trying to fit into an established family dynamic. It’s an intricate aspect of mathematics that can seem daunting and alien, especially when confronted with complex concepts like the inverse matrix product of matrices with varying m and n dimensions. Visualizing rotations in this space can make even the bravest mathematicians break out in a cold sweat. For some, falling behind in grasping these concepts can take the shine off what might initially be an enjoyable subject. But let’s not forget, LP has a rich history marked by a fascinating battle of methods – the simplex versus the interior-point methods. This struggle for superiority offers not only academic excitement but also provides interesting visualizations, making the field captivating to those with a keen interest. It’s this dichotomy of experiences that makes Linear Programming an intriguing subject, demanding both persistence and intellectual curiosity from its pursuers. 
The Teichmüller Theory
The realm of Inter-universal Teichmüller theory offers a stark contrast. Its essence lies in the convergence of seemingly disparate areas of mathematics, notably number theory and geometry. This theory imagines universes, each governed by unique mathematical systems, laws, and principles. By probing these universes and their interactions, we transcend traditional mathematical explorations, embarking on a thrilling journey of discovery.
Image source: maths.wizert.com
What sets Inter-universal Teichmüller theory apart is its bridge-like characteristic. It allows for the unification of diverse mathematical landscapes, fostering connections in unexpected ways. Its beauty lies in its ability to reveal uncharted territories of knowledge, pushing the frontiers of understanding. It offers a compelling exploration, akin to a thrilling expedition across distant lands, seeking undiscovered truths hidden within the mathematical fabric of reality. This approach to mathematics propels us to venture beyond the known, cultivating our appetite for intellectual exploration.
5 Mathematical Concepts Every Programmer Should Know
The world of programming, while largely revolving around logical thinking and problem-solving, does contain several areas where mathematical knowledge proves incredibly valuable. Here are five key mathematical concepts that every programmer should be familiar with.
Boolean Algebra: At the core of computer systems, Boolean algebra deals with binary variables and logical operations. Understanding this concept enables programmers to craft conditional statements, perform bit manipulation, and manage control flow with efficiency.

x = 5
y = 10
if y > x:
    print("y is greater than x")
else:
    print("x is equal to or greater than y")

Linear Algebra: Linear algebra forms the basis for many machine learning algorithms and computer graphics. 
A clear understanding of vectors, matrices, and their operations can help programmers manipulate data, develop efficient algorithms, and handle multidimensional spaces.

import numpy as np

# Initialize matrices
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# Perform matrix multiplication
C = np.dot(A, B)

Probability and Statistics: These concepts form the backbone of data analysis and machine learning. Knowledge of probabilities, distributions, and statistical tests enables programmers to make sense of data, draw insights, and develop predictive models.

import numpy as np
from scipy import stats

# Define a list of data points
data = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]

# Calculate mean and mode
mean = np.mean(data)
mode = stats.mode(data)
print("Mean: ", mean)
print("Mode: ", mode.mode[0])

Discrete Mathematics: This includes a wide range of topics such as graph theory, set theory, and combinatorics. Discrete mathematics is essential in algorithmic thinking, database management, and network design.

# Graph represented as a dictionary
graph = {'A': {'B': 1, 'C': 3}, 'B': {'A': 1, 'D': 5},
         'C': {'A': 3}, 'D': {'B': 5}}

# Dijkstra's algorithm implemented
def dijkstra(graph, start):
    # Code for the algorithm...

Calculus: While not used directly in most programming, understanding calculus, especially concepts of differentiation and integration, is crucial in machine learning, optimization problems, and even in game development for physics simulations.

import autograd.numpy as np
from autograd import grad

# Define a function
def f(x):
    return x**2 + 2*x + 1

# Get its derivative
df = grad(f)
print(df(1.0))  # output: 4.0
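The `dijkstra` function in the discrete-mathematics example above is left as a stub in the article. One possible completion, using the article's own example graph (a sketch, not the article's original code), with a priority queue from the standard library:

```python
import heapq

def dijkstra(graph, start):
    """Return the shortest-path distance from `start` to every reachable node."""
    dist = {start: 0}
    pq = [(0, start)]  # priority queue of (distance, node)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale entry: a shorter path was already found
        for neighbor, weight in graph[node].items():
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(pq, (nd, neighbor))
    return dist

graph = {'A': {'B': 1, 'C': 3}, 'B': {'A': 1, 'D': 5},
         'C': {'A': 3}, 'D': {'B': 5}}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```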
While not every programmer will use all of these concepts regularly, having a foundational understanding of these areas can open doors to a wide range of possibilities, contributing to a more comprehensive and flexible skill set.
\documentclass{article}
\usepackage[utf8]{inputenc}

\title{Practice Submission Prior to Assignment 1}
\author{Dung Le - 214583181}
\date{May 2019}

\begin{document}

\maketitle

\subsection*{Question 1:}
From Rosen, Discrete Mathematics and its Applications: When three professors are seated in a restaurant, the hostess asks them: “Does everyone want coffee?” The first professor says: “I do not know.” The second professor then says: “I do not know.” Finally, the third professor says: “No, not everyone wants coffee.” The hostess comes back and gives coffee to the professors who want it. How did she figure out who wanted coffee?

\subsection*{Answer:}
The question is: “Does everyone want coffee?” The first professor wants coffee, because if he did not, he would know that not everyone wants coffee and would have answered “no”. Similarly, the second professor wants coffee too. The third professor knows that the first two professors want coffee, and his answer is “no, not everyone wants coffee”, which means he himself does not want coffee. In conclusion, the first two professors want coffee and the third does not.

\subsection*{Question 2:}
This problem is adapted from Problem 4, page 10 of the Liebeck text. Start with the statement A $\rightarrow$ B. Which of the following statements does this statement imply? You should be able to explain your answers using ordinary language. \\
(a) A is true or B is true \\
(b) $\overline{A} \rightarrow B$ \\
(c) B $\rightarrow$ A \\
(d) $\overline{B} \rightarrow A$ \\
(e) $\overline{A} \rightarrow \overline{B}$ \\
(f) $\overline{B} \rightarrow \overline{A}$

\subsection*{Answer:}
The statement A $\rightarrow$ B means A implies B. \\
This statement is false if and only if A is true and B is false. \\
Looking at the given statements, we can see that statement (f) is the correct choice, because (f) is false if and only if the negation of B is true and the negation of A is false, which means B is false and A is true.
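As a worked check (an added illustration, not part of the original submission), a truth table shows that the two statements agree in every case:
\begin{center}
\begin{tabular}{cc|cc}
$A$ & $B$ & $A \rightarrow B$ & $\overline{B} \rightarrow \overline{A}$ \\
\hline
T & T & T & T \\
T & F & F & F \\
F & T & T & T \\
F & F & T & T
\end{tabular}
\end{center}
The two right-hand columns are identical, so $A \rightarrow B$ and its contrapositive $\overline{B} \rightarrow \overline{A}$ are logically equivalent.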
\\ Therefore, statement (f) $\overline{B} \rightarrow \overline{A}$ is equivalent to the statement A $\rightarrow$ B.

\end{document}
Break even analysis function on Excel

Since we want to round up, we chose to use Excel's ROUNDUP function: =ROUNDUP("total fixed costs"/"average contribution margin",0).

Calculating the break-even point for Google Adwords: this article describes one way to model Adwords profitability in Excel.

Desired Profit in Units, Break-even Point in Sales Dollars: the sales needed to earn a desired profit of $1,200 per week are shown by the break-even formula.

Break even analysis in economics, financial modeling, and cost accounting refers to the point at which total cost equals total revenue. A free Excel template is available for download.

Determining the break-even point for your products gives you useful information; Excel's table feature can store this data, making customization easier.

The break-even point (BEP) in economics, business, and specifically cost accounting, is the point at which total cost and total revenue are equal. The data used in these formulas come either from accounting records or from various estimation techniques.

A break-even calculator will help you run the reports and analysis you need for your business (a version is available for Excel 2007+). You can also set up an Excel sheet that calculates the break-even point directly from the break-even formula.

An X-Y chart can present break-even analysis visually, a simple profit-volume and break-even workbook can do the calculations, and Excel's Scenario Manager includes a Summary feature you can use as well. A simple spreadsheet will do break-even calculations automatically: enter a few numbers, press a button, and Excel will do the rest, producing a break-even analysis graph.

The break-even point is the point when your business's total revenues equal its total expenses. Your business is "breaking even" — not making a profit, not taking a loss.

Break-Even Analysis in Excel: there are a number of ways to model it. The two most useful are creating a break-even calculator or using Goal Seek, a built-in Excel tool.

To do break-even analysis with the Goal Seek feature: (1) specify the Set Cell as the Profit cell, in our case Cell B7; (2) specify the To value as 0; (3) specify the By changing cell as the Unit Price cell, in our case Cell B1; (4) click the OK button; (5) the Goal Seek Status dialog then appears.

To calculate break-even analysis in a worksheet by hand: 1. Open the break-even data worksheet and make columns for the sales volume and a multiplier. 2. Fill in the expected number of sales in the first column, usually from the second row down. 3. In the third column, fill in the unit sales price.

A break-even point analysis is used to determine the number of units or dollars of revenue needed to cover total costs (fixed and variable costs). Cost can be classified in several ways depending on its nature; one of the most popular classifications is into fixed costs and variable costs.

This has been a guide to break-even analysis in Excel, covering the Goal Seek tool and the construction of a break-even table, with examples and a downloadable Excel template. You may also look at related topics such as NPV break-even analysis and the break-even point in accounting.

Break-even analysis in Excel using variables like contribution margin, fixed costs, and variable cost is quick and easy; a company is said to break even when its total expenses equal its total revenue.

The payback period is the time it will take to break even on your investment. In break-even analyses in which we are solving for the break-even price or number of sales, the payback period is defined ahead of time. Depending on the rate of change in your market, this may be a few months or a few years.

Break-even analysis is a tool for evaluating the profit potential of a business model and for evaluating various pricing strategies. You can easily compile fixed costs, variable costs, and pricing options in Excel to determine the break-even point for your product. At the break-even point, the product has no profit, but you're covering your costs.

I have to break even with 200 ticket sales, but I can sell a maximum of non-member tickets, then the formula is =expenses/(
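The recurring formula in these snippets — break-even units equal fixed costs divided by the contribution margin per unit, rounded up — can be sketched outside Excel as well. The following Python function is my own illustration (the names and the example figures are not from any of the quoted sources):

```python
import math

def break_even_units(fixed_costs, price_per_unit, variable_cost_per_unit):
    """Units that must be sold before revenue covers all costs (rounded up)."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    if contribution_margin <= 0:
        raise ValueError("price per unit must exceed variable cost per unit")
    # Equivalent of =ROUNDUP(fixed_costs / contribution_margin, 0) in Excel
    return math.ceil(fixed_costs / contribution_margin)

# Example: £1,200 of fixed costs, £5 selling price, £2 variable cost per unit
print(break_even_units(1200, 5, 2))  # 400
```

Goal Seek arrives at the same answer numerically by driving the profit cell to zero; the closed-form division above is simply the same equation solved algebraically.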
Review of Short Phrases and Links

This review contains major "Curvature"-related terms, short phrases and links grouped together in the form of an encyclopedia article.

1. Curvature is a rigid, geometric property, in that it is not preserved by general diffeomorphisms of the surface.
2. Curvature is the amount by which a geometric object deviates from being flat.
3. Curvature - The curvature formed by the commutator of covariant derivatives will then have terms involving the holonomic partial derivative of the vielbein.
4. Curvature is the central concept in "Riemann-Finsler geometry".
5. Curvature is a 2-form integrated over 2-dimensional sub-manifolds of spacetime.

1. The set of all Riemannian manifolds with positive Ricci curvature and diameter at most D is pre-compact in the Gromov-Hausdorff metric.
2. If a compact Riemannian manifold has positive Ricci curvature then its fundamental group is finite.
3. Quaternion-Kähler manifolds appear in Berger's list of Riemannian holonomies as the only manifolds of special holonomy with non-zero Ricci curvature.
4. If the injectivity radius of a compact n-dimensional Riemannian manifold is at least π, then the average scalar curvature is at most n(n-1).

1. The sphere has the smallest total mean curvature among all convex solids with a given surface area.
2. Hereafter, the concept of the mean curvature of the second fundamental form is introduced.

1. The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, k₁k₂.
2. The Theorema Egregium ('Remarkable Theorem') is an important theorem of Carl Friedrich Gauss concerning the curvature of surfaces.
3. Gauss's theorema egregium states that the Gaussian curvature of a surface embedded in three-space may be understood intrinsically to that surface.

1. In fact there is no force in such a model; rather, matter is simply responding to the curvature of spacetime itself.
2. No.
Curvature does not fully determine the shape of the Universe.

1. Curvature of Riemannian manifolds for generalizations of Gauss curvature to higher-dimensional Riemannian manifolds.
2. The Gauss curvature of a Frenet ribbon vanishes, and so it is a developable surface.

1. If a complete n-dimensional Riemannian manifold has nonnegative Ricci curvature and a straight line (i.e.
2. It implies that any two points of a simply connected complete Riemannian manifold with nonpositive sectional curvature are joined by a unique geodesic.
3. Here is a short list of global results concerning manifolds with positive Ricci curvature; see also classical theorems of Riemannian geometry.

1. An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r.
2. This is used by Riemann to generalize the (intrinsic) notion of curvature to higher-dimensional spaces.

1. This metric can be used to interconvert vectors and covectors, and to define a rank-4 Riemann curvature tensor.
2. These three identities form a complete list of symmetries of the curvature tensor.
3. Where R denotes the curvature tensor, the result does not depend on the choice of orthonormal basis.

1. Gauss map for more geometric properties of Gauss curvature.
2. The Jacobian of the Gauss map is equal to the Gauss curvature; the differential of the Gauss map is called the shape operator.

1. Formally, Gaussian curvature only depends on the Riemannian metric of the surface.
2. Any smooth manifold of dimension at least 3 admits a Riemannian metric with negative Ricci curvature[1].

1. A helix has constant curvature and constant torsion.
2. The name "pseudosphere" comes about because it is a two-dimensional surface of constant curvature.

1. The Frenet-Serret formulas apply to curves which are non-degenerate, which roughly means that they have curvature.
2.
From just the curvature and torsion, the vector fields for the tangent, normal, and binormal vectors can be derived using the Frenet-Serret formulas.

1. The above definition of Gaussian curvature is extrinsic in that it uses the surface's embedding in R³, normal vectors, external planes etc.
2. This is the normal curvature, and it varies with the choice of the tangent vector.
3. The principal curvatures are the maximum and minimum normal curvatures at a point on a surface.

1. It has constant negative Gauss curvature, except at the cusp, and therefore is locally isometric to a hyperbolic plane (everywhere except for the cusp).
2. Two smooth ("cornerless") surfaces with the same constant Gaussian curvature are locally isometric.

1. The area of the image of the Gauss map is called the total curvature and is equivalent to the surface integral of the Gaussian curvature.
2. The Gauss-Bonnet theorem links the total curvature of a surface to its topological properties.
3. We construct embedded minimal surfaces of finite total curvature in Euclidean space by gluing catenoids and planes.

1. Moreover, we study the value distribution of the hyperbolic Gauss map for complete constant mean curvature one faces in de Sitter three-space.
2. I will discuss how this works in a simple example, namely the Gauss map of a constant mean curvature torus (i.e. a toroidal soap bubble).

1. Hilbert proved that every isometrically embedded closed surface must have a point of positive curvature.
2. A positive curvature corresponds to the inverse square radius of curvature; an example is a sphere or hypersphere.

1. The normal vector, sometimes called the curvature vector, indicates the deviance of the curve from being a straight line.
2. The second generalized curvature χ₂(t) is called torsion and measures the deviance of γ from being a plane curve.
3.
The first generalized curvature χ₁(t) is called curvature and measures the deviance of γ from being a straight line relative to the osculating plane.

1. Curvature vector and geodesic curvature for appropriate notions of curvature of curves in Riemannian manifolds, of any dimension.
2. Also the curvature, torsion and geodesics can be defined only in terms of the covariant derivative.
3. There is additional material on Map Colouring, Holonomy and geodesic curvature and various additions to existing sections.

1. His theorema egregium gives a method for computing the curvature of a surface without considering the ambient space in which the surface lies.
2. All these other surfaces would have boundaries and the sphere is the only surface without boundary with constant positive Gaussian curvature.
3. If a curved surface is developed upon any other surface whatever, the measure of curvature in each point remains unchanged.
4. Euler proved that a minimal surface is planar iff its Gaussian curvature is zero at every point, so that it is locally saddle-shaped.

1. Gaussian curvature is the product of these maximum and minimum values.
2. The principal directions specify the directions that a curve embedded in the surface must travel to have the maximum and minimum curvature.

1. In the differential geometry of curves, the evolute of a curve is the locus of all its centers of curvature.
2. From this we can derive the Cesàro equation as κ = g′(R), where κ is the curvature of the evolute.
3. Similarly, when the original curve has a cusp where the radius of curvature is 0, then the evolute will touch the original curve.

1. See also connection form and curvature form.
2. We find a simple condition on the curvature form which ensures that there exists a smooth mean value surface solution.
3. I think I can claim at least a third of the articles Cartan connection, Ehresmann connection, Connection (mathematics), Curvature form.

1.
The pseudosphere is an example of a surface with constant negative Gaussian curvature.
2. The pseudosphere is a surface of revolution of constant negative curvature.

1. It can be considered as an alternative to or generalization of the curvature tensor in Riemannian geometry.
2. The following articles might serve as a rough introduction: Metric tensor, Riemannian manifold, Levi-Civita connection, Curvature, Curvature tensor.

1. Minimal surfaces are the critical points for the mean curvature flow: these are both characterized as surfaces with vanishing mean curvature.
2. The most familiar example of mean curvature flow is in the evolution of soap films.

1. The Riemannian curvature tensor is an important pointwise invariant associated to a Riemannian manifold that measures how close it is to being flat.
2. Intuitively, curvature is the amount by which a geometric object deviates from being flat, but this is defined in different ways depending on the context.
3. Of course, a flat Cartan geometry should be a geometry without curvature.

1. This approach contrasts with the extrinsic point of view, where curvature means the way a space bends within a larger space.
2. More formally, the curvature is the differential of the holonomy action at the identity of the holonomy group.
3. At an umbilic all the sectional curvatures are equal; in particular, the principal curvatures are equal.
4. The primordial example of extrinsic curvature is that of a circle, which has curvature equal to the inverse of its radius everywhere.
5. Further, the curvature of a smooth curve is defined as the curvature of its osculating circle at each point.
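One of the snippets above notes that the Gauss-Bonnet theorem links the total curvature of a surface to its topological properties. Written out (a standard statement, added here for concreteness): for a compact surface $M$ without boundary, with Gaussian curvature $K$ and Euler characteristic $\chi(M)$,

\[ \int_M K \, dA = 2\pi\,\chi(M). \]

As a sanity check, the unit sphere has $K = 1$ everywhere, surface area $4\pi$, and $\chi = 2$, so both sides equal $4\pi$.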
Understanding Mathematical Functions: Is 2 = 2 a Function?

Introduction to Mathematical Functions

Mathematical functions are a fundamental concept in mathematics that play a crucial role in various fields such as science, engineering, economics, and more. Understanding functions is essential for solving problems and making predictions in these disciplines. In this chapter, we will explore the definition of a mathematical function, discuss the importance of understanding functions in various fields, and give a brief overview of the function concept in mathematics.

A. Definition of a mathematical function

A mathematical function is a relation between a set of inputs (called the domain) and a set of outputs (called the range), such that each input is related to exactly one output. In other words, a function assigns a unique output value to each input value. The most common way to represent a function is with a formula or an equation. For example, the function f(x) = 2x is a simple linear function where each input value x is multiplied by 2 to get the corresponding output value.

B. Importance of understanding functions in various fields

Understanding functions is crucial in various fields such as science, engineering, economics, and computer science. Functions are used to model relationships between variables, make predictions, analyze data, and solve complex problems. For example, in physics, functions are used to describe the motion of objects, the flow of fluids, and the behavior of waves. In economics, functions are used to model supply and demand, production costs, and market trends. In computer science, functions are used to write algorithms, analyze data structures, and develop software applications.

C. Brief overview of the function concept in mathematics

In mathematics, functions are typically represented by symbols such as f(x), g(x), or h(x), where f, g, and h are the names of the functions, and x is the input variable.
Functions can be classified into different types based on their properties and behavior, such as linear functions, quadratic functions, exponential functions, trigonometric functions, and more. Functions can be graphed on a coordinate plane to visualize their relationships and analyze their behavior. Understanding the concept of functions is essential for mastering advanced topics in mathematics such as calculus, linear algebra, and differential equations.

Key Takeaways

• Definition of a mathematical function
• Understanding input and output relationships
• Is 2 = 2 a function?
• Exploring the concept of a function
• Conclusion on the nature of 2 = 2

Understanding the Basic Concept of Functions

Functions are a fundamental concept in mathematics that describe the relationship between inputs and outputs. They are essential tools for modeling real-world phenomena and solving mathematical problems. In this chapter, we will explore the basic concept of functions, including defining domain and range, how functions map inputs to outputs, and examples of simple functions.

Defining domain and range

Domain refers to the set of all possible inputs or independent variables of a function. It is the set of values for which the function is defined. For example, in the function f(x) = x^2, the domain is all real numbers because the function is defined for any real number input.

Range refers to the set of all possible outputs or dependent variables of a function. It is the set of values that the function can produce as outputs. Using the same example f(x) = x^2, the range is all non-negative real numbers because the function outputs only non-negative values.

How functions map inputs to outputs

Functions map inputs to outputs in a specific way. Each input value is associated with exactly one output value. This means that for every input in the domain, there is a unique output in the range. This property is what makes a relation a function; it should not be confused with a one-to-one correspondence, in which distinct inputs must also have distinct outputs.
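The "exactly one output per input" requirement can be checked mechanically for a finite relation. The following Python sketch is my own illustration (the function and variable names are not from the post):

```python
def is_function(relation):
    """Return True if a finite relation (a set of (input, output) pairs)
    maps each input to exactly one output."""
    outputs = {}
    for x, y in relation:
        if x in outputs and outputs[x] != y:
            return False  # one input mapped to two different outputs
        outputs[x] = y
    return True

squares = {(1, 1), (2, 4), (3, 9)}    # f(x) = x^2 restricted to {1, 2, 3}
not_a_function = {(1, 1), (1, -1)}    # the input 1 relates to both 1 and -1

print(is_function(squares))           # True
print(is_function(not_a_function))    # False
```

Note that `squares` maps both 1 and... no other input to the same output only by coincidence; repeated outputs for different inputs are perfectly fine, which is exactly why "function" is a weaker condition than "one-to-one correspondence".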
For example, consider the function f(x) = 2x. If we input x = 3, the function will output f(3) = 2(3) = 6. This mapping of inputs to outputs is what defines a function.

Examples of simple functions

• Linear function: f(x) = mx + b, where m and b are constants. This function produces a straight line when graphed.
• Quadratic function: f(x) = ax^2 + bx + c, where a, b, and c are constants. This function produces a parabolic curve when graphed.
• Absolute value function: f(x) = |x|. This function outputs the absolute value of the input x, resulting in a V-shaped graph.

These simple examples illustrate the diversity of functions and how they can be used to model different types of relationships between variables. Understanding functions is essential for solving equations, analyzing data, and making predictions in various fields of study.

Is '2 = 2' a Function?

When it comes to mathematical functions, it is essential to understand the concept of equality and how it relates to the definition of a function. In this chapter, we will analyze the expression '2 = 2' and explore why it does not fit the standard definition of a function.

Analysis of the expression '2 = 2'

At first glance, the expression '2 = 2' may seem straightforward. It simply states that the number 2 is equal to the number 2. However, in the context of mathematical functions, this expression raises some important questions.

In mathematics, a function is a relation between a set of inputs and a set of possible outputs, where each input is related to exactly one output. When we look at the expression '2 = 2,' we see that it does not involve any inputs or outputs. It is simply a statement of equality between two numbers. In a function, there must be a clear mapping from inputs to outputs, which is not present in the expression '2 = 2.'

Understanding equality and functions

Equality is a fundamental concept in mathematics, but it is distinct from the concept of a function.
In mathematics, equality is a relationship between two quantities that are the same. For example, when we say '2 = 2,' we are asserting that the number 2 is identical to itself.

On the other hand, a function is a rule that assigns each element in a set of inputs to exactly one element in a set of outputs. Functions are used to describe relationships between variables and are essential in many areas of mathematics and science.

Explanation of why '2 = 2' does not fit the standard definition of a function

Based on the definition of a function as a relation between inputs and outputs, it is clear that the expression '2 = 2' does not qualify as a function. This is because there are no inputs or outputs involved in the expression, and there is no mapping between elements.

In a function, each input must be related to exactly one output, and there must be a clear rule or relationship that defines this mapping. The expression '2 = 2' does not meet these criteria and therefore cannot be considered a function in the mathematical sense.

Types of Functions in Mathematics

Functions in mathematics are essential tools used to describe relationships between variables. There are various types of functions, each with its unique characteristics and applications. Let's explore some common types of functions:

A. Overview of linear, quadratic, and exponential functions

• Linear Functions: Linear functions are characterized by a constant rate of change. They have a straight-line graph and can be represented by the equation y = mx + b, where m is the slope and b is the y-intercept.
• Quadratic Functions: Quadratic functions have a parabolic graph. They are represented by the equation y = ax^2 + bx + c, where a, b, and c are constants. Quadratic functions have a single vertex point.
• Exponential Functions: Exponential functions have a constant ratio between successive values.
They are represented by the equation y = a * b^x, where a is the initial value and b is the base of the exponential function.

B. Characteristics of each type of function

Each type of function has distinct characteristics that set them apart:

• Linear Functions: Linear functions have a constant rate of change and a straight-line graph. They exhibit a linear relationship between the variables.
• Quadratic Functions: Quadratic functions have a parabolic graph with a single vertex point. They exhibit a nonlinear relationship between the variables.
• Exponential Functions: Exponential functions have a constant ratio between successive values. They exhibit exponential growth or decay.

C. Examples where each type is utilized in real-world scenarios

Functions are used in various real-world scenarios to model and analyze relationships. Here are some examples of where each type of function is utilized:

• Linear Functions: Linear functions are used in economics to model supply and demand curves, in physics to describe motion, and in engineering to analyze systems with linear relationships.
• Quadratic Functions: Quadratic functions are used in physics to model projectile motion, in finance to analyze profit functions, and in biology to describe population growth.
• Exponential Functions: Exponential functions are used in finance to model compound interest, in biology to describe exponential growth of populations, and in chemistry to analyze radioactive decay.

Common Misconceptions About Functions

When it comes to understanding mathematical functions, there are several common misconceptions that can lead to confusion. Let's explore some of these misconceptions in more detail:

A. Misinterpretation of function notation

One common misconception is the misinterpretation of function notation. Functions are typically denoted by f(x), where f represents the function and x is the input variable.
Some people mistakenly believe that f(x) is a multiplication operation, when in fact it represents the output of the function when the input is x.

B. Confusing functions with equations

Another common misconception is confusing functions with equations. While functions can be represented by equations, not all equations represent functions. A function is a relation between a set of inputs and a set of outputs, where each input is related to exactly one output. Equations, on the other hand, can represent relationships between variables that may not necessarily be functions.

C. The misconception that all relations are functions

Some people mistakenly believe that all relations are functions. While all functions are relations, not all relations are functions. In order for a relation to be considered a function, each input must be related to exactly one output. If an input is related to more than one output, then the relation is not a function.

Practical Applications of Understanding Functions

Understanding mathematical functions is essential in various fields due to their ability to model relationships between variables. Let's explore some practical applications of functions in different fields.

Importance in computer science and programming

• Data Analysis: Functions are used to analyze and manipulate data in computer science. They help in processing information efficiently and making predictions based on patterns.
• Algorithm Design: Functions play a crucial role in designing algorithms for solving complex problems. They help in optimizing code and improving the performance of software applications.
• Software Development: Functions are the building blocks of software development. They are used to create reusable code modules that perform specific tasks, enhancing the efficiency and scalability of programs.

Applications in physics and engineering

• Modeling Physical Phenomena: Functions are used to model physical phenomena in physics and engineering.
They help in predicting the behavior of systems and analyzing the impact of different conditions.

• Control Systems: Functions are essential in designing control systems for various applications, such as robotics and automation. They help in regulating the behavior of systems and ensuring stability and performance.
• Signal Processing: Functions play a vital role in signal processing applications, such as filtering, modulation, and noise reduction. They help in analyzing and manipulating signals to extract useful information.

Functions in economic models and forecasting

• Economic Modeling: Functions are used in economic models to represent relationships between different economic variables, such as supply and demand, inflation, and GDP growth. They help in analyzing the impact of policy changes and predicting future trends.
• Forecasting: Functions are essential in forecasting models to predict future outcomes based on historical data. They help in making informed decisions and planning strategies for businesses and organizations.
• Risk Analysis: Functions are used in risk analysis models to assess the probability of different outcomes and mitigate potential risks. They help in evaluating the impact of uncertainties and making informed decisions.

Conclusion & Best Practices

A. Summary of key points about mathematical functions and the clarification of '2 = 2'
• Regularly working on problems and exercises can help reinforce understanding.

Utilizing Visual Aids:
• Visual aids such as graphs, charts, and diagrams can help in visualizing functions.
• Seeing the relationship between inputs and outputs visually can aid in comprehension.

Encouragement for further exploration of mathematical concepts beyond functions

Understanding mathematical functions is just the tip of the iceberg when it comes to exploring the vast world of mathematics. There are numerous other concepts and branches of mathematics waiting to be discovered and understood. Whether it's algebra, geometry, calculus, or beyond, delving deeper into mathematical concepts can be both challenging and rewarding. So, I encourage you to continue your mathematical journey and explore the beauty and complexity of this fascinating subject.
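The defining property discussed above, that each input is related to exactly one output, can be checked mechanically. A small sketch (function and variable names are ours, not from the post):

```python
def is_function(pairs):
    """Return True if the relation given as (input, output) pairs
    maps each input to exactly one output."""
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False          # one input, two different outputs
        seen[x] = y
    return True

squares = [(1, 1), (2, 4), (3, 9)]       # a function
sideways = [(4, 2), (4, -2), (9, 3)]     # fails: input 4 maps to 2 and to -2
print(is_function(squares), is_function(sideways))   # True False
```

The second relation is the kind of case described above: one input (4) related to two outputs, so it is a relation but not a function.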
Year 9 maths questions | TLC LIVE

At TLC LIVE, our tutors deliver thousands of hours of personalised, one-to-one maths tuition every year. Our secondary maths tuition makes use of our library of over 35,000 hours of targeted content, ensuring that tutors can create engaging and relevant sessions to suit every student’s unique needs. Here is a set of ten Year 9 maths questions as an example of our online maths tuition content. These Year 9 maths problems are a great way to test students’ knowledge and identify areas for improvement.

Ten sample Year 9 maths questions

1. Evaluate: y ÷ y = __
2. (Do not use a calculator) Complete the sum: 2/3 ÷ 1/5
3. Alex runs a market stall. He buys a radio for £60 and sells it for £75. What is his percentage profit?
4. This is a distance-time graph of Mary’s journey from her house to the shops and back again. What speed did Mary travel in the first 30 minutes of her journey?
5. The ratio of red sweets to black sweets in a bag is 3:8. There are 24 black sweets. How many red sweets are there?
6. (Do not use a calculator) Complete the sum: 633 ÷ 42.2
7. (You may use a calculator for this question) Find the area of this parallelogram.
8. This information is drawn onto a pie chart. Calculate the size of the angle representing robins.
9. (Do not use a calculator) Evaluate: 3.45 x 1.2
10. Expand and simplify: 2(2y – 4) + (y + 2)

Answers to the Year 9 maths questions

1. 1
2. 10/3 (= 3 1/3)
3. 25%
4. 1.6 m.p.h.
5. 9
6. 15
7. 72 cm²
8. 90°
9. 4.14
10. 5y – 6

Contact us for affordable, effective maths tuition

If you’re looking to help a Year 9 student build confidence in maths – or any other subject – TLC LIVE can help. Each one-hour online session directly connects the student with a qualified DBS (or equivalent) checked tutor for top-quality maths tuition. In addition to working directly with parents, we are a preferred tuition provider for over 500 schools nationwide and a government-certified quality assured tuition partner.
Get in touch today to find out more about online tutoring for schools. Written by Ryan Lockett, director of studies at TLC LIVE.
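The arithmetic in the questions above can be spot-checked with plain Python; `Fraction` avoids floating-point surprises with values like 42.2 and 3.45:

```python
from fractions import Fraction as F

assert F(2, 3) / F(1, 5) == F(10, 3)        # Q2 as printed: 2/3 divided by 1/5
assert (75 - 60) / 60 * 100 == 25           # Q3: percentage profit on the radio
assert 24 // 8 * 3 == 9                     # Q5: 3:8 ratio with 24 black sweets
assert F(633) / F('42.2') == 15             # Q6
assert F('3.45') * F('1.2') == F('4.14')    # Q9
# Q10: 2(2y - 4) + (y + 2) expands to 4y - 8 + y + 2 = 5y - 6
assert all(2 * (2 * y - 4) + (y + 2) == 5 * y - 6 for y in range(-5, 6))
print("all checks pass")
```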
Multidimensional Scaling (MDS) - Marketing Research Tools - Marketing Analytics: Data-Driven Techniques with Microsoft Excel (2014)

Part X. Marketing Research Tools

Chapter 38. Multidimensional Scaling (MDS)

Often a company wants to determine which industry brands are most similar and dissimilar to its own brand. To obtain this information a marketing analyst might ask potential customers to rate the similarity between different brands. Multidimensional scaling (MDS) enables you to transform similarity data into a one-, two-, or three-dimensional map, which preserves the ranking of the product similarities. From such a map a marketing analyst might find that, for example, Porsche and BMW are often rated as similar brands, whereas Porsche and Dodge are often rated as highly dissimilar brands. A chart generated by MDS in one or two dimensions can often be easily interpreted to tell you the one or two qualities that drive consumer preferences. In this chapter you will learn how to collect similarity data and use multidimensional scaling to summarize product similarities in a one- or two-dimensional chart.

Similarity Data

Similarity data is simply data indicating how similar one item is to another item or how dissimilar one item is to another. This type of data is important in market research when introducing new products to ensure that the new item isn't too similar to something that already exists. Suppose you work for a cereal company that wants to determine whether to introduce a new breakfast product. You don't know what product attributes drive consumer preferences. You might begin by asking potential customers to rank n existing breakfast products from most similar to least similar. For example, Post Bran Flakes and All Bran would be more similar than All Bran and Corn Pops.
Because there are n(n-1)/2 ways to choose 2 products out of n, the product similarity rankings can range from 1 to n(n-1)/2. For example, if there are 10 products, the similarity rankings can range between 1 and 45, with a similarity ranking of 1 for the most similar products and a similarity ranking of 45 for the least similar products. Similarity data is ordinal data and not interval data. Ordinal data is numerical data where the only use of the data is to provide a ranking. You have no way of knowing from similarities whether there is more difference between the products ranked first and second on similarity than between the products ranked second and third, and so on. If the similarities are measured as interval data, the number associated with a pair of products reflects not only the ranking of the similarities, but also the magnitude of the differences in similarities. The discussion of MDS in this chapter will be limited to ordinal data. MDS based on ordinal data is known as nonmetric MDS.

MDS Analysis of U.S. City Distances

The idea behind MDS is to place products in a low (usually two) dimensional space so that products that are close together in this low-dimensional space correspond to the most similar products, and products that are furthest apart in the low-dimensional space correspond to the least similar products. MDS uses the Evolutionary Solver to locate the products under consideration in two dimensions in a way that is consistent with the products' similarity rankings. To exemplify this, you can apply MDS to a data set based on the distances between 29 U.S. cities. In the matrixnba worksheet of the distancemds.xls workbook you are given the distances between 29 U.S. cities based on the location of their NBA arenas (as shown in Figure 38.1). For reasons that will soon become clear, the distance between a city and itself is set to be larger than any of the actual distances (for example, 100,000 miles).
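The pair counts used in this chapter follow directly from the n(n-1)/2 formula above, and they also explain the value 813 that appears later for the tied diagonal of the 29-city rank matrix (each unordered pair occupies two cells of the matrix):

```python
from math import comb

assert comb(10, 2) == 45 == 10 * 9 // 2   # 10 products -> rankings 1..45
assert 2 * comb(29, 2) == 812             # off-diagonal cells for 29 cities
assert 2 * comb(29, 2) + 1 == 813         # rank shared by every diagonal entry
print("pair counts check out")
```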
After ranking the distances between the cities (Brooklyn to New York is the shortest distance and Portland to Miami is the largest distance) you can locate each city in a two-dimensional space in a manner such that when the distances between each pair of cities in the two-dimensional space are ranked from smallest to largest, the ranks match, as closely as possible, the rankings of the actual distances. Thus you would hope that in the two-dimensional space Brooklyn and New York would also be the closest pair of cities and Portland and Miami would be further apart than any other pair of cities. Before describing the process used to obtain the two-dimensional representation of the 29 cities, consider the OFFSET function, which is used in the approach to MDS.

Figure 38-1: Distances between U.S. cities

OFFSET Function

The syntax of the OFFSET function is OFFSET(cellreference, rowsmoved, columnsmoved, height, width). The OFFSET function begins at the cell reference and moves the current location up or down based on rowsmoved (rowsmoved = –2, for example, means move up 2 rows and rowsmoved = +3 means move down 3 rows). Then, based on columnsmoved, the current location moves left or right (for example, columnsmoved = –2 means move 2 columns to the left and columnsmoved = +3 means move 3 columns to the right). The current cell location is now considered to be the upper-left corner of an array with number of rows = height and number of columns = width. Figure 38.2 (see the Offsetexample.xls file) illustrates the use of the OFFSET function. For example, the formula =SUM(OFFSET(B7,-1,1,2,1)) begins in B7 and moves the cell location one row up to B6. From B6 the cell location moves one column to the right to cell C6. Cell C6 now becomes the upper-left corner of a cell range with 2 rows and 1 column. The cells in this range (C6:C7) add up to 8. You should verify that the formulas in B18 and H10 yield the results 24 and 39, respectively.
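The OFFSET mechanics above can be mimicked with a 0-indexed Python analogue. The grid values here are invented for illustration (the book's Offsetexample.xls data is not reproduced), but the move-then-slice behavior is the same:

```python
import numpy as np

def offset(grid, row, col, rows_moved, cols_moved, height, width):
    """0-indexed analogue of OFFSET(ref, rowsmoved, columnsmoved, height, width)."""
    r, c = row + rows_moved, col + cols_moved
    return grid[r:r + height, c:c + width]

grid = np.arange(1, 26).reshape(5, 5)   # invented 5 x 5 grid holding 1..25

# Mimic =SUM(OFFSET(B7,-1,1,2,1)): from the reference cell, move up one
# row, right one column, then take a 2-row, 1-column block and sum it.
block = offset(grid, 3, 1, -1, 1, 2, 1)
print(block.ravel(), block.sum())   # [13 18] 31
```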
Figure 38-2: Examples of OFFSET function

Setting up the MDS for Distances Data

The goal of MDS in the distances example is to determine the location of the cities in the two-dimensional space that best replicates the "similarities" of the cities. You begin by transforming the distances between the cities into similarity rankings so small similarity rankings correspond to close cities and large similarity rankings correspond to distant cities. Then the Evolutionary Solver is used to locate each city in two-dimensional space so that the rankings of the city distances in two-dimensional space closely match the similarity rankings. To perform the MDS on the distances data, proceed as follows:

1. In the range G3:H31 enter trial values for the x and y coordinates of each city in two-dimensional space. Arbitrarily restrict each city's x and y coordinate to be between 0 and 10.

2. Copy the formula =RANK(K3,distances,1) from K34 to the range K34:AM62 to compute the ranking of the distances between each pair of cities (see Figure 38.3). The last argument of 1 in this formula ensures that the smallest distance (New York to Brooklyn) receives a rank of 1, and so on. All diagonal entries in the RANK matrix contain 813 because you assigned a large distance to the diagonal.

3. Copy the formula =IF($I66=K$64,10000000,(OFFSET($G$2,$I66,0,1,1)-OFFSET($G$2,K$64,0,1,1))^2+(OFFSET($H$2,$I66,0,1,1)-OFFSET($H$2,K$64,0,1,1))^2) from K66 to the range K66:AM94 to compute for each pair of different cities the square of the two-dimensional distance between them. The term OFFSET($G$2,$I66,0,1,1) in the formula pulls the x coordinate of the city in the current row; the term OFFSET($G$2,K$64,0,1,1) pulls the x coordinate of the city in the current column; the term OFFSET($H$2,$I66,0,1,1) pulls the y coordinate of the city in the current row; and the term OFFSET($H$2,K$64,0,1,1) pulls the y coordinate of the city in the current column.
For distances corresponding to the same city twice, assign a huge distance (say 10 million miles). A subset of the distances in two-dimensional space is shown in Figure 38.4.

4. Your strategy is to have Solver choose the two-dimensional locations of the cities so that the ranking of the distances in the two-dimensional space closely matches the actual rankings of the distances. To accomplish this goal compute the ranking of the distances in two-dimensional space. Copy the formula (see Figure 38.5) =RANK(K66,twoddistances,1) from K98 to K98:AM126 to compute the rankings of the distances in two-dimensional space. For example, for the two-dimensional locations in G3:H31, Brooklyn and New York are the closest pair of cities.

5. In cell C3 compute the correlation between the original similarity ranks and the two-dimensional ranks with the formula =CORREL(originalranks,twodranks). The range K34:AM62 is named originalranks and the range K98:AM126 is named twodranks.

6. Use the Evolutionary Solver to locate each city in two-dimensional space to maximize the correlation between the original ranks and the ranks in two dimensions. This should ensure that cities that are actually close together will be close in two-dimensional space. Your Solver window is shown in Figure 38.6.

Figure 38-3: Ranking of distances between U.S. cities
Figure 38-4: Two-dimensional distance between U.S. cities
Figure 38-5: Ranking of two-dimensional distance between U.S. cities
Figure 38-6: MDS Solver window cities

The Solver chooses for each city an x and y coordinate (the changing cells, the locations in two dimensions, are the range G3:H31) to maximize the correlation between the original similarity ranks and the two-dimensional distance ranks. You can arbitrarily constrain the x and y coordinates for each city to be between 0 and 10.
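The objective the Solver maximizes, the correlation between the rank ordering of the true distances and the rank ordering of the candidate two-dimensional distances, can be written directly outside Excel. A sketch on a toy set of four invented "cities" (not the book's data):

```python
import numpy as np

def pair_dists(xy):
    """Distances between all unordered pairs of 2-D points."""
    n = len(xy)
    return np.array([np.hypot(*(xy[i] - xy[j]))
                     for i in range(n) for j in range(i + 1, n)])

def rank_corr(a, b):
    """Spearman-style correlation of the two distance rankings."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

true_xy = np.array([[0., 0.], [1., 0.], [4., 1.], [9., 5.]])
d_true = pair_dists(true_xy)

# A layout that is merely shifted leaves every distance, and hence
# every rank, unchanged, so the objective reaches its maximum of 1:
shifted = true_xy + np.array([3., 7.])
print(rank_corr(d_true, pair_dists(shifted)))
```

This also shows why the MDS solution is not unique: any shift, rotation, or reflection of a layout scores the same, which is why only the relative positions in Figures 38.7 and 38.8 are interpretable.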
The locations of each city in two-dimensional space are shown in Figure 38.7.

Figure 38-7: MDS Solver window cities

Referring to cell C3 of Figure 38.7, you can see that the correlation between the original similarity ranks and the two-dimensional ranks is an amazingly large 0.9964. Figure 38.8 shows a plot of the city locations in two-dimensional space.

Figure 38-8: Two-dimensional city plot

The cities on the right side of the chart are West Coast cities and the cities on the left side of the chart are East Coast cities. Therefore, the horizontal axis can be interpreted as an East-West factor. The cities on the top of the chart are northern cities, and the cities on the bottom of the chart are southern cities. Therefore, the vertical axis can be interpreted as a North-South factor. This example shows how MDS can visually demonstrate the factors that determine the similarities and differences between cities or products. In the current example distances are replicated via the two obvious factors of east-west (longitude) and north-south (latitude) distances. In the next section you will identify the two factors that explain consumer preferences for breakfast foods. Unlike the distances example, the two key factors that distinguish breakfast foods will not be obvious, and MDS will help you derive them.

MDS Analysis of Breakfast Foods

Paul Green and Vithala Rao (Applied Multidimensional Scaling, Holt, Rinehart and Winston, 1972) wanted to determine the attributes that drive consumers' preferences for breakfast foods. Green and Rao chose the 10 breakfast foods shown on the MDS worksheet in the breakfast.xls workbook and asked 17 subjects to rate the similarities between each pair of breakfast foods with 1 = most similar and 45 = least similar. (There are 45 different ways to choose 2 foods out of 10.) The ranking of the average similarity is also shown (see Figure 38.9).
A rank of 0 is entered when a breakfast food is compared to itself. This ensures that the comparison of a food to itself does not affect the Solver solution.

Figure 38-9: Breakfast food data

For example, the subjects on average deemed ham, eggs, and home fries and bacon and eggs as most similar, and instant breakfast and ham, eggs, and home fries as least similar.

Using Card Sorting to Collect Similarity Data

Because of the vast range of the ranking scale in this scenario, many subjects would have difficulty ranking the similarities of pairs of 10 foods between 1 and 45. To ease the process the marketing analyst can put each pair of foods on 1 of 45 cards and then create four piles: highly similar, somewhat similar, somewhat dissimilar, and highly dissimilar. Then the subject is instructed to place each card into one of the four piles. Because there will be 11 or 12 cards in each pile, it is now easy to sort the cards in each pile from least similar to most similar. Then the sorted cards in the highly similar pile are placed on top, followed by the somewhat similar, somewhat dissimilar, and highly dissimilar piles. Of course, the card-sorting method could easily be programmed on a computer.

To reduce the breakfast food similarities to a two-dimensional representation, repeat the process used in the city distance example.

1. In the range C5:D14 (named location), enter trial values for the location of each food in two-dimensional space. As in the city distances example, constrain each location's x and y coordinate to be between 0 and 10.

2. Copy the formula =(INDEX(location,$E18,1)-INDEX(location,G$16,1))^2+(INDEX(location,$E18,2)-INDEX(location,G$16,2))^2 from G18 to G18:P27 (see Figure 38.10) to determine (given the trial x and y locations) the squared distance in two-dimensional space between each pair of foods. You could have used the OFFSET function to compute the squared distances, but here you choose to use the INDEX function.
When copied down through row 27, the term INDEX(location,$E18,1), for example, pulls the x coordinate for the current row's breakfast food.

3. Copy the formula =IF(G$29=$E31,0,RANK(G18,$G$18:$P$27,1)) from G31 to G31:P40 to compute the rank (see Figure 38.11) in two-dimensional space of the distances between each pair of breakfast foods. For diagonal entries enter a 0 to match the diagonal entries in rows 5–14.

4. In cell B3 the formula =CORREL(similarities,ranks) computes the correlation between the average subject similarities and the two-dimensional distance ranks.

5. You can now use the Solver window shown in Figure 38.12 to locate the breakfast foods in two-dimensional space. The Solver chooses an x and y coordinate for each food that maximizes the correlation (computed in cell B3) between the subjects' average similarity rankings and the ranked distances in two-dimensional space. The Solver located the breakfast foods (refer to Figure 38.9). The maximum correlation was found to be 0.988.

Figure 38-10: Squared distances for breakfast foods example
Figure 38-11: Two-dimensional distance ranks for breakfast food example
Figure 38-12: Solver window for breakfast food MDS

Unlike the previous MDS example pertaining to city distances, the labeling of the axes for the breakfast foods example is not as straightforward. This task requires taking a holistic view of the two-dimensional MDS map when labeling the axes. In this case, it appears that the breakfast foods on the right side of the chart tend to be hot foods and the foods on the left side of the chart tend to be cold foods (see the plot of the two-dimensional breakfast food locations in Figure 38.13). This suggests that the horizontal axis represents a hot versus cold factor that influences consumer preferences. The foods on the right also appear to have more nutrients than the foods on the left, so the horizontal axis could also be viewed as a nutritional value factor.
The breakfast foods near the top of the chart require more preparation time than the foods near the bottom of the chart, so the vertical axis represents a preparation factor. The MDS analysis has greatly clarified the nature of the factors that impact consumer preferences.

Figure 38-13: Breakfast food MDS plot

Finding a Consumer's Ideal Point

A chart that visually displays (usually in two dimensions) consumer preferences is called a perceptual map. A perceptual map enables the marketing analyst to determine how a product compares to the competition. For example, the perceptual map of automobile brands, as shown in Figure 38.14, is based on a sportiness versus conservative factor and a classy versus affordable factor. The perceptual map locates Porsche (accept no substitute!) in the upper-right corner (high on sportiness and high on classy) and the now defunct Plymouth brand in the lower-left corner (high on conservative and high on affordable).

Figure 38-14: Automobile brand perceptual map

When a perceptual map of products or brands is available, the marketing analyst can use an individual's ranking of products to find the individual's ideal point or most-wanted location on the perceptual map. To illustrate the idea I asked my lovely and talented wife Vivian to rank her preferences for the 10 breakfast foods (see the Ideal Point worksheet of the breakfast.xls workbook and Figure 38.15). Vivian's product ranks are entered in the range D5:D14.

Figure 38-15: Finding the ideal point

For example, Vivian ranked hot cereal first, bacon and eggs second, and so on. To find Vivian's ideal point, you can enter in the cell range C3:D3 trial values of x and y for the ideal point. To find an ideal point, observe that Vivian's highest ranked product should be closest to her ideal point; Vivian's second ranked product should be the second closest product to her ideal point; and so on.
Now you will learn how Solver can choose a point that comes as close as possible to making less preferred products further from Vivian's ideal point. Proceed as follows:

1. Copy the formula =SUMXMY2($C$3:$D$3,F5:G5) from C5 to C6:C14 to compute the squared distance of each breakfast food from the trial ideal point values.

2. Copy the formula =RANK(C5,$C$5:$C$14,1) from B5 to B6:B14 to rank each product's squared distance from the ideal point.

3. Copy the formula =ABS(B5-D5) from A5 to A6:A14 to compute the difference between Vivian's product rank and the rank of the product's distance from the ideal point.

4. In cell A3 use the formula =SUM(A5:A14) to compute the sum of the deviations of Vivian's product ranks from the ranks of the products' distances from the ideal point. Your goal should be to minimize this sum.

5. The Solver window, as shown in Figure 38.16, can find an ideal point (not necessarily unique) by selecting x and y values between 0 and 10 that minimize A3.

Figure 38-16: Solver window for finding the ideal point

As shown in Figure 38.17, Vivian's ideal point is x = 6.06 and y = 6.44, which is located between her two favorite foods: hot cereal and bacon and eggs. The sum of the deviations of the rankings of the product distances from the ideal point and the product ranks was 10.

Figure 38-17: Vivian's ideal point

You can use ideal points to determine an opportunity for a new product. After plotting the ideal points for, say, 100 potential customers, you can use the techniques in Chapter 23, “Cluster Analysis,” to cluster customer preferences based on the location of their ideal points on the perceptual map. Then if there is a cluster of ideal points with no product close by, a new product opportunity exists. For example, suppose that there were no Diet 7UP or Diet Sprite in the market and an analysis of soft drink similarities found the perceptual map shown in Figure 38.18.
Figure 38-18: Soda perceptual map

You can see the two factors are Cola versus non-Cola (x axis) and Diet versus non-Diet (y axis). Suppose there is a cluster of customer ideal points near the point labeled Diet no caffeine. Because there is currently no product near that point, this would indicate a market opportunity for a new diet soda containing no caffeine (for example, Diet 7UP or Diet Sprite).

In this chapter you learned the following:

· Given product similarity data you can use nonmetric MDS to identify a few (usually at most three) qualities that drive consumer preferences.
· You can easily use the Evolutionary Solver to locate products on a perceptual map.
· Use the Evolutionary Solver to determine each person's ideal point on the perceptual map. After clustering ideal points, an opportunity for a new product may emerge if no current product is near one of the cluster anchors.

1. Forty people were asked to rank the similarities between six different types of sodas. The average similarity rankings are shown in Figure 38.19. Use this data to create a two-dimensional perceptual map for sodas.

Figure 38-19: Soda perceptual map

2. The ranking for the sodas in Exercise 1 is Diet Pepsi, Pepsi, Coke, 7UP, Sprite, and Diet Coke. Determine the ideal point.

3. The file countries.xls (see Figure 38.20) gives the average ratings 18 students gave when asked to rank the similarity of 11 countries on a 9-point scale (9 = highly similar and 1 = highly dissimilar). For example, the United States was viewed as most similar to Japan and least similar to the Congo. Develop a two-dimensional perceptual map and interpret the two dimensions.

Figure 38-20: Exercise 3 data

4. Explain the similarities and differences between using cluster analysis and MDS to summarize multidimensional data.

5. Explain the similarities and differences between using principal components and MDS to summarize multidimensional data.
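The ideal-point procedure described in this chapter (rank each product's distance from a candidate point and minimize the total deviation from the preference ranks) can be cross-checked outside Excel. In this sketch the product locations and the preference ranking are invented for illustration, and a simple grid search stands in for the Evolutionary Solver:

```python
import numpy as np

products = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
pref_rank = np.array([1, 2, 3, 4])      # product 0 most preferred, etc.

def objective(point):
    """Sum of |distance rank - preference rank| over all products."""
    sq_dist = ((products - point) ** 2).sum(axis=1)
    dist_rank = sq_dist.argsort().argsort() + 1   # 1 = closest product
    return int(np.abs(dist_rank - pref_rank).sum())

# Grid search over the 0..10 square, step 0.5:
grid = [np.array([x, y]) for x in np.arange(0, 10.5, 0.5)
                         for y in np.arange(0, 10.5, 0.5)]
best = min(grid, key=objective)
print(best, objective(best))   # an ideal point is not necessarily unique
```

As the chapter notes for Vivian's data, the minimizer is generally not unique: any point whose distance ordering matches the preference ordering scores the same.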
Multiply by Multiples of Ten Multiply by Multiples of Ten Price: 300 points or $3 USD Subjects: math,mathElementary,operationsAndAlgebraicThinking,multiplicationAndDivision,placeValue Grades: 3,4 Description: A deck that practices multiplying with multiples of 10, 100, and 1000 is a fun and effective way to reinforce this important skill. This 24-card deck is the perfect way to introduce the theory and to work on some examples. What you can expect: Your students will review a few multiplication facts, including multiplying by 1 and 10. Using these examples, they will get 2 pages each explaining what happens when they multiply by 10 or multiples of 10, multiply by 100 or multiples of 100, and multiply by 1000 and multiples of 1000. They will be shown too, what happens when both numbers are multiples of 10 or 100. What you receive with your purchase: ➤ 1 welcome card ➤ 2 multiplication review cards ➤ 8 pages with examples explaining the theory ➤ 12 pages each with a multiple-choice box ➤ 1 well done card
... These learning strategies do not lend themselves well to language education. When performance at high school level and subject selection at admission to the university were cross-tabulated, it was observed that Economics, which is way down low in terms of scores, is the most popular subject selected both in the Arts and the Sciences (Nshemereirwe, 2014). Though the subject would be categorised as "difficult", it proved to be very popular. The study therefore noted that what is classed as "difficult" in reality reflects the perception of the students. ...
Introduction to NumPy - A Python Library

What is NumPy?

If you are going to learn data science or machine learning, you need to know NumPy (Numerical Python). It is an open-source core library for scientific computing in Python. NumPy is one of the most useful libraries, especially when you are crunching numbers. Why is it so good? Because you can crunch numbers much faster than with Python loops, it has many built-in functions, and it works well for matrix multiplication. NumPy's core is written in C, a low-level language, which makes its processing speed faster. Operations in NumPy are fast because they take advantage of parallelism, i.e., single instruction, multiple data (SIMD). It also provides:

• an n-dimensional array object.
• tools for integrating C and C++.
• useful mathematical functions, linear algebra, Fourier transforms, random number capabilities, sorting, and shape manipulation.
• a multidimensional container for generic data.
• easy use with matplotlib, which is used for data analysis and for making graphs.

Why use NumPy?

It provides an alternative to the regular Python list. You can analyze data much more efficiently than with Python lists. It uses less memory space compared to lists, so it can work with a vast amount of data. It can also work with SciPy and other libraries. If you want to work with large datasets, there will be some operations you will need to perform, for example, operations on vectors, matrix manipulation, n-dimensional arrays, and regressions as well. In traditional Python, these can be non-trivial to execute. With NumPy, they come out of the box, already optimized and tested. Moreover, you can convert Python lists to NumPy arrays, and you can also load files into NumPy arrays.
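The list-versus-array contrast above can be seen directly: `*` on a Python list repeats it (and raises an error for list times list), while NumPy multiplies elementwise.

```python
import numpy as np

t = [2, 4, 8, 45]
print(t * 2)            # [2, 4, 8, 45, 2, 4, 8, 45]  (repetition)
try:
    t * [1, 2, 3, 4]
except TypeError:
    print("plain lists cannot be multiplied elementwise")

z = np.array(t)
print(z * 2)                        # [ 4  8 16 90]
print(z * np.array([1, 2, 3, 4]))   # [  2   8  24 180]
```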
With NumPy, it is convenient to work with statistical data, matrix multiplication, reshaping, and other machine learning packages. TensorFlow and scikit-learn use NumPy arrays to compute matrix multiplications in the back end. Using NumPy, you can work with image processing and computer graphics, as images are represented as multidimensional arrays of numbers, and you can perform various operations on images using NumPy, such as mirroring, rotating, and shearing images.

Getting started

Before starting with NumPy, you must know Python lists. A Python list is somewhat like an array in other languages; it is a collection of heterogeneous values that we can add, delete, or manipulate. The elements are enclosed within square brackets. You can create a Python list by writing the list name and the elements assigned within square brackets. Example: t = [2, 4, 8, 45]

Python lists have some drawbacks that NumPy overcomes. For example: if you try to multiply two lists, you will get an error, because Python has no idea how to do this calculation with lists. To overcome this, you have to multiply the elements of both lists by accessing them through their index numbers, which is a lengthy and time-consuming process. To overcome such problems, you use NumPy. For example, using NumPy, you can multiply the elements without writing the index numbers again and again.

How to use NumPy?

There are several IDEs where you can run Python code, for example, PyCharm, Jupyter notebooks, Google Colab, etc. I'll be using Google Colab. You can install NumPy using pip. After installing, you need to import NumPy. There are several operations you can perform using NumPy. We will learn each operation one by one in this section.

Creating an array

• You can create an array and enter elements directly. Here you just need to write an array name and assign the elements.
• You can also use a list as an array.
In this example, I made a list "a" and assigned that list as an input to the NumPy array "z." You can create an array of any length with the elements as 0's. In this example, I have created an array "a" with a length of 3 elements. You can also create an array of any length with the elements as 1's. We can also see the type (data type) of the array. We can also see the type (data type) of the elements. You can check the shape of the array.

• We can make an array by giving a starting point, an ending point, and a number of elements. In this example, 2 is the starting point, 10 is the ending point, and 5 is the number of elements. It's useful while plotting a graph, for example, plotting the x-axis.
• You can create a random array using the random.randint(limit, size) method.

Get elements of an array:
• Get specific elements (using index numbers) from the array.
• Get elements from within a range.

Mathematical operations with NumPy:
• Addition: you can simply add the elements of two arrays. You can also just add a number to each element. Similarly, you can multiply each element by 10.
• Multiplication: you can multiply two arrays.
• Dot product: you can find the dot product of two arrays using the "@" symbol.
• Sort: you can sort an array.
• To get the values of an array less than a number, say 3: here, you will get the values true or false. If a value is less than 3 you will get true, else false.
• To get the values of an array greater than a number, say 3.
• If you want an array of the values that are greater than 3.

Working with images:
• You can read a photo as a NumPy array.
• You can check the type of the picture.
• You can check the shape of the image by passing the array name. Here, 165 is the number of rows in the image, 220 is the number of columns, and 3 is the number of colour channels (RGB: red, green, blue).
• To make the image upside down: here the first colon (:) is the start point, the second colon (:) is the stop point, and -1 is the step. -1 is for backward and +1 is for forward.
• To crop an image, you have to provide a range.
• You can also take every other row or column.
• You can also perform mathematical functions on the image array, such as:

Sin: print(np.sin(img))
Sum: print(np.sum(img))
Product: print(np.prod(img))
Minimum: print(np.min(img))
Maximum: print(np.max(img))
Argmax: print(np.argmax(img))
Argmin: print(np.argmin(img))
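The screenshots of these operations were lost in extraction; here is a runnable sketch of the same steps, with a random array standing in for the loaded photo (the 165 x 220 x 3 shape is the one quoted in the article; all variable names are illustrative):

```python
import numpy as np

z = np.array([2, 4, 8, 45])
print(np.zeros(3))               # [0. 0. 0.]
print(np.ones(3))                # [1. 1. 1.]
print(type(z), z.dtype, z.shape)

# start=2, stop=10, 5 evenly spaced elements:
print(np.linspace(2, 10, 5))     # [ 2.  4.  6.  8. 10.]

w = np.array([1, 1, 2, 5])
print(z + w)                     # [ 3  5 10 50]
print(z * 10)                    # [ 20  40  80 450]
print(z @ w)                     # dot product: 247
print(z < 10)                    # [ True  True  True False]
print(z[z > 3])                  # [ 4  8 45]

# A stand-in "image": 165 rows x 220 columns x 3 RGB channels
img = np.random.randint(0, 256, size=(165, 220, 3))
print(img.shape)                 # (165, 220, 3)

upside_down = img[::-1]          # step -1 walks the rows backwards
cropped = img[20:120, 50:150]    # crop: a range of rows and columns
thinned = img[::2, ::2]          # every other row and column
print(np.sum(img), np.min(img), np.max(img), np.argmax(img))
```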
How to solve Average Questions | Important tips and tricks

From the examination point of view, each and every topic is important. Due to the complexity and rising level of the quantitative aptitude section, it is necessary to have deep knowledge of the important topics to be covered for the IBPS PO exam 2019. For that, we have already provided you with the detailed IBPS PO study plan named Kar IBPS Fateh. Do not miss practising a wide range of questions to build up your speed for the exam. You can also refer to the daily quizzes on different topics provided on bankersadda. Do not miss the tips and tricks for solving average and age problems; continue reading this post to learn how to ace the aforementioned topics.

→ The average of a given data set is the sum of the given observations divided by the number of observations.
→ For example, the average of 24, 32, 40, 48 is (24 + 32 + 40 + 48)/4 = 144/4 = 36.

* Some important rules of average are as follows:
→ The average of a given data set is less than the greatest number and greater than the smallest number.
→ If 0 (zero) is one of the numbers of a given data set, then that 0 (zero) is also included while calculating the average.

* Important formulas of average:
→ The different types of questions which are asked in the exams nowadays are as follows:

* Type 1:- If the average of m1 numbers is x1 and the average of m2 numbers is x2 and so on, then the combined average = (m1x1 + m2x2 + …)/(m1 + m2 + …).
* Type 2:- If the average of x observations is C and the average of y observations taken out from x is d, then the average of the rest of the numbers = (xC − yd)/(x − y).
* Type 3:- If the average of x students in a class is M, while the average of the passed students is a and the average of the failed students is b, then the number of failed students = x(a − M)/(a − b).

Ex) In a class, there are 50 students and the average marks in the exam are 40. If the average marks of the passed students are 50 and the average marks of the failed students are 25, find the number of students who failed.
→ Let the number of students failed = x. Then, the number
of students who passed = 50 − x
∴ 50 × 40 = 25x + 50(50 − x)
∴ 2000 = 25x + 2500 − 50x
∴ −500 = −25x
∴ 25x = 500
∴ x = 20
∴ Failed students = 20

* Type 4:- Average speed is defined as the total distance travelled divided by the total time taken. For two equal stretches covered at speeds A and B, the average speed = 2AB/(A + B).

Ex) A man covers a certain distance by bike at 25 km/hr and comes back at a speed of 50 km/hr. What is his average speed during the travel?
→ Here A = 25 km/hr, B = 50 km/hr
∴ Average speed = 2 × 25 × 50/(25 + 50) = 2500/75 ≈ 33.33 km/hr

→ Some other examples related to the formulas given above:

Ex) Find the average of the numbers from 1 to 31.
→ Here n = 31
∴ Average = (n + 1)/2 = (31 + 1)/2 = 16

Ex) Find the average of 2, 4, 6, 8, 10, 12, 14, 16, 18 and 20.
→ Here, n = 10 (the first n even numbers)
∴ Required average = (n + 1) = (10 + 1) = 11

Ex) Find the average of 3, 6, 9, 12, 15, …, 30, 33.
→ Here, 1st number = 3, last number = 33
∴ Average = (first + last)/2 = (3 + 33)/2 = 18

Problems based on Ages:
→ Age is defined as the period of time that a person has lived. Age is measured in months and years.
→ Problems based on ages generally consist of information about the ages of two or more persons and a relation between their ages in the present/future/past.
→ Different types of questions or techniques used in problems based on ages:

* Type 1:- If the ratio of the present ages of A and B is c : d, and x yr ago their ages were in the ratio m : n, then A's present age = cx(n − m)/(cn − dm) and B's present age = dx(n − m)/(cn − dm).

Ex) If the present age of Amar is 3 times the age of Suresh, and 8 years ago Amar's age was 7 times the age of Suresh, then find the present age of Amar.
→ Let Amar's age be 3x and Suresh's age be x
∴ 3x − 8 = 7(x − 8) = 7x − 56
∴ 4x = 48
∴ x = 12
∴ Present age of Amar = 3x = 3 × 12 = 36 yrs.

* Type 2:- If the ratio of the present ages of P and Q is a : b, and after n years the ratio of their ages will be c : d, then P's present age = an(c − d)/(ad − bc) and Q's present age = bn(c − d)/(ad − bc).

Ex) If the ratio of the present ages of Sonal and Keerti is 3 : 5, and after 12 years their ages will be in the ratio 9 : 13, then find the present age of Keerti.
→ Let Sonal's present age be 3x and Keerti's present age be 5x
∴ 13(3x + 12) = 9(5x + 12)
∴ 39x + 156 = 45x + 108
∴ 6x = 48
∴ x = 8
∴ Keerti's present age = 5x = 40 yrs.
* Type 3:- If X is as much older than Y as he is younger than Z, and the sum of the ages of Y and Z is b yrs, then their age order is Y < X < Z, i.e. here Z is the oldest and Y is the youngest.

Ex) If Ashish is as much older than Saurav as he is younger than Jayant, and the sum of the ages of Saurav and Jayant is 42 yrs, then find Ashish's age.
→ Let Ashish's present age be x yrs, and let him be y yrs younger than Jayant.
∴ Jayant's age = (x + y) yrs
∴ Saurav's age = (x − y) yrs
∴ Sum of the ages of Jayant and Saurav:
⇒ (x + y) + (x − y) = 42
⇒ 2x = 42
⇒ x = 21
∴ Ashish's present age = 21 yrs

* Faster approach:- X's age is simply half the given sum, i.e. b/2.

* Type 4:- If b yrs from now the age of one person will be x times that of another person, and at present the age of the first person is y times that of the other, then the second person's present age = b(x − 1)/(y − x).

Ex) The present age of Gautam is 5 times the age of Shivani, and after 10 years Gautam will be 3 times as old as Shivani. Find the present ages of Gautam and Shivani.
→ Let the present age of Gautam = 5x yrs
∴ Present age of Shivani = x yrs
∴ 5x + 10 = 3(x + 10)
∴ 2x = 20
∴ x = 10
∴ Shivani's present age = 10 yrs and Gautam's present age = 50 yrs

You may also like to Read:
All the Best BA’ians for IBPS RRB Prelims Result!!
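The worked examples above are easy to sanity-check with a few lines of plain arithmetic (a sketch; the numbers are taken from the examples themselves):

```python
# Averages, Type 3: 50 students, class average 40,
# passed average 50, failed average 25 -> failed = x(a - M)/(a - b)
x, M, a, b = 50, 40, 50, 25
failed = x * (a - M) // (a - b)
print(failed)                    # 20

# Type 4: average speed over equal stretches at A and B km/hr
A, B = 25, 50
avg_speed = 2 * A * B / (A + B)
print(round(avg_speed, 2))       # 33.33

# Ages, Type 2: present ratio a:b = 3:5, after n = 12 years c:d = 9:13
# 13(3k + 12) = 9(5k + 12)  ->  k = n(c - d)/(ad - bc)
a_, b_, n, c, d = 3, 5, 12, 9, 13
k = n * (c - d) // (a_ * d - b_ * c)
print(b_ * k)                    # Keerti's present age: 40
```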
Characterizing ideal weighted threshold secret sharing Weighted threshold secret sharing was introduced by Shamir in his seminal work on secret sharing. In such settings, there is a set of users where each user is assigned a positive weight. A dealer wishes to distribute a secret among those users so that a subset of users may reconstruct the secret if and only if the sum of weights of its users exceeds a certain threshold. On one hand, there are nontrivial weighted threshold access structures that have an ideal scheme - a scheme in which the size of the domain of shares of each user is the same as the size of the domain of possible secrets (this is the smallest possible size for the domain of shares). On the other hand, other weighted threshold access structures are not ideal. In this work we characterize all weighted threshold access structures that are ideal. We show that a weighted threshold access structure is ideal if and only if it is a hierarchical threshold access structure (as introduced by Simmons), or a tripartite access structure (these structures generalize the concept of bipartite access structures due to Padró and Sáez), or a composition of two ideal weighted threshold access structures that are defined on smaller sets of users. We further show that in all those cases the weighted threshold access structure may be realized by a linear ideal secret sharing scheme. The proof of our characterization relies heavily on the strong connection between ideal secret sharing schemes and matroids, as proved by Brickell and Davenport.
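The non-ideal baseline implicit in the abstract is easy to sketch: give a user of weight w exactly w Shamir shares, so a coalition can reconstruct the secret if and only if its total weight reaches the threshold. Share size then grows with weight, which is precisely what an ideal scheme avoids. (The field choice and all names below are mine; this is an illustration of the setting, not the paper's construction.)

```python
import random

P = 2**61 - 1  # prime modulus; all arithmetic is in GF(P)

def make_shares(secret, weights, threshold):
    """Weighted threshold sharing via Shamir: user i gets weights[i]
    points of a random degree-(threshold-1) polynomial f with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    shares, x = [], 1
    for w in weights:
        shares.append([(x + j, f(x + j)) for j in range(w)])
        x += w
    return shares

def reconstruct(points):
    """Lagrange interpolation at x = 0; needs >= threshold points."""
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = 271828
shares = make_shares(secret, weights=[3, 2, 1], threshold=4)
pool = shares[0] + shares[1]        # total weight 3 + 2 = 5 >= 4
print(reconstruct(pool[:4]))        # 271828
```

Users 1 and 2 together hold only 3 < 4 points and so learn nothing about the secret, but user 0 must store three field elements, hence the scheme is not ideal.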
Objective Function Problem | AIMMS Community

Hi, the "objective function" of my program is read as a variable in AIMMS. All the parameters in the objective function already have data read from Excel. My problem is, AIMMS keeps on saying "the scope of index i has not been specified"; when I try to specify the index, the program won't run because it says that the objective function should be a free scalar variable. What should I do?

Taking a quick glance at your model, the basic answer is that you can't have a free index (i,j) within your objective function, i.e., you need to use a function to iterate over these indexes if you specify them. But, in order to give you a complete answer, I would need to understand the relation between your variable x(v,k) and the parameter tt(i,j).
• Is tt the transit time between i and j?
• Is k a pair (i,j)? If so, you could convert tt(i,j) to tt_calc(k) and use that within your statement.
I noticed that you are using min(P) but none of your parameters or variables depend on it. Maybe another community member could help you better.

It seems you are still missing a sum over i, as you have a variable tt(i,j) and the objective needs to become a scalar variable. If you open the error (red cross), it should highlight the exact point. In a reproduced similar problem (a different model, as I do not have yours), this looks like:

BTW (if I may): looking at some of your identifiers and specifically constraints, you might want to give them more self-explanatory names (you did that for the sets). This will make the model easier to read and debug (e.g. saying Constraint_12 is infeasible is less intuitive than calling it RespectDemand). The same goes for the objective: is it a Cost, a Revenue, or so? Also, when you come back later, you will be able to read the model tree and understand what the model does. Just as an illustration:

Thank you so much!
BT: Bookshelf Scavenger Tag

It has been a while since I have done a tag. And this one, the Bookshelf Scavenger Tag, is going to be great for us to get to know what books I have.

1. Find an author’s name or title with the letter Z in it
The Great Gatsby by F. Scott Fitzgerald
The Complete Pelican Shakespeare
3. Find a book with a key on the cover
BitterBlue by Kristin Cashore
4. Find something on your bookshelf that’s not a book
I mostly have little trinkets. Little dolls that I adore. Some perfumes.
5. Find the oldest book on your shelf
Sense and Sensibility by Jane Austen (1811).
6. Find a book with a girl on the cover
Waistcoats and Weaponry by Gail Carriger
7. Find a book with a boy on the cover
The Warded Man by Peter V. Brett
8. Find a book that has an animal in it
The Infernal Devices by Cassandra Clare (Clockwork Angel). Got to show Church some love.
9. Find a book with a male protagonist
I haven’t read this one yet, but: The Knife of Never Letting Go
10. Find a book with only words on it
Will Grayson, Will Grayson
11. Find a book with illustrations in it
The Girl Who Raced Fairyland All the Way Home by Catherynne M. Valente
12. Find a book with gold lettering
Wayfarer by Alexandra Bracken
13. Find a diary (true or false)
Harry Potter and the Chamber of Secrets by JK Rowling is the only one that comes to mind.
14. Find a book written by someone with a common name (like Smith)
A Thousand Pieces of You by Claudia Gray
15. Find a book that has a close up of something on it
Legacy of Kings by Eleanor Herman
16. Find a book on your shelf that takes place in the earliest time period
Shakespeare again?
17. Find a hardcover book without a jacket
The Difference Between You and Me
18. Find a teal/turquoise colored book
Teal? Isla and the Happily Ever After by Stephanie Perkins
19. Find a book with stars on it
These Broken Stars by Amie Kaufman and Meagan Spooner
20. Find a non YA book
Jane Eyre by Charlotte Bronte.
This is such a different read for me, because I didn’t start out liking it in college. And, now, I am wondering if I will like it. Rather curious to see if it will offend me as a person with mental illness.
21. Find the longest book you own
Aside from the Shakespeare collected works, I think A Memory of Light by Robert Jordan and Brandon Sanderson (the final book in the Wheel of Time series). 900 pages.
22. Find the shortest book you own
Prince Caspian by CS Lewis, probably
23. Find a book with multiple PoVs
I suspect the Falling Kingdoms series will be one of those.
24. Find a shiny book
City of Bones by Cassandra Clare
25. Find a book with flowers on it
And I Darken by Kiersten White

6 thoughts on “BT: Bookshelf Scavenger Tag”

1. What a fun tag! How long did it take you to find a book to match each prompt? And a 900 page book?! Whoa! 🙂
1. Hi Jenn, I am rather lazy, so I did a lot of Googling to check if the book I had in mind fit the prompt. I know what books I have, but I haven’t read them all. So, Google helps. It didn’t take long.
2. Ooh fun tag! The only two books on here that I’ve read are Gatsby (which I read so long ago I don’t remember it at all) and HP. That was a good answer, by the way! I wouldn’t have thought of
1. We read totally different things, but it’s so nice because I learn so much from you all the time. You’re brilliant.
2. I just saw this, you are so sweet! <3 It is fun sometimes visiting blogs of fellow readers who have different taste because I get to learn about different books and find out why people like
3. I must play this game! Even if I never post it, I MUST achieve this goal. But actually no, I am going to post it because it will give me something to post when I am out of ideas 😉 Also, please read The Knife of Never Letting Go. You can thank me later 😀 Thanks for sharing this, I love it!
The Discrete Fourier Transform Sandbox

This calculator visualizes the Discrete Fourier Transform, performed on sample data using the Fast Fourier Transform. By changing the sample data you can play with different signals and examine their DFT counterparts (real, imaginary, magnitude and phase graphs).

This calculator is an online sandbox for playing with the Discrete Fourier Transform (DFT). It uses the real DFT, the version of the Discrete Fourier Transform which uses real numbers to represent the input and output signals. The DFT is part of Fourier analysis, a set of math techniques based on decomposing signals into sinusoids. While numerous books show graphs to illustrate the DFT, I always wondered what these sinusoids look like, or how they would change if we changed the input signal a bit. Now, this sandbox can answer such questions. By default, it is filled with 32 samples, all zeroes except the second one, which is set to 5. The calculator displays graphs of the real values, imaginary values, magnitude values, and phase values for this set of samples. It also draws graphs with all the sinusoids and with the summed signal. You can change the samples as you wish - the graphs will be updated accordingly.

Now it is time for some theory. The basis of Fourier analysis is the claim that a signal can be represented as the sum of properly chosen sinusoidal waves. Why are sinusoids used? Because they are easier to deal with than the original signal or any other wave shape. And they have a valuable property - sinusoidal fidelity; that is, a sinusoidal input to a system is guaranteed to produce a sinusoidal output. Only amplitude and phase can change; frequency and wave shape remain.

There are four types of Fourier Transform: the Fourier Transform (for aperiodic continuous signals), Fourier series (for periodic continuous signals), the Discrete Time Fourier Transform (for aperiodic discrete signals), and the Discrete Fourier Transform (for periodic discrete signals). All transforms deal with signals extended to infinity.
In the computer, we have a finite number of samples. So, to use Fourier Transforms, we pretend that our finite set of samples has an infinite number of samples to the left and to the right of our actual data, and that these samples keep repeating our actual data. Thus, by pretending that our samples form a discrete periodic signal, in computer algorithms we use the Discrete Fourier Transform (DFT). (If we pad our actual data with zeroes, for example, instead of repeating it, we get a discrete aperiodic signal. Such a signal requires an infinite number of sinusoids, so of course we can't use it in computer algorithms.)

Also, note that each Fourier Transform has a real and a complex version. The real version is the simplest and uses ordinary numbers for input (signal samples, etc.) and output. The complex version uses complex numbers with an imaginary part. Here we stick with the real DFT because it is easier to visualize and understand.

The DFT changes N points of an input signal into two output signals of N/2+1 points each. The input signal is, well, the input signal, and the two output signals are the amplitudes of the sine and cosine waves. For example, to represent a 32-point time domain signal in the frequency domain, you need 17 cosine waves and 17 sine waves. The input signal is in the time domain, and the output signals are in the frequency domain. The process of calculating the frequency domain is called decomposition, analysis, the forward DFT, or simply the DFT. The opposite process is called synthesis, or the inverse DFT. The time domain signal is denoted by a lowercase letter, i.e., x[ ], and the frequency domain signals by an uppercase letter, i.e., X[ ]. The two parts of the output signal are called the Real part of X[ ], or Re X[ ], and the Imaginary part of X[ ], or Im X[ ]. The values of Re X[ ] are the amplitudes of the cosine waves, and the values of Im X[ ] are the amplitudes of the sine waves. The names real and imaginary come from the general DFT, which operates on complex numbers.
For the real DFT, they are just the amplitudes of cosine and sine waves. The sine and cosine waves are called the DFT basis functions - they are waves with unity amplitude. The DFT basis functions have the following equations: $c_k[i]=cos(\frac{2\pi k i}{N})\\s_k[i]=sin(\frac{2\pi k i}{N})$, where i changes from 0 to N-1, and k changes from 0 to N/2. Each amplitude from Re X and Im X is assigned to its cosine or sine wave, and the results can be summed to form the time domain signal again. The synthesis equation is: $x[i]=\sum_{k=0}^{N/2}Re\bar{X}[k]cos(\frac{2\pi ki}{N})+\sum_{k=0}^{N/2}Im\bar{X}[k]sin(\frac{2\pi ki}{N})$ That is, any point of an N-point signal can be created by adding N/2+1 cosine wave values and N/2+1 sine wave values at the same point. Note the bar over X in the formula above. This is because the amplitudes used for synthesis are obtained by scaling the original frequency domain amplitude values. The amplitudes are normalized using the following equations: $Re\bar{X}[k]=\frac{Re X[k]}{N/2}\\Im\bar{X}[k]=-\frac{Im X[k]}{N/2}$ with two special cases: $Re\bar{X}[0]=\frac{Re X[0]}{N}\\Re\bar{X}[N/2]=\frac{Re X[N/2]}{N}$ The real and imaginary parts can be represented in polar notation using the following relations: $Re X[k]=M cos(\theta)\\Im X[k]=M sin(\theta)$ M and theta are called Magnitude and Phase, and they can be computed from Re and Im using the following relations: $Mag X[k]=\sqrt{Re X[k]^2+Im X[k]^2}\\Phase X[k]=arctan(\frac{Im X[k]}{Re X[k]})$ Thus, in polar notation, the DFT decomposes an N-point signal into N/2+1 cosine waves with specified amplitudes and phase shifts. Sometimes graphs of Mag and Phase make more sense than graphs of Re and Im.

Now, let's get back to the original synthesis equation: $x[i]=\sum_{k=0}^{N/2}Re\bar{X}[k]cos(\frac{2\pi ki}{N})+\sum_{k=0}^{N/2}Im\bar{X}[k]sin(\frac{2\pi ki}{N})$ It explains why we can actually perform the DFT, that is, find the amplitudes Re and Im. Note that in this equation, Im X[0] and Im X[N/2] will always be zero. Thus, for each of the N points, the equation contains only N terms. So we have a system of N linear equations for N unknown coefficients, which can be solved, for example, using Gauss elimination.
Of course, for big N, nobody uses Gaussian elimination because it is too slow. This is where the Fast Fourier Transform (FFT) shines: it is a fast method for computing the Re and Im values. However, the FFT is based on the complex DFT, the more general version of the DFT. For N complex points (with real and imaginary parts) of the input signal, it calculates N complex points of the output signal. The question is how we can relate it to the real DFT. Happily, it is relatively easy. If you have an N-point signal, you place these points into the real part of the complex input signal, set the imaginary part of the input signal to all zeroes, and apply the FFT. The first N/2+1 points of the real part and N/2+1 points of the imaginary part of the output signal will correspond to your real DFT.

The calculator below allows you to play with the DFT. You can change the input signal as you wish. The calculator applies the FFT to your signal (using the JavaScript FFT implementation from Project Nayuki). Then it displays graphs of Re X[ ], Im X[ ], Mag X[ ], and Phase X[ ], and visualizes synthesis using sine and cosine waves and using cosine waves with phase shifts - to let you understand how all these waves sum up to recreate the original input time domain signal.

[Interactive calculator: a table of sample numbers and values, plus graphs of Re X[ ], Im X[ ], Magnitude of X[ ], Phase of X[ ], and the synthesis views Cos + Sin, Cos & Sin Sum, Mag + Phase, and Mag & Phase Sum.]
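The same pipeline can be reproduced offline. Below is a sketch using NumPy's real FFT in place of the sandbox's JavaScript FFT (the test signal and variable names are mine). It applies the normalization described above: divide by N/2, with the k = 0 and k = N/2 terms divided by N, and a sign flip on the imaginary part to turn it into sine-wave amplitudes:

```python
import numpy as np

N = 32
i = np.arange(N)
# a test signal: DC offset + a cosine at k=2 + a sine at k=5
x = 0.5 + 3 * np.cos(2 * np.pi * 2 * i / N) + 1.5 * np.sin(2 * np.pi * 5 * i / N)

X = np.fft.rfft(x)               # N/2 + 1 complex points (the forward DFT)
ReX, ImX = X.real, X.imag

# normalization, including the two special cases at k = 0 and k = N/2
ReXbar = ReX / (N / 2)
ReXbar[0] = ReX[0] / N
ReXbar[-1] = ReX[-1] / N
ImXbar = -ImX / (N / 2)

# synthesis: sum N/2+1 cosine waves and N/2+1 sine waves
k = np.arange(N // 2 + 1)
basis = 2 * np.pi * np.outer(k, i) / N
x_rec = ReXbar @ np.cos(basis) + ImXbar @ np.sin(basis)

print(np.allclose(x, x_rec))     # True: the synthesis recreates the signal
print(round(ReXbar[2], 6), round(ImXbar[5], 6))   # 3.0 1.5
```

For this 32-point signal the 17 cosine and 17 sine waves summed here are exactly the DFT basis functions described in the article.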
Welcome to my site devoted to research on the physics of baseball. My particular research interests are two-fold: the physics of the baseball-bat collision and the flight of the baseball. I have done quite a bit of independent research in both areas. I am also heavily involved with several areas of practical interest to the game. One is characterizing, measuring, and regulating the performance of non-wood bats, an area for which I have served on committees advising the NCAA and USA Baseball. Another is exploiting new technologies for tracking the baseball, such as TrackMan, Rapsodo, and now Hawkeye for novel uses in baseball analytics. But this site does much more than catalog my own work. It attempts to provide links to much of the high-quality work done over the past decade or so on various aspects of the physics of baseball. If readers know of a site that I have overlooked, please contact me.
Modifications to Jan 08, 2024

Dynamic masses, option to copy properties, assemblage stages, composite structures (steel & reinforced concrete), soil cushion (constructed soil), fire resistance analysis, piles, load combinations, output data in *.csv, ultimate inelastic strain, coupling of profiles, combined types of pilot reinforcement (PR types), rope, stability analysis, thin-walled shapes, driven piles, analysis of pilot reinforcement (PR), iterative solids

• Plug-in for Tekla Structures: restored option to update the cross-sections according to results of FEA, selection/check of steel cross-sections.
• Enhanced export from Revit to LIRA-SAPR: plates, openings and loads with curved contour segments.
• Enhanced IFC import, namely:
  □ import of wall snaps;
  □ recognition of beam cross-sections;
  □ recognition of column cross-sections;
  □ fixed bug in recognition of the space elevation;
  □ recognition of stairs and conversion of prisms into stairs is modified.
• Fixed bug in IFC export that caused the application to fail in some cases.
• Export from DWG is improved. When the "Section only" option is active for a section, stairs are no longer exported when exporting to DWG if they are not included in the section.
• Enhanced option to copy storeys from one project to another.
• Restored option to display the "Offset" parameter for beams in the "Properties" dialog box.
• Enhanced algorithm for generation of punching shear contours for columns with a section rotated on plan.
• Added check for the method of generation - "Rectangle". Now the object is not generated if two sides of the rectangle coincide.
• Improved transfer of plates from VISOR-SAPR module to SAPFIR module (plates restored from the design model to physical model).
• Account for the volume of capital and column base when calculating the concrete volume in columns.
• Enhanced generation of bar analogues of partitions for walls with a large number of openings of different heights.
• The "Fill pattern for opening" dialog box is modified.
• Fixed bug in displaying reinforcement in 3D view.
• Fixed bug: the load applied through the slab properties did not take into account the opening in slab.
• Fixed bug: deleting the links between nodes in the algorithm saved to the SAPFIR library.
• Enhanced work of GridWall node. Now triangulation is taken into account when the wall is copied by storeys.
• Triangulation by nodes is improved.
• Modified node for generation of grid lines from the underlay.
• Modified nodes for import of IFC and XLS.
• Fixed bug in nodes for load generation when load values should be obtained from tables via ImportXLS node.
• Fixed bug in the ImportIFC node when trying to update it.
• Fixed: possible bug when opening the *.lir files for problems that contain deleted but not packed elements in which dynamic masses were defined.
• To copy the node properties, the "Copy for selected objects" command is restored in the "Information about node" dialog box.
• Fixed a possible bug in solver; it occurred when the number of specified assemblage stages was large (more than 300 stages).
• Fixed bug in transferring of SAPFIR models that contain composite (steel & reinforced concrete) cross-sections.
• The calculation of bimoment Bw is clarified for the 3D bar with account of the warping of the section (FE type 7).
• In the SOIL system, modified option to assign properties for the layer of soil cushion (constructed soil).
• Corrected option to select transverse reinforcement in analysis on fire resistance for DBN B.2.6-98:2009.
• In the calculation of pile stiffness, the pile parameter "exclude offset length from pile length" is corrected.
• For FE 265, 266, the stiffness description is restored to output the correct value for a specified clearance.
• When the DCL table is generated by SP RK EN 1990:2002+A1:2005/2011, the possible failure of the program is eliminated when creating the main combination that contains wind and snow types of loads.
• Methods for importing files and generating the output data in CSV format are added to LIRA-FEM API.
• Modified: option to account for ultimate inelastic strain when transferring DCF forces to the analysis of steel joints. Previously, the reduction factor was not taken into account when there were less than two components as a result of the earthquake analysis by KMK 2.01.03-19.
• Fixed bug: failure of application when trying to configure coupling of built-up profiles of channel and channel+I-beam types.
• Corrected: possible failure of the application when calculating parameters of nonlinear cracks in RC plates in which the reinforcement in the plate cross-section is defined using multi-component types of PR (pilot reinforcement).
• The stiffness of the FE 310 and stiffness of the steel rope is now updated when the diameter of the rope is modified.
• Restored: stability analysis for thin-walled shapes (thickness less than 4 mm) by DBN B.2.6-198:2014.
• Corrected: output of mosaic plots for utilization ratio in buckling of steel beams and columns.
• Partial safety factor Yc is taken into account in calculating the load-bearing capacity of driven piles in pullout load.
• For RC bars, correct presentation of values on the mosaic plot for reserve factor of reinforcement in the "Reserve factor/Additional reinforcement (RF/AR)" check mode, as well as the output of values in the "Reinforcement in elements" tables.
• Clarified computation of the stress strain state in iterative nonlinear solids with the specified concrete and reinforcement.
• For SP RK EN 1992-1-1:2004/2011, corrected the error in calculating the reserve factor for reinforcement in plate elements when the acting forces from shear loads are less than the load-bearing capacity of the concrete section.
• The restriction on the number of soil layers defined in the "Boreholes" dialog box of the SOIL system is removed.
• Corrected documentation in the "Boreholes" input data table for the case when the numbering of boreholes does not start with "1".

Jun 28, 2023

DCF, crane loads, tables, stiffness refinement, coupled DOF, soil properties, Report Book, surface load, super-elements, initial imperfections, modulus of dynamics 61, interface settings, step-type nonlinearity, input tables, METEOR system, steel, responsibility factors, brick piers, pile stiffness, progressive collapse, corrosion, Cross-section Design Toolkit, principal and equivalent stresses, Tekla Structures 2023, partial safety factors, SP RK EN 1998-1:2004/2012, 'fixed and non-fixed' elements, interactive tables, FOK-PC, nominal slenderness method.

• Fixed bug: snap of the section when mirror copying the columns.
• Fixed bug: behaviour of rebar dowels for columns with a shifted section (not in the centre of mass).
• Fixed option: creating storeys in the node Import *.ifc.
• Fixed bug: when importing curvilinear beams from *.ifc file.
• Fixed bug: rotation of section for almost vertical beams.
• For inclined slabs, the option to create additional load cases in problems with assemblage analysis.
• Added option: to cut inclined columns by storeys.
• Fixed bug: in trim of beams for inclined columns.
• Fixed filter to export a 3D model to AutoCad.
• Fixed bug: in uniting slab contours with holes.
• Restored option to generate DCF combinations for crane loads.
• Fixed encoding in a number of text tables (in *.rpt format): table of soil properties, results for calculation of subgrade moduli C1/C2, results of pile calculation, tables of results for analysis of metal structures.
• Fixed bug: possible looping during iterative stiffness refinement in the nonlinear structural analysis 'NL Engineering 1' for elements with a low percentage of reinforcement.
• Enhanced options to represent and modify groups with coupled DOF when the number of groups exceeds 10 000.
• Fixed bug in displaying text pasted from the Clipboard in the 'Soil properties' dialog box.
• Fixed bug in restoring elements of the Report Book with names that contain Cyrillic characters.
• For mosaic plots of concentrated loads, new option to visualize the surface loads and other concentrated forces separately.
• Fixed bug in the presentation of DCF results for super-elements whose names contain Cyrillic characters.
• For finite elements included in a structural element, duplication of loads is eliminated when specifying initial imperfections.
• For the earthquake load by SP RK EN 1998-1:2004/2012, NTP RK 08-01.1-2021 (dynamics module 61), corrected visual representation of the graphs of the horizontal and vertical dynamic factors for soil type III.
• Fixed bug in transferring the user interface settings from previous versions of the LIRA-FEM program.
• For nonlinear problems with super-elements, enhanced presentation of results at intermediate steps of nonlinear histories; for super-elements presented in full, enhanced presentation of results in the window with information about nodes and elements.
• For step-type nonlinear problems, restored text tables of forces by DCL at the final steps of nonlinear load histories.
• Fixed bug in copying the 'Design combinations of loads (DCL)' input table to another design model.
• Fixed bug: possible failure of the program when you make a mirror copy of a model fragment.
• Enhanced option for arrangement of stiffeners in a universal bar of variable cross-section.
• Fixed bug in saving a group of images to folders with long names (Report Book).
• In the METEOR system, restored option to generate an integrated problem in the 'DCF+' mode for problems with time history analysis and problems with nonlinear load histories.
• Improved scaling of toolbar icons for 4K UHD monitors.
• Added option to account for user-defined values of structure responsibility coefficients when calculating DCF for brick piers.
• In calculation of pile stiffness, enhanced options to define the type of soil layer under the pile toe.
• Fixed bug in computing the width of the settlement zone for some types of standard sections.
• In analysis of steel structures on progressive collapse, in history/DCF/DCL for the last load case (Special/Emergency), it is possible to define the duration coefficient for the check by deflections.
• Fixed bug: possible pause of batch analysis during iterative refinement of pile stiffness.
• Fixed bug: possible discrepancy between values in dialog boxes and in the report file when displaying the results for the calculation of forces in partitions.
• For steel sections analysed with account of corrosion, refined values of the utilization ratio (in case of overall stability) displayed in the result file.
• Fixed bug when exporting shear forces to the 'Cross-section Design Toolkit' module.
• Fixed bug in displaying the results of calculating the principal and equivalent stresses by DCF in large problems (more than 32 thousand elements).
• Added plug-in for integration with Tekla Structures 2023.
• Fixed bug in analysis of reinforced concrete elements on shear force: for groups of forces A1, B1, C1, D1, E1, corrected use of coefficients γb2 and γb3.
• In analysis of reinforced concrete elements on shear force according to SP RK EN 1998-1:2004/2012, enhanced generation of design parameters of materials.
• In analysis of reinforced concrete bars according to SP RK EN 1998-1:2004/2012, corrected influence of the 'fixed/non-fixed' element parameter in analysis by nominal slenderness.
• Fixed bug in encoding of *.xls files for interactive tables with results.
• Fixed bug in export of data (for calculation of foundations) from LIRA-SAPR 2022 to the FOK Complex when the export file contained Cyrillic characters.
• Fixed bug in rotating the model about the global X and Y axes when the design model (in isometric projection) is rotated.
• In analysis of reinforced concrete bars according to SP RK EN 1998-1:2004/2012, enhanced calculation of the reduced moments for columns in analysis by nominal slenderness.

Jun 28, 2023
*.ifc import, snap of section, rebar dowels, nodes, curvilinear beams, inclined slabs, trim option for beams
• Fixed bug: snap of the section when mirror copying columns.
• Fixed bug: behaviour of rebar dowels for columns with a shifted section (not at the centre of mass).
• Fixed option for creating storeys in the Import *.ifc node.
• Fixed bug when importing curvilinear beams from an *.ifc file.
• Fixed bug: rotation of section for almost vertical beams.
• For inclined slabs, added the option to create additional load cases in problems with assemblage analysis.
• Added option to cut inclined columns by storeys.
• Fixed bug in trim of beams for inclined columns.
• Fixed filter to export a 3D model to AutoCAD.
• Fixed bug in uniting slab contours with holes.
Feb 17, 2023
assemblage stages, additional load cases, max area of reinforcement, test perimeter, surface load, PRB, coefficient to modulus of elasticity, behaviour coefficient, DCL, graph of change in reactions over time, deformed shape, structural blocks, summing up loads, polar moments of inertia for masses, partitions, damping ratio, earthquake combination, input tables, load case editor, dynamic load cases, post-stage load cases, iterative process, analysis by accelerogram, foundations for machines with dynamic loads, horizontal stiffness, load combinations, properties of GE, soil cushion, piles, punching shear analysis, slenderness ratio, pattern of crack propagation, effective length, buckling, time history analysis, shear stress, steel grades, bore holes, wind loads, asymmetric pressure, summing up mode shapes, project structure, new triangulation method, joints, embedded items, drawings, reinforcement pattern, transverse reinforcement, reinforcing cages, nodes

Interoperability - components of BIM technology
• Improved plug-in Revit - LIRA-SAPR:
  □ the 'Export' dialog box is now non-modal, so it is possible to assign properties to Revit analytical models without closing the dialog box;
  □ restored option to export the Linear load on an element from Revit 2022 to the LIRA-FEM program;
  □ new option to assign 'materials by category' for the English localization of the program.
• Combined *.dwg and *.dxf imports for the 'Import floor plans', 'Import AutoCad drawing', 'Import model to new project' commands and for the 'Import underlay in *.dxf, *.dwg format' node.
• In the 'Import floor plans' dialog box, the storey height may be saved to the parameter template to be applied later.
• Enhanced import of IFC:
  □ storeys are created by slabs for models that are saved in the IFC as one storey;
  □ improved recognition of walls with a large number of faces;
  □ improved recognition of openings;
  □ new recognition of object colours;
  □ improved import of beams.
• If the wind load is applied by method '1 - to ends of floor slabs', the pressure/suction option is implemented separately (for all building codes). If the option is set to 'Yes', separate loads are generated for the positive and negative wind pressure.
• SP RK EN 1991-1-4:2005/2011 Wind loads, clause 7.1.2 (Asymmetric wind pressure) is supported. If the wind load is applied by methods '1 - to ends of floor slabs' and '2 - positive/negative wind pressure', it is possible to define the parameter 'Asymmetric wind pressure' with pressure coefficients (left/right) for all building codes.
• In the 'Sum up loads' dialog box, there is a new option to edit the total load separately for each direction and for each load case. This option is available in the 'Architecture' and 'Meshed model' modes.
• New modules for earthquake loads according to the building codes of Uzbekistan KMK 2.01.03-19 (module 33), Tajikistan MKS CHT 22-07-2007 (module 48) and Georgia PN 01.01.-09 (module 53).
• The following parameters are added for all earthquake modules:
  □ the required percentage of modal masses;
  □ option to sum up the displacements with the same frequency;
  □ option to define the method for summing up the earthquake components;
  □ account of excluded and non-computed mode shapes.
• In the 'Meshed model' mode, the 'Copy loads to architecture' option is available in the 'Load cases' dialog box, on the 'Edit load cases' tab (see the shortcut menu). It enables the user to copy any loads to architecture, including wind loads and loads obtained after load collection with proxy objects.
• Option to delete a load case together with all loads it contains:
  □ for wind, earthquake, special load and soil pressure, the relevant items are deleted in the 'Structure' window;
  □ for a load defined in the properties of a slab, the relevant item with the load value is cleared;
  □ for objects with interpretation 'Load' (partition, beam, column, slab, etc.), the object itself is deleted.
• A new triangulation method, 'adaptive quadrilateral version 2', is implemented. Based on comparison results, for certain problems this method may speed up the process by 2 to 4 times. The greater the ratio of model dimension to triangulation step and the more mandatory points for triangulation, the faster the new method is relative to the previous one. Also, for a number of problems in which the 'Smooth mesh' option is enabled, the FE mesh quality is considerably enhanced.

Large Panel Buildings
• Enhanced division of the FE of the joint over openings.
• Enhanced generation of embedded items in the joint over the openings in case the 'Lintel is simulated with bar' option is defined in the properties of the opening.

Design of RC structures (Reinforced Concrete)
• Drawings and detail views for punching shear according to SP RK EN 1992-1-1:2004/2011.
• A new 'Arrangement of reinforcement' dialog box is provided to define the parameters for the arrangement of transverse reinforcing bars in punching shear and then to design with separate rebars or reinforcing cages.
• Added nodes: 'Arc by three points', 'Arc by two points and direction', 'Plane by three points'.
• For EN 1990:2002+A1:2005 and SP RK EN 1990:2002+A1/2011, it is possible to generate characteristic DCL(c) from the output data of a time history analysis of the problem. With this option you could, for example, use time history analyses for the design of active seismic insulation and at the same time verify the load-bearing capacity of structural elements (reinforced concrete, etc.).
• Enhanced visualization of the model so that assemblage stages are displayed with account of additional load cases. When this option is enabled, not only the elements assembled at this stage are displayed, but also the additional load cases associated with this stage. Mosaic plots for loads and summing up of loads (additional load cases defined in the current assemblage stage are considered) depend on whether this option is enabled.
• For problems in which erection is modelled, new checks of the input data are added. In the 'Model nonlinear load cases' dialog box, there is a new command to select the elements assembled (mounted) at the current stage with loads applied before assemblage.
• New option to add the selected element to the list of elements that should be assembled or disassembled.
• New option to display the mosaic plot for the max area of longitudinal reinforcement at the corners/top and bottom faces/side faces of the bar section along a certain direction.
• New mosaic plots for the output data from analysis of reinforced concrete (RC) structures:
  □ codes for errors in calculation of seismic safety factors FS in bars and plates for DBN B.2.6-98:2009 'Concrete and reinforced concrete structures';
  □ test perimeter Uout in punching shear according to SP RK EN 1992-1-1:2004/2011 'Design of reinforced concrete structures'.
• Mosaic plots for angles between the selected local (unified) axis/axis of orthotropy of the plate and the global axis.
• New option to edit surface loads in groups.
• When a perfectly rigid body (PRB) is generated, there is an option to convert selected PRBs into bars of high stiffness. This feature may be helpful when modelling thermal loads, e.g. for floor slabs, to avoid stress concentrations at the slab-to-column connections.
• Visualization of envelopes for max, min and max in absolute value of the examined parameter for nonlinear histories (without intermediate results) and crack parameters (with intermediate results).
• Visualization of envelopes for max in absolute value (ABS) of examined parameters for DCF and characteristic DCL.
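The envelope items above amount to taking per-element extremes across a set of result steps. A minimal Python sketch of that idea (illustrative only, not LIRA-FEM code; all names are made up):

```python
# Illustrative sketch: max / min / max-in-absolute-value envelopes of one
# examined parameter over the steps of a history, computed per element.
def envelopes(history):
    """history: list of per-step result lists, all of equal length."""
    per_elem = list(zip(*history))  # group values by element
    return ([max(v) for v in per_elem],          # max envelope
            [min(v) for v in per_elem],          # min envelope
            [max(v, key=abs) for v in per_elem]) # signed value with max |v|

# Three steps, two elements:
steps = [[1.0, -5.0], [3.0, 2.0], [-4.0, 0.5]]
mx, mn, ab = envelopes(steps)  # ([3.0, 2.0], [-4.0, -5.0], [-4.0, -5.0])
```

Note that the ABS envelope keeps the signed value whose magnitude is largest, which is why it can coincide with the min envelope.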
• When the coefficient to the modulus of elasticity is defined, there is a new option to edit the previously defined coefficients for the selected elements or for the whole model by a factor of n.
• In the 'Skews' dialog box, a new behaviour coefficient q is added to calculate the sensitivity coefficient of storey skew in the earthquake design situation.
• In the 'Design combination of loads' dialog box, there is a new option 'Replace types of load cases in the current DCL table with the data from the 'Edit load cases' dialog box'. Important: when this command is activated, the DCL combinations defined earlier will be reset.
• In time history analysis, the changes in reactions over time can be displayed graphically for the nodes where the load (reaction) on the fragment is calculated.
• In the window with graphs of changes over time, new command 'Bring to extreme values' for displacements, forces, loads on a fragment, loads on a group of partitions, temperature and graphs of kinetic energy. When the 'Bring to extreme values' mode is activated and you add check time points with the mouse pointer, the time point with the nearest local extremum is assigned.
• To consider imperfections in complex structures, the strategy is to generate the initial geometry with curvature. A good choice of shape for the initial imperfections should replicate the global buckling mode. The current version implements a technology to update the model's geometry according to the results (displacements, mode shapes, buckling mode).
• Option to convert the soil pressure into a static load.
• Option to define the side for presentation of grid line tags.
• Option to automatically subdivide (along the vertical) the structural blocks of walls and columns with account of defined elevations.
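The imperfection strategy described above (seeding the initial geometry with a buckling-mode shape) can be sketched as offsetting node coordinates along a normalized mode. This is an illustrative sketch under assumed data structures, not the program's implementation; `apply_imperfection` and its arguments are hypothetical names:

```python
# Illustrative sketch: initial geometric imperfection as a scaled buckling mode.
def apply_imperfection(coords, mode_shape, amplitude):
    """Return new node coordinates x_i + amplitude * phi_i / max|phi|."""
    peak = max(abs(v) for node in mode_shape for v in node)
    scale = amplitude / peak if peak else 0.0  # normalize mode to unit peak
    return [
        tuple(x + scale * phi for x, phi in zip(node, shape))
        for node, shape in zip(coords, mode_shape)
    ]

# A 6 m column with a sway buckling mode; peak imperfection of 0.02 m.
coords = [(0.0, 0.0, 0.0), (0.0, 0.0, 3.0), (0.0, 0.0, 6.0)]
mode = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
new_coords = apply_imperfection(coords, mode, amplitude=0.02)
```

The normalization step matters: mode shapes come out of an eigensolver with arbitrary magnitude, so only the shape is reused and the peak offset is set explicitly.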
• In the 'Summarize loads' dialog box, new option to calculate polar moments of inertia of masses both for the whole model and for an arbitrary fragment of the model, relative to the calculated centres of masses or to arbitrarily defined poles.
• Option to recalculate the total and unit loads calculated on partitions (for cases when the group of partitions and/or design levels is modified after complete analysis).
• Option to assign the damping ratio to elements of the design model and to visualize the mosaic plot for the damping ratio. By default, the damping ratio for the elements is not defined (= 0). It is possible to assign different damping ratios to certain elements. If the damping ratio is not assigned to some elements, the value of the damping ratio defined in the parameters for dynamic modules 27 and 29 is applied in the analysis.
• For SP RK EN 1990:2002+A1:2005/2011, the option to present the combination table 'explicitly', so the combination coefficients and reduction factors are corrected with account of the safety factors to the loads and the type of defined loads. Important: in the current release, this option is implemented for the case when the normative loads are used in the model.
• In the DCL table for SP RK EN 1990:2002+A1:2005/2011, in the 'Coefficients' dialog box, new column with fi coefficients that reduce the contribution of a load case to the earthquake design combination. By default, all values are 1. Note: according to the new NTP RK 08-01.2-2021 (see pages 43-45, chapter 4), it is necessary to reduce the contribution of some temporary loads to the generation of masses for the earthquake load. Coefficients of combinations in the respective load types will be multiplied by the specified reduction coefficients fi. It is recommended to create a separate DCL table specifically to generate earthquake masses: the fi coefficients should be modified there and a DCL combination should be generated; the masses will be collected from this combination.
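The polar moment of inertia of masses mentioned in the first bullet is a standard quantity: J = Σ mᵢ·rᵢ², with rᵢ measured from the centre of masses or an arbitrary pole. A small self-contained sketch (illustrative only; not the 'Summarize loads' implementation):

```python
# Illustrative sketch: polar moment of inertia of lumped masses in plan,
# about the computed centre of masses or an arbitrary pole.
def polar_moment(masses, xy, pole=None):
    total = sum(masses)
    if pole is None:  # default: centre of masses
        pole = (sum(m * x for m, (x, _) in zip(masses, xy)) / total,
                sum(m * y for m, (_, y) in zip(masses, xy)) / total)
    px, py = pole
    return sum(m * ((x - px) ** 2 + (y - py) ** 2)
               for m, (x, y) in zip(masses, xy))

# Two equal 10 t masses 4 m apart: J = 2 * 10 * 2^2 = 80 about the midpoint.
J = polar_moment([10.0, 10.0], [(0.0, 0.0), (4.0, 0.0)])  # 80.0
```

Passing an explicit `pole` corresponds to the "arbitrarily defined poles" option; omitting it reproduces the centre-of-masses case.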
• An input table for 'Coefficients to stiffness' related to subproblems.
• An option to control the generation of a tracing routine for the punching shear analysis of certain contours by SP RK EN 1992-1-1:2004/2011 'Design of reinforced concrete structures'.
• The high speed of DCL calculations is restored.
• Enhanced generation of mosaic plots for concentrated loads. Surface loads specified for plates and solids are no longer considered in mosaic plots for concentrated loads.
• The data in the dialog boxes 'Edit load cases', 'Table of dynamic load cases' and 'Account of static load cases' is synchronized.
• For 'collapsed' dynamic load cases, in the mode of analysis results visualization, a table of coefficients may be pasted from the Clipboard. By default, the coefficients for the components remain '1' in this case.
• New commands on the ribbon user interface, new menus and toolbars in the classic user interface.
• Numerous interface and other user requests are implemented.

FEM solver
• The coefficient to stiffness (kE) may be applied to all nonlinear elements available in the analysis of post-stage load cases, so the coefficients are applied to the linearised stiffnesses obtained in the analysis by 'NL Engineering 2'. This feature may be helpful, for example, when it is necessary (1) to use the diagrams for materials' behaviour in long-term load, (2) to redistribute stiffnesses with account of cracks in RC sections and (3) for post-stage load cases (wind pulsation, impact/harmonic, earthquake), to transfer to the short-term modulus of elasticity.
• For each dynamic load with specific criteria for termination of the iterative process (reaching the required total of modal masses, ultimate frequency, etc.), after each iteration the program displays information about the accumulated total masses (for earthquake) and about the max calculated frequency (for pulsation components). Based on this information the user can decide whether the iteration process should be continued or terminated to reduce the analysis time.
• In the analysis by accelerograms with dynamics modules 27 and 29, for design models that consist of elements or fragments of structures with different damping properties, the equivalent damping for the j-th mode of vibration is calculated by the formula:
  ξj = ({φj}ᵀ · Σᵢ [ξK]ᵢ · {φj}) / ({φj}ᵀ · [K] · {φj}),
  where {φj} is the vector of the j-th mode shape, [K] is the stiffness matrix of the model, and [ξK]ᵢ is the stiffness matrix of the i-th element or fragment of the structure multiplied by the damping ratio of that element.
• Calculation of elastic foundation ('Method 5') by formula (4) of SNIP 2.02.05-87 'Foundations of machines with dynamic loads'. It enables the user to compute the coefficient of elastic uniform compression Cz (C1z) under dynamic loads on the foundation.
• Stiffness of a one-node FE is calculated in order to simulate the shear stiffness of the soil base depending on the C1z assigned to the adjacent elements or on the C1z defined by the user. Note: it is possible to calculate the stiffness of a one-node FE that simulates the rotational stiffness of the soil base around the vertical and horizontal axes. It should be noted that the linear stiffnesses distributed at the foundation base - subgrade moduli C1 = Cz - also resist the rotation of the building. Therefore, the obtained rotational stiffnesses, distributed across the foundation area on the one-node FEs at the appropriate nodes, should be modified by the user. Tip: to introduce elastic springs in foundations, it is preferable to use FE57 rather than FE51, as in this case no extra stiffnesses will appear in the list of stiffnesses. To obtain a mosaic plot of stiffness in FE57 for visualization and the report, use the drop-down menu 'Mosaic plots of geometric properties of piles'.
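The equivalent-damping formula quoted in the notes is a stiffness-weighted average of element damping ratios over a mode shape. A self-contained numerical sketch (pure Python, dense matrices as nested lists; illustrative only, not the solver's code):

```python
# Illustrative sketch of xi_j = phi^T (sum_i xi_i K_i) phi / (phi^T K phi),
# where K = sum_i K_i is assembled from the element stiffness matrices.
def quad(phi, K):
    """Quadratic form phi^T K phi for a dense square matrix K."""
    n = len(phi)
    return sum(phi[r] * K[r][c] * phi[c] for r in range(n) for c in range(n))

def equivalent_damping(phi, element_K, element_xi):
    n = len(phi)
    # Assemble the global stiffness matrix from the element matrices.
    K = [[sum(Ki[r][c] for Ki in element_K) for c in range(n)] for r in range(n)]
    num = sum(xi * quad(phi, Ki) for Ki, xi in zip(element_K, element_xi))
    return num / quad(phi, K)

# Two one-DOF fragments on the same DOF: stiffness 2 with 5% damping,
# stiffness 1 with 2% damping -> xi = (0.05*2 + 0.02*1) / 3 = 0.04.
xi = equivalent_damping([1.0], [[[2.0]], [[1.0]]], [0.05, 0.02])
```

As the example shows, stiffer fragments pull the modal damping ratio toward their own value, which is the intended behaviour when a model mixes materials with different damping.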
• New option to combine loads not only by %, but also by min absolute value. A global setting is added to the 'Options' menu.
• Option to numerically display the properties of the geological element (GE) on the soil profile. This graphical representation may be used for documentation in the Report Book.
• The direction of the local Z1-axes of horizontal plates is examined to exclude positive soil pressure Rz, so that positive soil pressure Rz is not transferred to the input data when C1/C2 is calculated iteratively.
• Expanded options to define the soil cushion:
  □ to generate a soil cushion of variable layer depth - 'Up to the bottom of the layer';
  □ to add the weight of the soil cushion to the additional loads (in this case, the natural pressure is considered only from the natural soil);
  □ in the first release of version 2022, there was a new option to define parameters of the soil cushion for individual subgroups of imported loads; now the whole set of new parameters may also be applied separately to different foundation fragments to which separate Pz subgroups are assigned;
  □ in LIRA-SAPR 2022 R1 and earlier versions, the soil cushion was considered as part of the natural soil. In this case, the diagram of natural pressure was generated from the heavier soils (natural soils or soil cushion, in order to obtain a more conservative result - a greater depth of compressible stratum), but the diagram from the weight of excavated soil (sigma-zy) is in all cases generated only from the natural soils. The important point is that for the soil cushion, the conversion factor to the 2nd modulus of elasticity should be defined as equal to 1, because the soil cushion is not deformed by the natural pressure of the natural soil;
  □ from LIRA-SAPR 2022 R2, it is possible to automatically generate the weight of the soil cushion and add it to the additional loads (including a soil cushion of variable layer depth), so the soil cushion may be considered as part of the foundation and not the ground (in this case the natural pressure is considered only from the natural soil).
  The figure below considers a two-level foundation in which the natural soil is automatically replaced. In this case:
  □ for the lower foundation, the soil cushion is defined as part of the natural soil (it does not generate additional pressure - e.g. weak soil is replaced on the construction site, settlement from the weight of the soil cushion is complete, and then the foundation and basement structures are erected);
  □ for the upper foundation, the soil cushion that replaces the weak soil is already a part of the foundation, as its weight affects the settlement of the available parts of the foundation and the plinth.
• It is now possible to exclude the length of the offset from the pile length. Thus, the pile length from the bottom edge of the foundation slab to the pile toe can be specified in the input data.

ARM-SAPR (Reinforced Concrete Structures)
• Option to check and select reinforcement based on the DCL(c) generated for problems with time history analysis.
• For SP RK EN 1992-1-1:2004/2011, the force combinations are divided into groups (fundamental, emergency and earthquake); the corresponding material properties are considered in the punching shear analysis.
• For structural elements of columns where reinforcement is selected for slenderness by nominal curvature, a single output is available for the results of the selected areas for all elements in the structural element. This option is available for analysis according to SP RK EN 1992-1-1:2004/2011.
• For SNIP 2.03.01-84*, in the material parameters 'Concrete', there is a new option 'Clarify the pattern of crack propagation'. When this option is active, analysis of longitudinal reinforcement for a plate element is carried out regardless of the ratio of the core moment tensor and the crack propagation moment.
• The calculation of the effective length factor for structural elements is corrected for SP RK EN 1992-1-1:2004/2011. In earlier versions, moments for calculating Meqv were taken at the ends of the StE, but the effective length factor was taken from the lengths of the individual StEs.
• Output of results for the case 'Reinforcement was added according to strength in inclined sections'.
• Modified algorithm for analysis of reinforcement in plate elements according to Karpenko's theory in buckling.

STC-SAPR (Steel Structures)
• DBN B.2.6-198:2014 Amendment No. 1 is supported.
• New check/selection of the section by the DCL(c) generated for problems with time history analysis.
• In the local mode, for the element type 'column', a separate output of the utilization ratio (%) is provided for tangential stresses. In previous versions, the results of this check were included in the final utilization ratio, so it was difficult to evaluate the analysis results.

SRS-SAPR (Steel Tables)
• New steel tables are added:
  □ DSTU 8539:2015 'Rolled steel for building steel structures';
  □ DSTU 8541:2015 'High-strength rolled steel products';
  □ DSTU 8938:2019 'Seamless hot-deformed steel pipes'.

BRICK (Masonry Structures)
• In problems with time history analysis, it is possible to generate the graph of change in loads for brickwork levels.

Report Book (Documentation system)
• If a group of selected Report Book images is saved simultaneously and the 'Apply to all files' checkbox is selected, there is a new option to resize the remaining images by enlarging or reducing them to the dimensions of the first image, which are shown in the corresponding boxes.
• To control and document the input data, there is a new option to present the DCL combinations as formulas.
• New option to document soil and borehole properties and paginate this data.
• Option to save the graphs of changes in reactions over time in *.xls or *.csv format.
• A column with an index of a force group is added in the table of forces in the punching shear analysis.

Feb 13, 2023
Jan 04, 2023
Slab contours, axes, *.ifc, Autodesk Revit, load collection, special load, intersections, steel table, pile array, levels, PRB, lintels, joint, reinforcement colour palette, rope, SP RK EN 1992-1-1:2004/2011, SP RK EN 1993-1-1:2005/2011, DCF table, characteristic combination, modal analysis, three-component accelerogram, metal section, moduli of subgrade reaction (subgrade moduli), analysis on seismogram, analysis termination, trapezoidal load, loading by formula, structural elements, calculation of deflection
• For floor plans, corrected bugs in the cases when:
  □ two slab contours (foundation slabs) are generated if one of their edges coincides;
  □ grid lines are created by layer if the grid line label and the grid line itself are on different layers.
• Enhanced export of beams to an IFC file.
• Fixed bug in the output of reinforcement results for the LIRA-FEM and Autodesk Revit integration.

Preprocessor LIRA-CAD
• For the 'Special load' tool, the number of created loads is displayed in the 'Edit load cases' dialog box.
• In the project properties there is a new parameter, 'Intersection diagnostic error', that is used when the floor slabs are checked for intersections.
• It is possible to switch the table's orientation from rows to columns in the steel table.
• Work with large arrays of piles is accelerated (options to copy, select, move objects). Files with large pile arrays are now opened more quickly.
• Improved autodetection of analytical floor levels for cases with many multi-level slabs within one floor.
• Enhanced generation of PRB column-wall for situations where the model contains offsets from column to wall.
• Fixed script error when you open a file (and its folder) from the SAPFIR start page. Improved loading of the start page in cases of poor internet connection.
• Restored the 'Apply to adjacent walls' option in the properties of the opening.
• Enhanced 'Mirror' command for openings in the slab and wall.
• Corrected error where a slab with interpretation 'Load' was not included in the meshed model.
• Fixed bug that in certain situations caused the window infill to disappear.
Panel buildings
• For a lintel (simulated with a bar) that is defined in the properties of door and window openings, the bar is divided along the FE of the horizontal joint (restored option) when the model is transferred from the SAPFIR module to the VISOR-SAPR module.
• Enhanced auto arrangement of joints with the 'Apply' command in the Joint tool.
Design of RC structures
• The reinforcement colour palette is updated (restored option) when parameters of the colour palette (colour, diameter, step) are modified.
• Improved view of wall reinforcement for cases where openings in the wall are indicated from the storey top with a negative relocation.
• Enhanced option to bake nodes.
• Restored the 'Rope' node.
• Fixed bug that occurred when you undo a node if the properties of another node were edited just before it.
Compatibility issues are identified for some Radeon graphics card models.
Unified graphic environment, design modules, etc.
• In the analysis of reinforcement according to SP RK EN 1992-1-1:2004/2011, the influence of the slenderness ratio on the values of design moments is clarified when the first order effects (geometric imperfections) are considered.
• Accelerated procedures for the selection of steel sections of the 'rectangular tube' type according to SP RK EN 1993-1-1:2005/2011.
• For plate elements, the number of the characteristic combination is corrected in the data about the reserve factor for reinforcement.
• In the DCF table for the punching shear contours, the shift of force values in the columns of the table is corrected.
• Corrected error in the generation of the mosaic plot for analysis results of reinforcement in bars.
• In the analysis results visualization mode, results are presented graphically for dynamic loads for which the dynamic type is defined as 'Modal analysis' (restored option).
• The scale factors for a three-component accelerogram (dynamics module 29) are corrected if they were set to 0.
• For weakly compressible soils, the calculation of the depth of the compressible stratum Hs is clarified.
• In the 'Metal section' dialog box, presentation of the specified corrosion data is restored.
• The stiffness properties of an I-beam section defined parametrically (that is, without a steel table) are clarified in case of rotation of the section.
• In the 'Information about element' dialog box for FE 53, the 'subgrade moduli' tab is restored.
• The seismogram calculation in nonlinear time history analysis is refined.
• Modified calculation for the rigid joint of metal beams and beam-to-column connections.
• When specifying physically nonlinear stiffness for standard section types, it is possible to define parameters to arrange the horizontal reinforcement (restored option).
• Clarified analysis of transverse reinforcement in RC elements according to TKP EN 1992-1-1-2009.
• Limitations to stability analysis for steel cross-sections of beams, columns and trusses are corrected according to SP RK EN 1993-1-1:2005/2011.
• Clarified conditions for termination of analysis according to a certain criterion for geometrically nonlinear problems.
• Fixed bug in the location of trapezoidal load for bars with variable cross-section.
• The list of FE types and the conditions for assigning C1/C2 and Pz to bars are clarified.
• For SP RK EN 1992-1-1:2004/2011, reinforcement class C is selected by default when RC materials are defined (value of the coefficient k=1.15).
• Fixed bug in the analysis of load cases by formula when no component is calculated for a dynamic load available in the formula.
• In the 'Stiffnesses and materials' dialog box, it is possible to define the stiffness for FE 341-344 (restored).
• A possible program crash during the analysis of reinforcement in problems with structural elements is eliminated.
• In the calculation of deflections for steel elements carried out by DCF for load histories, the account of forces for group B2 is corrected.
• In the 'Report Book', possible problems when the DCF table is generated in *.csv format are corrected.
• Fixed bug in generating the mosaic plot for the number of elements adjacent to nodes.
Oct 12, 2022
INTEROPERABILITY - components of BIM technology
• Enhanced options for two-way integration with Autodesk Revit. BIM integration with Autodesk Revit 2023. Export of both the physical and the analytical model. Import of only the analytical model from Revit.
• A special tool to check the reinforcement in plate elements; it enables you to automatically present in a certain colour the under-reinforced areas in plate elements. This tool interacts both with the mesh reinforcement 'Distributed' and with the 'Reinforcement by Path' object.
• Two-way converter Tekla Structures 2022 - LIRA-FEM - Tekla Structures 2022. The converter provides full functionality for the analysis and design of metal and reinforced concrete structures.
• When an IFC file is imported, it is possible to configure the IFC parameters, that is, to match parameters of the IFC object with parameters of the SAPFIR object. Such parameter matching may be performed for each type of IFC object.
• A new tool for import of DWG files is provided. This makes it possible to use this format:
□ as a 2D 'underlay' that may be the basis for generating a model in the SAPFIR module;
□ as a basis for filling the library of typical joints with subsequent generation of drawings;
□ for automatic generation of a model based on DWG floor plans.
• For DXF/DWG floor plans, the following options are added:
□ to import special elements FE 55. With the name of the layer it is possible to set the following parameters: indent of special elements from the floor bottom, stiffnesses (Rx, Ry, Rz, Rux, Ruy, Ruz) and the coordinate system in which they are set (global, local). Moreover, all these parameters may be defined directly in the SAPFIR module when the floor plan is imported.
□ to import vertical triangulation lines for walls. With the name of the layer you can specify the type and step for line approximation.
• Enhanced tool to export types of reinforcement (TR) available in the project for columns to the DXF file.
• Import of new SAF objects:
□ Loads on plates - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load, plane load;
□ Loads on columns - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load;
□ Loads on walls - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load, plane load;
□ Loads on beams - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load;
□ Hinges in beams and columns.
Preprocessor LIRA-CAD
• Enhanced tool that enables you to automatically create triangulation zones for slabs:
□ In addition to triangulation zones (for slabs) located above the walls, it is now possible to create triangulation zones (for slabs) below the walls, with an indent from the wall in 4 directions and an individual triangulation step;
□ Enhanced algorithm for triangulation of contours; it provides better triangulation of slab-to-wall connections.
• It is now possible to automatically refine the triangulation mesh for plates near openings. In the properties of the opening you can define the step of triangulation points around the opening, the number of rows of points with a fixed step and the total number of rows of triangulation points. After the rows with a fixed triangulation step, the program creates several rows with an intermediate step to avoid degenerate triangles where the fine mesh (near the opening) meets the sparse mesh (in the span).
• The 'Enhance triangulation at intersections' option is presented in the properties of the design model to avoid narrow triangular FEs if a coarse triangulation mesh is defined for the model. When this option is selected, at places where narrow triangular FEs would appear, the triangulation mesh is refined and better-quality FEs are generated.
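The graded refinement described above - a few rows at a fixed step near the opening, then intermediate rows whose spacing grows toward the coarse span step - can be sketched generically. This is an illustration of the meshing idea only, not LIRA-FEM's actual algorithm; the function and parameter names are hypothetical:

```python
def grading_steps(fixed_step, span_step, n_fixed, n_transition):
    """Row spacings for graded refinement around an opening:
    n_fixed rows at fixed_step, then n_transition rows whose spacing
    grows geometrically toward span_step, so the fine mesh meets the
    coarse mesh without degenerate (sliver) triangles."""
    steps = [fixed_step] * n_fixed
    if n_transition > 0:
        # growth factor chosen so the last transition step equals span_step
        ratio = (span_step / fixed_step) ** (1.0 / n_transition)
        steps += [fixed_step * ratio ** (i + 1) for i in range(n_transition)]
    return steps

# e.g. 0.1 m step near the opening, 0.8 m step in the span
rows = grading_steps(fixed_step=0.1, span_step=0.8, n_fixed=3, n_transition=4)
```

A geometric growth ratio is one common grading choice: it guarantees the last transition row matches the span step, so the fine and coarse zones meet without an abrupt jump in element size.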
• Enhanced options for the 'Create grid on wall' node to generate horizontal and vertical triangulation lines in the wall with a specified triangulation step. New parameters are added for the node:
□ 'List of levels' - to define the intervals of horizontal triangulation lines from the wall bottom and between each other;
□ 'Intervals by openings' - to adapt the wall triangulation lines to the vertical triangulation lines from openings, if such lines are defined in the properties of the opening.
• Extended options for the 'Sum up loads' dialog box. Now it works not only with the analytical model but with the meshed model as well.
• It is possible to transfer the load from a column base to the soil model. In the column properties you can assign the distributed load on soil Pz, subgrade moduli C1 and C2, horizontal stiffness for the slab supported with soil Cx and Cy, and support and boundary conditions to the analytical presentation of the column base.
• The rendering of visual load models is optimized. In version 2022, a model with a large number of loads rotates, pans and zooms 1.5 times faster than in version 2021. To activate this option, in the 'SAPFIR preferences' dialog box, on the 'Visualization' tab, select the 'Simplified presentation of loads' check box.
• New mode in which the wind load is applied manually rather than generated automatically. For the pulsation load, user-defined static loads may be applied.
• Visualization of wind load in the architectural model with the option to 'freeze' the wind. This option allows you to update or cancel automatic wind generation when the geometry of the structure is modified.
• When the wind pressure (active/passive) is automatically applied in space, it is possible to collect wind on the side walls (zones A, B, C) and define the aerodynamic coefficient for each zone.
• Wind load may be collected for flat, gable and shed roofs according to SP RK EN 1991-1-4:2005/2017.
• The collection of wind loads on bars is optimized.
The slope angles for bars and the rotation angles for cross-sections are now taken into account. Option to modify the coefficient to load for each bar.
• Tools to create a special parametric load. This load is transferred to the VISOR-SAPR module as a load distributed across the plate elements, or as a load distributed along the bar length rather than a surface load. The load intensity may be defined with the parameters 'Surface load, tf/m2' for plate elements or 'Load per r.m., tf/m' for bars. The load may be applied along the normal to the element; in this case, a number of other parameters become available to simulate liquid and gas pressure on tank walls.
• Considerably simplified procedure for collecting loads from a surface or slab and redistributing them to a beam system of arbitrary shape. Floor slabs or surfaces with the special new interpretation 'Proxy objects for loads', and loads with the option 'Loads through proxy objects', are used to distribute the loads. When the design model is generated, the option 'Distribute loads on beams through proxy objects' becomes active and the program automatically performs all further steps: intersections, triangulation, assignment of supports and analysis. Based on the analysis results, the non-uniformly distributed linear loads on the beams are generated in the SAPFIR module. For each element it is possible to correct the coefficient to load.
• For the surface load, new option to cut the contour by a line, a plane (hatching), or by the contour of other objects.
• For all objects that have interpretation 'Load', new parameter 'Additional load case' to add the load to a certain assemblage stage.
• New 'Ventiduct' tool to automatically cut openings in the walls and slabs that it passes through. The openings may be generated exactly according to the shape of the ventilation duct or with a specified indent. Since every opening is associative, it is automatically updated whenever the ventiduct's location or size is modified.
• Option to create an inclined column. In the properties of the object, define the slope angle and direction of inclination for the column. For an oblique column, almost the complete set of properties of the vertical column is available: changing stiffness parameters, generating PRB, assigning support and boundary conditions, generating triangulation points, etc.
• Automatic generation of bar analogues in the SAPFIR module. To generate a bar analogue (BA), simple rectangular cross-sections are recognised:
□ linear sections of wall;
□ slab rectangular in plan;
□ lintel above and below the opening;
□ pylons or beams described in the meshed model with plate-type FEs.
In the properties of the BA you can specify the number of BA sections. Division of the BA may also be specified with the approximation step. To generate a BA from walls or slabs, a new option is added to the properties of the corresponding objects. In addition to the option to replace the area above the opening with a bar, in the properties of door and window openings there is a new option to keep modelling the area above the opening with plate elements and to generate the lintel as a BA. In the same way, it is possible to generate a BA for the zone below the window. For rectangular beams it is possible to generate a BA in the shape of a T-section. The program automatically recognises the height of the T-section, while the flange width of the T-section may be defined in the properties of the BA.
• Enhanced 'Check model' procedure:
□ warnings that are not critical are removed;
□ enhanced algorithm of the search for intersecting slab contours in case the slabs have a compound contour in plan;
□ in addition to the search for duplicated objects, there is a new search for objects whose analytical models partially intersect each other; it helps avoid errors in the meshed model;
□ when the model is checked for coinciding or intersecting objects, it is possible to consider objects from different floors.
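A tolerance-based search for coinciding geometry, of the kind the 'Check model' procedure above performs for duplicated objects, can be illustrated with a minimal sketch. This is a generic illustration, not SAPFIR's implementation; the names are hypothetical, and snapping to a tolerance-sized grid is a simplification that can miss pairs straddling a grid boundary:

```python
def delete_coincident_points(points, tol=1e-6):
    """Remove duplicate 3D points: keep the first occurrence of every
    point and drop later points that snap to the same tolerance-sized
    grid cell. Grid snapping keeps the check O(n) instead of O(n^2)."""
    kept, seen = [], set()
    for x, y, z in points:
        key = (round(x / tol), round(y / tol), round(z / tol))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, z))
    return kept

# two coinciding points collapse to one
cleaned = delete_coincident_points([(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
```

A production check would typically also report which objects coincided (rather than silently dropping them) and compare objects across floors, as the bullet above notes.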
• New tools to generate a retaining wall and a slab of variable thickness. The section contour of the retaining wall is defined in the 'Cross-section parameters' dialog box. For a slab of variable thickness, the max and min slab thicknesses are defined. The analytical model of the retaining wall and the slab of variable thickness is presented as several plates of different thicknesses. The number of plates is specified in the 'number of divisions in analytics' parameter in the slab/wall properties. The plates may be coaxial or shifted by offsets relative to each other.
• Option to edit the wall contour in the plane of the wall.
• For columns and beams, new option to define a variable cross-section for all standard SAPFIR cross-sections. Note that in LIRA-FEM only the rectangular bar and the I-shape may have a variable section, i.e. when a rectangular bar and an I-shape are imported, they retain their parameters. In other cases, after the import procedure the bar is split into parts with increasing stiffness.
• Tools for splitting a wall with a column. In the column properties, there is a new 'PRB column-wall' parameter that allows you to create a perfectly rigid body (PRB) between the wall ends and the column. The PRB is associative, i.e. when one of the walls or columns is moved, the relationship between the objects is retained.
• Optional presentation of the FE mesh on the physical model. The option is available after performing triangulation and saving the *.s2l file to transfer it to the VISOR-SAPR module.
• The generated PRB (defined as a property and generated by the search for intersections) may be displayed on the analytical model. The PRB is displayed as orange lines that connect the nodes included in the PRB.
• Several new tools to evaluate the quality of the generated triangulation mesh: mosaic plots for quality of plates, area of plates, min angles of plates, min lengths of plate ribs, lengths of bars and rotation angles of bars.
• Added 'Align' command to align walls vertically.
There are two modes for alignment: parallelism - after alignment the walls will be parallel to the selected wall, but not coaxial; vertical coaxial - after alignment they will be parallel and vertically coaxial to the selected wall.
• New option to select similar objects in a horizontal row. They are selected with the 'Select horizontally' options ('Select along axis/direction' commands). The following objects may be selected:
□ Columns;
□ Piles;
□ Walls;
□ Beams;
□ Slabs;
□ Foundation slabs;
□ Concentrated load;
□ Linear load.
• Tolerances for analytical models of objects are added to the project properties:
□ 'min threshold of door height' for analytical models of the wall.
• The 'Stair' tool is enhanced:
□ more support options for stairs. It is now possible to assign that a flight of stairs is supported by the landing and floor slabs as a coupled DOF along Z, X and Y, or to select a user-defined support;
□ auto unification of local axes for stairs when the model is transferred to the VISOR-SAPR module.
• In the 'Snap of base point' dialog box it is possible to select the location of the analytical presentation of the beam and column within the section.
• Enhanced 'Shaft' tool to work with storey levels and additional levels within a storey. Openings along the shaft contour are generated automatically in all slabs that the shaft passes through.
• New functionality for the 'Other' object:
□ select the 'Ventiduct' option in the properties of the 'Other' object to automatically generate openings in all walls and slabs that the 'Other' object passes through;
□ the 'Cut by storeys' command is also available for the 'Other' object.
• For the capital and column base, new option to create stages only in one direction.
• The schedule of metal shapes may be organized by user-defined types of elements: column, beam, framework, brace member, purlin, rope, etc.
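One of the mesh-quality metrics mentioned above - the minimum angle of a plate element - is straightforward to compute per triangle; small minimum angles flag sliver elements. A generic sketch of the metric (not the program's code; names are hypothetical):

```python
import math

def min_angle_deg(a, b, c):
    """Smallest interior angle (degrees) of a triangle with 2D
    vertices a, b, c - the per-element quantity a 'min angles of
    plates' mosaic plot could be built from."""
    def angle(p, q, r):
        # interior angle at vertex p, between edges p->q and p->r
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return min(angle(a, b, c), angle(b, a, c), angle(c, a, b))

# an equilateral triangle scores the ideal 60 degrees
q = min_angle_deg((0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2))
```

Elements whose minimum angle falls below some threshold (often a few degrees) are exactly the narrow triangular FEs the 'Enhance triangulation at intersections' option is designed to avoid.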
• For the existing 'Cut' command there is an 'Extend' option that allows you to extend all SAPFIR linear objects up to a specified line. The command is available in 3D views, on floor plans, facades, cross-sections, sectional elevations and drawings.
• It is now possible to save the SAPFIR file together with all the files associated with it (SLD - soil model, DXF, DWG, IFC, SAF, XLS and ASP - reinforcement results) to a separate project folder. A project archive may be created in the same way.
• On the 'Project structure' tab, new option to control visibility of the object.
• The section name is displayed, and elements may be organized automatically - elements with the same section type and size will be located next to one another in the list.
• Some changes and improvements to the 'View' tab, namely:
□ when created, reinforcement types will go to a new chapter, RC;
□ option to organize alphabetically;
□ option to move the reinforcement types within the tree;
□ option to create custom chapters and move reinforcement types across chapters (just drag them);
□ option to change the name of a chapter;
□ option to display reinforcement types by type of RC elements;
□ option to save the camera location;
□ option to select a group of items and then move or delete views.
• More options available on the start page:
□ a shortcut menu for the recently opened files;
□ 'Import' command to import files rather than create an empty *.spf file.
• Any image imported from popular bitmap formats (PNG, JPEG, BMP) may be placed on a sheet of a drawing. When the image is imported, it is possible to change its density, size and aspect ratio.
Design of RC structures
• For types of reinforcement in slab, new option to display (in the working view) notation for patterns of additional reinforcement in the slab as they will be presented in the drawing.
• Auto orientation of labels for main reinforcement along the direction of the unified axes defined in the properties of the reinforced slab.
• Option to create a 2D node from a reinforcement view.
• For types of reinforcement in diaphragm, it is possible to indicate the reinforcement zones on the drawing.
• For reinforcement cages in punching shear, it is possible to modify the class of reinforcement in the 'Schedule of reinforcement' dialog box.
• The 'Unify slabs' dialog box: visual information (the same colours for rows) for slabs of similar area.
• DSTU 3760:2019 is supported for reinforcing bars, reinforcing items, stirrups and studs.
• For the reinforcement model Column, new option to modify the location of stirrups 'manually'.
• Work with models with a large number of NODEs is accelerated.
• New nodes are added:
□ 'Cleanup beams' - to trim or extend beams to walls, columns, lines or other beams. In addition, it is possible to limit the zone in which the cleanup or extension should be made.
□ 'Delete coincident line fragments' - to remove duplicate line segments so that errors do not occur when you generate a model based on these lines.
□ 'Delete coincident points' - to remove duplicate points.
□ 'Ventiduct' - to generate (along the line) an object of type 'ventilation duct' that will cut openings in walls and slabs.
□ 'Shaft by contour' - to automatically create openings in the floor slabs that it crosses.
□ 'Load along vector direction' - to generate uniform and non-uniform linear loads along a specified vector, for example, to apply a wind load to bars.
□ 'Lines from column' - to obtain the vertical axis for a column and the contour line for the column section.
□ 'Convert objects' - to convert some object types to others.
□ 'Import XLS file' - to import an updatable Excel file with numerical values. At the node input it is possible to define where the values should be taken from (which sheet, which columns, rows, cells or cell ranges). You will obtain a node output with the data from the cells, or several outputs (with the corresponding column names) that may be linked to other nodes.
□ 'List of elements specified with indexes' – to divide a list of items from the input into different outputs according to defined indexes. □ 'String to array of real numbers' - to convert a specified text string to an array of real numbers. □ 'String to array of integers' - to convert a specified text string to an array of integers. □ 'Arrays (with sets of points) specified with indexes' - to generate several arrays of points from the 1st set of points according to defined indexes. • Enhanced nodes: □ 'Columns by points' – option to create columns by vertical line (e.g. from 3D dxf). □ 'Advanced generation of storeys by specified levels' - the number of possible inputs for floors is increased from 32 to 1024. □ 'Block of models' - to modify properties of internal objects through connection to the input parameter Par of the 'InPar' node. □ 'Boolean unification of lines', 'Boolean subtraction from lines of input 1 lines of input 2' and 'Boolean intersection of lines' - additional outputs Ln with contours of openings. □ 'Import IFC' and 'Import SAF' - outputs to access imported objects in order to convert them to other object types or modify the properties of imported objects. Oct 12, 2022 Enhanced plugins for Autodesk Revit, Tekla Structures and converters based on *.saf, *.dwg, *.ifc formats. Auto collection of loads. Nonlinear heat conductivity. New finite elements available in the FE library: one-node damper with six degrees of freedom (FE 66) and two-node damper (FE 65). Aluminium structures. ReSpectrum. Subproblems vs Blocks of load cases. Deformation of foundation bed from soil consolidation and creep is calculated, end-bearing piles, calculation of C1/C2 for bars. Additional Reinforcement. INTEROPERABILITY - components of ВIM technology • Enhanced options for two-way integration with Autodesk Revit. BIM integration with Autodesk Revit 2023. Export of both physical and analytical model. 
Import of only analytical model from Revit • A special tool to check the reinforcement in plate elements; it enables you to automatically present in certain colour the under-reinforced areas in plate elements. This tool interacts both with mesh reinforcement 'Distributed' and with the 'Reinforcement by Path' object. • Two-way converter Tekla Structures 2022 - LIRA-FEM - Tekla Structures 2022. The Tekla Structures - LIRA-FEM - Tekla Structures converter provides full functionality for the analysis and design of metal and reinforced concrete structures. • When the IFC file is imported, it is possible to configure the IFC parameters, that is, to match parameters of the IFC object with parameters of the SAPFIR object. Such match option for parameters may can be performed for each type of IFC object. • A new tool for import of DWG files is provided. This makes it possible to use this format: □ as 2D 'underlay' that may be the basis for generating a model in SAPFIR module; □ as a basis for filling the library of typical joints with subsequent generation of drawings; □ for automatic generation of a model based on DWG floor plans. • For DXF/DWG floor plans, the following options are added: □ to import special elements FE 55 With the name of layer it is possible to set the following parameters: indent of special elements from the floor bottom, stiffnesses (Rx, Ry, Rz, Rux, Ruy, Ruz) and the coordinate system in which they are set (global, local). Moreover, all these parameters may be defined directly in SAPFIR module when the floor plan is imported. □ to import vertical triangulation lines for walls With the name of layer you can specify the type and step for line approximation. • Enhanced tool to export types of reinforcement (TR) available in the project for columns to the DXF file. 
• Import of new objects SAF: □ Loads on plates - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load, plane load; □ Loads on columns - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load; □ Loads on walls - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load, plane load; □ Loads on beams - concentrated load, concentrated moment, linear uniformly distributed load, linear moment, linear trapezoidal load; □ Hinges in beams and columns. Preprocessor LIRA-CAD • Enhanced tool that enables you to automatically create triangulation zones for slabs: □ In addition to triangulation zones (for slabs) located above the walls, it is now possible to create triangulation zones (for slabs) below the walls with indent from the wall in 4 directions and individual triangulation step; □ Enhanced algorithm for triangulation of contours; it provides better triangulation of slabs-to-wall connections. • It is now possible to automatically refine triangulation mesh for plates near the openings. In the properties of the opening you could define the step of triangulation points around the opening, the number of rows of points with fixed step and the total number of rows of triangulation points. After the rows with fixed triangulation step, the program creates several rows with intermediate step to avoid the degenerate triangles when the fine mesh (near the opening) becomes the sparse mesh (in the span). • The 'Enhance triangulation at intersections' option is presented in the properties of design model to avoid narrow triangular FEs if a coarse triangulation mesh is defined for the model. When this option is selected, at places where narrow triangular FEs should appear, the triangulation mesh is refined and better quality FEs are generated. 
• Enhanced options for the 'Create grid on wall' node to generate horizontal and vertical triangulation lines in the wall with a specified triangulation step. New parameters are added for the node: □ 'List of levels' - to define the intervals of horizontal triangulation lines from the wall bottom and between each other; □ 'Intervals by openings' – to adapt the wall triangulation lines to vertical triangulation lines from openings, if such lines are defined in the properties of the opening. • Extended options for the 'Sum up loads' dialog box. Now it works not only with the analytical model, but with the meshed model as well. • It is possible to transfer the load from a column base to the soil model. In the column properties you could assign the distributed load on soil Pz, subgrade moduli C1 and C2, horizontal stiffness for the slab supported with soil Cx and Cy, support and boundary conditions to the analytical presentation of the column base. • The rendering of visual load models is optimized. In version 2022, a model with a large number of loads rotates, pans and zooms 1.5 times faster than in version 2021. To activate this option, in the 'SAPFIR preferences' dialog box, on the 'Visualization' tab, use the 'Simplified presentation of loads' check box. • New mode when the wind load is applied manually rather than automatically generated. For the pulsation load, the user-defined static loads may be applied. • Visualization of wind load in the architectural model with the option to 'freeze' the wind. This option allows you to update/cancel automatic wind generation when geometry of the structure is • When the wind pressure (active/passive) is automatically applied in space, it is possible to collect wind on the side walls (zones A, B, C) and define the aerodynamic coefficient for each zone. • Wind load may be collected for flat, gable and shed roofs according to SP RK EN 1991-1-4:2005/2017. • The collection of wind loads on bars is optimized. 
The slope angles for bars and rotation angles for cross-section are now taken into account. Option to modify the coefficient to load for each bar. • Tools to create special parametric load. This load is transferred to VISOR-SAPR module as load distributed across the plate elements or as load distributed along the bar length rather than the surface load. The load intensity may be defined with the parameters 'Surface load, tf/m2' for plate elements or 'Load per r.m., tf/m' for bars. The load may be applied along the normal to the element. In this case, a number of other parameters become available to simulate the liquid and gas pressure on the tank walls. • Considerably simplified procedure for collecting loads from the surface or slab and redistributing them to a beam system of arbitrary shape. Floor slabs or surfaces with a special new interpretation 'Proxy objects for loads' and loads with the option 'Loads through proxy objects' are used to distribute the loads. When design model is generated, the option 'Distribute loads on beams through proxy objects' becomes active and the program automatically performs all further steps: intersections, triangulation, assignment of supports and analysis. Based on the analysis results, the non-uniformly distributed linear loads on the beams are generated in SAPFIR module. For each element it is possible to correct the coefficient to load. • For the surface load, new option to cut the contour by a line, a plane (hatching), or by the contour of other objects. • For all objects that have interpretation 'Load', new parameter 'Additional load case' to add the load to a certain assemblage stage. • New 'Ventiduct' tool to automatically cut the openings in walls and slabs that it passes through. The openings may be generated exactly according to the shape of the ventilation duct or with a specified indent. Since every opening is associative, it is automatically updated whenever a ventiduct's location or size is modified.
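As an illustration of the liquid-pressure case mentioned for the parametric load above, a minimal hydrostatic sketch. The function name, units and parameter set are assumptions; the program's actual parameters may differ:

```python
RHO_WATER = 1000.0  # kg/m^3 (assumed liquid density)
G = 9.81            # m/s^2

def hydrostatic_pressure(z, liquid_level, rho=RHO_WATER):
    """Pressure (Pa) on a tank wall at height z above the tank bottom,
    for a liquid surface at `liquid_level`; zero above the surface.
    Illustrative physics only, not the program's implementation."""
    depth = max(liquid_level - z, 0.0)
    return rho * G * depth
```

Such a profile, applied along the wall normal, is what a "liquid pressure on the tank walls" parametric load amounts to.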
• The option to create an inclined column. In the properties of the object, define the slope angle and direction of inclination for the column. For an inclined column, an almost complete set of the vertical column's properties is available: changing stiffness parameters, generating PRB, assigning support and boundary conditions, generating triangulation points, etc. • The automatic generation of bar analogues in SAPFIR module. To generate a bar analogue (BA), simple rectangular cross-sections are recognised: □ linear sections of wall; □ slab rectangular in plan; □ lintel above and below the opening; □ pylons or beams, described in meshed model with plate-type FEs. In the properties of BA you could specify the number of BA sections. Division of BA may also be specified with the approximation step. To generate BA from walls or slabs, new option is added to the properties of the corresponding objects. In addition to the option to replace the area above the opening with a bar, in the properties of door and window openings there is new option to keep the modelling of the area above the opening with plate elements and to generate a lintel as BA. In the same way, it is possible to generate a BA for the zone below the window. For rectangular beams it is possible to generate a BA in the shape of a T-section. The program automatically recognises the height of the T-section, while the flange width of the T-section may be defined in the properties of BA. • Enhanced 'Check model' procedure: □ the warnings that are not critical are removed; □ enhanced algorithm of the search for intersecting slab contours in case the slabs have a compound contour in plan; □ in addition to the search for duplicated objects, there is a new search for objects whose analytical models partially intersect each other; it helps avoid errors in the meshed model; □ when model is checked for coinciding or intersecting objects, it is possible to consider the objects from different floors.
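A bar analogue with a T-section needs the usual section properties. A minimal sketch of how area and centroid could be computed for the recognised web-plus-flange shape (a hypothetical helper, not the program's code):

```python
def t_section_properties(bw, hw, bf, hf):
    """Area and centroid height (measured from the web bottom) of a
    T-section made of a rectangular web (bw x hw) with a flange
    (bf x hf) on top -- the shape a beam-with-flange bar analogue gets.
    Illustrative only."""
    a_web, a_fl = bw * hw, bf * hf
    area = a_web + a_fl
    # First moment of area about the web bottom.
    s = a_web * hw / 2.0 + a_fl * (hw + hf / 2.0)
    return area, s / area
```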
• New tools to generate a retaining wall and a slab of variable thickness. The section contour of the retaining wall is defined in the 'Cross-section parameters' dialog box. For a slab of variable thickness, the max and min slab thicknesses are defined. The analytical model of the retaining wall and the slab of variable thickness is presented as several plates of different thicknesses. The number of plates is specified in the 'number of divisions in analytics' parameter in the slab/wall properties. The plates may be coaxial or shifted by offsets relative to each other. • Option to edit the wall contour in the plane of wall. • For columns and beams, new option to define a variable cross-section for all standard SAPFIR cross-sections. Note that in LIRA-FEM only rectangular bar and I-shape may have a variable section, i.e. when rectangular bar and I-shape are imported, they will retain their parameters. Otherwise, after the import procedure the bar is split into parts with increasing stiffness. • Tools for splitting a wall with a column. In the column properties, there is a new 'PRB column-wall' parameter that allows you to create perfectly rigid body (PRB) between the wall ends and the column. The PRB is associative, i.e. when one of the walls or columns is moved, the relationship between the objects is retained. • Optional presentation of the FE mesh on the physical model. The option is available after performing triangulation and saving the *.s2l file to transfer it to VISOR-SAPR module. • The generated PRB (defined as a property and generated by the search for intersections) may be displayed on the analytical model. PRB is displayed as orange lines that connect the nodes included into PRB. • Several new tools to evaluate the quality of the generated triangulation mesh: mosaic plots for quality of plates, area of plates, min angles of plates, min lengths of plate ribs, lengths of bars and rotation angles of bars. • Added 'Align' command to align walls vertically.
There are two modes for alignment: parallelism - after alignment the walls will be parallel to the selected wall, but not coaxial; vertical coaxial - after alignment they will be parallel and vertically coaxial to the selected wall. • New option to select similar objects along a horizontal line. They are selected with the 'Select horizontally' option ('Select along axis/direction' commands). The following objects may be selected: □ Columns; □ Piles; □ Walls; □ Beams; □ Slabs; □ Foundation slabs; □ Concentrated load; □ Linear load. • Tolerances for analytical models of the objects are added to the project properties: □ 'min threshold of door height' for analytical models of walls. • The 'Stair' tool is enhanced: □ more support options for stairs. It is now possible to assign that a flight of stairs is supported with the landing and floor slabs as a coupled DOF along Z, X and Y or to select a user-defined support; □ auto unification of local axes for stairs when the model is transferred to VISOR-SAPR module. • In the 'Snap of base point' dialog box it is possible to select the location for the analytical presentation of the beam and column within the section. • Enhanced 'Shaft' tool to work with storey levels and additional levels within a storey. Openings along the shaft contour are generated automatically in all slabs that the shaft passes through. • New functionality for the 'Other' object: □ to select the 'Ventiduct' option in the properties of the 'Other' object to automatically generate openings in all walls and slabs that the 'Other' object passes through; □ the 'Cut by storeys' command is also available for the 'Other' object. • For the capital and column base, new option to create stages only in one direction. • The schedule of the metal shapes may be organized by user-defined types of elements: column, beam, framework, brace member, purlin, rope, etc.
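The mesh-quality quantities listed above (areas, min angles and min rib lengths of plates) are standard triangle metrics. A sketch of how they could be computed for one triangular FE (illustrative only, not the program's code):

```python
import math

def triangle_metrics(p1, p2, p3):
    """Area, min angle (degrees) and min edge length of a triangular FE
    with 2D vertices -- the quantities shown on mesh-quality mosaic
    plots. The smallest angle flags a 'narrow' element."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    # Area via the shoelace formula.
    area = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                     - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    # Angles from the law of cosines.
    def angle(opp, s1, s2):
        return math.degrees(math.acos((s1 * s1 + s2 * s2 - opp * opp)
                                      / (2.0 * s1 * s2)))
    min_angle = min(angle(a, b, c), angle(b, a, c), angle(c, a, b))
    return area, min_angle, min(a, b, c)
```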
• For the existing 'Cut' command there is an 'Extend' option that allows you to extend all SAPFIR linear objects up to a specified line. The command is available in 3D views, on floor plans, facades, cross-sections, sectional elevations and drawings. • It is now possible to save the SAPFIR file together with all the files associated with it (SLD - soil model, DXF, DWG, IFC, SAF, XLS and ASP - reinforcement results) to a separate project folder. A project archive may be created in the same way. • On the 'Project structure' tab, new option to control visibility of the object. • The section name is displayed; elements may be organized automatically - elements with the same section type and size will be located next to one another in the list. • Some changes and improvements to the 'View' tab, namely: □ when created, reinforcement types will go to a new chapter RC; □ option to organize alphabetically; □ option to move the reinforcement types within the tree; □ option to create custom chapters; move reinforcement types across chapters (just drag them); □ option to change the name for the chapter; □ option to display reinforcement types by type of RC elements; □ option to save the camera location; □ option to select a group of items and then move or delete views. • More options available on the start page: □ a shortcut menu for the recently opened files; □ 'Import' command to import files and not to create an empty *.spf file. • Any image imported from popular bitmap formats (PNG, JPEG, BMP) may be placed on a sheet of drawing. When the image is imported, it is possible to change its density, size and aspect ratio. Design of RC structures • For types of reinforcement in slab, new option to display (in the working view) notation for patterns of additional reinforcement in slab as they will be presented in the drawing. • Auto orientation of labels for main reinforcement along direction of the unified axes defined in the properties of the reinforced slab.
• Option to create a 2D node from a reinforcement view. • For types of reinforcement in diaphragm, it is possible to indicate the reinforcement zones on the drawing. • For reinforcement cages in punching shear, it is possible to modify class of reinforcement in the 'Schedule of reinforcement' dialog box. • The 'Unify slabs' dialog box: visual information (as the same colours for rows) for slabs of similar area. • DSTU 3760:2019 is supported for reinforcing bars, reinforcing items, stirrups and studs. • For the reinforcement model Column, new option to modify location of stirrups 'manually'. • The work with models with a large number of NODEs is accelerated. • New nodes are added: □ 'Cleanup beams' to trim or extend beams to walls, columns, lines or other beams. In addition, it is possible to limit the zone in which the cleanup or extension should be made. □ 'Delete coincident line fragments' - to remove duplicate line segments so that errors do not occur when you generate a model based on these lines. □ 'Delete coincident points' - to remove duplicate points. □ 'Ventiduct' - to generate (along the line) the object type 'ventilation duct' that will cut openings in walls and slabs. □ 'Shaft by contour' - to automatically create openings in the floor slabs that it crosses. □ 'Load along vector direction' - to generate uniform and nonuniform linear loads along a specified vector. For example, to apply a wind load to bars. □ 'Lines from column' - to obtain the vertical axis for column and the contour line for column section. □ 'Convert objects' - to convert some object types to others. □ 'Import XLS file' - to import an updatable Excel file with numerical values. At node input it is possible to define where the values should be taken (from which sheet, from which columns, rows, cells or cell ranges). You will obtain the node output with the data from cells or several outputs (with the corresponding names of column) that may be linked to other nodes. 
□ 'List of elements specified with indexes' – to divide a list of items from the input into different outputs according to defined indexes. □ 'String to array of real numbers' - to convert a specified text string to an array of real numbers. □ 'String to array of integers' - to convert a specified text string to an array of integers. □ 'Arrays (with sets of points) specified with indexes' - to generate several arrays of points from the 1st set of points according to defined indexes. • Enhanced nodes: □ 'Columns by points' – option to create columns by vertical line (e.g. from 3D dxf). □ 'Advanced generation of storeys by specified levels' - the number of possible inputs for floors is increased from 32 to 1024. □ 'Block of models' - to modify properties of internal objects through connection to the input parameter Par of the 'InPar' node. □ 'Boolean unification of lines', 'Boolean subtraction from lines of input 1 lines of input 2' and 'Boolean intersection of lines' - additional outputs Ln with contours of openings. □ 'Import IFC' and 'Import SAF' - outputs to access imported objects in order to convert them to other object types or modify the properties of imported objects. Analysis according to SP RK EN (Kazakhstan) • Added option to set increasing factors Fvk that are unique for each dynamic load case (sect.7.6.5, 7.6.6 SP RK 2.03-30-2017 and sect.6.4.1, 6.4.2 NTP RK 08-01.2-2021). The dependence of the Fvk factor on the skew of storeys is considered in this implementation, e.g. for earthquake loads along different X and Y directions. • Added option to compute the sensitivity coefficient θ for the skews; this coefficient enables you to evaluate whether it is necessary to consider the second-order effects according to formula 4.28 (2) sect.4.4.2.2 SP RK EN 1998-1:2004/2012 (same as formula 7.2 sect.7.2.2.2 NTP RK 08-01.2-2021).
In this implementation it is possible to evaluate whether it is necessary to consider the second-order effects (P-∆) for the design model in earthquake analysis. • Within the same model it is now possible to carry out an earthquake analysis by considering two options for increasing coefficients to C1 (*10*1.5 and *10/1.5), i.e., two enveloping options for increasing the foundation stiffness. But in this case, subgrade moduli across the foundation area for dynamics should be assumed as constant rather than variable. These design assumptions are mentioned in sect.D.3.1-D.3.3 of Annex D to NTP RK 08-01.2-2021. • New input tables for load cases and DCL (Design Combinations of Loads) have been implemented. It is now easier to edit the tables and transfer them between design models. • New tool to automatically generate initial imperfections to simulate the first order effects (deviations during erection and local geometric imperfections) for the list of load cases or DCL and to add the generated load cases to initial load combinations. • To correctly select the direction and reduce the number of possible combinations for additional loads from the initial imperfections, in the dialog box for generating imperfections, directional cosines computed by displacements are presented. It helps the user to create additional loads that will worsen the effects caused by the main loads on design model and do not create unnecessary combinations. • Displacements and forces in elements of the model are presented graphically with account of safety factors and all combination coefficients (in previous versions, the final output data was available only in standard tables). • For the DCL calculation according to SP RK EN 1990:2002+A1:2005/2011, new option to consider safety factors applied to analysis results (displacements, forces).
Safety factors may be considered separately for the following design situations: main combinations; characteristic, frequent and quasi-permanent combinations; emergency and earthquake combinations. • For SP RK EN 1990:2002+A1:2005/2011, new option to find out characteristic load combinations within each finite element. As there may be several hundreds of design combinations of loads in the model, it complicates the evaluation and requires more time for analysis. In this situation it is possible to calculate 'Characteristic DCL', abbreviated as DCL(c). In this mode all calculated DCL are automatically assigned as mutually exclusive with regard to the type of combination; the most dangerous combinations (of already calculated DCL) are selected according to criteria for DCL selection. Since there are not many criteria for each type of FE, the number of obtained characteristic DCL is reduced; it helps to significantly reduce the time for analysis. • For SP RK EN 1990:2002+A1:2005/2011, it is possible to select and check the pilot reinforcement according to characteristic DCL. If a finite element is part of a structural element, its number of characteristic DCL may be extended to the total set of combinations of all FEs available in that structural element. This feature is necessary to consider the shape of the force diagrams along the length of the structural element for a more correct analysis & design. • For the SP RK EN 1990:2002+A1:2005/2011, new option to use the METEOR system (Model Variation) that is meant to integrate analysis results of several design models with the same topology into a single integrated result. The DCL(c) results are unified: for all sections the program selects (from all problems) such DCL(c) that cause the extreme value of each criterion. In this case, all DCL(c) for the corresponding criteria automatically become mutually exclusive.
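The sensitivity coefficient θ mentioned earlier in this section follows expression (4.28) of EN 1998-1, θ = P_tot·d_r / (V_tot·h), with the decision thresholds given in clause 4.4.2.2. A sketch (the function name and return format are hypothetical):

```python
def sensitivity_theta(p_tot, d_r, v_tot, h):
    """Interstorey drift sensitivity coefficient, expression (4.28) of
    EN 1998-1: theta = P_tot * d_r / (V_tot * h).

    p_tot - total gravity load at and above the storey (seismic situation)
    d_r   - design interstorey drift
    v_tot - total seismic storey shear
    h     - storey height
    Per clause 4.4.2.2, theta shall not exceed 0.3.
    """
    theta = p_tot * d_r / (v_tot * h)
    if theta <= 0.10:
        verdict = "second-order effects may be ignored"
    elif theta <= 0.20:
        verdict = "amplify effects by 1/(1 - theta)"
    else:
        verdict = "second-order analysis required"
    return theta, verdict
```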
• New option to compute the settlement for specific soils (expansive, collapsive and saline) according to SP RK 5.01-102-2013. • Option to compute additional component of the settlement for any time interval t due to consolidation of soil. Calculation is made by formulas 7.5-7.7, sect.7.2.2.1 NTP RK 07-01.4-2012. • Option to compute additional component of the settlement from creep. Calculation is made by formula 7.16, sect.7.2.3.5 NTP RK 07-01.4-2012. • Option to automatically fill in the table of mass for dynamic loads. The table is filled in based on the combination coefficients used in the DCL combinations where dynamic loads are included. • Option to generate deflection diagrams in the plane of plate; this option may be used, for example, to classify the floor slabs by stiffness. • Clarified analysis of reinforcement according to SP RK EN 1992-1-1:2004/2011, including first order effects (geometric imperfections) and second order effects. • For SP RK EN 1992-1-1:2004/2011, new option to consider the second order effects in analysis of reinforcement for bars. The following methods are available: by nominal stiffness, by nominal curvature and a variant where the max reinforcement is selected from analysis by all methods. • Modified analysis of reinforcement in shear force according to SP RK EN 1992-1-1:2004/2011. Parameters for the slope angle of a possible crack are specified separately for each design situation (main combination, emergency or earthquake). Analysis of reinforcement as well as verification of bearing capacity for concrete compressive diagonal element may be carried out for three options: □ with user-defined slope angle θ (by default, 45 degrees); □ the slope angle θ is determined by calculation (from the condition Ved=Vrd,max); □ all possible slope angles θ within the certain range are checked. • In analysis according to SP RK EN 1992-1-1:2004/2011, extended set of coefficients and parameters for analysis of reinforcement. 
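The third shear option above - checking all possible strut angles θ within the code range - can be sketched with the EC2 variable-strut-inclination truss model, using V_Rd,max from expression (6.9) and A_sw/s from (6.8). Units, default coefficients and the 0.1° scan step below are assumptions:

```python
import math

def shear_design_scan(v_ed, bw, z, fcd, fywd, alpha_cw=1.0, nu1=0.6):
    """Scan strut angles theta between the EC2 limits (cot theta = 2.5
    down to 1.0, i.e. 21.8..45 degrees); for each feasible angle
    (V_Ed <= V_Rd,max) compute the required stirrup area per unit length
    A_sw/s and return the angle giving the least reinforcement.
    Consistent units assumed (e.g. N and mm). Illustrative sketch only.
    """
    best = None
    for deg in range(218, 451):  # 21.8 .. 45.0 degrees in 0.1 deg steps
        theta = math.radians(deg / 10.0)
        cot, tan = 1.0 / math.tan(theta), math.tan(theta)
        v_rd_max = alpha_cw * bw * z * nu1 * fcd / (cot + tan)  # EC2 (6.9)
        if v_ed <= v_rd_max:
            asw_over_s = v_ed / (z * fywd * cot)                # from EC2 (6.8)
            if best is None or asw_over_s < best[1]:
                best = (deg / 10.0, asw_over_s)
    return best  # (theta_deg, required A_sw/s), or None if section fails
```

When the strut is not over-stressed, the flattest permitted angle (cot θ = 2.5) governs, since it minimises the stirrup demand.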
• For SP RK EN 1992-1-1:2004/2011, analysis of pilot reinforcement for shear forces and serviceability limit state (crack width). • Information about the type and name of combinations is added to the table of combinations for punching shear analysis. • Analysis of pilot reinforcement is available for all types of standard cross-sections of bars. • Alternative algorithm in analysis of reinforcement for plate elements based on Wood theory. • For SP RK EN 1992-1-1:2004/2011, analysis of additional reinforcement is realized. • Analysis of reinforcement according to the universal concrete diagram (parabolic-rectangular diagram of compressed concrete) is implemented. • For SP RK EN 1993-1-1:2005/2011, information about characteristic forces used in analysis of the steel cross-section in local mode of the program. This feature greatly simplifies evaluation of analysis results. It also enables the user to evaluate the contribution of each load or combination of loads. • In the analysis according to SP RK EN 1993-1-1:2005/2011 and SP RK EN 1998-1:2004/2012, option to consider the plastic behaviour of the structure under earthquake load: for moment resisting frames, frames with concentric diagonal and V-shaped bracings, inverted pendulum structures and moment resisting frames combined with concentric bracings. • In SAPFIR environment, wind load is collected automatically for the buildings rectangular in plan (active/passive pressure zones, end walls and roof). In the properties of wind load, there is new option to manage the profile, to adjust the number of ranges along the building height where constant pressure is applied. In addition, there is an option to 'freeze' the generated loads and edit them. • Auto generation of bar analogues (BA) for walls/pylons/piers and lintels above and below openings. It is now also possible to create BAs for stiffeners as part of floor slabs.
• For SP RK EN 1990:2002+A1:2005/2011, new settings for auto generation of load combinations: □ "G+W" - to create combinations with all dead and one live load 'Wind'. □ "G+Sn" - to create combinations with all dead and one live load 'Snow'. It is important to use these combinations in the analysis to put less load on the frame. This is especially necessary for checking on breakout and overturning and for the calculation of anchors. • New input tables: □ input tables for loads and design combinations of load by SNIP 2.01.07-85*, Eurocode, ACI 318-95 (USA), BAEL-91 (France), IBC-2000 (USA), DBN B.1.2-2:2006 (Ukraine), STB EN 1990-2007 (Belarus), SP 20.13330.2011/2016 (Russia), SP RK EN 1990:2002+A1:2005/2011 (Kazakhstan), TCP EN 1990-2011*(02250) (Belarus); □ input table with option to define and modify forces in the bars of the current problem; □ input table for generating masses from static loads. Input tables help you simplify the process to define input data (in some cases) and to transfer data between design models. In the input table for forces it is possible to modify the forces before combinations are calculated. • Analysis of punching shear contours in case the column 'body' is taken into account with bars of high stiffness (HSBs). Unlike PRB, the properties of high stiffness bars may be modified. So if necessary, you could modify the stiffness, define the load and the degrees of freedom for the HSB, etc. Thus, it is possible, for example, to simulate the buckling of the pylon ends; to reduce stress concentrations along the perimeter of the slab-to-column connection when the slab and the stiffening bars are heated together. • Option to calculate DCL (design combinations of loads) and DCF (design combinations of forces) for selected finite elements. The list of FEs is selected from a pre-defined list of elements. The list of elements may be generated for the model fragment, selected FEs and defined manually.
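The 'G+W' / 'G+Sn' generation rule described above (all dead loads plus exactly one wind or snow case) might be sketched as follows; the data model and names are hypothetical:

```python
def gw_gsn_combinations(dead, wind, snow):
    """Generate 'G+W' and 'G+Sn' combination lists: every combination
    takes all dead load cases plus exactly one wind (or snow) case.
    Illustrative sketch of the generation rule only."""
    combos = []
    for w in wind:
        combos.append(("G+W", list(dead) + [w]))
    for s in snow:
        combos.append(("G+Sn", list(dead) + [s]))
    return combos
```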
• In the DCL and DCF calculations, excluded and non-computed mode shapes are taken into account. • FE type may be automatically modified when the stiffness is assigned to an element. When you assign the stiffness to an element, the program checks whether the assigned stiffness type corresponds to the FE type. If they do not correspond, the FE type may be modified automatically. • New command that enables you to block any modification to the data (that may affect the FEA results) at any time when you work with the design model. An option to automatically block any modification to the data for FEA when analysis is complete. When the 'Do not edit data for FEA' command is active, it is still possible to edit and calculate DCF and DCL, principal and equivalent stresses in finite elements (LITERA), reactions/loads at nodes (Load on Fragment) and design with the modules available in LIRA-FEM (analysis of reinforcement, check of pilot reinforcement in RC and combined elements, analysis & design for cross-sections of metal elements, analysis of elements from masonry, analysis of masonry reinforcing structures). For design procedure, stiffness values may be modified after the static and dynamic analysis of the model. For the Reinforced Concrete and Masonry Reinforcing Structures mode, modifications may be made only to the section dimensions, i.e. to change the section size. For Metal Structures mode, it is possible to add new section type as well as to change the profile number for a previously created section. • Option to automatically select elements adjacent to selected nodes and/or elements. • When you select elements by certain elevations and grid lines, all the filters defined for the PolyFilter are taken into account. • New filters to select elements to which materials are not assigned (reinforced concrete, metal, brickwork), i.e. that do not have input data for analysis & design.
• In the 'Display' dialog box, new option to display (on the model) the distance between elevations. • In the 'Display' dialog box, if you define not to display one-node elements, bars, plates, solids and target bars of bar analogues, then the nodes that belong to these elements will be automatically hidden. • The information on the nodes and elements of design model is updated; new information tabs describe the input & output data for the new types of analyses. • New option to visualize the colour palette; when this option is active, the number of objects (as a percentage) in each range is displayed. • When the animation of the Time History Analysis results is enabled, it is possible to display the changes in reactions at nodes within the time history. • Option to save graphs of kinetic energy in *.csv format. • Enhanced options to define the simple triangulation contours: □ to fix the coordinates specified with a pointer when triangulation contour is defined 'By coordinates'; □ the Enter key is used to finish the input of triangulation contour; □ added accuracy setting when triangulation contour is defined (use the Shift key to consider the intermediate nodes). • When triangulation contours with openings are defined, and you select additional nodes, the program will automatically ignore nodes located beyond the outer contour, within the inner contour and on the contours. • Option to save selection if you edit FE mesh when the 'To selected elements only' check box is selected in the 'Transform mesh of plate FE' dialog box. • New 'Compute Spectrum' tool to convert the graphs of acceleration (velocity, displacement) in time into a seismogram, velocigram, accelerogram and response spectrum graph.
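A response-spectrum conversion of the kind the 'Compute Spectrum' tool performs can be sketched by integrating a damped single-degree-of-freedom oscillator over the accelerogram for each period. This central-difference sketch is an assumption about the method, not the tool's actual implementation:

```python
import math

def response_spectrum(accel, dt, periods, damping=0.05):
    """Pseudo-acceleration response spectrum of a ground accelerogram:
    for each period T, integrate u'' + 2*zeta*w*u' + w^2*u = -a_g(t)
    with the central-difference scheme and record max|u| * w^2.
    dt must be well below T/pi for stability. Illustrative only."""
    spectrum = []
    for T in periods:
        w = 2.0 * math.pi / T
        u_prev, u, umax = 0.0, 0.0, 0.0
        for ag in accel:
            # Central difference solved for the next displacement.
            u_next = ((2.0 - (w * dt) ** 2) * u
                      - (1.0 - damping * w * dt) * u_prev
                      - ag * dt * dt) / (1.0 + damping * w * dt)
            u_prev, u = u, u_next
            umax = max(umax, abs(u))
        spectrum.append(umax * w * w)
    return spectrum
```

For a long constant acceleration the result tends to the step-response peak of a 5%-damped oscillator, roughly 1.85 times the input amplitude.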
ReSpectrum • For dynamic modules 27 and 29, when generating the nodal response spectrum: □ to consider an oscillator damping different from the system damping (user-defined); □ to sum up by mode shapes with no regard to phase shift; □ to enlarge the peak area of response spectrum and to reduce the amplitude within the narrow range of peak frequency. • Option to colour the directions of the principal axes N1 and N3 for plate elements. • For the 'Step-type Method' of analysis, new option to generate a set of nonlinear load cases based on the generated DCL tables. • Option to organize the nonlinear load cases (use the 'Move Up' and 'Move Down' commands). • Option to edit several selected histories or local load cases using the 'Modify' command for all analysis types. • For physically nonlinear problems with iterative elements, new tool to view and prepare documentation for calculated parameters of the stress-strain state for standard, metal types of sections and plates. The following output data is available in the 'Section (state)' dialog box for the selected iterative element: □ mosaic plot of normal stress in the main/reinforcing material of the plates and bars; □ mosaic plot of relative strain in the main/reinforcing material of the plates and bars; □ mosaic plot of tangential stress τxy in the main material for plate; □ mosaic plot of relative strain γxy in the main material for plate; □ mosaic plot of max stress σmax in the main material for plate; □ mosaic plot of relative strain εmax in the main material for plate. 'Section (state)' dialog box: stress mosaic plot in main material and reinforcement • Option to synchronize the view of analysis in the status bar (load cases, DCL, DCF, mode shapes and buckling modes), layer to view the calculated principal and equivalent stresses, intermediate steps in nonlinear problems, integration steps for time history analysis.
In this mode, changes made to the graphical presentation of design model in one window will automatically apply to all open windows for all design models. • On the 'Summarize loads' menu there is a new mode for calculation both for the whole design model and for selected elements and nodes. • New mosaic plots are available: □ max stress in reinforcement and max relative strain of reinforcement along the X1, Y1 directions for iterative plates; □ nonlinear stress-strain diagrams assigned to the finite elements for the main material, the reinforcing material and the concrete creep laws; □ total area of a pilot longitudinal reinforcement in the bars; □ inelastic energy absorption coefficients Fmu; □ mass condensation; □ dynamic masses in the elements. • The panels on the ribbon user interface as well as the menus and toolbars of the classic interface are modified and extended with new commands. • Option to add comments to loads; this will simplify the process when different engineers work with the same design model. • Option to create and edit offsets for selected FEs included into structural elements (STE). To do this, in the current design option the program searches for STE that the selected FEs belong to. Parameters for offsets are calculated for the whole chain of FE in every STE as if the chain of FEs forms a single bar. When an offset is defined along the X1-axis, the changes are applied only to the first and the last FE. • When coincident elements are packed, the elements to which the load is applied will have priority. • When required amount of concrete and reinforcement is calculated, new option to select the result if the reinforcement type 'symmetric and asymmetric' is specified for the bars. • When dynamic load cases are copied, the program copies data on the table for creating dynamic load cases from static ones as well as the settings for the table of dynamic load cases.
• In the METEOR system, it is possible to add the file of integrated problem *.t8m to the current list of problems. • Accelerated output of envelopes MIN/MAX/ABS by load cases/DCL/DCF. • The signs for forces in FE 55,255,265,295 depend on the order in which the elements are numbered, and the forces are calculated based on the difference of displacements between the second and first nodes. In the 'Local axes for FEs 55,255,265,295' dialog box, new command to swap the nodes that describe these elements. • Option to assign the stiffness coefficients to elements; they may be visualized as mosaic plots. • When a selected bar is divided into two bars according to the specified distance, it is possible to select the node (of the bar) from which the distance is measured. • For the 'Copy by one node' command, several insertion nodes may be defined. • For the 'Add node at distance L' function, it is now possible to select a node (beginning/end of bar) relative to which the new node will be added. • When the 'Generate additional nodes at sides of FE' option is applied to plates with a specified zero modulus of elasticity, nodes on the sides will not be generated. Subproblems vs Blocks of load cases In previous versions of the program, the design model could have a single set of stiffness properties and boundary conditions. However, there are problems in which the stiffness values of the elements should depend on the duration of loads. For example, in dynamic analysis it is usually necessary to replace the modulus of deformation with the modulus of elasticity for soil; this approach is also used for materials of structure. In previous versions you could modify only the stiffness of individual elements in a structure for selected assemblage stages with 'assemblage groups'. In version 2022 there is a new option to specify stiffness values not only for assemblage stages, but also for an arbitrary set of load cases.
The set of load cases for which individual stiffness values are defined in the design model is called a subproblem, or block of load cases. In version 2022 R1 it is possible to use different sets of subgrade moduli Pz, C1, C2, C1z, C2z, C1y, C2y within the same model. A unique set may be generated for each load case of the design model - static, dynamic, every stage of assemblage, every load case in a nonlinear history, etc.
Load cases (static and dynamic) that may be computed with the same stiffness matrix are combined into a single block of load cases. Another criterion for division into blocks is the presence/absence of specified displacements in the load cases. That is, if a displacement is defined in one load case at a certain node along a certain direction, while in another load case no displacement is defined at this node in this direction and no restraint in this direction is defined either, then these load cases are divided into separate blocks.
The FEA of a problem that includes subproblems is carried out as follows. The FEM solver detects subproblems in the input data file for FEA. For every subproblem, the FEA is carried out as for a separate problem - a new stiffness matrix is generated. After the analysis procedure, the results of all subproblems are merged into the results of the initial problem. These merged results are then used for all possible DCL/DCF calculations as well as for structural analysis & design (reinforced concrete, metal, brick).
The following limitations apply to models with subproblems:
- For superelements, it is not allowed to define sets of subgrade moduli. They still have a single stiffness matrix;
- Stability by DCL may be calculated only if all load cases included in the DCL belong to the same subproblem;
- Coefficients to the modulus of elasticity cannot be used for nonlinear FE;
- Coefficients to the modulus of elasticity are not used in the 'NL Engineering 1' analysis.
From the non-obvious:
- for the time history analysis, the program applies the set defined for the dynamic load cases (prehistory load cases may have their own sets);
- for Pushover analysis, the program applies the set defined for the load case with inertial forces;
- for analysis on creep, the program applies the set defined for the last load case in the nonlinear history.
To create subproblems and refer them to the appropriate load cases, use the 'Edit load cases' dialog box. By default, no subproblems are created in the model; the 'Subproblem' drop-down list contains only one line, 'Main problem', and the dialog box works in the same way as in version 2021. If no subproblems are created, all load cases refer to the main problem. To open the 'Subproblems' dialog box (see Figure - Sets of properties for subproblems), use the Browse button [...]. Any number of subproblems may be defined. The main problem cannot be removed from the list of subproblems. It is then possible to refer a certain load case to a specific subproblem.
When a load case becomes active, its subproblem becomes active. That is, when we switch the active load case, the mosaic plots C1, C2 show the subgrade moduli that correspond to the subproblem that includes the active load case. In the same way, in the 'Information about element' window, when you switch the load case, the subgrade moduli C1/C2 corresponding to the subproblem are changed (see Figure - Sets of subgrade moduli for elastic foundation in different load cases).
In the input tables 'C1C2 Plates', 'C1C2 Bars' and 'C1C2 Special elements' there is a new parameter 'Subproblem' (see Figure - How to edit a set of subgrade moduli for elastic foundation with the 'Input table'), i.e. input tables may also be used to fill in/edit subgrade moduli for elastic foundation in subproblems. To assign coefficients to the modulus of elasticity, there is a new tool to visually check the assigned values with a mosaic plot.
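The grouping rule behind blocks of load cases can be sketched as follows: load cases are grouped by their stiffness set (subproblem) and by the signature of prescribed displacements, and each distinct combination becomes a block. This is an illustrative sketch only; the names and data layout are hypothetical and not part of the LIRA-FEM solver interface.

```python
# Illustrative sketch of splitting load cases into blocks, as described above:
# load cases that share the same stiffness set (subproblem) and the same
# signature of prescribed displacements fall into one block.
# All names here are hypothetical, not the actual LIRA-FEM API.

def split_into_blocks(load_cases):
    """Group load cases by (subproblem, prescribed-displacement signature)."""
    blocks = {}
    for lc in load_cases:
        # Signature: the set of (node, direction) pairs with specified displacements.
        signature = frozenset(lc["prescribed_displacements"])
        key = (lc["subproblem"], signature)
        blocks.setdefault(key, []).append(lc["name"])
    return list(blocks.values())

cases = [
    {"name": "Dead", "subproblem": "Main", "prescribed_displacements": []},
    {"name": "Live", "subproblem": "Main", "prescribed_displacements": []},
    # A prescribed settlement at node 10 along Z puts this case in its own block.
    {"name": "Settlement", "subproblem": "Main", "prescribed_displacements": [(10, "Z")]},
    # A different stiffness set (e.g. modulus of elasticity for soil) also forms a new block.
    {"name": "Seismic", "subproblem": "Dynamic stiffness", "prescribed_displacements": []},
]

print(split_into_blocks(cases))
```

Each resulting block corresponds to one stiffness matrix assembled and factored by the solver.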
In the analysis of forced displacement at nodes, there is a new option to check the restraints in the appropriate load directions for other load cases.
FEM solver
• Option to generate a file with detailed information about the state of materials (main and reinforcement) in sections of iterative physically nonlinear elements. This option is available for all sections of bars and for plates. When solving physically nonlinear problems, the iterative FEs have the following advantages: the iterative element does not take forces above the ultimate bearing capacity; it is possible to consider the unloading path of the material by the initial modulus of elasticity; at failure there is no fixation of accumulated forces preceding the failure stage; for the time history analysis there is no 'time lag', that is, no problem of matching accumulated forces and displacements.
• Nonlinear thermal conductivity for plates. Option to define the dependence of the heat conductivity coefficient, thermal capacity coefficient and specific unit weight on the temperature.
• In stability analysis, option to refer the elements of the model to one of the following two classes: the restraining elements and the pushing elements of the system. The restraining elements contribute to the stability of the system equilibrium, while the pushing elements cause the system to lose its stability. The sensitivity coefficient for restraining elements is > 0, for pushing elements < 0.
• For earthquake analysis by the linear spectral method, the excluded and non-computed mode shapes are taken into account in the same way as in the design of nuclear power plant structures. The relevant setting is available when the data is defined in the 'Table of dynamic load cases' dialog box. When the appropriate setting is used for earthquake loads, inertial loads and loading effects (displacements, forces, nodal reactions, etc.) from excluded and non-computed mode shapes are calculated as an additional component (mode shape).
One additional component is calculated for a one-component earthquake load, and three additional components for a three-component earthquake load. The output data is presented so that the additional components have ordinal numbers beginning with n+1, where n is the number of calculated mode shapes for natural vibrations. Thus, the additional components have no correspondence with the numbers of mode shapes for natural vibrations. When inertial earthquake loads are calculated, it is assumed that the spectral acceleration is calculated on the basis of the frequency of the last obtained mode shape for vibration in this earthquake load. Inertial earthquake loads are calculated only for linear degrees of freedom.
• Option to consider combined behaviour of components (degrees of freedom) according to a given diagram for FE 255, 256. It is possible to define behaviour diagrams for vector sums of the following components: (X+Y) and (X+Y+Z). The diagram is described by 3 values - the 1st modulus of elasticity (R, t/m), the 2nd modulus of elasticity (R2, t/m), and the kink in the diagram (N, t). Any set of diagrams for individual components and combinations of vector sums of components may be defined, but each component should be included only once. That is, for example, if a diagram is described separately for X, then X can no longer be included in any combination of vector sums. The output data for FE 255, 256 is presented as before - forces along the corresponding directions of the local coordinate system Rx, Ry, Rz, Rux, Ruy, Ruz. For example, this is necessary to simulate seismic isolators such as rubber-metal supports that have a circular cross-section. The figure below shows that if ultimate forces of equal value are defined separately for the local X1 and Y1 axes, the result is a larger ultimate force for a load at any other angle in the plane.
If we define the ultimate force for the vector sum of components (X+Y), we get the same ultimate force for a load at any angle in the plane.
• Enhanced wind pulsation analysis (dynamics module 21) for the situation when the mode shapes with frequency less than the ultimate one did not coincide with the static wind direction, so the calculated inertial forces were close to zero. A formula for calculating modal masses in the analysis on the pulsation component is presented; modal masses for all mode shapes are calculated by this formula in the analysis. In the 'Wind analysis parameters with pulsation' dialog box, the parameter 'Min sum of modal masses for mode shapes that have frequency less than ultimate value' (as a percentage) is added for analysis according to option (c) of sect.6.7 SNIP 2.01.07-85. Now if the sum of modal masses of vibration mode shapes (that have a frequency less than the ultimate value) is less than the specified value of the sum of modal masses in %, or there are no such frequencies at all, then the analysis is carried out according to option (a) of sect.6.7 SNIP 2.01.07-85, otherwise by option (c) of the same section. Modal masses of vibration mode shapes are displayed in the table with periods of vibrations for the analysis on pulsation, just as for a single-component earthquake load.
• The FE library contains new finite elements that are analogues of the existing FE 56 and FE 62: one-node damper with six degrees of freedom FE 66 and two-node damper FE 65. In the description of the stiffness for the new FEs, it is possible to define the viscous damping ratio along six directions; for linear directions the measurement units are t/(m/s), for angular ones (t*m)/(rad/s). The new FEs may be used to describe external damping devices that respond to the velocity of nodal displacement along the directions of DOF in the global coordinate system.
It is assumed that viscous damping is implemented, that is, the resistance to motion is proportional to the corresponding velocity component. The viscous damping ratios are defined for each nodal translation (rotation) independently and do not influence each other. The new FEs may be used in dynamic analysis only with direct integration of the equations of motion, that is, in the Time History Analysis system. Other modes of analysis do not react in any way to the presence of this FE in the design model.
• For each dynamic load, it is now possible to specify a list of FEs in which masses will be collected at nodes.
• For every element of the model it is possible to use a unique increasing factor fvk for a certain earthquake module. This option enables the user to correct obtained earthquake forces, e.g. for cases where the building is classified as irregular along the height due to a significant increase in mass or reduction in stiffness of vertical load-bearing structures in one or more storeys compared to other adjacent storeys. By default, the coefficient fvk for all elements of the design model is equal to one. An appropriate mosaic plot is available to check and prepare documentation of the input data.
• When analysis is carried out with a check of parameters, it is possible to consider user-defined criteria for termination of the analysis. The following criteria may be defined:
□ max allowed displacement along the specified directions;
□ geometric variability along the specified directions;
□ buckling mode along the specified directions.
If any of the specified criteria is met during analysis, the analysis is terminated. All output data calculated up to the moment when one of the specified criteria is reached remain available.
• A full geometric stiffness matrix is included in the analysis with geometric nonlinearity for bars. This option enables you to estimate the lateral-torsional buckling and to find its contribution to the safety factor.
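The user-defined termination criteria described above amount to a simple check after each analysis step: evaluate every criterion and stop when one is met. A minimal sketch of that logic follows; the function names and data layout are hypothetical, not the actual FEM-solver interface.

```python
# Hypothetical sketch of user-defined termination criteria: after each analysis
# step, every criterion is evaluated and analysis stops at the first violation.
# Names and data layout are illustrative, not the actual FEM-solver interface.

def should_terminate(step_results, criteria):
    """Return the name of the first violated criterion, or None."""
    for crit in criteria:
        if crit["kind"] == "max_displacement":
            # Largest absolute nodal displacement along the specified direction.
            worst = max(abs(u) for u in step_results["displacements"][crit["direction"]])
            if worst > crit["limit"]:
                return f"max displacement along {crit['direction']} exceeded"
        elif crit["kind"] == "geometric_variability":
            # The solver would flag geometric variability along a direction.
            if step_results["variable_along"].get(crit["direction"], False):
                return f"geometric variability along {crit['direction']}"
    return None

step = {
    "displacements": {"Z": [0.01, -0.12, 0.03]},  # metres, per node
    "variable_along": {},
}
criteria = [{"kind": "max_displacement", "direction": "Z", "limit": 0.10}]

print(should_terminate(step, criteria))
```

All results accumulated up to the step where the check fires would then be written out, matching the behaviour described above.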
• For all available nonlinear stress-strain diagrams for the main material and reinforcement, new option to use the 'K' coefficient to correct the values of ultimate stress.
• Linear and nonlinear analyses as well as generation of the output data files are modified due to the introduction of 'Subproblems' and 'Blocks of load cases'.
• Enhanced time history analysis on a seismogram. Velocities and accelerations are considered at nodes where a seismogram is specified.
• Enhanced account for shear in the mass matrix for a bar.
• New stress-strain diagrams for concrete: 19 - polynomial stress-strain diagram for concrete and 22 - nonlinear parabolic stress-strain diagram for concrete. To describe diagram 19, in the 'Parameters for stress-strain diagram' table, define the values for the following parameters:
□ initial modulus of elasticity in compression Ecm(−);
□ initial modulus of elasticity in tension Ectm(+);
□ max strength of concrete in compression fcm(−);
□ max strength of concrete in axial tension fctm(+);
□ ultimate relative compressive strain of concrete εcu(−), εcu2;
□ relative strain of concrete at max compressive stress εc(−), εc2;
□ ultimate relative tensile strain of concrete εctu(+);
□ relative strain of concrete at max tensile stress εct(+).
To describe diagram 22, in the 'Parameters for stress-strain diagram' table, define the values for the following parameters:
□ initial modulus of elasticity in compression Ec(−), Eck (Ecd);
□ initial modulus of elasticity in tension Ect(+), Ectk (Ectd);
□ max strength of concrete in compression fc(−), fck (fcd);
□ max strength of concrete in axial tension fct(+), fctk (fctd);
□ ultimate relative compressive strain of concrete εcu(−), εcu2;
□ relative strain of concrete at max compressive stress εc(−), εc2;
□ ultimate relative tensile strain of concrete εctu(+);
□ relative strain of concrete at max tensile stress εct(+);
□ degree of the polynomial n.
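Judging by its parameters (εc2, εcu2 and a polynomial degree n), diagram 22 resembles the parabola-rectangle compression curve of EN 1992-1-1 (eq. 3.17). The sketch below evaluates that published curve under this assumption; it is for illustration only and is not LIRA-FEM's exact formulation.

```python
# Hedged sketch: the 'nonlinear parabolic' diagram 22, with a polynomial degree n,
# resembles the EN 1992-1-1 parabola-rectangle curve (eq. 3.17):
#   sigma = fc * (1 - (1 - eps/eps_c2)**n)  for 0 <= eps <= eps_c2
#   sigma = fc                              for eps_c2 < eps <= eps_cu2
# This is an assumption for illustration, not LIRA-FEM's exact formulation.

def concrete_stress(eps, fc=30.0, eps_c2=0.002, eps_cu2=0.0035, n=2.0):
    """Compressive stress (MPa) for a given compressive strain (taken positive)."""
    if eps < 0.0:
        raise ValueError("use positive compressive strain")
    if eps > eps_cu2:
        return 0.0  # beyond the ultimate strain: section fibre has failed
    if eps <= eps_c2:
        return fc * (1.0 - (1.0 - eps / eps_c2) ** n)  # parabolic branch
    return fc  # plateau between eps_c2 and eps_cu2

print(concrete_stress(0.001))  # rising parabolic branch
print(concrete_stress(0.003))  # plateau at fc
```

The 'K' coefficient mentioned above would simply scale the ultimate stress fc in such a formulation.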
• For stability analysis, it is now possible to display the contribution of each mode shape to the total energy of the system. This data is displayed in the analysis protocol.
• It is now possible to pause the analysis procedure. This is helpful when, during a time-consuming analysis, it is necessary to use computer resources for other applications. In the pause mode, the FEM solver does not use CPU resources and does not perform any disk operations, but continues to occupy the amount of RAM that was allocated before the pause.
• Deformations of foundation beds are computed from soil consolidation and creep. This feature is available for SP RK 5.01-102-2013, DBN B.2.1-10:2009 and SP 22.13330.2011/2016. The proposed method for calculation of settlements due to consolidation and creep will be useful in solving problems of determining settlements of water-saturated soils over time, where the total deformation of the foundation bed is determined by the sum of the instantaneous settlement, the settlement caused by consolidation and the settlement caused by creep (secondary consolidation). The data for the calculation are collected on special tabs in the 'Soil properties' dialog box. The consolidation may be calculated without account of secondary consolidation; this is necessary to find out the contribution of the relevant component and to check the calculated values. The presented method may be used to consider the compliance of the elastic foundation for the system 'overground structure - foundation - soil'. Such models are necessary to perform a series of calculations and to obtain an integrated model in the METEOR system, to consider modifications in the elastic foundation during all stages of loading and the modified soil properties throughout the life cycle of the building/structure.
• Option to calculate an additional component of settlement for any time interval t due to soil consolidation.
The calculation is carried out according to formulas 7.5-7.7, sect.7.2.2.1 of NTP RK
• Option to calculate an additional component of settlement due to creep. The calculation is carried out according to formula 7.16, sect.7.2.3.5 of NTP RK 07-01.4-2012.
• In the SOIL system, the calculation of subgrade moduli for bars (currently only FE 10) is provided. It is possible to assign Pz to bars in the input data and to export Rz from the output data for subsequent iterative refinement of C1, C2 (the arithmetic mean of the Rz values in bar sections that are less than zero is converted into Pz, i.e. tension in C1, as for plates, is not transferred to the input data). By default the SOIL system receives the strip width equal to the width of the cross-section B. But if the 'Bz=B' option is not selected when you define Pz, it is possible to specify a strip width different from the width of the cross-section B (for example, to consider the contribution of concrete bedding to the stress distribution on the foundation). The modulus of subgrade reaction from the SOIL system is assigned to the bar as the arithmetic mean of C1, C2 obtained in the gravity centres of the two loads applied on the sides of the bar axis with an overhang Bc/2. To get subgrade moduli that are variable along the strip length, divide the bar into individual FEs.
• It is possible to display each component of the settlements in pile foundations (Sef - settlement of equivalent foundation, ΔSp - additional settlement due to pile penetration at the bottom of the equivalent foundation, ΔSc - additional settlement due to compression of the pile shaft) and settlements from different specific soils - Ss. The contribution of each settlement component is displayed in the output data at any point in the model within the load contours.
This implementation is also supported for the graphical presentation of contour plots (it is possible to show or hide each settlement component, and the contour plots and colour palette are then rearranged for the selected set).
• The load-bearing capacity of piles may be displayed with account of partial safety factors. A dialog box appears to display the mosaic plot N/Fd (ratio of the acting load on the pile to its bearing capacity).
• Calculation of end-bearing piles. The settlement is determined as for friction piles with widening. The bearing capacity of the soil is calculated as the greater of two values: Fdb - bearing capacity of the rock foundation under the pile toe, Fds - bearing capacity of the pile with account of only the resistance of the rocky soils along its lateral surface. If Fdb > Fds, the whole stiffness of the pile is applied at its base; if Fds > Fdb, the stiffness is applied only along the pile length. To specify rocky soils, in the table of soil properties, define additional data: Rc - ultimate uniaxial compressive strength of rocky soil in the water-saturated state (design value), Ks - coefficient for account of strength reduction because of cracks in rocky soils (see Table 7.1 in SP 24.13330). If the pile shaft or pile toe contacts the rocky soil, then the calculation is made as for a pile in rocky soil. If there is non-rocky soil under the rocky soil or if the pile cuts through the rocky soil, then during the calculation you will see the warning: '[ ! ] The rocky soil has weak strata. The load-bearing capacity of the end-bearing pile Fd should be taken from the results of the static load test'.
• Extended options to limit and check the min depth of compressible soil Hc. The min depth of the compressible stratum may be specified as an absolute value under the load, and also, with a new option, by specifying the lower absolute elevation of the soil model (Hc,min will be considered up to the limit of this elevation).
Previously, the Hc value was used to determine the settlement for all loads specified in the model regardless of the actual width of each foundation (usually this value was determined for the max width of all foundations in the model). Now, the min depth of the compressible stratum may be specified not only for the entire model; it may also be considered individually for each load. In the properties of loads, there is an appropriate option to check the Hc.
• New option to search for Hc for weak soils. In the calculation parameters you can define the modulus of deformation for the weak soil. Default values correspond to the selected building code. In the case of automatic search for the weak soil, the following algorithm is applied:
1. The Hc calculation is carried out with the defined coefficient for the depth of compressible stratum λ.
2. If the calculated Hc < Hc,min, then Hc = Hc,min.
3. If the compressible stratum ends in weak soils:
(a) Hc is calculated with a coefficient for the depth of compressible stratum equal to 0.1 (0.2), depending on the selected building code;
(b) Hc is calculated and limited by the bottom of the weak soil;
(c) from calculations (a) and (b), the lower value of Hc is selected.
If Hc from the calculation by sect.3(a) is less than the value by sect.3(b), and at the same time the value of Hc by sect.3(a) is greater than Hc from sect.2, then the final value of Hc is taken as equal to Hc by sect.3(a). Otherwise, Hc is equal to Hc by sect.3(b).
• The soil settlement from the pile Sp is taken into account when calculating the stiffness of the pile as an equivalent foundation by method 1: in the average modulus of deformation E and in the moduli of subgrade reaction C1 and C2. If the equivalent pile foundation is simulated in the SOIL system, and the pile shaft is not simulated with a chain of FEs, then both Sp and Sc (compression of the pile shaft) are taken into account.
In the case where the pile foundation is simulated with a chain of bars, the compression of the pile shaft Sc is automatically taken into account in the FEA.
• In the calculation of piles (FE 57) in the SOIL system as an equivalent foundation, the dead weight of the pile body is considered equal to zero.
• In the calculation of Sp (soil settlement from the pile), a check of the condition E1 ≤ E2 is added for the modulus of deformation of soil along the pile length (E1) and under the pile (E2).
• When calculating a single pile as an equivalent foundation, the pile step Acp = 3*D for a circular pile and Acp = 3*(B+H)/2 for a rectangular pile. The radius of the equivalent foundation Reqv =
• New option to calculate the horizontal stiffness Rx and Ry in FE 57 in case the soil resistance is distributed along the pile length 'according to results of field tests'. The horizontal stiffness of the pile may be obtained from the soil model. An appropriate setting is added to the list of properties for the pile group - 'calculation of horizontal pile stiffness'.
• In the 'Pile groups' dialog box, new option to check the number of piles defined in the model.
• For DBN B.2.1-10:2009, calculation of settlements for specific soils: collapsive, saline, expansive, filled-up and organic soils.
• Measurement units may be converted to define pressure (P) in the properties of specific soils for cases where settings different from the default ones (t/m2) are applied.
• In the 'Arbitrary soil profile' window of the SOIL system, the excavation pit is displayed only for loads for which the 'Calculate stress from excavated soil' option is selected.
• By default, the window for the check of the coordinate system is disabled.
• For the SOIL system, new option to show/hide the piles and the numbers of pile groups.
• For the SOIL system, the elements of the user interface are adapted to work with high resolution UHD and 4K monitors.
• Option to assign individual settings for constructed soil in each subgroup of loads.
Thus, in the design model you can define constructed soil of variable thickness.
• Updated calculation of stiffness and settlement of piles.
• To calculate the pile stiffness by the model of equivalent foundation, option to specify the average pile step; the following options are available for the pile group:
□ Acp is equal to the average step of piles in the pile group (determined automatically, as earlier);
□ Acp is equal to the average step of piles in the pile group, but not more than 2*Reqv, where Reqv is a value used to determine the dimension of the equivalent foundation;
□ Acp is equal to the average step of piles in the pile group, but not more than a specified value;
□ Acp is equal to the specified distance.
Reinforced Concrete Structures
• For plate elements, a new algorithm to check equilibrium and calculate stress and strain at arbitrary points in the section. Based on the Wood-Armer method, a new variant is provided for selection and check of reinforcement for the ultimate and serviceability limit states. This method makes it possible to speed up the analysis of reinforcement and to obtain a smoother distribution of reinforcement in the plane of the plate. The new algorithm is available for analysis according to SP RK EN 1992-1-1:2004/2011, EN 1992-1-1:2004, DBN B.2.6-98:2009, TKP EN 1992-1-1:2009, DSTU-B EN 1992-1-1:2010, SP 63.13330.2018.
• The reserve factor for the pilot reinforcement may be computed according to the following building codes: SP RK EN 1992-1-1:2004/2011, EN 1992-1-1:2004, DBN B.2.6-98:2009, TKP EN 1992-1-1:2009, SP 63.13330.2018. It is now possible to compute the reserve factor for cross, angle and asymmetric T-sections.
• Analysis of composite (metal and RC) sections according to DBN B.2.6-98:2009.
• For DBN B.2.6-98:2009, analysis of RC sections for fire resistance according to DSTU H B EN 1992-1-2:2012.
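The Wood-Armer method mentioned above converts the plate moments Mx, My and the twisting moment Mxy into design moments for orthogonal reinforcement. The sketch below shows the classic published transformation for bottom reinforcement; it illustrates the general method only, and LIRA-FEM's exact variant may differ.

```python
# Sketch of the classic Wood-Armer transformation for bottom-reinforcement
# design moments in a plate (Mx, My: bending moments; Mxy: twisting moment).
# This is the published textbook form, not necessarily LIRA-FEM's exact variant.

def wood_armer_bottom(mx, my, mxy):
    """Design moments (Mx*, My*) for bottom reinforcement along x and y."""
    mxd = mx + abs(mxy)
    myd = my + abs(mxy)
    if mxd < 0.0:
        # No bottom reinforcement needed in x; redistribute the twist to y.
        mxd = 0.0
        myd = my + abs(mxy * mxy / mx) if mx != 0.0 else my
    elif myd < 0.0:
        myd = 0.0
        mxd = mx + abs(mxy * mxy / my) if my != 0.0 else mx
    return max(mxd, 0.0), max(myd, 0.0)

# Example: pure twisting moment requires reinforcement in both directions.
print(wood_armer_bottom(0.0, 0.0, 12.0))
```

Because the twisting moment is folded into each direction's design moment, the resulting reinforcement field varies smoothly across the plate, which matches the smoother distribution noted above.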
• For DBN B.2.6-98:2009, it is possible to use characteristic (normative) strength of concrete and reinforcement in the analysis of special and earthquake loads (groups D1 and C1).
• New analysis mode 'Additional Reinforcement'. For elements where RC materials and sets of Pilot Reinforcement (PR) types are defined, in this mode you can obtain the size and location of the reinforcement area that is absent but required to provide the load-bearing capacity of the section. Two modes for analysis of additional reinforcement are provided for convenience: 'YES' - to obtain areas of additional reinforcement only in the elements where the specified reinforcement area is not adequate to ensure the load-bearing capacity of the section; 'YES/RF' - to obtain the area of reinforcement that is absent but required in certain elements, and to obtain a reserve factor for elements where the load-bearing capacity is ensured. Analysis results for additional reinforcement are displayed on the design model as mosaic plots. For the 'YES/RF' mode, the value of the reserve factor is displayed on the design model in the standard way. Elements where additional reinforcement is required are displayed in the colour of the range RF < 1. Analysis results of additional reinforcement are presented in a table (in text format). The required area of reinforcement, or the reserve factor, or an error code may be displayed as analysis results for additional reinforcement.
• For DBN B.2.6-98:2009, the FS factor is calculated according to МТТ.0.03.326-13 'Earthquake stability analysis for elements of active NPP by method of boundary seismic capacity'. The seismic component for such calculation is generated in the DCL calculation or is set additionally in the local mode (LARM-SAPR). In the LARM-SAPR module it is possible to view the analysis protocol.
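The difference between the 'YES' and 'YES/RF' reporting modes described above can be sketched as a simple conditional over each element's reinforcement deficit. The data layout and function name here are hypothetical, not the program's actual output format.

```python
# Hypothetical sketch of the two 'Additional Reinforcement' reporting modes:
# 'YES'    -> report only elements with a reinforcement deficit;
# 'YES/RF' -> also report a reserve factor where capacity is ensured.
# Element data and names are illustrative, not the actual program output.

def report_additional_reinforcement(elements, mode):
    """Return a per-element report depending on the selected mode."""
    report = {}
    for el in elements:
        deficit = el["required_area"] - el["specified_area"]
        if deficit > 0.0:
            # Specified area is inadequate: report the missing area.
            report[el["id"]] = ("additional area", round(deficit, 2))
        elif mode == "YES/RF":
            # Capacity is ensured: report the reserve factor instead.
            report[el["id"]] = ("reserve factor",
                                round(el["specified_area"] / el["required_area"], 2))
    return report

elements = [
    {"id": 1, "required_area": 8.0, "specified_area": 6.5},  # deficit: 1.5 cm2
    {"id": 2, "required_area": 4.0, "specified_area": 5.0},  # capacity ensured
]
print(report_additional_reinforcement(elements, "YES"))
print(report_additional_reinforcement(elements, "YES/RF"))
```

Elements falling into the first branch are the ones shown in the RF < 1 colour range on the mosaic plot.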
• For SP RK EN 1992-1-1:2004/2011, EN 1992-1-1:2004, TKP EN 1992-1-1:2009 and SP 63.13330.2018, analysis of pilot reinforcement for shear forces is available for plate elements.
• For SP 63.13330.2018, in the AvAnGArD system it is possible to display the stress and strain diagrams for all specified or exported (from the local mode) combinations of normative forces. In case of cracks, their depth is displayed.
• For SP 63.13330.2018, option to take into account the recommendations of sect.6.1.23.
• Option to define the types of pilot transverse reinforcement for plate elements and, in design mode, to check elements for shear forces.
• New option to automatically create pilot transverse reinforcement for plates based on the mosaic plot of selected reinforcement and the settings of the colour palette for the output data.
• When PR types are defined for plate elements, there is an option to use the reinforcement snap defined in materials; in this case only the intensity of reinforcement needs to be defined as input data.
• For plate elements, when PR types are defined it is possible to arrange reinforcement symmetrically. Seven symmetry options are available: full symmetry, symmetry by faces and layers.
• Punching shear analysis is added in problems with the Time History Analysis.
• Combinations for the serviceability limit state are excluded when design combinations of loads (DCL) are generated for punching shear analysis.
• Option to calculate consumption of reinforcing steel according to defined PR types. The calculation can be carried out for all elements of the model or for selected elements. This option is available in the 'Required Amount of Concrete and Reinforcement' dialog box in the mode of RC structures.
Metal Structures
• Analysis of aluminium structures according to SP 128.13330.2016 (main types of profiles: I-beam, welded I-beam, angle, channel, T-section, rectangular tubular section and asymmetric I-section). Warping (without pure torsion) is considered in the analysis.
To cover all possible section shapes provided by factories, it is possible to add custom section types to the steel tables and use such tables in analysis & design of sections/elements. The following types of stress state are identified in the analysis of bars in aluminium structures: truss (longitudinal force N), beam (bending moments My, Mz, shear forces Qz and Qy, bimoment Mw), column (longitudinal force N, bending moments My, Mz, shear forces Qz, Qy and bimoment Mw), universal (elements are analysed by all calculation procedures and the most unfavourable result is selected as the final utilization ratio). The selected type of steel table determines the data that will be used in the check/selection of the section. For example, if an arbitrary section is checked as an I-section, then the section shape factor η corresponding to the layout and eccentricity of the section presented in table E.3 will be used in the stability analysis. Local strengthening for a free overhang and different types of thickening in the check of local buckling are not taken into account in the new version. The user interface is modified as the new building code is implemented:
• A new tab for available aluminium section types is added to the stiffness library.
• In the 'Stiffness and Materials' dialog box, the 'Steel' tab is renamed 'Metal'.
• The steel tables for steel and alloy are extended with information on the modulus of elasticity, shear modulus and density. In case this data is not defined, default values are applied.
• Files of steel tables are selected by their extensions *steels.srt and *.aluminum.srt, as well as by the internal feature set in the steel table.
• The set of 'Additional parameters' depends on the selected current profile type (steel or aluminium) and is extended with the choice of temperature for the structure (-70...-40, -40...+50, +50...+100) and a new list of allowed slenderness ratios.
• Since SP 128.13330.2016 contains no recommendations for analysis on progressive collapse or analysis of the structure for deflections, these analyses are carried out similarly to the previously implemented analysis according to SP 16.13330.2017.
• Option to exclude support sections from stability analysis, i.e. to use M1 - the largest bending moment within the middle third of the bar length; the moment should be at least 0.5Mmax. In the case of structural elements, the relevant value is selected within the middle third of the total length of all FEs.
• It is possible to manage the analysis of metal structures in the 'Design options' and 'Check parameters for analysis and/or design' dialog boxes; so you can enable/disable the check/selection of sections in certain design options based on defined analysis parameters and save the defined parameters for future analyses.
• In the analysis of steel structures, an earthquake load may be defined as a quasi-static component of load. In earlier versions, for such load combinations, only the label (that the loads were static) was transferred to the analysis of steel structures.
• For DBN B.2.6-198:2014, the seismic safety factor FS and the appropriate HCLPF value (High Confidence Low Probability of Failure) are calculated according to МТТ.0.03.326-13 'Earthquake stability analysis for elements of active NPP by method of boundary seismic capacity'.
The output data for every check is available in standard tables or as mosaic plots.
The ReSpectrum module is intended to generate the response spectrum of a single-mass oscillator from dynamic loads defined with accelerograms, seismograms, velocigrams and three-component accelerograms, as well as to mutually convert these loads (accelerogram → seismogram, accelerogram → velocigram, seismogram → accelerogram, seismogram → velocigram, velocigram → seismogram, velocigram → accelerogram).
Input data: file with the load record (file format - one number per line, with a decimal point), duration, discretization step, scale multiplier, load type - seismogram, velocigram, accelerogram, three-component accelerogram. Additional data for the response spectrum: frequency range, frequency step, damping factor.
When the load record is loaded, its graph is displayed and its balance is checked. If an unbalanced load record is saved, the unbalancing parameters (residual components in the conversion) are displayed and the 'Balance' checkbox appears. The load record is balanced with a polynomial function that takes into account the residual components of the conversions.
The ReSpectrum module implements widening of the horizontal segment of the response spectrum peak, as well as reduction in the amplitude of narrow-frequency peaks. For every peak in the response spectrum, the horizontal segment is extended to a length equal to 0.3 of the peak frequency. The lines that generate the peak are moved parallel along the length of the horizontal segment. In combination with the widening of the response spectrum peak, there may be a 15% reduction in the amplitude of the narrow-frequency peak.
This reduction should be applied only to narrow-frequency peaks of the non-widened response spectrum for which the ratio B of the band width to the centre frequency is less than 0.30:
B = Δf0.8 / fc < 0.30
where Δf0.8 is the total frequency range over which the spectral amplitudes exceed 80% of the peak spectral amplitude, and fc is the central frequency of the frequencies that exceed 80% of the peak amplitude.
The conversion results may be saved as an image (*.png file), as a *.txt file for further use in LIRA-FEM analysis, or as a *.csv file (Excel spreadsheet). To activate this module from the VISOR-SAPR environment, use the 'Compute spectrum' command ('Analysis' tab, 'Dynamics' panel).
Bar Analogues (BA)
• When bar analogues are generated automatically, new cross-sectional shapes are available to be recognised: channel section and box section. BA generated in this way may be used in analysis of reinforced concrete walls.
• Option to use an increasing factor fvk when the forces from the earthquake load are determined.
Cross-section Design Toolkit
• New option to import beam sections from 'Design of RC structures' into this module. The forces in the selected section are automatically displayed in the force table for all load cases for which the analysis is carried out and for all design load combinations.
• For rebar items, new option to define a prestress value that will be applied when the stress-strain state of the section is computed.
• For solids, strip elements and rebar items, option to define a prestrain value in the analysis procedure.
• In the 'Flags of drawing' dialog box, new options to graphically display material identifiers assigned to the elements and to adjust the font size for these labels, as well as the settings for scale and line thickness to display the results in strip elements.
Steel Rolled Shapes (SRS-SAPR module)
• User interface elements are adapted to be used with high resolution UHD and 4K monitors.
• Option to create aluminium alloys and profiles, as well as profiles from any other materials.
• It is possible to add custom section types (created in the 'Cross-section Design Toolkit' module) to the steel tables. It helps you conveniently store and consider such sections in FEA (with stiffness properties) as well as in analysis & design of aluminium structures.
• New tables of aluminium alloys are added:
□ aluminium alloy, drawn tube (DT), building code EN 754 (EN 1999-1-1:2007);
□ aluminium alloy, extruded profile (EP), building code EN 755 (EN 1999-1-1:2007);
□ aluminium alloy, extruded hollow profile (EP/H), building code EN 755 (EN 1999-1-1:2007);
□ aluminium alloy, extruded open profile (EP/O), building code EN 755 (EN 1999-1-1:2007);
□ aluminium alloy, extruded rod and bar (ER/B), building code EN 755 (EN 1999-1-1:2007);
□ aluminium alloy, extruded tube (ET), building code EN 755 (EN 1999-1-1:2007);
□ aluminium alloy, sheets, strips and plates, building code EN 485 (EN 1999-1-1:2007);
□ aluminium alloy, extruded profile, building code GOST R 56282-2014 (SP 128.13330.2016);
□ aluminium alloy, slabs, building code GOST 17232-99 (SP 128.13330.2016);
□ aluminium alloy, sheets, building code GOST 21631-76 (SP 128.13330.2016).
• Demo tables of aluminium profiles are added. The tables of profiles and alloys may be expanded upon individual requests to the Support Team.
Report Book
• The tables of input/output data are expanded with new input/output data.
• In standard tables, there is a new filter to generate extreme values for results, e.g. forces (by sections) and/or at the ends of structural elements (in the first and last design cross-sections). The table will also be helpful to generate extreme values for the whole set of design cross-sections in elements (plates, solids, etc.). This is a tabular presentation of the sample by min/max/abs output data (earlier displayed only graphically). This filter is available for the whole list of output tables including design modules.
• The output table of displacements and forces for intermediate steps in nonlinear analysis is available.
• The layout (pagination) template is now saved in a ZIP file and extracted from the ZIP archive of the problem together with the report book (by default, the templates are located at the following path: C:\Users\Public\Documents\LIRA SAPR\LIRA SAPR 20x\Settings). The template files Book_en_A4.docx, book_ru_a4.docx, Book_ua_A4.docx are added to the TEMPL.zip archive so that the user has their initial version.
• For the screen copies, the shortcut menu in the Report Book contains a new command to select (on the model) the nodes and elements for which documentation is prepared.
• Check for horizontal load with account of combined behaviour of lateral and longitudinal walls. This analysis is based on an algorithm that automatically determines the shape of partitions and evaluates the mutual arrangement of longitudinal and transverse wall elements. Output data is presented as mosaic plots and corresponding tables. Moreover, for each group of partitions it is possible to view a detailed protocol with tracing; it helps you check the order of all calculations.
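The narrow-frequency peak criterion applied by the ReSpectrum module described above (B = Δf0.8/fc < 0.30) can be sketched as follows. This is a minimal illustration under stated assumptions, not the module's actual implementation: the function name is invented, and the central frequency fc is taken as the mean of the 80% band.

```python
def is_narrow_frequency_peak(freqs, amps, limit=0.30):
    """Return True if a response-spectrum peak is 'narrow': the ratio B of
    the band width (frequencies whose amplitude exceeds 80% of the peak
    amplitude) to the central frequency of that band is below `limit`."""
    threshold = 0.8 * max(amps)
    # Frequencies whose spectral amplitude exceeds 80% of the peak amplitude
    band = [f for f, a in zip(freqs, amps) if a >= threshold]
    delta_f = max(band) - min(band)   # Δf0.8: width of the 80% band
    f_c = sum(band) / len(band)       # fc: central frequency (assumed mean)
    return delta_f / f_c < limit      # B < 0.30 → narrow peak

# A sharp peak near 5 Hz qualifies for the 15% amplitude reduction;
# a broad plateau over the same frequency range does not.
freqs = [4.0, 4.5, 4.8, 4.9, 5.0, 5.1, 5.2, 5.5, 6.0]
sharp = [0.1, 0.3, 0.85, 0.95, 1.0, 0.95, 0.85, 0.3, 0.1]
broad = [0.85, 0.9, 0.95, 1.0, 1.0, 1.0, 0.95, 0.9, 0.85]
print(is_narrow_frequency_peak(freqs, sharp))  # B ≈ 0.08 → True
print(is_narrow_frequency_peak(freqs, broad))  # B = 0.40 → False
```

In the sharp case only the 4.8–5.2 Hz points exceed 80% of the peak, so B ≈ 0.4/5.0 = 0.08 < 0.30; in the broad case the whole 4.0–6.0 Hz range exceeds the threshold and the reduction would not apply.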
Apr 21, 2022
The following options and items are updated: building codes for Republic of Kazakhstan, SP RK 2.03-30-2017, SP RK EN 1998-1:2004/2012, NTP RK 08-01.1-2021, SN KR 20-02:2018, SP RK EN 1990:2002+A1:2005/2011, harmonic load, LIRA-FEM Analysis Client/Server, types of combinations, bar analogues, input table, Report Book, output tables, triangulation, option to merge models, history of nonlinear load cases, variable section, unit weight, info about dimensions, model of equivalent foundation, pile stiffness, coordinate system, steel table, universal bar, truss joint, types of pilot reinforcement (PR), composite reinforcement, analysis on fire resistance, depth of concrete heating, physical model, design parameters, wall cleanup, Python, wall reinforcement
• Restored option: objects included into a block of the SAPFIR physical model may be transferred to the design model.
• Corrected: property 'Design parameters – Applied steel' for the steel beams is not duplicated in the 'Filter by parameters' dialog box.
• Enhanced option for slab parameter 'Sum up with dead weight'; it enables the user to sum up the dead load on the slab with the dead weight of the slab.
• Restored: option to cancel the manual cleanup of walls and columns for the slab.
• Corrected: user-defined Python node is not duplicated when it is saved to the toolbar.
Design of RC structures
• In the reinforcement model of the diaphragm, new option to place an additional step of rebars at the edge of the reinforcement zone if the zero distance is defined for rebars on the opposite side of the reinforcement zone.
Building codes for Republic of Kazakhstan
• Added: option to define increasing factors Fvk (sect. 7.6.5, 7.6.6 SP RK 2.03-30-2017 and sect. 6.4.1, 6.4.2 NTP RK 08-01.2-2021) for all earthquake dynamic modules (except non-earthquake modules 21, 22, 23, 24, 28 and experimental modules 37, 38, 46).
Important: revision of forces with the factor Fvk does not affect the equilibrium at nodes during calculation of the load on a fragment.
• Dynamic module (61) SP RK EN 1998-1:2004/2012, NTP RK 08-01.1-2021 (Kazakhstan): coefficients of responsibility along the vertical/horizontal are added.
• Dynamic module (60) 'Earthquake analysis by SP RK 2.03-30-2017 (Kazakhstan) and SN KR 20-02:2018 (Kyrgyzstan)': coefficients of responsibility along the vertical/horizontal are equal to 1.0 by default.
• New option to create combinations of loads according to modifications mentioned in sect. 2.2.3.2 NP to SP RK EN 1990:2002+A1:2005/2011. Important: when this option is used, the program generates combinations of dead load cases by formula (6.10a) and combinations of loads for transient design situations by formula (6.10b).
• Clarified warning messages in analysis of reinforcement according to SP RK EN 1992-1-1:2004/2011.
• Clarified stability analysis for the pipe by SP RK EN 1993-1-1:2005/2011.
• Modified: presentation of the output data along the length of a structural element in the local mode of steel analysis by SP RK EN 1993-1-1:2005/2011.
• Corrected: calculation of forces in elements of the model when components of harmonic load are considered for SP RK EN 1990:2002+A1:2005/2011.
• Corrected: option to save the data in the DCL table in case the type of combination is changed (with no modification to combination coefficients).
LIRA-FEM Analysis Client/Server
• New 'Results' button to open the folder where the file with modified input data (obtained together with the output data) is located.
Intuitive graphic environment
• When bar analogues (BA) are generated with auto-recognition of the cross-section shape, the program will not generate BA in case the cross-section was not assigned to the initial FE (appropriate check is added). If a bar analogue is generated without recognition of the section, it will be generated by configuration of the initial elements even if stiffness is not assigned to such elements.
• In problems with solid elements, possible program crash (in generating contour plots with analysis results at the specified section of the design model) is eliminated.
• Restored: option to analyse steel structures for models (where the file name exceeds 30 characters) imported from SAPFIR module and analysed in VISOR-SAPR module.
• For projects created with input tables, corrected option to save to *.lir file the steel stiffness values defined in the 'Stiffnesses' input table. The error occurred when the specified stiffness values were activated with Undo/Redo commands.
• Restored: option to generate tables in the Report Book with description of RC materials in case there is no continuous numbering of parameters in the list (that is, there are gaps or the numbers of parameters are not in order).
• Enhanced generation of tables (*.TXT and *.HTML) with output data for analysis of reinforcement.
• Fixed: possible program crash when the contour is generated with the 'Triangulation' dialog box in case there are coincident nodes in the contour.
• Fixed: possible program crash when the model is merged from the submodels.
• Options to unselect nodes/elements and to cancel the chain of dimensions are separated in the 'Geometric properties' tool. The first Esc will unselect nodes/elements while the second Esc will cancel the chain of dimensions.
• Added: option to analyse steel structures according to results of DCF for problems with the history of nonlinear load cases.
• Fixed: possible input of an incorrect value when you define ranges for a certain colour palette.
• For steel stiffness with variable section, added option to display information about assigned materials on the mosaic plot of materials for steel structures.
• For steel cross-sections defined with standard types of stiffness (with no reference to steel tables), correct value of the unit weight is added.
• When the dead weight is generated for the steel elements, the unit weight q defined on the 'Stiffness' tab in the 'Steel cross-section' dialog box is used rather than the value from the steel table.
• When steel structures are analysed by histories of nonlinear load cases, in the 'Analysis parameters (Metal, RC)' dialog box (on the 'Load factors' tab) the user will see default names for load cases - 'History of nonlinear load cases 1', 'History of nonlinear load cases 2', etc. - instead of empty load case names.
• Enhanced presentation for the steel section of type 'I-section of three sheets' (with no reference to a steel table) when stiffnesses are displayed in the mode 'Presentation with account of assigned sections' in VISOR-SAPR module and in Cross-section Design Toolkit module.
• Fixed: possible program crash when the 'Geometric properties' tool is used and the specified chains of dimensions are cancelled.
• For problems with super-elements, enhanced 'Information about element' tool when data about the initial and selected profile is presented in the mode of steel structures.
• Fixed: option to paste the array of numbers into the DCL table through the Clipboard in case the combination coefficients have values < 0.
• In problems of the SOIL system, enhanced generation of the colour palette for the mosaic plot of loads.
• When the mean modulus of elasticity (Emean) for soil is computed by 'Method 2', settlements Sc (compression of the pile shaft) and Sp (punching shear at the pile toe of the equivalent foundation base) are considered similarly to calculation by 'Method 1'.
• In the SOIL system, during analysis by the model of equivalent foundation, the default distance to the edge of the equivalent foundation is modified from '2D' to '1.5D'.
• Fixed: possible program crash when you switch to the soil model window after the 'Remove soil model' command in the 'Soil model' dialog box.
• When calculating hd (effective depth up to which soil resistance along the side surface is not considered) in pile stiffnesses by DCL, the earthquake is considered in case the 'Earthquake' load case type is available in combinations (not by dynamic load case). This change makes it possible to cover cases where the earthquake load is defined as a static load case, e.g. nodal inertial forces.
• By default, the coordinate system window is hidden to save the workspace.
Steel structures
• Restored: option to launch the steel analysis in case the parameter of steel material 'Steel table' is defined as 'In the same file as shapes'.
• Enhanced algorithm for numerical selection of the built-up steel I-section.
• For steel elements of type 'universal bar', defined parameters (ultimate deflection, slenderness ratio, effective length) may be displayed as mosaic plots.
• In analysis of steel structures for specific combinations, the program selects deflections corresponding to the max utilization ratio of the steel section.
• In stability analysis of steel sections, possible overestimated values of the utilization ratio of the section are fixed.
• Fixed: option to save the file with output data for analysis of the truss joint from pipes of rectangular section.
• Added: option to analyse steel structures by results of DCF for problems with the history of nonlinear load cases.
RC structures
• In parameters of RC materials for plates, analysis 'by Wood theory' is defined by default. For Eurocode and similar building codes, analysis of reinforcement is always carried out 'by Wood theory'.
• In analysis of reinforcement in plates by the Karpenko method, arrangement of reinforcement is clarified in certain cases.
• When you define types of pilot reinforcement (PR) for plates, longitudinal reinforcement is clarified on the schematic presentation in case of large areas of total reinforcement.
• When you define RC materials for DBN B.2.6-98:2009, the list of classes for composite reinforcement is updated according to the specified parameters of such reinforcement.
Steel rolled shapes
• New option to add steel tables for pipes and plates from the 'Steel cross-section' dialog box in case the 'Hidden' option is defined for the file of the steel table (that is, steel tables of previous versions).
Fire resistance
• When the output data for analysis on fire resistance is visualized in the 'Fire resistance of element' window, the reference of temperature values to appropriate rebars is clarified.
• In analysis of steel structures with account of fire resistance, positions of local axes Y1/Z1 are clarified as well as limit values for the slenderness ratio.
• Added: option to identify when it is not possible to determine the depth of heating in concrete during analysis on fire resistance.
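For reference, formulas (6.10a) and (6.10b) mentioned in the SP RK EN 1990:2002+A1:2005/2011 items above are the standard EN 1990 combination expressions. They are reproduced here for orientation only; consult the code text for the authoritative wording:

```latex
% Eq. (6.10a): variable actions reduced by their combination factors \psi_0
E_d = \sum_{j \ge 1} \gamma_{G,j}\, G_{k,j} \;+\; \gamma_P P
      \;+\; \gamma_{Q,1}\, \psi_{0,1}\, Q_{k,1}
      \;+\; \sum_{i > 1} \gamma_{Q,i}\, \psi_{0,i}\, Q_{k,i}

% Eq. (6.10b): unfavourable permanent actions reduced by the factor \xi
E_d = \sum_{j \ge 1} \xi_j\, \gamma_{G,j}\, G_{k,j} \;+\; \gamma_P P
      \;+\; \gamma_{Q,1}\, Q_{k,1}
      \;+\; \sum_{i > 1} \gamma_{Q,i}\, \psi_{0,i}\, Q_{k,i}
```

Here G_k,j are the permanent actions, P the prestressing action, Q_k,1 the leading variable action, Q_k,i the accompanying variable actions, ψ0 the combination factors, and ξ the reduction factor for unfavourable permanent actions; the less favourable of the two expressions governs.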
Dec 20, 2021
The following items are updated: import DXF, capitals and column bases, stairs, conversion of objects, sets of layers, triangulation points and lines created in Generator
• Added option to define the capital and column base generated only in the 1st direction.
• Fixed bug in moving the middle point of the stair.
• Restored option to save the openings when a partition is converted to a wall.
• New option to transfer design parameters for piles to VISOR-SAPR module.
• Fixed bug when the set of layers assigned for a 3D view is cancelled in case you switch between the 'Create' and 'Reinforcement' tabs.
• Restored option to edit the contour of an inclined slab with the check points.
• Added option: in the same slab, to use triangulation points created with the 'AddLnSlab' node and cutting lines.
• Fixed bug: elevation for triangulation lines and points generated with the 'AddLnSlab' node.
Nov 05, 2021
Soil model, position, piers, rigidity centre, FE 57, equivalent foundation, additional settlement from punching shear, METEOR, DCL, diagrams of materials, design strength of brickwork, types of pilot reinforcement (PR) for plates, assemblage stages, fire resistance for RC structures, rectangular pipe, check for buckling of steel members, Analysis server.
• Restored option to create openings in the foundation slab (for import of floor plans).
• When importing a 3D DXF file through Generator and the drawing is presented to scale, a new option to scale along the Z axis is added.
• In the 'Soil model' dialog box, on the 'Position' tab, restored option to obtain parameters when you click the 'from SOIL' button.
• Fixed bug: when analysis results for piers were presented on the model graphically, the value labels displayed on mosaic plots were shifted.
• Corrected option to consider displacement of the rigidity centre in physically nonlinear plate FE in the case the unified axes do not coincide with the local axes of the element.
• When calculating the stiffness of piles (FE 57) as an equivalent foundation according to 'Method 1', the influence of the additional settlement Sp was clarified (additional settlement from punching shear of piles at the base of the equivalent foundation). In case the pile foundation is simulated as the load in the SOIL system, then when calculating C1 and C2 according to Method 1, both settlements Sp and Sc (compression of the pile shaft) are taken into account.
• Optimized generation of punching shear contours for piles created in VISOR-SAPR module.
• Restored option to generate the table with output data 'Load on brick pier' and 'Piers by load cases'.
• Fixed bug: in the 'METEOR' system, generation of the integrated problem in case problems are merged only based on the DCF results ('DCF+' mode). When the total DCF table for the integrated problem is generated, there is no autocorrect from the 'progressive collapse' load case to the 'specific' load case.
• Fixed a bug: in DCL for SP RK EN 1990:2002+A1:2005/2011 when combinations are generated according to formulas 6.10, 6.10a and 6.10b. Now the user can define the reduction factor. It is recommended to re-define the combinations of the main type I that were defined earlier.
• For the integrated problem in the 'METEOR' system ('DCF+' mode), new option to carry out local analysis for steel elements.
• Fixed bug for materials whose unloading curve in the nonlinear diagram is close to zero.
• In the 'SOIL' system, when calculating the stiffness of the FE 57 according to the equivalent foundation model, the default radius of the equivalent foundation is changed from '2D' to '1.5D'.
• In analysis of reinforcement for the RC elements according to SP 63.13330.2012/2018, clarified algorithm for account of combinations when the group of forces E1 (progressive collapse) is considered.
• When computing design strength of brickwork, clarified value for the coefficient from sect. 6.14 (g) of SP 15.13330.2020.
• For the 'Stiffened cement heavy' mortar, the requirements for reducing the design strength stipulated in 'note 2' to table 6.1 SP 15.13330.2020 are taken into account.
• For types of pilot reinforcement (PR) for plates, if zero reinforcement is specified for some layers, then the min percentage of reinforcement specified in the design parameters is assigned for such layers (for the check by Karpenko's theory).
• For the 'Assemblage stages' input table, the sequence of assemblage stages after Undo/Redo options is fixed; the updated 'Assemblage stages' table is synchronized with the open 'Edit load cases' and 'Model nonlinear load cases' dialog boxes.
• In analysis of fire resistance for RC structures, the reduction factor (used to determine the tensile strain of concrete) is clarified.
• Corrected discrepancy between the area of reinforcement for RC and composite bars of circular cross-section in VISOR-SAPR and in the local mode of reinforcement LARM-SAPR.
• For SP 63.13330.2012/2018, corrected discrepancy in the output data for bars of rectangular cross-section with account of fire resistance (the output data obtained in VISOR-SAPR module and in the local mode of reinforcement LARM-SAPR).
• For SP 63.13330.2012/2018, corrected discrepancy in the output data for bars of circular cross-section by ultimate limit states (ULS) and by serviceability limit states (SLS) with and without account of fire resistance.
• For SP 63.13330.2012/2018, corrected discrepancy in the output data for tensile elements in analysis of transverse reinforcement.
• For SP 63.13330.2012/2018, corrected discrepancy in the output data for plates by ultimate limit states (ULS) and by serviceability limit states (SLS) with and without account of fire resistance.
• In analysis of reinforcement by SP RK EN 1990:2002+A1:2005/2011, new option to consider (in materials) the additional group of coefficients defined for different types of DCL.
• For RC materials defined according to SP RK EN 1990:2002+A1:2005/2011, extreme values of coefficients k1, k2, k3, k4 are stipulated.
• Fixed bug: in selection of the section for rectangular steel pipes, the program failed to select the shape for a rectangular pipe though there were suitable shapes in the steel table.
• In analysis of steel structures according to SP 16.13330.2017 for specific combinations, deflections corresponding to the max utilization ratio of the steel member are selected.
• In stability analysis of steel sections, the possible overestimation in the utilization ratio of the section is fixed.
• Fixed bug: when analyses of previously calculated problems are started once again, the LIRA-FEM Analysis Client/Server may be blocked (deadlock).
• Enhanced robustness of the LIRA-FEM Analysis Client/Server in case the queue contains a lot of problems with the status 'Preparing for analysis' and 'Waiting for ZIP file with input data'.
• Fixed bug: when you open the problem (for example, transferred from another computer, including in a ZIP file), the STC design materials assigned for cross-sections in the user-defined steel tables are lost.
Nov 05, 2021
Update 1 for LIRA-SAPR 2021 R2 is released.
The following items are updated: NonLinear Engineering Design (NL Engineering), Analysis Client/Server, unification of forces, 1-node FE, the 6th DOF, import *.sli, import *.dxf, import *.ifc, plugin for Grasshopper, inversion of walls
• Corrected bug: inversion of walls that contain openings if the analytical model of the wall was aligned manually.
• Added option not to transfer dynamic load cases to VISOR-SAPR module; just clear the appropriate check boxes in the 'Edit load cases' dialog box.
• Corrected bug in the 'Cut off with box' command.
• Enhanced option to copy (by storeys) the openings with enclosed contours (when one opening is located inside the other) in floor slabs.
• Corrected value of load in the meshed model for the wall with interpretation 'Load' if an opening with defined unit weight is defined for the wall.
• Enhanced import of DXF drawings in case the drawing contains blocks (overlapping text is corrected, enhanced conversion of NURBS curves into Bezier curves).
• When the model generated by DXF underlay is updated, the current section is retained for beams.
• Import of IFC file:
□ corrected error in import of columns for which Boolean operations were performed;
□ added import of premises for the updatable node in the Generator system.
• Plugin for Grasshopper is available in English and Ukrainian; restored compatibility with previous versions of the program.
• In problems with DCL according to SP RK EN 1990:2002+A1:2005/2011 (Republic of Kazakhstan), for forces by DCL, corrected difference between the values presented on diagrams and in the information about the element.
• In the 'Colour grade visualization' dialog box, on the 'Colours' tab, restored presentation of colours for the specified ranges.
• Tables of unified forces may be generated even if not all elements of the unification groups are selected.
• Added option: export of the output data from analysis of reinforcement for physically nonlinear bars and plates into Design of RC structures module.
• Accuracy in solving quadrangular and triangular FE of shell with the 6th DOF at the node is corrected depending on the FE shape.
• Restored analysis of problems with 'NL Engineering 1' according to SP 63.13330.2012/2018.
• Restored analysis of reinforcement according to SP RK EN 1992-1-1:2004/2011.
• For steel sections of 'the 3rd class' (which behave in the elastic stage), clarified analysis on fire resistance in case of a torsion moment.
• Corrected presentation of the height elevation for the load when the load from the foundation pit is converted to an imported load.
• Clarified check for application of loads with attribute "σzy".
• For problems of the SOIL system with the option 'Calculate on enlarged grid', added option to check points located outside the load.
• In the SOIL system, new option to show/hide symbolic presentation of piles and their numbers. To find the appropriate commands, on the VIEW menu, point to 'Show objects'.
• For calculation of subgrade moduli, the building code SP RK 5.01-102-2013 is supported.
• LIRA-FEM 'Analysis Client/Server' is speeded up in case of a large number of problems waiting in the queue for analysis.
Oct 08, 2021
Updated options and features: DXF import, 'Space', IFC import, walls, rotation of structure, check for design model, mesh quality, filter for elements, settings for FEA solver, 'SOIL' system, stress in soil, analysis of settlement, piles, flags of drawing, special FE, input tables, design sections, assemblage stages, model synchronization, forces by steps, DCF calculation, thermal load, dynamics module, distribution coefficient, dangerous direction of load, Cross-section Design Toolkit, fire resistance, types of pilot reinforcement (PR), analysis of reinforcement, 'AVANGARD' system, I-section, arbitrary bar, variable section, Analysis Client/Server, archive/backup, ZIP-file, DCL table.
• Enhanced import of DXF floor plans:
□ in SAPFIR module, when a polyline is imported to the 'Space' object, it is possible to define its interpretation as a load, its intensity and load case;
□ restored option to import surface loads;
□ fixed bug: rotation of column sections when the model is generated according to the DXF floor plan.
• Enhanced import of IFC:
□ when the openings in walls are imported, the 'Apply to adjacent walls' property is automatically set for them;
□ enhanced import of beams;
□ account of the rotation angle for the building in import of the IFC file;
□ enhanced generation of openings in walls for IFC files generated in Tekla Structures.
• The ratio of coefficients Gamma fm/Gamma fe is transferred to the DCL table in VISOR-SAPR module for the Wind load case according to DBN B.1.2-2006 3.1(2007).
• Enhanced option to generate colours for stiffness in SAPFIR objects.
• New option to consider the weight of window and door assemblies for walls with the interpretation 'Load'.
• New parameter 'Wind load' is added for walls and windows. For a specified parameter, elements with zero stiffness are generated over the whole area of the opening in the design model, in particular, to collect the wind load.
Unified intuitive graphic environment for the user
• To check the design model and prepare documentation, mosaic plots for properties of the following objects are realized:
□ mosaic plot to evaluate the quality of the FE mesh for 3-node and 4-node plates; the best quality is equal to 1 for the square and the equilateral triangle;
□ mosaic plot to evaluate the quality of geometry of plate FE - 'Max angle between edges';
□ mosaic plot for nodes (it shows the number of elements adjacent to this node);
□ mosaic plot for numbers of assigned pile groups.
• In the filter for elements, there is a new parameter to find/select elements along the length of a structural element.
• New option to save lists of nodes and elements when the PolyFilter dialog box is closed. When the nodes and/or elements are selected on the model with the 'selection window', the filters defined for the generated list of nodes and elements are considered.
• Ribbon user interface now contains a command to define parameters for the FEA solver.
• SOIL system: additional triangulation of loads that simulate the soil excavation from the foundation pit.
• SOIL system: mosaic plot of stress from soil excavated from the foundation pit.
• Corrected error in generation of punching shear contours for the beam grillage and foundation slab.
• SOIL system: account of settlements for specific soils in calculation of subgrade moduli by 'Method 1' (calculation by the Pasternak model).
• SOIL system: when the settlement and subgrade moduli C1, C2 are calculated, the soil along the pile length is considered as non-compressible for both cases: for the case K1=K2=0 and for the case K1≠0 or K2≠0 (K1 - proportion of load transmitted at the pile top, K2 - proportion of load transmitted along the pile length).
• In the 'Define moduli C1 and C2' dialog box, new option to define Pz without No. of subgroup for imported loads.
• When the model of piles is generated as a chain of bars with elastic springs along the length (from FE 57, method 2), every pile is automatically included into a structural block.
• New toolbar 'Flags of drawing' with commonly used settings of graphic presentation. The set of buttons and their location may be modified by the user.
• New flags of drawing: pile group No.; pile groups in colour; numbers for subgroups of loads Pz included into the group of loads exported to the SOIL system; data about hidden nodes (visibility of these nodes is cancelled with flags of drawing).
• New tool to add special FE (such as FE of elastic spring, damper, etc.) to the design model. This tool integrates an FE of the required length into the specified location and at the same time assigns the selected stiffness.
• In the description of stiffness for FE 262 (simulates a one-sided elastic spring between nodes with an option to consider the gap), a new option '+ FE length' is added. In compression, this option enables you to add the actual FE length to the specified gap (if the gap length is defined as equal to zero and this option is active, the gap size will automatically be equal to the length of the FE).
• New input tables:
□ 'Number of design sections' - to define and modify the number of sections in which forces/stresses for bars and plates will be computed (for plates - the centre of the plate plus the nodes in which stresses will be computed);
□ 'Assemblage stages' - to define and modify the data about assemblage/disassemblage stages.
• New options to manage synchronization of the model when you work with multiple windows:
□ for fragmentation;
□ for projections/views;
□ for the settings of the flags of drawing.
• Table with output data for forces by steps in nonlinear analysis.
• In the mode of analysis result evaluation, new option to calculate DCF for selected elements or the current fragment of the model.
• For plate elements, new option to define thermal load in unified axes for the output data.
• For analysis results by load cases/DCL, new mode for presentation of forces and displacements that are max by absolute value.
• Bar analogues are available for problems with time history analysis.
• New check and warning for the case when analysis 'by forces' is defined in the settings for design options.
• New option to import a section from the VISOR-SAPR module to the 'Cross-section Design Toolkit' module according to the fire-resistance data for the element. The section is imported as divided into zones depending on the distribution of temperatures in every part of the section. Every zone has its own stress-strain diagram for concrete and reinforcement, with account of the changes in the physical and mechanical properties of concrete and reinforcement at high fire temperatures.
• In the 'Mesh generation' dialog box, for the simple contour, new option to select intermediate nodes with the Shift key and to modify the scale of displacements (hold down the Ctrl key and rotate the mouse wheel).
• For the graphs of kinetic energy, displacement in time, speed and acceleration of a node along the specified direction, or forces in a selected section of the element in time, it is possible to display the function value at places where the graph intersects the integration steps.
• The 'Diagram along section of plates' and 'Deflection diagram' dialog boxes contain commands that enable the user to place the diagram vertically or on a projection and to display the local extreme values of the diagram.
FEA solver
• The window with the analysis protocol is reorganized:
□ new tool for comfortable reading of the analysis protocol during the analysis procedure. Hold down the left mouse button in the protocol window to stop the automatic 'scrolling' of the window (until the button is released).
Now you can, for example, 'scroll' the protocol window (while holding down the mouse button) and read the whole text available in the protocol window;
□ to zoom the font in the protocol window in/out, hold down the Ctrl key and rotate the mouse wheel;
□ the text in the protocol window is now displayed many times faster than before. This is helpful in step-type analysis with a small number of unknowns but a very large number of steps.
• Analysis of thermal loads defined along an arbitrary direction in the plane of a plate FE.
• Analysis with a three-component accelerogram (dynamics modules 29, 64):
□ modified calculation of inertial forces: distribution coefficients and modal masses are calculated for every component of the earthquake load;
□ to significantly reduce the time to solve a dynamic problem, new option to specify the max percentage of modal masses in certain directions (Mx=90%, My=90%, Mz=75%);
□ the output data now contains 3 modal masses and 3 distribution coefficients for every component.
• For dynamics module 62, the dangerous direction of load is determined automatically (when the 'Account of direction cosines' checkbox is not selected). The dangerous direction of load is determined for each mode shape based on the max distribution coefficient.
• Corrected behaviour of 2D and 3D physically nonlinear finite elements of soil for which the unloading path is defined by a separate branch (Ke is not equal to 1).
• Corrected behaviour of iterative physically nonlinear plates for which the unloading path is defined by a separate branch.
• For FE 262 (simulates a one-sided elastic spring between nodes with the option to consider the gap), in compression it is possible to consider the gap as equal to the length of the FE plus the specified gap size.
• When collecting masses for dynamic analysis, new option to consider additional load cases specified in the 'Model nonlinear load cases' dialog box (when this option is not active, the masses from additional load cases are not automatically summed up with the masses of the main assemblage stages).
Reinforced Concrete Structures
• In analysis of reinforcement according to the building codes of Kazakhstan, the coefficients in sect. 7.2 of SP RK EN 1992-1-1:2004/2011 are considered.
• New algorithm for analysis of reinforcement in plate elements according to the Wood theory, by SP 63.13330.2018 and SP RK EN 1992-1-1:2004/2011.
• The 'Define and edit the types of pilot reinforcement (PR)' dialog box is optimized for work with a large number of PR types.
• In the 'Define and edit the types of pilot reinforcement (PR)' dialog box, when PR types are defined for plate elements it is possible to define the concrete cover from material properties. New option to arrange symmetric reinforcement for plates.
• 'AVANGARD' is a system for detailed evaluation of the bearing capacity of reinforced concrete sections in oblique eccentric compression (tension). It generates the surface (volume) of the bearing capacity of an RC normal section with arbitrary arrangement of reinforcing bars, by SNIP 2.03.01-84*, DSTU 360-98, TSN 102-00* and SP 63.13330.2018.
Steel Structures
• In analysis of steel structures, the I-section may be checked/selected with no reference to existing steel tables.
• 'Arbitrary bar' may be analysed according to Eurocode EN 1993-1-1:2005/AC:2009 and the building codes of Kazakhstan SP RK EN 1993-1-1:2005/2011; cold-formed shapes according to SP 260.1325800.2016; and beams of variable section according to SP 16.13330.2017.
LIRA-FEM Analysis Client/Server
• Fixed bug: server stop when analysis of a *.lir file is started from a read-only network folder.
• Double-click the icon in the notification area (at the far right of the taskbar) to close the LIRA-FEM Analysis Client/Server window if it is open.
• New state for the problem: 'Waiting for ZIP file with input data'. Earlier, if analysis of a problem in this state was started, the problem queue was blocked.
• It is possible to terminate an analysis that is marked as started but for some reason is not actually running. It is also possible to delete from the queue a problem in the state 'Creating ZIP archive with analysis results on the server computer'. It is not allowed to delete a problem while it is being analysed. Earlier, when the 'Remove' command was activated for a problem under analysis, the problem queue was blocked.
• The 'record operations' tool is removed.
• Corrected function of the 'More...' button when the data is reorganized.
Aug 26, 2021
Standard tables, measurement units, physical nonlinearity, Report Book, recalculation of subgrade moduli, Analysis Client/Server, list of problems, hinge support with eccentricity, editable analytics, option to save file with nonlinearity, 14th piecewise linear function.
• Enhanced option to transfer to the VISOR-SAPR module the hinge supports of floor slabs with eccentricity.
• Restored work in the 'Editable analytics' mode; it is no longer necessary to close and reopen the file.
• Enhanced option to save the *.spf file if nonlinearity is defined for the file.
• Enhanced option to transfer to the VISOR-SAPR module the data for the 14th piecewise linear function.
• Corrected option to select the current layer in the 'Layers' dialog box.
• Corrected numbers for the sheets of drawings during export to DXF.
• Enhanced calculation of the reserve factor for reinforcement in plate elements during analysis of progressive collapse.
• Clarified analysis of reinforcement in plate elements by SP RK EN 1992-1-1:2004/2011 in case it is necessary to increase reinforcement to provide the required crack resistance.
• Blocked option to interact with the user interface while standard tables with output data are generated. Enhanced visual presentation of the progress bar.
• Enhanced conversion of measurement units when you define the data for an equivalent pile foundation in the SOIL system.
• New option to transfer loads to physically nonlinear finite elements from the LIRA-CAD module.
• Restored option to save the current names of elements in the Report Book when they are updated.
• When models are transferred from the SAPFIR module, loading histories by default have the DCF group A1, and the 'Displacements and forces after every step' option becomes active.
• It is no longer necessary to connect with the results of steel analysis every time you open the problem if design options (except the last one) were deleted from the problem.
• New option to transfer stiffness FE-310 from the SAPFIR module to the VISOR-SAPR module.
Analysis Client/Server LIRA-FEM
• When a problem is sent to analysis, it is possible to skip recalculation of subgrade moduli C1/C2 or pile stiffnesses by the soil model.
• New option to sort the list of problems in the queue by any column. New option to modify the window size.
• For problems with status 'Analysis failed', new option to display the protocol of the FE analysis if such analysis was carried out.
Repositioning and Optimal Re-Allocation of Empty Containers: A Review of Methods, Models, and Applications
Arab Academy for Science, Technology and Maritime Transport, College of International Transport and Logistics, Alexandria 1029, Egypt
Faculty of Logistics, University of Maribor, 3000 Celje, Slovenia
Author to whom correspondence should be addressed.
Submission received: 20 April 2022 / Revised: 24 May 2022 / Accepted: 26 May 2022 / Published: 29 May 2022
Managing empty-container movements is one of the most challenging logistics problems in the shipping field. With the growth of the global trade imbalance, repositioning has become necessary immediately after a container is emptied. The main contribution of this research paper is to bring together the methods, models, and applications most frequently used in the literature for alleviating the empty-container-repositioning problem. The article presents practices that range from organizational policies to technical solutions and modelling applications. Optimization models are reviewed and compared on the basis of specified criteria, such as the time frame, inputs, outputs, scale of the project, and value. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach was applied through the online database Web of Science (WOS), giving a comprehensive description of all the relevant published documents. On the basis of this systematic review, future research opportunities are identified, considering emerging phenomena in container transport chains.
1. Introduction
The shipping industry is considered the primary underpinning of the international economy. It contributes significantly to global trade, as it is the most efficient, safe, and environmentally friendly mode of transport for moving mass goods worldwide [ ]. Consequently, more than 90% of world trade is carried by sea.
In the middle of the twentieth century, containerization was a major technological development in the shipping business. It has played an essential role in dramatically reducing transport costs, which were very high before containerization [ ]. Song and Dong [ ] classified the container transportation chain into two categories: the supply chain of full containers and the supply chain of empty containers. The authors clarified that the two supply chains are correlated, as their operations belong to a unified transportation network sharing the same resources. They described the container transport chain as follows: it starts when the shipping company takes empty containers from its depot to be loaded by the consignor. Once loaded, the containers are carried, by rail transport, road transport, or a combination of both, to a vessel heading to the consignee's destination. The laden containers are unloaded at the consignee's store and emptied, to be ready for loading, picked up to be returned to empty depots, or returned to shortage ports for future demand [ ].
The existence of empty containers in specific ports, terminals, or depots increases the operational cost. Additionally, it increases the traffic volume, presenting environmental and sustainability problems. Consequently, decreasing the movement of empty containers has not only an economic impact but also an environmental effect: the less empty-container movement there is, the lower the fuel consumption, reducing carbon dioxide emissions and congestion. To control the problem accurately, it is necessary to refer to the causes of the emergence of empty-container problems. The most significant reason is the global trade imbalance worldwide: a region characterised by higher imports than exports will face a considerable accumulation of empty containers, while a region where exports exceed imports will suffer from a shortage of empty containers.
Even in the most developed countries, where imports and exports are almost balanced, empty containers accumulate because of the imbalance in container types, especially reefer containers and special equipment. Table 1 shows the physical flows of containers on the major routes between 2019 and 2021 [ ]. According to UNCTAD 2021, the number of containers moved from Asia to the United States in 2021 was 24.1 million TEUs, while from the United States to Asia it was 7.1 million TEUs, meaning that the equivalent of 17 million TEUs had to be repositioned across the Pacific. Similarly, on the Asia-Europe trade route, which faces an imbalance on the return leg, more than half of the containership slots leaving Europe are for empties. So it is not surprising that shipping lines try to be reactive, by performing a repositioning strategy, to meet customer needs and manage container utilization, where every profitable movement of a loaded container generates a non-profitable empty movement [ ].
The main difficulty in applying a repositioning system is how to move empty containers to the proper area efficiently and cost-effectively, while taking various factors into account. The first factor is the number of empties that should be moved from one area to another. The second is the route that should be used to transport the empty containers. The third is selecting the exact time when empty containers become available. Finally, there is the load priority for loading empty containers within the vessel capacity. In addition to these factors, the selection of the repositioning scale is another challenge for shipping companies; Rodrigue et al. [ ] categorised it as local, regional, or global repositioning. The local scale occurs when empties move between inland terminals or empty depots and their surrounding areas; it lasts for a short time, with limited use of storage facilities.
The regional scale covers the repositioning of empties among importers, exporters, shippers, consignees, empty depots, and ports across different countries belonging to one geographic area. Global repositioning is connected with the overseas trade imbalance and aims to reduce the surplus of empty containers stocked in ports or depots globally. The authors highlighted that the hierarchy should start at the local level before moving to the regional and overseas scales [ ]. The repositioning system is therefore one of the longstanding and ongoing global problems in the transport sector. Rodrigue et al. [ ] state that 56% of a container's life is wasted, stacked at depots, either waiting for a future demand or waiting to be placed in shortage areas. Furthermore, the empty-container problem is one of the most significant research areas in the maritime industry, as these movements can generate huge expenditures. According to Epstein et al. [ ], managing empty-container movements needs the same expenditure, effort, and time as managing loaded containers. Improper management of this problem may make empty-container costs one of the highest operating costs, after fuel costs. In contrast, the implementation of successful strategies leads to a significant change in the possibilities of transporting goods on major routes worldwide [ ].
Therefore, this paper sheds light on the existing literature about managing empty containers, with the main emphasis on complementing the explanations given in published review papers, to find appropriate approaches that shipping companies can enhance and apply to overcome their challenges. The rest of the paper is organised as follows: Section 2 describes the methodology used in this work. Section 3 reviews the approaches to managing the movements of empty containers. In Section 4, the classification of optimization methods based on different scopes of transport networks is identified.
It includes a detailed explanation of the most cited studies in the previous literature. Additionally, it clarifies the role of metaheuristics in solving the problem of empty-container repositioning. Section 5 summarises the discussion of the systematic review and future research. Finally, conclusions are drawn in Section 6.
2. Research Methodology
To provide a reproducible, transparent, and scientific literature review of empty-container repositioning, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach, suggested by Moher et al. [ ], is performed. It comprises a systematic set of steps to find, screen, and include studies for examination, for transparency and replicability purposes. The search for relevant publications about empty-container repositioning was conducted through the online database Web of Science (WOS), the world's most trusted citation database. The following terms were used to identify the maximum number of articles published in this field: empty container* repositioning AND (transport* OR port* OR maritime OR vessel* OR cargo OR mathematical modelling). The authors included the main terms 'empty container*' (capturing 'container' as well as 'containers' by using the asterisk) and added the terms in brackets to narrow the search down to the most relevant papers. The systematic literature search was carried out during May 2021, and the results were subsequently updated during December 2021. The procedure taken to generate a database of all the relevant published documents is visualised in the flowchart in Figure 1. As a result of the initial search through the database, 186 articles and abstracts were nominated. The authors narrowed the research area to find the most relevant papers by screening the title and abstract as the first step, leaving 174 articles.
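One plausible rendering of the search string described above in Web of Science advanced-search syntax is shown below; the TS= (topic) field tag and the exact quoting are our assumptions, since the paper reports only the bare terms:

```
TS=("empty container* repositioning" AND
    (transport* OR port* OR maritime OR vessel* OR cargo OR "mathematical modelling"))
```

In WOS syntax, TS= searches titles, abstracts, and keywords, and the asterisk acts as the wildcard the authors describe.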
The selected studies were overviewed quickly to check for irrelevant papers and studies with specific keywords like "empty container reuse", "pricing of empty containers", or "emission measurement of empty containers". After a strict scan of the selected papers' introduction, methodology, and conclusion, 124 articles remained and were assessed for eligibility following this second filtering level. Finally, after a thorough reading of the entire document for the studies remaining at this level, 6 articles were excluded due to their lack of relevance to the research topic, leaving a total of 118 articles. The criteria used for inclusion and exclusion are explicitly stated in Table 2. Thus, the authors evaluated the eligibility of the literature independently, according to the predefined exclusion and inclusion criteria. The restrictions on document type, language, and subject area were applied before importing the literature into the bibliographic manager. The abstracts and introductions of all studies were evaluated; if a study met any exclusion reason, it was excluded immediately. Besides, if the abstract of a specific paper was not accessible, the entire paper was reviewed. Afterwards, full-text evaluation took place, and some articles were excluded for the stated exclusion reasons. The authors managed all inconsistencies regarding the relevance of the reviewed papers through discussion and consensus. Overall, focusing on the environmental aspects of empty-container movements was one of the main reasons for excluding several studies.
3. Empty-Container Management
Based on the literature overview, all the widely used methods, models, and applications for managing empty-container traffic are presented in Figure 2. The approaches can be divided into three main perspectives.
The first perspective focuses on the organizational solutions performed by shipping companies to reduce the movements of empties, such as container leasing, container substitution, and carriers' collaboration. The second explains how technological innovations and new container designs, such as the foldable concept, can help with the empty-container problem. The modelling technique is the third perspective, which covers the different methods for managing the repositioning of empty containers.
3.1. Organizational Logistics Perspective
Shipping line companies often take the whole responsibility for handling the problem of empty-container movements, as they hold the larger share of container ownership [ ]. Consequently, they have two internal possibilities for solving the problem:
3.1.1. Internal Solutions
• Container leasing has received more attention in the past few years as an approach for managing empty-container traffic. According to Theofanis and Boile [ ], leasing arrangements come in three major types: master lease, long-term lease, and short-term lease. The master lease is the type most related to the repositioning issue, by hiring containers at places with a shortage and off-hiring containers at surplus points. In contrast, long- and short-term leases aim to invest their equipment without any management services provided to the lessee. However, the opportunity for shipping lines to save costs by leasing containers remains linked to the terms and conditions of the leasing contracts [ ].
• Container substitution is the second internal approach to dealing with container fleet imbalance [ ]. Because containers come in different types and sizes, the demand for a particular container can be fulfilled by supplying another one [ ]. Regarding size substitution, the demand for two 20 ft empty containers may be met by supplying a 40 ft empty container [ ].
Additionally, shipping lines can apply type substitution, meeting the demand for dry containers by providing a reefer container without operating the refrigeration unit. Braekers et al. [ ] explained that this strategy is challenging and cannot be a common practice, especially if the customer demand is subject to certain rules and conditions.
3.1.2. External Solutions
External solutions depend not on one shipping company implementing a solution internally, but on cooperation among all stakeholders:
• Intra-channel solutions focus on vertical coordination among the different players in the container-transport chain. There are two proposed strategies for allocating empty containers: depot-direct and street-turn [ ]. The idea of depot-direct is to establish a neutral supply point where empty containers are stored instead of being moved back to the port. The exporter can thus get the empty container faster, and the travel time and repositioning cost decrease [ ]. Street-turn means that shipping companies can use imported containers directly for exporting purposes at the consignee's location [ ]. Although a street-turn strategy can reduce total cost and congestion, it requires changes to some contract regulations with customers to deal with the reuse, tracking, and tracing of the empties' interchange [ ].
• Inter-channel solutions depend on horizontal cooperation [ ]. Shipping lines can cooperate in several formats, such as slot exchange, alliances, and resource pooling, while competing in providing shipping services [ ]. Pool-sharing of containers is one of the critical strategies discussed by Theofanis and Boile [ ]. They refer to the box-pool attempt called Grey-Boxes, also known as free-label containers, which aims to reduce shareholders' expenses by cooperating in providing empty containers without regard to possession. Vojdani et al.
[ ] confirmed that such a strategy could decrease empty-container movements and storage operations and, subsequently, the total costs. This strategy did not receive the expected commercial acceptance, due to competitiveness and confidentiality concerns.
3.2. Technological Innovation
Over the last decades, technological innovations have proved that they can provide valuable solutions to overcome the problem of empty containers. Moon et al. [ ] presented different examples of foldable containers, which can reduce the storage space of containers and thereby reduce the repositioning cost. The Six-In-One (SIO) container is a foldable design: a fully demountable 20 ft container that can be folded, stacked, and interlocked within the exact dimensions of a standard container. The Tworty container is another design, consisting of two 20 ft containers joined to form a 40 ft container with the same size, capacity, and dimensions. The Connectainer follows the same idea of transforming a 40 ft container into two 20 ft containers and vice versa, within 30 min [ ]. Attempts to create a foldable container did not stop there, and many companies have competed with new designs [ ], such as the Cargoshell, the Staxxon container, and the 4 FOLD container. Moon et al. [ ] stressed that utilizing such new technologies, which require different handling techniques, is an arduous task. It can be considered a time-consuming and expensive process in the shipping industry, especially since the purchasing price of these new containers is high: about 3.5 times that of a standard container. Hence, using foldable containers would become profitable when the price of a foldable container falls to half the price of a standard container.
3.3. Modelling Approaches
The transportation field has long recognised the power of operations research to manage real-life problems.
Hence, a considerable number of influential scientific publications have focused on modelling the problem of empty containers, whether by optimization, simulation, heuristics, or a hybrid of them [ ]. The most important contributions used to solve the empty-container problem are presented in Table 3.
4. Review on the Optimization of Empty-Container-Repositioning Techniques
Most models for solving the empty-container-repositioning problem focus primarily on optimization, performing mixed-integer and continuous programming with either deterministic or stochastic models. For better understanding, optimization methods are presented in the next section based on the categories referred to by Kuzmicz et al. [ ]. The authors classified repositioning problems according to the scope of the container transport networks: network flow models, network design models, and models under other decision-making constraints. The network flow scope is presented in the first subsection, as it is discussed extensively in the previous literature. The following part explains the problem based on container-network design, followed by resource-constraint models.
4.1. Repositioning by Network Flow Model
The mainstream approach in network flow models is to generate a set of arc-based matrices. Each element in the matrix has a numerical value representing the number of empty containers that need to be shifted from one node to another along an arc of the shipping network. Many authors have addressed the problem with deterministic and stochastic optimization models. In the first group, deterministic models depend on forecast data, used in a rolling-horizon fashion. Early literature on the deterministic approach was presented by Florez [ ]; the author established a dynamic transhipment network model in line with the leasing strategy. Choong et al. [ ] addressed the problem in an intermodal transportation network, explaining how the duration of the planning horizon can affect repositioning decisions.
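The arc-based matrix described at the start of this subsection can be sketched as a simple data structure; the port names and TEU volumes below are invented for illustration and are not taken from any cited study:

```python
# Arc-based repositioning matrix: flows[i][j] holds the number of empty
# containers (TEU) to move from ports[i] to ports[j].
# Port names and volumes are illustrative assumptions only.
ports = ["Shanghai", "Rotterdam", "Los Angeles"]
flows = [
    [0,   0, 1200],   # Shanghai sends 1200 TEU of empties to Los Angeles
    [800, 0,    0],   # Rotterdam sends 800 TEU of empties back to Shanghai
    [0,   0,    0],   # Los Angeles (a deficit port) sends nothing
]

# Aggregate statistics that a network flow model would constrain or optimize
total_moved = sum(sum(row) for row in flows)
outbound = {p: sum(row) for p, row in zip(ports, flows)}
print(total_moved, outbound["Shanghai"])  # 2000 1200
```

Each nonzero entry corresponds to one arc of the shipping network; a full model would index such a matrix additionally by time period, vessel, and container type.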
Erera et al. [ ] developed a dynamic multi-commodity network flow model, considering container booking and routing decisions. In the same year, Olivo et al. [ ] proposed an integer programming model with multiple transport modes in ports and depots (see, also, [ ]). Di Francesco et al. [ ] proposed a multi-scenario, multi-commodity, time-extended optimization model. The authors first introduced a deterministic model based on the available information on demand and supply values. To clarify their models, they described the system dynamics through a time-space network over five periods, consisting of five ports, two vessels, and a single container type. The deterministic formulation includes inventory decision variables, which refer to the number of empty containers to be stored in a port at a specific period. Additionally, the transportation variables indicate the number of empty containers of a given type to be moved by vessel from port to port at a particular period. Another primary variable of their model is the number of empty containers available in a port at a specific period to be loaded onto or unloaded from a vessel. They consider all related costs, with vessel arrival times taken from the schedule. The main constraints can be summarised as follows: the numbers of empty containers loaded, unloaded, repositioned, and stored in daily operations are capped; the empty containers available inside a port in a given period should be loaded on the following arriving vessels, based on port restrictions; the demand for empty containers should be satisfied from past stocks and from unloaded containers; the volume of empty containers stored in a port at a specific period should not exceed the storage capacity; and the repositioning of empty containers between ports cannot exceed the space available on vessels.
Hence, the objective function of the deterministic model can be written as follows: $\min \sum_{t \in T} \sum_{p \in P} \Big[ \sum_{v \in V(i,t)} \big( c^{m}_{p,t}(i,j)\, x^{m}_{p,t}(i,j) + \sum_{i \in \bar{H}} c^{u}_{p,t}(v,i)\, x^{u}_{p,t}(v,i) + \sum_{i_d \in H} c^{u}_{p,t}(v,i_d)\, x^{u}_{p,t}(v,i_d) \big) + \sum_{i \in \bar{H}} \big( c^{h}_{p,t}(i)\, x^{h}_{p,t}(i) + \sum_{v \in V(i,t+1)} c^{l}_{p,t}(i,v)\, x^{l}_{p,t}(i,v) \big) + \sum_{i_d \in H} c^{h}_{p,t}(i_d)\, x^{h}_{p,t}(i_d) + \sum_{i_s \in H} \sum_{v=1} c^{l}_{p,t}(i_s,v)\, x^{l}_{p,t}(i_s,v) \Big]$ The deterministic formulation can give an adequate policy if the data are accurate and future events are not considered. Therefore, the authors extended their model to a formulation with uncertain parameters. All model parameters are assumed to be known for a given period, while uncertain parameters can appear from the first period onward. Based on expert opinions, the authors introduced a compact form for the multi-scenario formulation. To demonstrate the value of the model, the authors simulated the system behaviour over several periods and compared the multi-scenario policies with the deterministic ones. The simulation results show that the multi-scenario strategy yields higher operating costs than the deterministic model in each case, while providing the advantage of fulfilling unexpected demands in future periods promptly. Additionally, multi-scenario policies have the option of allocating more empty containers to areas with higher demand. Purely deterministic formulations may be quite insufficient, as the operational decision-making environment in this field changes significantly. Furthermore, the second group is subject to stochastic factors, as the supply/demand volume of empty containers cannot be forecast accurately [ ]. The stochastic optimization models describe demand and supply as uncertain parameters. Crainic et al. [ ] were the pioneering researchers, presenting a stochastic model for the land-distribution system.
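Before turning to the stochastic models, the deterministic-versus-multi-scenario trade-off discussed above, higher per-period cost in exchange for fewer unmet demands, can be illustrated with toy arithmetic. All costs, probabilities, and volumes below are hypothetical:

```python
# Illustrative arithmetic only (all numbers hypothetical): compare a plan
# sized to the deterministic forecast against one that hedges across
# demand scenarios, in the spirit of the multi-scenario extension.
REPOS_COST = 10.0     # cost per repositioned empty container
SHORTAGE_COST = 50.0  # penalty per unmet demand (e.g. spot leasing)

scenarios = [(0.5, 100), (0.3, 140), (0.2, 180)]  # (probability, demand)

def expected_cost(repositioned):
    """Repositioning cost plus probability-weighted shortage penalty."""
    cost = REPOS_COST * repositioned
    cost += sum(p * SHORTAGE_COST * max(0, d - repositioned)
                for p, d in scenarios)
    return cost

deterministic_plan = 100  # sized to the most likely demand
hedged_plan = 140         # extra empties kept as a buffer

# The hedged plan pays more to reposition but avoids most shortages,
# so its expected cost is lower under uncertain demand.
```

Note the deterministic plan is cheaper only if the forecast is exact; once the shortage penalty exceeds the repositioning cost, hedging across scenarios pays off in expectation, which is the behaviour the multi-scenario policies above exhibit.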
They presented a time-space network flow model based on a single commodity. The main decision variables of this model describe the following: the number of containers moved from an origin depot to a destination depot; the number of empty containers available in a depot at the end of the horizon; and the total number of empty containers allocated in a specific period to be moved from a current depot or customer to a new customer or depot. Later on, their work was extended to include a multi-commodity transportation network and a substitution strategy. The authors added stock variables and substitution variables to improve the demand response, and suggested decomposing the resulting multi-layer, multi-commodity, linear, minimum-cost network flow problems. They were followed by many authors, among others: [ ]. Lam et al. [ ] presented a stochastic dynamic programming model; an approximate approach was applied to solve it, taking temporal-difference learning into account. Chou et al. [ ] discussed the allocation problem of empty containers on a single service route by proposing a Fuzzy Logic Optimization Algorithm (FLOA). In a first stage, the authors used fuzzy backorder-quantity inventory logic to define the number of empties at ports, considering the stochasticity of imports and exports. The second stage is a network flow model, a mathematical optimization program that determines the empty containers to be allocated between ports based on the results of stage one. A real-world case study of a trans-Pacific liner route was presented. Long et al. [ ] established a two-stage stochastic programming model; stage one depends on deterministic parameters to specify the operating plan for repositioning empty containers, while stage two adjusts the decisions derived from stage one using the probability distributions of the random variables. Epstein et al.
[ ] built an optimization system by proposing a two-stage solution approach: a multi-commodity, multi-period flow model to solve the imbalance issue and an inventory model to determine the safety stock for each node. The network of this model includes only the flow of empty containers, without considering the laden-container flow. The decision variables include the number of containers of each specific type moved between locations by a particular vessel, the number of containers loaded/unloaded at an area either to or from the vessel at a specific time, and the number of required containers at a particular location. The authors solved their model using GAMS and CPLEX. A comprehensive study by Song and Dong [ ] was selected for deeper explanation. They proposed three network flow models. The first model is based on the study of Brouer et al. [ ]; it is a linear integer program that addresses the problem through a multi-commodity, time-space network flow model. The network of their model includes the flow of laden containers, i.e., $y_{ij}^k$, and empty containers, i.e., $x_{ij}$; customer demand is deterministic, while its values can differ over the planning horizon. The objective function of this model aims to minimise the total transportation cost of loaded and empty containers together with the lost-sale penalty cost. It can be written as: $\min_{y_{ij}^k,\, x_{ij}} \Big\{ \sum_{k \in K} \sum_{(i,j) \in A} C_{ij}^k y_{ij}^k + \sum_{(i,j) \in A} C_{ij}^e x_{ij} + \sum_{k \in K} C_p^k \Big[ d^k - \sum_{j \in N,\, i = O_k} (y_{ij}^k - y_{ji}^k) \Big] \Big\}$ Constraint (3) states that the demand for each commodity cannot exceed the volume $d^k$, and constraint (4) requires the same quantity of each commodity to move from its origin node $O_k$ to its destination $D_k$. Constraint (5) enforces flow conservation at any node that is neither $O_k$ nor $D_k$. Constraint (6) enforces the joint flow balance of empty and laden container movements at any node.
Additionally, constraint (7) ensures that the total number of empty and laden containers does not exceed the shipping capacity, and non-negativity is expressed in constraint (8): $\sum_{j \in N} y_{ij}^k - \sum_{j \in N} y_{ji}^k \le d^k, \quad \text{for } i = O_k,\ k \in K;$ $\sum_{j \in N} y_{ij}^k - \sum_{j \in N} y_{ji}^k = \sum_{j \in N} y_{jm}^k - \sum_{j \in N} y_{mj}^k, \quad \text{for } i = O_k,\ m = D_k,\ k \in K;$ $\sum_{j \in N} y_{jm}^k = \sum_{j \in N} y_{mj}^k, \quad \text{for } m \in N,\ m \ne O_k,\ m \ne D_k,\ k \in K;$ $\sum_{k \in K} \sum_{j \in N} y_{ij}^k + \sum_{j \in N} x_{ij} = \sum_{k \in K} \sum_{j \in N} y_{ji}^k + \sum_{j \in N} x_{ji}, \quad \text{for } i \in N;$ $x_{ij} + \sum_{k \in K} y_{ij}^k \le u(i,j), \quad \text{for } (i,j) \in A;$ $y_{ij}^k \ge 0,\ x_{ij} \ge 0, \quad \text{for } k \in K,\ (i,j) \in A.$ The model formulation is straightforward and fairly realistic, as laden-container movements are considered, and it can handle demand changes over periods since a planning horizon is represented. The authors used CPLEX to solve the model. Despite its advantages, the model has some problems. The objective function does not include the transhipment-associated costs, and the model cannot identify the actual path of a commodity moved through the shipping network, where the number of entities is too large in realistic scenarios. These drawbacks may result in uneconomical solutions and make the model computationally intractable. To overcome the difficulties of the first model, Song and Dong [ ] introduced their second model, an origin-link-based linear programming network flow model. The concept of this model has been implemented in other research on shipping network design and ship deployment problems [ ]. It considers the transhipment costs associated with both empty and laden containers; thus, the objective function becomes more comprehensive. It aims to minimise total costs, including loading, unloading, cargo transportation, lost-sale penalty, and transhipment costs for empty and laden containers.
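Before moving on, a candidate solution to the first arc-flow model can be checked against its core constraints: joint flow balance of laden and empty containers at every node (6), vessel capacity per arc (7), and non-negativity (8). The sketch below is illustrative only, with a single commodity and hypothetical data:

```python
# A sketch (hypothetical two-node network) of checking a candidate
# solution against the core constraints of the arc-flow model:
# non-negativity (8), arc capacity (7), and joint node balance (6).
nodes = ["A", "B"]
arcs = {("A", "B"): 100, ("B", "A"): 100}  # u(i, j): vessel slot capacity

laden = {("A", "B"): 60, ("B", "A"): 20}   # y_ij (one commodity for brevity)
empty = {("A", "B"): 0,  ("B", "A"): 40}   # x_ij

def feasible(laden, empty, arcs, nodes):
    # non-negativity (8)
    if any(v < 0 for v in list(laden.values()) + list(empty.values())):
        return False
    # capacity (7): laden plus empty flow bounded by slots on each arc
    for arc, cap in arcs.items():
        if laden.get(arc, 0) + empty.get(arc, 0) > cap:
            return False
    # balance (6): total containers entering a node equal those leaving
    for n in nodes:
        inflow = sum(laden.get((j, n), 0) + empty.get((j, n), 0)
                     for j in nodes)
        outflow = sum(laden.get((n, j), 0) + empty.get((n, j), 0)
                      for j in nodes)
        if inflow != outflow:
            return False
    return True
```

In the example, the 40 empties moving B to A exactly offset the laden imbalance (60 out versus 20 in at A), which is precisely what balance constraint (6) forces a solver to arrange.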
The decision variables include the laden-container flows $y_{o,r_i}^l$, $y_{o,r_i}^u$, $y_{o,r_i}^f$, and $y_{od}$: the number of laden containers loaded at the port of call, the number unloaded at the port of call, the number carried on board from the port of call, and the fulfilled demands from origin to destination port, respectively. Similarly, $x_p$, $x_{o,r_i}^l$, $x_{o,r_i}^u$, and $x_{o,r_i}^f$ denote the decision variables of the empty-container flows. The authors also introduced the intermediate transhipment variables $y_p^l$, $y_p^u$, $y_p^t$ for laden-container flows and $x_p^l$, $x_p^u$, $x_p^t$ for empty-container flows. The linear programming model, minimising over all the decision variables above, is given by: $\min \Big\{ \sum_{p \in P} \big[ C_p^l (y_p^l + x_p^l) + C_p^u (y_p^u + x_p^u) + C_p^{t,l} y_p^t + C_p^{t,e} x_p^t \big] + \sum_{r \in R} \sum_{i \in I_r} \big( C_{r_i}^l \sum_{o \in P} y_{o,r_i}^f + C_{r_i}^e \sum_{o \in P} x_{o,r_i}^f \big) + \sum_{o \in P} \sum_{d \in P} C_{od}^p (D_{od} - y_{od}) \Big\}$ The main constraints can be summarised as follows: the fulfilled demands from a port cannot exceed the customer demands; laden containers may not be unloaded at their origin ports; and empty containers will not be unloaded at the port they originate from. Further constraints enforce the flow balance of empty and laden containers, and the vessel capacity is considered for each leg on all routes. Song and Dong [ ] noticed some shortcomings of the above model concerning operational information, associated with the demurrage costs for transhipped laden and empty containers. The vessel capacity constraints cannot fit operational planning, due to its dynamic nature, and the model also assumes constant weekly demands for individual port pairs.
Consequently, a two-stage path-based network-flow model combined with a heuristic algorithm was formulated. It aims to manage the movement of empty and laden containers at the operational level, while being a practical solution for large-scale problems. The first stage introduces a path-based network flow model, a static, lower-dimension integer programming model, to find the assignment plan for laden and empty containers. The chosen paths should have the least cost with regard to container transportation, customer demand backlog, lifting on/off, and transhipment demurrage costs. The main constraints of this model are that the container flow into each node equals the flow out of it; all vessels serving the same route are of similar size; the total number of laden and empty containers loaded shall not exceed the vessel capacity; and unmet demands in a specific week are backlogged and accounted for through a backlog cost. In the second stage, the authors used a set of dynamic decision-making rules, introducing a dynamic system that determines the container flows over the different periods of the planning horizon, based on the weekly plan from stage one. The dynamic model's variables include the demand variables at origin ports, the transhipment-at-port variables, the laden-container shipments on vessels, the inventory variables at ports, and the empty-container-on-vessel variables. Subsequently, Song and Dong [ ] implemented a heuristic algorithm based on their previous publication [ ], which efficiently determined all the problem variables to manage stochastic demand and dynamically adjust the repositioning process.

4.2. Repositioning by Network Design

Designing a shipping network is a family of challenging problems, as it consists of various routes for a designated fleet to transport multiple commodities among different ports.
Subsequently, researchers studied the design of the liner shipping network problem with the issue of empty-container repositioning in mind. To our knowledge, network design with empty-container movement was discussed for the first time by Shintani et al. [ ]. They introduced a simplified version of the network design problem for container shipping, considering the repositioning of empty containers, and divided it into two parts. The lower problem was formulated as a Knapsack problem to determine the optimal calling sequence for a specific group of calling ports. The upper problem was solved with a genetic algorithm, employing the network-flow approach. Meng and Wang [ ] designed a shipping-route network for the empty-container problem by presenting a rich mixed-integer programming model, examined on a realistic case study of Asia–Europe–Oceania shipping. The authors defined the leg flow for empty containers and the segment-based path flow for laden containers. The network design of this model attempts to identify which shipping lines should be selected and, subsequently, the ship-deployment plan for each chosen shipping line, the number of containers to be loaded on each deployed ship over a segment, and how empty containers should be repositioned. All tested instances were solved using CPLEX. The decision variables include the ship-deployment plans, the weekly volume of laden/empty containers transported from port to port, and the weekly volume of laden/empty containers loaded/discharged at each port. The objective function aims to minimise the total operating cost while satisfying the demand and repositioning the empty containers.
It can be expressed as follows: $\min F(u) = \sum_{r \in R} \sum_{v \in V_r} \sum_{m \in M_{rv}} \Big\{ C_{rvm}^{fix} \times n_{rvm} + \sum_{(k,l) \in S_r} \big[ y_{rvm}^{kl} \times (c_v^k + c_v^l) \big] + \sum_{i=1}^{N_r} \big[ (\hat{z}_{rvm}^{p_{ri}} + \tilde{z}_{rvm}^{p_{ri}}) \times c_v^{p_{ri}} \big] \Big\}$ where $u$ denotes the vector of all the decision variables, namely $u = \big( n_{rvm},\ x_h^{pq},\ y_{rvm}^{kl},\ f_{rvm}^{p_{ri} p_{r(i+1)}},\ \hat{z}_{rvm}^{p_{ri}},\ \tilde{z}_{rvm}^{p_{ri}} \big)$. This model has several constraints: the TEU demand should equal the total number of TEUs transported on all its segment-based paths; the total number of laden containers transported by all ship-deployment plans on a segment should equal the total flow of loaded containers on the segment-based paths; the selected shipping lines should satisfy all repositioning tasks for empty containers; and any ship-deployment plan must respect the maximum berth occupancy time. The loading and discharging quantities of empty containers determine the berth occupancy time, and the loading and discharging times for transporting one laden container by a ship-deployment plan on a segment of the shipping line are considered. Any port can be an origin, a destination, or both, and should be visited by at least one ship-deployment plan. Additionally, deficit and balance ports are not allowed to send out empty containers; likewise, surplus and balance ports should not receive any empty containers. The authors assessed the model's efficiency using 46 realistic ports. They generated 24 instances defined by three dimensions: the set of candidate shipping lines, the set of ship types, and the set of laden-container shipment demands for origin-destination pairs. By comparing network designs with and without empty containers, the computational results show that designing a network considering empty containers is recommended in all test instances, as it can yield significant cost savings [ ]. Zheng et al.
[ ] emphasised that exchanging empty containers among liner carriers reduces the movements of empty containers and, therefore, the repositioning costs. They presented a two-stage optimization method to evaluate the perceived values of empty containers at various ports. A vast network design model was tested computationally on 46 ports of the Asia–Europe–Oceania shipping service network. In stage one, the authors focused on the empty-container allocation problem by introducing a centralised optimization solution for all related liner carriers. They determined the weekly number of empty containers delivered from a surplus port of liner carrier $k \in L$ to a deficit port of liner carrier $m \in L$. The mathematical programming model can be summarised as follows: $\min \sum_{k \in L} \sum_{m \in L} \sum_{i \in S_k} \sum_{j \in W_m} (\lambda_{ij} \times x_{ij}^{km})$ Subject to: $\sum_{m \in L} \sum_{j \in W_m} x_{ij}^{km} = n_i^k, \quad \forall i \in S_k,\ \forall k \in L;$ $\sum_{k \in L} \sum_{i \in S_k} x_{ij}^{km} = -n_j^m, \quad \forall j \in W_m,\ \forall m \in L;$ $x_{ij}^{km} \ge 0, \quad \forall i \in S_k,\ \forall j \in W_m,\ \forall k, m \in L.$ Constraints (13) and (14) ensure the balance of empty containers after allocation, and constraint (15) enforces non-negativity. In stage two, the authors aim to find the exchange costs for empty containers that are paid to liner carriers. They let $\{cost_j\}$ be the exchange costs of empty containers at the deficit ports, following the optimal solution $\bar{x}$ of the model.
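Stage one is a balanced allocation problem of the form (13)-(15): surpluses $n_i^k$ must be matched to deficits $-n_j^m$ at minimum arc cost $\lambda_{ij}$. As an illustration only (not the authors' exact solution method), a greedy heuristic that repeatedly uses the cheapest remaining arc produces a feasible allocation; supplies, deficits, and costs below are hypothetical:

```python
# A heuristic sketch of the balanced allocation (not the exact model):
# clear imbalances by repeatedly using the cheapest remaining arc.
# Supplies sum to demands; all volumes and costs are hypothetical.
def greedy_allocate(supply, demand, cost):
    supply, demand = dict(supply), dict(demand)  # work on copies
    flows = {}
    for (i, j) in sorted(cost, key=cost.get):    # cheapest lambda_ij first
        qty = min(supply[i], demand[j])
        if qty > 0:
            flows[(i, j)] = qty
            supply[i] -= qty
            demand[j] -= qty
    return flows

supply = {"S1": 100, "S2": 50}   # weekly surplus per surplus port
demand = {"D1": 80, "D2": 70}    # weekly deficit per deficit port
cost = {("S1", "D1"): 2, ("S1", "D2"): 5,
        ("S2", "D1"): 4, ("S2", "D2"): 3}
plan = greedy_allocate(supply, demand, cost)
```

With a complete arc set and balanced totals, the greedy pass clears every imbalance (here all 150 surplus containers are placed), though unlike the optimization model it does not guarantee the minimum total cost.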
Furthermore, the objective function of the linear programming model (denoted by $P_k$) for the liner carrier that aims to maximise its profit can be expressed as follows: $\max \sum_{m \in L} \sum_{i \in S_k} \sum_{j \in W_m} \big[ (cost_j - \lambda_{ij}) \times x_{ij}^{km} \big]$ Subject to: $\sum_{m \in L} \sum_{j \in W_m} x_{ij}^{km} \le n_i^k, \quad \forall i \in S_k;$ $\sum_{i \in S_k} x_{ij}^{kk} \le -n_j^k, \quad \forall j \in W_k;$ $x_{ij}^{km} \ge 0, \quad \forall i \in S_k,\ \forall j \in W_m,\ \forall m \in L.$ Constraints (17) and (18) ensure that it is not necessary to use all of a liner carrier's empty containers to fulfil its own deficit ports, since some empties can be sent to other shipping carriers. Empty containers will be provided only if it is profitable for the liner carrier; consequently, the strategy of some liner carriers is to keep some empty containers in their surplus depots. Non-negativity is enforced by constraint (19). The authors solved the mixed-integer non-linear model using CPLEX. Based on the solution obtained in stage one, they determined the perceived container leasing prices at the different ports. They then perturbed the parameters of the objective function to apply an inverse optimization approach, and an optimal feasible solution was obtained after this modification. Finally, Zheng et al. [ ] used the dual of the reduced model and inverse optimization to obtain the desired prices. In another attempt to minimise repositioning costs, Zheng et al. [ ] extended their work by including the option of using foldable containers as a substitute for standard containers, and proved that a foldable-substitution strategy can contribute to reducing costs; for more details about their extended work, see [ ]. The model of Huang et al. [ ] is similar to the previous one by Zheng et al. [ ].
The authors introduced variables for linear programming, including the number of ships of a specific type sailing on a particular line, the number of empty containers either loaded or unloaded at a port in a specific way, the number of empty containers moved on a leg, the number of laden containers carried on a segment, and the number of laden containers either loaded or unloaded at a port in a specific way. All of these variables are continuous except the first, the number of ships. The objective function of their model minimises the total transportation cost; flow balancing, capacity, and customer demand are the main constraints. Recently, Monemi and Gelareh [ ] also applied Benders decomposition to manage the combined problems of fleet deployment, empty-container repositioning, and network design. They verified their model on real data from a medium-sized shipping line with 830 ports. In 2019, Alfandari et al. [ ] discussed the problem of designing network routes for a barge container shipping company. This study aims to maximise the company's profit by identifying a sequence of calling ports and the size of the fleet between each pair of ports. Therefore, the authors proposed a mixed-integer programming model with two formulations: the first uses arc variables for modelling empty containers, while the second uses node variables for handling those empties. The model was solved using CPLEX, optimizing all instances with up to 25 ports in a few seconds.

4.3. Repositioning under Resource Constraints

The problem of empty-container repositioning is connected to various other issues in the shipping industry. This classification treats the repositioning of empty containers as a constraint or sub-problem within other decision-making problems.
Some authors correlated the container and ship-fleet problem with the empty-container-repositioning issue [ ], and another group deals with the repositioning problem within dynamic empty-container reuse [ ]. Combining purchasing policies, treated as a setup cost, with the repositioning problem was discussed by [ ]. An interesting approach studied the relation between price competition in the transport market and the repositioning of empty containers [ ]. Merging the concept of dry ports with the problem of empty-container movement was clarified by [ ], as busy seaports with significant transhipment volumes affect the repositioning process; for more on dry ports, see [ ]. Finally, many publications studied the empty-container-repositioning problem within shipping-service route design [ ]. Shintani et al. [ ] studied the effect of using foldable containers in the hinterland on the repositioning cost of empty containers, developing five integer programming models. They introduced minimum-cost multi-commodity network flow models for three different shipping service routes: Asia–Europe, Asia–North America, and Intra-Asia. To obtain the optimal solution, the authors applied the Gurobi solver, which delivers results in reasonable time. They showed that using standard and combinable containers can minimise the related costs, including movement, exploitation, handling, and leasing costs, especially in the case of a long return distance. Wang [ ] discussed the problem of fleet deployment and shipping network design by proposing a mixed-integer linear programming model, taking into account slot purchasing, integer numbers of containers, multiple container types, empty-container repositioning, and ship repositioning. The paper's objective is to determine which elements should be incorporated into tactical planning models and which should not.
The author investigated all these elements from theoretical and numerical viewpoints, emphasising that laden and empty containers should be included in all tactical planning models. For details, the reader is referred to [ ]. Contrary to most studies exploring the allocation of empty dry containers, Chao and Chen [ ] focused on repositioning empty reefer containers. Most allocation strategies cannot easily be adapted to reefer containers, as their demand must be satisfied precisely. Although reefer containers have a high purchasing cost, the authors showed that moving a reefer container can generate more profit than a dry container. They formulated a time-space model to manage the large-scale repositioning problem for reefer containers at a Taiwanese shipping company. Since costs are an essential parameter affecting repositioning decisions, the authors detailed the costs covered by the model, including terminal handling charges, carriage, and storage costs. They introduced a single-commodity conceptual time-space network model and then formulated the mathematical model. The main variables refer to the number of containers flowing from node $i$ to node $j$, $(x_{ij})$. The related parameters can be summarised as follows: the number of containers flowing in the network $q$, the estimated cost per unit of moving a container $c_{ij}$, the safety-stock level of containers in the port at the end of the planning horizon $I_{ij}$, the expected number of laden containers to be transported $d_{ij}$, and the number of available spaces, measured in TEUs, for moving empties from node $i$ to node $j$, $u_{ij}$.
The objective function, which aims to minimise the total flow costs over the network, can be formulated as follows: $\min \sum_{(i,j) \in A} c_{ij} x_{ij}$ Subject to: $x_{ij} = d_{ij} \quad \forall (i,j) \in A_L$ $x_{ij} \ge I_{ij} \quad \forall (i,j) \in A_T$ $x_{ij} \le u_{ij} \quad \forall (i,j) \in A_E$ $\sum_{(i,j) \in A} x_{ij} - \sum_{(j,i) \in A} x_{ji} = q, \quad i = s$ $\sum_{(i,j) \in A} x_{ij} - \sum_{(j,i) \in A} x_{ji} = -q, \quad i = t$ $\sum_{(i,j) \in A} x_{ij} - \sum_{(j,i) \in A} x_{ji} = 0, \quad i \in N - S - T$ $x_{ij} \ge 0 \quad \forall (i,j) \in A$ Regarding the above minimum-cost flow problem, constraint (21) guarantees that each voyage satisfies the demand for laden-container movements. Constraint (22) ensures that each port holds at least the safety-stock level at the end of the planning period. Constraint (23) bounds the space available for moving empty containers. Constraint (24) guarantees that the total flow departing from the starting node equals the total number of reefer containers flowing into the model; likewise, constraint (25) ensures that the total number of containers entering the terminal node equals the number of reefer containers exiting the model. Constraint (26) requires all incoming flows to equal all outgoing flows at any node other than the start and terminal nodes. Non-negativity is enforced by constraint (27). Chao and Chen [ ] obtained the optimal solution using CPLEX. Comparing the actual operating scenario at a global liner shipping company with the scenario produced by the proposed model shows that the model can increase the quantity of repositioned reefer containers at lower cost and in a shorter time. Wang et al. [ ] discussed ship-type decisions, taking the empty-container-repositioning problem into account. The authors aim to design an exact solution approach for determining the optimal ship type on a shipping route at the tactical level.
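The minimum-cost flow structure of (20)-(27), ship $q$ units from a source $s$ to a sink $t$ at minimum total cost subject to per-arc bounds, can be solved generically with a successive-shortest-path algorithm. The sketch below is a compact textbook implementation (not the authors' CPLEX model), run on a hypothetical three-node network:

```python
# Successive-shortest-path min-cost flow (Bellman-Ford based).
# edges: list of [u, v, capacity, unit_cost]; returns total cost of
# shipping flow_target units from s to t. Generic textbook algorithm.
def min_cost_flow(n, edges, s, t, flow_target):
    graph = [[] for _ in range(n)]               # residual graph
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total_cost = 0
    while flow_target > 0:
        dist = [float("inf")] * n
        dist[s] = 0
        parent = [None] * n                      # (node, edge index)
        for _ in range(n - 1):                   # Bellman-Ford relaxation
            updated = False
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for idx, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        parent[v] = (u, idx)
                        updated = True
            if not updated:
                break
        if dist[t] == float("inf"):
            raise ValueError("cannot ship the requested quantity")
        push = flow_target                       # bottleneck on the path
        v = t
        while v != s:
            u, idx = parent[v]
            push = min(push, graph[u][idx][1])
            v = u
        v = t                                    # augment along the path
        while v != s:
            u, idx = parent[v]
            graph[u][idx][1] -= push
            graph[v][graph[u][idx][3]][1] += push
            v = u
        total_cost += push * dist[t]
        flow_target -= push
    return total_cost

# hypothetical network: two cheap serial legs plus one expensive direct arc
cheap_first = min_cost_flow(
    3, [[0, 1, 10, 1], [1, 2, 10, 1], [0, 2, 5, 3]], 0, 2, 8)
```

For 8 units the solver uses only the cheap two-leg path; once that path saturates (e.g. shipping 12 units), the remainder spills onto the costlier direct arc, mirroring how empties overflow onto more expensive routes when vessel space runs out.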
In addition, they try to find the appropriate times to allocate foldable and standard empty containers at the lowest cost. The authors built a preliminary network flow model that allows standard containers to transport goods, referring to long-term container leasing. They then broadened the sub-network to include foldable containers, considering short-term container leasing; the whole flow network model incorporates the processes of both sub-networks. The solution approach of Wang et al. [ ] relies on an iterative procedure: they designed a revised network simplex algorithm. The three main properties supporting the solution algorithm are the cycle-free property, the spanning-tree property, and the minimum-cost-flow optimality conditions. A mixed-integer linear programming model was formulated based on the results of the previous run. To verify the optimality of the model, the CPLEX solver was used on a CMA-CGM shipping line with three real-world shipping service routes. The results clarified that the popularity of foldable containers depends strongly on the cost of long-term leasing; consequently, this study did not encourage shipping liners to use foldable containers, except under certain circumstances.

4.4. The Use of Metaheuristic Algorithms

All the optimization problems mentioned above are often complicated, so heuristics and metaheuristics are essential to alleviate the models' difficulty and make them analytically tractable. Genetic Algorithm (GA), Simulated Annealing (SA), Tabu Search (TS), and Scatter Search are well-known examples of general iterative algorithms for solving challenging problems in different domains, such as online learning [ ], scheduling [ ], multi-objective optimization [ ], transportation [ ], and medicine [ ]. Using such algorithms can speed up finding efficient and reliable solutions.
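As a toy illustration of applying such an iterative metaheuristic to repositioning decisions, the minimal genetic-algorithm loop below evolves a vector of per-route repositioning quantities. The deficits, costs, penalty, and GA parameters are all hypothetical, and the loop is a generic sketch rather than any paper's exact method:

```python
# A toy GA for repositioning quantities (all data hypothetical): a
# chromosome is a vector of quantities, one per deficit route, and the
# fitness penalises both transport cost and unmet deficits.
import random

random.seed(0)  # fixed seed so the run is reproducible

DEFICITS = [40, 25, 60]      # empties needed on three hypothetical routes
UNIT_COST = [2.0, 3.0, 1.5]  # repositioning cost per container
PENALTY = 20.0               # cost per container of unmet deficit

def cost(plan):
    move = sum(q * c for q, c in zip(plan, UNIT_COST))
    short = sum(max(0, d - q) * PENALTY for d, q in zip(DEFICITS, plan))
    return move + short

def crossover(a, b):
    cut = random.randrange(1, len(a))  # one-point crossover
    return a[:cut] + b[cut:]

def mutate(plan):
    child = list(plan)
    i = random.randrange(len(child))
    child[i] = max(0, child[i] + random.choice([-10, -5, 5, 10]))
    return child

population = [[random.randrange(80) for _ in DEFICITS] for _ in range(20)]
for _ in range(200):                   # generations
    population.sort(key=cost)
    parents = population[:10]          # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=cost)
```

Because the shortage penalty exceeds every unit repositioning cost, the theoretical optimum is to cover each deficit exactly (a cost of 245 here); the elitist loop cannot do better than that bound but steadily approaches it from a random start.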
Most of these studies showed that a metaheuristic framework can provide valuable insights to decision makers within a few minutes, which can be considered satisfactory from a practical perspective. Additionally, Pasha et al. [ ] showed that hybrid metaheuristic algorithms produce better results than their non-hybrid versions and can be considered competitive solution approaches in terms of computational time and solution quality. To our knowledge, the genetic algorithm and Tabu search are the metaheuristics that have been applied to the empty-container-repositioning problem, as follows.

4.4.1. Genetic Algorithm

GA was introduced in 1975 by Holland [ ] as a type of global search heuristic. The main idea of GA is to simulate the natural evolutionary processes of speciation and genetics, exploiting the variations among parent solutions [ ]. As mentioned in Section 3.2, Shintani et al. [ ] addressed the design of shipping service networks, taking into account the empty-container-repositioning problem. They used GA, with the optimal route problem formulated as a location-routing Knapsack problem, to identify the optimal set of calling ports and the associated calling sequence. The authors adapted the genetic representation provided by Inagaki et al. [ ] to their problem. The computational experiments show that, with joint laden and empty distribution, ships can cruise more slowly thanks to the efficient empty-container distribution, thus saving fuel costs considerably. Furthermore, a container shipping network designed without consideration of empty-container traffic becomes very costly, due to the less efficient empty-container distribution associated with the resulting network [ ]. Additionally, Dong and Song [ ] presented a simulation-based optimization approach to address the empty-container-repositioning problem with fleet sizing in a stochastic dynamic model.
The authors combined GA and an Evolutionary Strategy (ES) in their optimization approach to find the optimal fleet size and control policy minimising total costs. They formulated the problem as event-driven, meaning that the system's state updates whenever a vessel arrives or departs. The complexity of the optimization problem prompted the researchers to use simulation-based evolution to determine the quantities of empty and laden containers transported for each port pair at each event by each vessel. The method starts with initialisation, proceeds through selection and recombination, mutation, adjustment, and evaluation via simulation, and ends with termination criteria that either yield the best solution or reduce the mutation deviation. Additionally, Dong and Song [ ] applied an Evolutionary Algorithm-based Policy (EAP) and a Heuristic Repositioning Policy (HRP). They also introduced a Non-Repositioning Policy (NRP) as a reference point for quantifying the benefits of the two other policies. Two case studies were tested: a trans-Pacific shipping service provided by a Chinese shipping line and a Europe–Asia shipping service provided by the Grand Alliance. The numerical experiments for both cases proved that EAP achieves greater cost savings.

4.4.2. Tabu Search

TS is a higher-level heuristic method proposed by Glover in 1986 [ ]. The method moves through the space of possible solutions, starting from an initial solution and stepping to neighbouring solutions one move at a time. It is considered one of the most effective methods for tackling complex problems, providing solutions very close to optimality. There are a limited number of previous works on empty-container repositioning using this method. Sterzik and Kopfer [ ] focused on the importance of exchanging empty containers among trucking companies, clarifying its influence on minimising the total transportation cost.
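The TS loop described above, stepping between neighbouring solutions while a short tabu list blocks recently visited ones, can be sketched minimally. Here the "solution" is simply the choice of which neighbouring port a shortage port sources empties from; the ports and sourcing costs are hypothetical:

```python
# A minimal tabu-search loop (hypothetical data): starting from an
# initial sourcing choice, move step by step to neighbouring choices
# while a short tabu list forbids immediately revisiting recent ones.
from collections import deque

# cost of sourcing the shortage from each neighbour port (hypothetical)
SOURCING_COST = {"PortA": 320.0, "PortB": 250.0,
                 "PortC": 410.0, "PortD": 275.0}

def tabu_search(start, iterations=10, tenure=2):
    current = start
    best = start
    tabu = deque(maxlen=tenure)  # recently visited ports drop off FIFO
    for _ in range(iterations):
        # neighbourhood: every other port that is not currently tabu
        candidates = [p for p in SOURCING_COST
                      if p != current and p not in tabu]
        if not candidates:
            break
        move = min(candidates, key=SOURCING_COST.get)  # best admissible
        tabu.append(current)
        current = move
        if SOURCING_COST[current] < SOURCING_COST[best]:
            best = current
    return best

choice = tabu_search("PortC")
```

The tabu list is what distinguishes this from plain greedy descent: the search is forced to keep exploring even after reaching a local optimum, while the best solution seen so far is retained.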
The authors formulated a mixed-integer programming model that simultaneously considers vehicle routing and scheduling and the empty-container-repositioning problem. Moreover, the Clarke–Wright savings algorithm and a Tabu Search heuristic were applied to the Inland Container Transportation (ICT) problem. The authors confirmed that the implementation of the proposed algorithm is characterised by effectiveness and efficiency. Recently, Belayachi et al. [ ] discussed the problem of the imbalanced distribution of containers in the liner-shipping network, with the help of TS. They proposed a marine transportation network which aims to fulfill customer demand and maximise the profitability of shipping companies, by optimizing the return cost of empty containers. The authors started the treatment process with two scenarios. The first scenario occurs when the number of available empty containers at the stock port can fulfill the customer demands; there is no problem in this case. The problem appears in the second scenario, when the port cannot meet the client's demand. In this case, the authors applied the TS algorithm to help the ports call in the nearest and cheapest empty containers available from the neighbouring ports, to meet the demand. Meanwhile, the neighbouring port would send the required number of empty containers to the shortage port, at a lower cost depending on the distance between each port pair [ ]. The authors discussed the empty containers' return cost by comparing the Tabu Search method and a random search method. The numerical experiments confirmed that transporting the empty containers without the Tabu Search algorithm is more expensive than the process done with Tabu Search. 5. Discussion The main contribution of this research is to investigate a variety of practical approaches to empty-container-repositioning problems in previous studies, to determine the appropriate algorithms for future studies.
Hence, this search was limited to peer-reviewed research articles, conference proceedings papers, books, book chapters, and review papers written in English, without time-frame restrictions, to ensure wider exposure. A close look at the generated results of this search showed that 96 out of 118 documents were original research articles, 2 were book chapters, and 20 were conference papers. In terms of geographical coverage, China takes the lion's share of top empty-container-repositioning research, followed by the USA, England, and Germany. The general trend of researchers' interest, in most countries, is tracking the progress of models developed for empty-container repositioning over time, to improve the existing results. From the time-coverage perspective, the number of published papers from 1993 to 2021 is presented in Figure 3, showing that the highest number of publications was recorded in 2013. Overall, the number of publications highlights the awareness and emergence of a growing academic interest in the problem of repositioning empty containers, among researchers and professionals. From the analysis of the selected literature, over 10 different models were distributed across the most relevant scientific papers about the empty-container-repositioning problem. Hence, the proposed approaches have been used to significantly reduce the movement of empty containers and their total costs, yielding benefits to all stakeholders. Some of them are more appropriate for the nature of the problem, such as stochastic optimization models, which are the most utilised for the empty-container-movement problem. The simulation tool is also one of the most practical approaches to managing this problem, covering a wide range of scenarios without much adjustment. In this respect, the role of GA in speeding up the process of finding solutions cannot be neglected. It was presented in a significant number of papers addressing the problem of the empty container.
On the contrary, researchers are no longer interested in using deterministic models to solve the current problem, since such methods do not account for the future. Despite all these extensive studies, the research gap is still wide open. A series of insights can be derived for applying combined models with flexible tools to tackle the problem of empty-container repositioning. Moreover, future work might extend approaches such as simulation-based optimization, which could be dedicated to plugging the gap, since only a limited number of authors have discussed this approach [ ]. Robust simulation systems such as NetLogo and AnyLogic have not yet been used to solve this problem, yet they would represent an added value in managing it. This approach has lacked investigation, although it could become a promising attempt in the future to assess the suitability of different empty-container-repositioning policies in different scenarios. Apart from comparing the different models for the empty-container-repositioning problem, another direction of future research is to study situations with multiple means of transportation. Most of the papers listed above raise the problem of empty-container repositioning from the viewpoint of liner companies and maritime transportation; only a few tackle the problem using trucks, waterways, and rail transportation. Additionally, the different types of containers (e.g., TEU, FEU, dry, reefer) should be considered. The perspective of other actors in the supply chain of empty containers is vitally important to discuss within the empty-container-repositioning problem. Considering the anticipated increase in trade volumes and the evolving global trade imbalance, new models will continue to be introduced, as they represent promising attempts to solve the problem of empty containers. 6.
Conclusions The global trade imbalance among different areas leads to the movement of empty containers from surplus areas to deficit areas where customer demand arises, resulting in high costs and a lack of optimal utilization of the containers. Subsequently, this paper discussed various solutions for managing the problem of empty-container repositioning from different perspectives, including technical solutions, organizational solutions, and modelling techniques. Most researchers face many challenges while addressing the issue, such as containerised distribution, large commercial gateways, uncertainty, data gathering, and coordination problems. Indeed, no researcher considers all these challenges at once when solving the problem, due to the limitations of each model. Hence, future studies can focus on integrating more than one obstacle at a time. Additionally, hybrid algorithms can be designed to include factors such as uncertainty, travel time, traffic congestion, and demand. On the other side, the development of new technologies, such as foldable containers in the maritime transport sector, has influenced the researchers' points of view when designing algorithms and models. The brief systematic review of the empty-container-repositioning literature led to identifying the models with optimal solutions and enhancing their performance. Models that have not proved beneficial in solving the problem can be eliminated. Additionally, models that have not been used before, such as NetLogo and AnyLogic, were also suggested for future research. The systematic review may be extended further in various directions. First, relying on more than one database, to capture the largest possible number of potentially eligible studies. Second, using electronic data management, to organise the retrieved information and increase the review's accuracy.
Third, comparing and evaluating the proposed approaches against alternative methods for the same factors involved in the problem, in terms of obtaining the optimal solution and processing time. Author Contributions Conceptualization, A.A. and D.D.; methodology, A.A. and D.D.; writing—original draft preparation, A.A.; writing—review and editing, A.A. and M.S.; supervision, T.K. and M.S. All authors have read and agreed to the published version of the manuscript. This research received no external funding. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement Data is contained within the article. Conflicts of Interest The authors declare no conflict of interest.

Trade volumes per route and direction (Transpacific, Europe–Asia, Transatlantic):

| Year | Asia–North America | North America–Asia | Europe–Asia | Asia–Europe | North America–Europe | Europe–America |
| --- | --- | --- | --- | --- | --- | --- |
| 2019 | 19.9 | 6.8 | 7.2 | 17.5 | 2.9 | 4.9 |
| 2020 | 20.6 | 6.9 | 7.2 | 16.9 | 2.8 | 4.8 |
| 2021 | 24.1 | 7.1 | 7.8 | 18.5 | 2.8 | 5.2 |
| Percentage change | 17.1 | 2.7 | 8.0 | 9.5 | 1.4 | 9.0 |

Selection criteria (scientific database):
- Inclusion: peer-reviewed research articles, conference proceedings papers, books, book chapters, review papers, short surveys, and serials that mainly discuss the models and methods to solve the empty-container-movements problem.
- Exclusion, before importation to a bibliographic manager: non-English publications, articles with missing abstracts, notes, editorials.
- Exclusion, during title screening: generic articles where empty-container movements are used as examples and/or future recommendations; articles not related to the transportation field, e.g., safety management.
- Exclusion, during abstract screening: articles addressing new technologies for reusing empty containers; industry publications whose outcomes are not relevant for analysis.
- Exclusion, during full-text screening: articles related to the environmental responsibilities and emissions measurement of empty-container movements; articles discussing empty-container repositioning without describing specific applications.
| Year | Authors | Model | Solution approach | Description |
| --- | --- | --- | --- | --- |
| 1998 | Cheung and Chen | A two-stage stochastic network | Quasi-gradient method | Evaluating the model over a rolling-horizon environment |
| 2002 | Choong et al. [29] | Integer programming | Deterministic dynamic optimization | A case study of potential container-on-barge operations within the Mississippi River |
| 2005 | Olivo et al. [30] | Integer programming | A minimum-cost flow problem | A Mediterranean region was examined as a case study with different modes of transportation |
| 2007 | Shintani et al. [31] | Knapsack problem, then network flow problem | Genetic algorithm | Both port and ship-related cost factors were used in a non-linear cost function |
| 2007 | Lam et al. [32] | Dynamic stochastic programming | Approximate dynamic programming | The cost function is based on a multi-port and multi-service system |
| 2007 | Wang and Wang [33] | Integer linear programming | LINGO | Inland transportation considering container shortage and leasing costs |
| 2008 | Chang et al. [ ] | Container substitution flow problem | Rounding LP-solution, branch and bound, and CPLEX | Container substitution allows street turns |
| 2009 | Bandeira et al. [34] | Decision support system | LINDO | Mathematical programming techniques, stochastic models, simulation, and heuristic techniques were integrated |
| 2009 | Di Francesco et al. [35] | Multi-commodity flow problems | Time-extended multi-scenario optimization model | A shipping company located in the Mediterranean region was examined |
| 2009 | Dong and Song [36] | Simulation-based optimization | Genetic algorithms and evolutionary strategies | The model includes multi-vessel, multi-port and multi-voyage shipping systems |
| 2010 | Shintani et al. | An integer linear programming model | Container flow model | Foldable containers were considered |
| 2011 | Brouer et al. [ ] | Relaxed linear multi-commodity flow model | Column generation algorithm | Real-life data from the largest liner shipping company, Maersk |
| 2011 | Meng and Wang [39] | Network design problem: mixed-integer linear programming model | CPLEX | Hub-and-spoke and multi-port-calling operations based on an Asia–Europe–Oceania shipping network |
| 2011 | Choi et al. [40] | Linear programming model | Time-expanded minimum-cost flow problem | A global shipping company in Korea used as a case study |
| 2012 | Long et al. [41] | A two-stage stochastic programming model | Sample average approximation | Scenario decomposition was considered |
| 2012 | Dang et al. [42] | Inventory control problem by the simulation model | Heuristics with genetic algorithm | The perspective of a container depot |
| 2012 | Dong and Song [11] | Cargo routing problem: integer programming | Two-stage: shortest path and heuristics for an integer programming | An Asian shipping company with multiple service routes was examined as a case study |
| 2012 | Epstein et al. [8] | An inventory model and a multi-commodity network flow model | CPLEX | Considers multiple container types |
| 2013 | Moon et al. [25] | Three mathematical models and two heuristics | A heuristic for an initial solution of small instances using Lingo, with local search for improvement | Comparing standard and foldable containers based on costs |
| 2013 | Di Francesco et al. [43] | A stochastic programming approach | Time-extended multi-scenario/CPLEX | Non-anticipativity conditions were used to link scenarios |
| 2013 | Lai [44] | Time-space network | Integrating branch and bound with CPLEX for the multiple-scenarios situation | Data uncertainties for empty containers were used, such as capacity, handling, storage and transport |
| 2013 | Furio et al. [45] | Min-cost network flow optimization model | Decision support system (DSS) | The model considered street-turn applications in the hinterland of Valencia |
| 2013 | Mittal et al. [ ] | A two-stage stochastic programming model | Depot location problem in time horizon/CPLEX | New York/New Jersey port was selected as a case study for the model |
| 2013 | Dong et al. [47] | OD-based matrix solutions | Genetic algorithm | Experiments on three shipping service routes operated by three shipping companies |
| 2014 | Jansen [48] | Integer programming formulation/flow | CPLEX | Solving problems with planning horizons and forecast |
| 2015 | Huang et al. [ ] | Mixed-integer programming model | CPLEX | A case study: Asia–Europe–Oceania shipping network |
| 2015 | Wong et al. [50] | Constrained linear programming | Shipment-yield network-driven-based model | A case study of service routes of trans-Pacific trade operated in the G6 alliance |
| 2015 | Zheng et al. [ ] | Two-stage optimization method | Centralised optimization, then inverse optimization | Experiments on an Asia–Europe–Oceania shipping service network |
| 2016 | Zheng et al. [52] | Network design: mixed-integer non-linear model | CPLEX | Considered perceived container and leasing prices |
| 2016 | Sainz Bernat et al. [53] | Simulation models with metaheuristic | Discrete-event simulation and genetic algorithm | Pollution, repair, and street turns are in the context of the model |
| 2016 | Akyüz and Lee [54] | Mixed-integer linear programming model | Column generation and branch-and-bound algorithm | Simultaneous service type assignment and container routing problems were solved |
| 2017 | Monemi and Gelareh [55] | Integrated modelling framework: mixed-integer linear programming | Branch, Cut and Benders algorithm (BCB) | The transhipment decision was considered |
| 2017 | Wang et al. [56] | A revised simplex algorithm | Network flow model | Foldable containers were included |
| 2017 | Xie et al. [57] | A game-theoretical inventory-sharing game | Nash equilibrium | Intermodal transportation system consisting of one rail firm and one liner carrier |
| 2017 | Benadada and Razouk [58] | Optimization-simulation | Arena software | A real case study of the container terminal at Tanger Med port was applied |
| 2018 | Belayachi et al. [59] | A heuristic method by neighbourhood | A decision-making/Tabu Search method | Reverse logistics of containers |
| 2019 | Zhang et al. [60] | Two-layer collaborative optimization model | CPLEX and genetic algorithm | Combined tactical and operational levels based on business flow |
| 2019 | Xing et al. [61] | Simulation-based two-stage optimization | Dynamic planning horizon and genetic algorithm | The quotation-booking process is included in operations decisions |
| 2019 | Hosseini and Sahlin [62] | A multi-period uncertainty optimization model | Chance-constrained programming | A case study of a European logistic service provider |
| 2019 | Gusah et al. [63] | Simulation modelling by agent-based modelling | AnyLogic | A case study of Melbourne, Australia |
| 2020 | Göçen et al. [64] | Two mathematical programming models | Mixed-integer linear programming and scenario-based stochastic programming | Real data taken from a liner carrier company, including different types of containers |

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Abdelshafie, A.; Salah, M.; Kramberger, T.; Dragan, D. Repositioning and Optimal Re-Allocation of Empty Containers: A Review of Methods, Models, and Applications. Sustainability 2022, 14, 6655. https://doi.org/10.3390/su14116655
User Guide# This user guide will showcase the capabilities of PedPy. By following this guide, you will learn how to set up your analysis and compute different metrics from movement data. This guide is designed as a Jupyter notebook where the individual cells can be executed in the given order; to try it yourself, you can copy the individual cells into a Python script and run it, or download the notebook here and run it locally. If you use PedPy in your work, please cite it using the following information from Zenodo: The data analyzed in this guide is from a bottleneck experiment conducted at the University of Wuppertal in 2018. You can see the basic setup of the experiment in the picture below: The data for this experiment is available here; it belongs to this experimental series and is part of the publication "Crowds in front of bottlenecks at entrances from the perspective of physics and social psychology". Analysis set-up# The first step we will take is to set up the analysis environment; this means defining the areas where pedestrians can walk and putting existing obstacles in them. Also, we will define which areas are of interest for the later analysis. Walkable area# In the beginning, we will define the walkable area in which the pedestrians can move. The bottleneck experiment used here was conducted in the following set-up: The run handled in this user guide had a bottleneck width of 0.5 m and w = 5.6 m.
Below is the code for creating such a walkable area:

```python
from pedpy import WalkableArea

walkable_area = WalkableArea(
    # complete area
    [(3.5, -2), (3.5, 8), (-3.5, 8), (-3.5, -2)],
    obstacles=[
        # left barrier
        [
            (-0.7, -1.1), (-0.25, -1.1), (-0.25, -0.15), (-0.4, 0.0),
            (-2.8, 0.0), (-2.8, 6.7), (-3.05, 6.7), (-3.05, -0.3),
            (-0.7, -0.3), (-0.7, -1.0),
        ],
        # right barrier
        [
            (0.25, -1.1), (0.7, -1.1), (0.7, -0.3), (3.05, -0.3),
            (3.05, 6.7), (2.8, 6.7), (2.8, 0.0), (0.4, 0.0),
            (0.25, -0.15), (0.25, -1.1),
        ],
    ],
)
```

```python
import matplotlib.pyplot as plt
from pedpy import plot_walkable_area

plot_walkable_area(walkable_area=walkable_area)
```

Prepare measurement details# After we defined where the pedestrians can move, we now need to define which regions we want to analyze in more detail. These regions can either be a specific line, an area, or the whole walkable area. In the case of this bottleneck, the most interesting area is a little bit in front of the bottleneck (here 0.5 m) and the line at the beginning of the bottleneck. The area is slightly in front of the bottleneck, as the highest density occurs there. In PedPy such areas are called MeasurementArea and the lines MeasurementLine. Below you can see how to define these:

```python
from pedpy import MeasurementArea, MeasurementLine

measurement_area = MeasurementArea(
    [(-0.4, 0.5), (0.4, 0.5), (0.4, 1.3), (-0.4, 1.3)]
)
measurement_line = MeasurementLine([(0.4, 0), (-0.4, 0)])
```

The corresponding measurement setup looks like:

```python
import matplotlib.pyplot as plt
from pedpy import plot_measurement_setup

plot_measurement_setup(
    walkable_area=walkable_area,
    measurement_areas=[measurement_area],
    measurement_lines=[measurement_line],
)
```

Importing pedestrian movement data# The pedestrian movement data in PedPy is called trajectory data.
PedPy works with trajectory data, which can be created from an import function for specific data files, or alternatively from a DataFrame with the following columns:
• "id": unique numeric identifier for each person
• "frame": index of the video frame where the positions were extracted
• "x", "y": position of the person (in meter)

Loading from Pandas DataFrame# To construct the trajectory data from a DataFrame, you also need to provide the frame rate at which the data was recorded. If you have both, the construction of the trajectory data can be done with:

```python
import pandas as pd

from pedpy import TrajectoryData

data = pd.DataFrame(
    [[0, 1, 0, 0]],
    columns=["id", "frame", "x", "y"],
)
trajectory_data = TrajectoryData(data=data, frame_rate=25.0)
```

Alternatively, the data can also be loaded from any file format that is supported by Pandas; see the documentation for more details.

Loading from text trajectory files# PedPy can directly load trajectories stored in the format of the trajectory data provided in the Jülich Data Archive. If you have text files in the same format, you can load them in the same way too:
• values are separated by any whitespace, e.g., space, tab
• the file has at least 4 columns in the following order: "id", "frame", "x", "y"
• the file may contain comment lines starting with #

For meaningful analysis (and loading of the trajectory file) you also need:
• the unit of the trajectory (m or cm)
• the frame rate

For recent trajectory files these are encoded in the header of the file; for older ones you may need to read the documentation and provide the information in the loading process.
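The whitespace-separated format just described is simple enough to parse with the standard library alone. The sketch below is not PedPy's actual loader, just an illustration of the format; the function name and the header handling are our own:

```python
def parse_text_trajectory(lines):
    """Parse the whitespace-separated trajectory format: '# key: value'
    header comments followed by rows of id, frame, x, y (extra columns
    such as z are ignored)."""
    frame_rate = None
    rows = []
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        if line.startswith("#"):
            # header comment, e.g. "# framerate: 25.00"
            if "framerate" in line.lower():
                frame_rate = float(line.split(":", 1)[1])
            continue
        pid, frame, x, y = line.split()[:4]
        rows.append((int(pid), int(frame), float(x), float(y)))
    return frame_rate, rows


# Lines taken from the example trajectory files shown in this guide:
rate, rows = parse_text_trajectory([
    "# framerate: 25.00",
    "1 98 4.6012 1.8909 1.7600",
    "1 99 4.5359 1.8976 1.7600",
])
```

Note how the z column is simply dropped: PedPy's trajectory model is two-dimensional, so only id, frame, x, and y matter.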
Examples: With frame rate, but no unit:

```
# description: UNI_CORR_500_01
# framerate: 25.00
# geometry: geometry.xml
# PersID Frame X Y Z
1 98 4.6012 1.8909 1.7600
1 99 4.5359 1.8976 1.7600
1 100 4.4470 1.9304 1.7600
```

No header at all:

```
1 27 164.834 780.844 168.937
1 28 164.835 771.893 168.937
1 29 163.736 762.665 168.937
1 30 161.967 753.088 168.937
```

If your data is structured in a different way, please take a look at the next section. Since the data we want to analyze is from the data archive, we can directly load the trajectory data with PedPy:

```python
from pedpy import TrajectoryUnit, load_trajectory

traj = load_trajectory(
    trajectory_file=...,  # path to the downloaded trajectory file (elided in the source)
    default_unit=TrajectoryUnit.METER,  # needs to be provided as it is not defined in the file
    # default_frame_rate=25.,  # can be ignored here as the frame rate is defined in the file
)
```

The loaded trajectory data looks like:

```python
import matplotlib.pyplot as plt
from pedpy import plot_trajectories

plot_trajectories(traj=traj, walkable_area=walkable_area)
```

Loading from hdf5 trajectory files# For some experiments the Jülich Data Archive also provides HDF5 trajectory files, with a structure described here. These data are from a different experiment and are only used to demonstrate how to load HDF5 files; they can be downloaded here. To make the data usable for PedPy use:

```python
import pathlib

from pedpy import (
    load_trajectory_from_ped_data_archive_hdf5,
    load_walkable_area_from_ped_data_archive_hdf5,
)

h5_file = pathlib.Path("demo-data/single_file/00_01a.h5")
traj_h5 = load_trajectory_from_ped_data_archive_hdf5(trajectory_file=h5_file)
walkable_area_h5 = load_walkable_area_from_ped_data_archive_hdf5(
    trajectory_file=h5_file
)
```

```python
import matplotlib.pyplot as plt
from pedpy import plot_trajectories

plot_trajectories(traj=traj_h5, walkable_area=walkable_area_h5).set_aspect(
    "equal"
)
```

Loading from Viswalk trajectory files# It is also possible to load trajectory files from Viswalk directly into PedPy. The expected format is a CSV file with ; as delimiter, and it should contain at least the following columns: NO, SIMSEC, COORDCENTX, COORDCENTY.
Comment lines may start with a * and will be ignored. Currently, only Viswalk trajectory files which use the simulation time (SIMSEC) are supported. To make the data usable for PedPy use:

```python
import pathlib

from pedpy import load_trajectory_from_viswalk

viswalk_file = pathlib.Path("demo-data/viswalk/example.pp")
traj_viswalk = load_trajectory_from_viswalk(trajectory_file=viswalk_file)
```

```python
import matplotlib.pyplot as plt
from pedpy import plot_trajectories

plot_trajectories(traj=traj_viswalk)
```

Plot setup# For a better overview of our created measurement setup, see the plot below:

```python
import matplotlib.pyplot as plt
from pedpy import plot_measurement_setup

plot_measurement_setup(
    traj=traj,
    walkable_area=walkable_area,
    measurement_areas=[measurement_area],
    measurement_lines=[measurement_line],
)
```

Validate that the trajectory is completely inside the walkable area# An important step before starting the analysis is to verify that all trajectories lie within the constructed walkable area; otherwise, you might get errors. PedPy provides a function to test your trajectories and also offers a function to get all invalid trajectories:

```python
from pedpy import get_invalid_trajectory, is_trajectory_valid

f"Trajectory is valid: {is_trajectory_valid(traj_data=traj, walkable_area=walkable_area)}"
get_invalid_trajectory(traj_data=traj, walkable_area=walkable_area)
```

Trajectory is valid: True

For demonstration purposes, we wrongly place the obstacle such that some pedestrians walk through it! We now create a faulty geometry, so that you can see how the result would look.
Therefore, the right obstacle will be moved a bit towards the center of the bottleneck:

```python
from pedpy import WalkableArea

walkable_area_faulty = WalkableArea(
    # complete area
    [(3.5, -2), (3.5, 8), (-3.5, 8), (-3.5, -2)],
    obstacles=[
        # left barrier
        [
            (-0.7, -1.1), (-0.25, -1.1), (-0.25, -0.15), (-0.4, 0.0),
            (-2.8, 0.0), (-2.8, 6.7), (-3.05, 6.7), (-3.05, -0.3),
            (-0.7, -0.3), (-0.7, -1.0),
        ],
        # right barrier is too close to the middle
        [
            (0.15, -1.1), (0.6, -1.1), (0.6, -0.3), (3.05, -0.3),
            (3.05, 6.7), (2.8, 6.7), (2.8, 0.0), (0.3, 0.0),
            (0.15, -0.15), (0.15, -1.1),
        ],
    ],
)
```

```python
import matplotlib.pyplot as plt
from pedpy import plot_measurement_setup

ax = plot_measurement_setup(
    traj=traj,
    walkable_area=walkable_area_faulty,
)
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
ax.set_xticks([-1, -0.5, 0, 0.5, 1])
ax.set_yticks([-1, -0.5, 0, 0.5, 1])
```

If you get any invalid trajectories, you should check whether you constructed your walkable area correctly. In some cases you will get such errors when you have head trajectories and pedestrians lean over the obstacles. Then you need to prepare your data before you can start your analysis.

```python
from pedpy import get_invalid_trajectory, is_trajectory_valid

f"Trajectory is valid: {is_trajectory_valid(traj_data=traj, walkable_area=walkable_area_faulty)}"
get_invalid_trajectory(traj_data=traj, walkable_area=walkable_area_faulty)
```

Trajectory is valid: False

|      | id | frame | x      | y       | point                  |
| ---- | -- | ----- | ------ | ------- | ---------------------- |
| 2653 | 4  | 642   | 0.1683 | -0.1333 | POINT (0.1683 -0.1333) |
| 2654 | 4  | 643   | 0.1708 | -0.1462 | POINT (0.1708 -0.1462) |
| 2655 | 4  | 644   | 0.1727 | -0.1592 | POINT (0.1727 -0.1592) |
| 2656 | 4  | 645   | 0.1658 | -0.1622 | POINT (0.1658 -0.1622) |
| 2657 | 4  | 646   | 0.1586 | -0.1626 | POINT (0.1586 -0.1626) |
| ...  | ...| ...   | ...    | ...     | ...                    |
| 36685 | 49 | 1253 | 0.1545 | -0.9367 | POINT (0.1545 -0.9367) |
| 36686 | 49 | 1254 | 0.1554 | -0.9680 | POINT (0.1554 -0.968)  |
| 36687 | 49 | 1255 | 0.1581 | -1.0000 | POINT (0.1581 -1)      |
| 36688 | 49 | 1256 | 0.1622 | -1.0331 | POINT (0.1622 -1.0331) |
| 36689 | 49 | 1257 | 0.1693 | -1.0671 | POINT (0.1693 -1.0671) |

103 rows × 5 columns

Now that we have set up the analysis environment, we can start with the real analysis. PedPy provides different methods to obtain multiple metrics from the trajectory data:
• Density
• Speed
• Flow
• Neighborhood
• Distance/Time to entrance
• Profiles

Density is a fundamental metric in pedestrian dynamics, as it indicates how much space is accessible to each pedestrian within a specific area. High density can lead to reduced walking speeds, increased congestion, and even potential safety hazards.

Classic density# The classic approach to calculate the density \(\rho_{classic}(t)\) at a time \(t\) is to count the number of pedestrians \(N(t)\) inside a specific space \(M\) and divide it by the area of that space \(A(M)\):

\[ \rho_{classic}(t) = {N(t) \over A(M)} \]

In PedPy this can be computed with:

```python
from pedpy import compute_classic_density

classic_density = compute_classic_density(
    traj_data=traj, measurement_area=measurement_area
)
```

The resulting time series can be seen below:

```python
import matplotlib.pyplot as plt
from pedpy import plot_density

plot_density(density=classic_density, title="Classic density")
```

Voronoi density# Another approach for calculating the density is to compute the Voronoi tessellation of the pedestrians' positions at a given time \(t\); the resulting Voronoi polygons \(V\) directly relate to the individual densities. For a pedestrian \(i\) the individual density is defined as:

\[ \rho_i(t) = {1 \over A(V_i(t))} \]

Compute individual Voronoi polygons# The first step for computing the Voronoi density is to compute the individual's Voronoi polygon.
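Before turning to the exact computation, the formula \(\rho_i(t) = 1/A(V_i(t))\) can be illustrated without any geometry library by rasterizing the region and attributing each sample point to its nearest pedestrian, which approximates the Voronoi cell areas. This is only a conceptual sketch with made-up positions and domain, not how PedPy computes it (PedPy uses exact polygons):

```python
import math


def approx_individual_densities(positions, xmin, xmax, ymin, ymax, step=0.01):
    """Approximate rho_i = 1 / A(V_i) by nearest-neighbour rasterization:
    every grid cell of size step*step is attributed to the closest pedestrian,
    so counting cells approximates the Voronoi cell areas."""
    counts = [0] * len(positions)
    y = ymin + step / 2
    while y < ymax:
        x = xmin + step / 2
        while x < xmax:
            nearest = min(
                range(len(positions)),
                key=lambda i: math.dist((x, y), positions[i]),
            )
            counts[nearest] += 1
            x += step
        y += step
    return [1.0 / (count * step * step) for count in counts]


# Two pedestrians placed symmetrically in a 2 m x 1 m region: each Voronoi
# cell covers about half the region (~1 m^2), so each density is ~1.0 per m^2.
densities = approx_individual_densities([(0.5, 0.5), (1.5, 0.5)], 0.0, 2.0, 0.0, 1.0)
```

The sketch also shows why restricting the polygons matters: without a boundary (or a cut-off), a pedestrian at the edge of the crowd would own an unbounded cell and get an unrealistically low density.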
As these polygons may become infinite for pedestrians at the edge of the crowd, they are restricted by the walkable area. This cutting at the boundaries can lead to split Voronoi polygons. For each of the split polygons it is checked in which part the pedestrian is located; that polygon is then assigned. As Voronoi polygons are based on the Euclidean distance, some unexpected artifacts may occur in non-convex walkable areas. Please keep that in mind! You can see how that may look in the plots later in this guide.

Without cut-off# The computation of the individual polygons can be done from the trajectory data and walkable area with:

```python
from pedpy import compute_individual_voronoi_polygons

individual = compute_individual_voronoi_polygons(
    traj_data=traj, walkable_area=walkable_area
)
```

With cut-off# When having a large walkable area or widely spread pedestrians, the Voronoi polygons may become quite large. In PedPy it is possible to restrict the size of the computed polygons. This can be done by defining a cut-off, which is essentially an approximated circle giving the maximum extent of a single Voronoi polygon.
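The "approximated circle" is a regular polygon: following shapely's convention (which, to our understanding, the cut-off construction relies on), quad_segments segments are used per quarter circle, i.e. the polygon has 4 × quad_segments vertices on the circle. A quick stdlib check of how the approximation quality grows, using the standard area formula for a regular polygon inscribed in a circle:

```python
import math


def cutoff_polygon_area(radius, quad_segments):
    """Area of the regular polygon with 4 * quad_segments vertices inscribed
    in a circle of the given radius -- the shape a cut-off approximates."""
    n = 4 * quad_segments
    return 0.5 * n * radius**2 * math.sin(2 * math.pi / n)


circle_area = math.pi  # exact area for radius 1.0
for qs in (1, 3, 8):
    area = cutoff_polygon_area(1.0, qs)
    print(f"quad_segments={qs}: {area:.4f} m^2, {area / circle_area:.1%} of the circle")
```

With quad_segments=3 the polygon already covers about 95% of the circle, which is usually a good trade-off between accuracy and the cost of the subsequent polygon intersections.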
For the creation of the cut-off, we need to define how accurately we want to approximate the circle; the differences can be seen below. Now, with that cut-off, the computation of the individual polygons becomes:

```python
from pedpy import Cutoff, compute_individual_voronoi_polygons

individual_cutoff = compute_individual_voronoi_polygons(
    traj_data=traj,
    walkable_area=walkable_area,
    cut_off=Cutoff(radius=1.0, quad_segments=3),
)
```

To get a better impression of the differences between the Voronoi polygons with and without the cut-off, take a look at the plot below:

```python
import matplotlib as mpl
import matplotlib.pyplot as plt
from pedpy import DENSITY_COL, FRAME_COL, ID_COL, plot_voronoi_cells

frame = 600
fig = plt.figure(f"frame = {frame}")
fig.suptitle(f"frame = {frame}")
ax1 = fig.add_subplot(121, aspect="equal")
ax1.set_title("w/o cutoff")
ax2 = fig.add_subplot(122, aspect="equal")
ax2.set_title("w cutoff")
cbar_ax = fig.add_axes([0.1, -0.05, 0.88, 0.05])
norm = mpl.colors.Normalize(vmin=0, vmax=10)
sm = plt.cm.ScalarMappable(cmap=plt.get_cmap("YlGn"), norm=norm)
# (the plot_voronoi_cells calls and the colorbar set-up are elided in the
# source; the colorbar label was "$\rho$ / 1/$m^2$")
```

Compute actual Voronoi density# From these individual data we can now compute the Voronoi density \(\rho_{voronoi}(t)\) in the known measurement area \(M\):

\[ \rho_{voronoi}(t) = { \int\int \rho_{xy}(t)\, dx\, dy \over A(M)}, \]

where \(\rho_{xy}(t) = 1 / A(V_i(t))\) is the individual density of the pedestrian whose Voronoi polygon \(V_i(t)\) contains the position \((x, y)\), and \(A(M)\) is the area of the measurement area.
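Since \(\rho_{xy}(t)\) is piecewise constant (equal to \(1/A(V_i(t))\) inside cell \(i\)), the integral above collapses to a finite sum: each pedestrian contributes the area of their cell that falls inside \(M\), weighted by their individual density. A sketch with invented cell areas (PedPy derives these values from the actual polygon intersections):

```python
def voronoi_density(cells, area_m):
    """cells: (cell_area, area_of_cell_inside_measurement_area) pairs in m^2.
    Implements rho_voronoi = (sum_i A(V_i intersect M) / A(V_i)) / A(M)."""
    return sum(inside / cell for cell, inside in cells) / area_m


# Measurement area A(M) = 0.8 m x 0.8 m = 0.64 m^2, as defined earlier;
# three pedestrians whose cells overlap M by different amounts (made up):
rho = voronoi_density([(0.50, 0.30), (0.40, 0.20), (1.20, 0.14)], area_m=0.64)
```

Note that a pedestrian standing outside \(M\) can still contribute, as long as part of their Voronoi cell reaches into the measurement area; this is what makes the Voronoi density smoother than the classic one.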
Without cut-off#
First, we compute the Voronoi density in the measurement area without a cut-off:
from pedpy import compute_voronoi_density
density_voronoi, intersecting = compute_voronoi_density(
    individual_voronoi_data=individual, measurement_area=measurement_area
)
import matplotlib.pyplot as plt
from pedpy import PEDPY_ORANGE, plot_density
density=density_voronoi, title="Voronoi density", color=PEDPY_ORANGE

With cut-off#
Second, we compute it from the individual cut-off Voronoi polygons:
from pedpy import compute_voronoi_density
density_voronoi_cutoff, intersecting_cutoff = compute_voronoi_density(
    individual_voronoi_data=individual_cutoff, measurement_area=measurement_area
)
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREY, plot_density
title="Voronoi density with cut-off",

Now we have obtained the mean density inside the measurement area with different methods. To compare the results, take a look at the following plot:
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, PEDPY_GREY, PEDPY_ORANGE
fig = plt.figure()
plt.title("Comparison of different density methods")
label="voronoi with cutoff",
plt.ylabel("$\\rho$ / 1/$m^2$")

Passing density (individual)#
Another option to compute the individual density is the passing density. The computation needs a measurement line and the distance to a second “virtual” measurement line, which together form a “virtual” measurement area (\(M\)). For each pedestrian, the frames in which they enter and leave the virtual measurement area are computed. During this frame interval they have to be inside the measurement area continuously. They also need to enter and leave the measurement area via different measurement lines; pedestrians who leave the area between the two lines by crossing the same line twice are ignored.
For a better understanding, see the image below, where the red parts of the trajectories are the detected parts inside the area. These frame intervals will be returned. In our example, we want to measure from the entrance of the bottleneck (top line) 1 m towards the exit of the bottleneck (bottom line). The set-up is shown below:
import matplotlib.pyplot as plt
from pedpy import plot_measurement_setup
MeasurementLine(shapely.offset_curve(measurement_line.line, 1.0)),

The passing density for each pedestrian \(\rho_{passing}(i)\) is the average number of pedestrians who are in the same measurement area \(M\) in the same time interval (\([t_{in}(i), t_{out}(i)]\)) as pedestrian \(i\), divided by the area of that measurement area \(A(M)\). The computation then becomes: \[ \rho_{passing}(i) = {1 \over {t_{out}(i)-t_{in}(i)}} \int^{t_{out}(i)}_{t_{in}(i)} {{N(t)} \over A(M)} dt \] where \(t_{in}(i) = f_{in}(i) / fps\) is the time pedestrian \(i\) crossed the first line and \(t_{out}(i) = f_{out}(i) / fps\) the time they crossed the second line; \(f_{in}\) and \(f_{out}\) are the corresponding frames, and \(fps\) is the frame rate of the trajectory data. Here, we want to compute the passing density inside the bottleneck; this can be done with:
from pedpy import compute_frame_range_in_area, compute_passing_density
frames_in_area, used_area = compute_frame_range_in_area(
    traj_data=traj, measurement_line=measurement_line, width=1.0
)
passing_density = compute_passing_density(
    density_per_frame=classic_density, frames=frames_in_area
)
This gives one density value for each pedestrian.
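Since trajectory data is sampled frame by frame, the integral in \(\rho_{passing}(i)\) reduces to a mean over the frames of the interval: each frame contributes the same time step \(1/fps\), which cancels against the prefactor. A minimal sketch with hypothetical frame-wise counts \(N(t)\):

```python
# Hypothetical counts N(t) of pedestrians inside the virtual measurement
# area for each frame of one pedestrian's interval [f_in, f_out].
counts_per_frame = [3, 3, 4, 4, 4, 5]
area_measurement = 2.0  # A(M) in m^2 (hypothetical)

# The time integral divided by (t_out - t_in) becomes a plain mean over
# the frames, because every frame has the same duration 1 / fps.
rho_passing = sum(n / area_measurement for n in counts_per_frame) / len(
    counts_per_frame
)
```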
The following plot shows how the individual density inside the bottleneck is distributed:
import matplotlib.pyplot as plt
from pedpy import plot_density_distribution
density=passing_density, title="Individual density inside bottleneck"

A further important measure in pedestrian dynamics is the speed of the pedestrians. Low speeds can indicate congestion or other obstructions in the flow of the crowd.

Individual speed#
For computing the individual speed at a specific frame \(v_i(t)\), a specific frame step (\(n\)) is needed. Together with the frame rate of the trajectory data \(fps\), the time frame \(\Delta t\) for computing the speed becomes: \[ \Delta t = 2 n / fps \] This time step describes how many frames before and after the current position \(X_{current}\) are used to compute the movement. These positions are called \(X_{future}\) and \(X_{past}\), respectively. First, the displacement \(\bar{X}\) between these positions is computed; it is then used to compute the speed: \[\begin{split} \bar{X} &= X_{future} - X_{past} \\ v_i(t) &= \frac{\bar{X}}{\Delta t} \end{split}\] Close to the start or end of the trajectory data, it is not possible to use the full range of the frame interval for computing the speed. For these cases PedPy offers three different methods to compute the speed: 1. exclude these parts 2. adaptively shrink the window in which the speed is computed 3. switch to a one-sided window

Exclude border#
When not enough frames are available to compute the speed at the borders, no speed can be computed for these parts and they are ignored.
from pedpy import SpeedCalculation, compute_individual_speed
frame_step = 25
individual_speed_exclude = compute_individual_speed(
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREEN
ped_id = 25
plt.title(f"Speed time-series of a pedestrian {ped_id} (border excluded)")
single_individual_speed = individual_speed_exclude[
    individual_speed_exclude.id == ped_id
plt.ylabel("v / m/s")

Adaptive border window#
In the adaptive approach, it is checked how many frames \(n\) are available from \(X_{current}\) to the start or end of the trajectory. This number is then used on both sides to create a smaller symmetric window, which yields \(X_{past}\) and \(X_{future}\). With the same principles as before, the individual speed \(v_i(t)\) can then be computed. As the time interval gets smaller towards the ends of the individual trajectories, the oscillations in the speed increase there.
from pedpy import SpeedCalculation, compute_individual_speed
individual_speed_adaptive = compute_individual_speed(
import matplotlib.pyplot as plt
from pedpy import PEDPY_RED
plt.title(f"Speed time-series of a pedestrian {ped_id} (adaptive)")
single_individual_speed = individual_speed_adaptive[
    individual_speed_adaptive.id == ped_id
plt.ylabel("v / m/s")

Single-sided border window#
In these cases, one of the end points used to compute the movement becomes the current position \(X_{current}\). When getting too close to the start of the trajectory, the movement is computed from \(X_{current}\) to \(X_{future}\); in the other case, the movement is from \(X_{past}\) to \(X_{current}\). \[ v_i(t) = {|{X_{future} - X_{current}|}\over{ \frac{1}{2} \Delta t}} \text{, or } v_i(t) = {|{X_{current} - X_{past}|}\over{ \frac{1}{2} \Delta t}} \] As the time interval is halved at the edges of the trajectories, jumps may occur in the computed speeds at these points.
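The border handling can be illustrated with a small, self-contained sketch — plain Python on hypothetical 1D positions, not the PedPy implementation — that uses the symmetric window where possible and falls back to a one-sided window at the borders:

```python
# Hypothetical 1D positions of one pedestrian, one entry per frame.
positions = [0.0, 0.1, 0.3, 0.6, 1.0, 1.5, 2.1]
fps = 10
n = 2  # frame_step

def speed_single_sided(x, k, n, fps):
    """Central difference where the full window fits, one-sided at borders."""
    dt = 2 * n / fps  # Δt = 2n / fps
    if k - n >= 0 and k + n < len(x):  # full symmetric window available
        return abs(x[k + n] - x[k - n]) / dt
    if k + n < len(x):  # too close to the start: X_current -> X_future
        return abs(x[k + n] - x[k]) / (dt / 2)
    return abs(x[k] - x[k - n]) / (dt / 2)  # too close to the end

speeds = [speed_single_sided(positions, k, n, fps) for k in range(len(positions))]
```

The "exclude" variant would simply drop the frames where the first branch does not apply, and the "adaptive" variant would shrink `n` symmetrically instead of going one-sided.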
from pedpy import SpeedCalculation, compute_individual_speed
individual_speed_single_sided = compute_individual_speed(
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREY
plt.title(f"Speed time-series of a pedestrian {ped_id} (single sided)")
single_individual_speed = individual_speed_single_sided[
    individual_speed_single_sided.id == ped_id
plt.ylabel("v / m/s")

To demonstrate the differences in the computed speeds, take a look at the following plot:
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREEN, PEDPY_GREY, PEDPY_RED
fig, ax = plt.subplots(
    1, 3, gridspec_kw={"width_ratios": [2, 1, 1]}, sharey=True, figsize=(12, 5)
)
fig.suptitle("Comparison of the different speed calculations at the borders")
speed_exclude = individual_speed_exclude[individual_speed_exclude.id == ped_id]
speed_adaptive = individual_speed_adaptive[
    individual_speed_adaptive.id == ped_id
]
speed_single_sided = individual_speed_single_sided[
    individual_speed_single_sided.id == ped_id
]
label="single sided",
ax[0].set_ylabel("v / m/s")
# (each series is additionally restricted to the frames within 3 * frame_step
#  of speed_single_sided.frame.min() and speed_single_sided.frame.max())

Individual speed in specific movement direction#
It is also possible to compute the individual speed in a
specific direction \(d\); for this, the movement \(\bar{X}\) is projected onto the desired movement direction. \(\bar{X}\) and \(\Delta t\) are computed as described above. Hence, the speed becomes: \[ v_i(t) = {{|\boldsymbol{proj}_d\; \bar{X}|} \over {\Delta t}} \] When using a specific direction, the computed speed may become negative.
individual_speed_direction = compute_individual_speed(
    movement_direction=np.array([0, -1]),
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, PEDPY_GREEN, PEDPY_GREY, PEDPY_RED
colors = [PEDPY_BLUE, PEDPY_GREY, PEDPY_RED, PEDPY_GREEN]
ped_ids = [10, 20, 17, 70]
fig = plt.figure()
"Velocity time-series of an excerpt of the pedestrians in a specific direction"
for color, ped_id in zip(colors, ped_ids):
    single_individual_speed = individual_speed_direction[
        individual_speed_direction.id == ped_id
plt.ylabel("v / m/s")

Mean speed#
Now that we have computed the individual speeds, we want to compute the mean speed in the already used measurement area \(M\) closely in front of the bottleneck. The mean speed is defined as \[ v_{mean}(t) = {{1} \over {N}} \sum_{i \in P_M} v_i(t), \] where \(P_M\) are all pedestrians inside the measurement area, and \(N\) is the number of pedestrians inside the measurement area (\(|P_M|\)). The mean speed can only be computed when a speed \(v_i(t)\) is available for every pedestrian inside the measurement area; when using the exclude or adaptive approach this might not be the case. Some extra processing steps are then needed; to avoid this, use the single-sided approach.
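A minimal sketch of both quantities above: the signed scalar projection of the displacement onto a unit direction \(d\) reproduces the possibly negative directional speed, and the mean speed is a plain average over the pedestrians inside \(M\). All numbers below are hypothetical:

```python
# Hypothetical 2D displacement X_future - X_past of one pedestrian (metres)
displacement = (0.2, -0.8)
dt = 0.5                  # Δt = 2n / fps (seconds)
direction = (0.0, -1.0)   # unit movement direction d

# Signed scalar projection of the displacement onto d; a negative value
# means the pedestrian moves against the chosen direction.
proj = displacement[0] * direction[0] + displacement[1] * direction[1]
speed_in_direction = proj / dt

# Mean speed over all pedestrians currently inside the measurement area.
speeds_inside = [1.2, 1.4, 1.0]
mean_speed = sum(speeds_inside) / len(speeds_inside)
```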
This can be done with PedPy as follows:
from pedpy import compute_mean_speed_per_frame
mean_speed = compute_mean_speed_per_frame(
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, plot_speed
title="Mean speed in front of the bottleneck",

The same can now be computed using the speed in a movement direction as basis:
mean_speed_direction = compute_mean_speed_per_frame(
import matplotlib.pyplot as plt
from pedpy import PEDPY_RED, plot_speed
title="Mean speed in specific direction in front of the bottleneck",

Voronoi speed#
A further approach computes the average speed \(v_{voronoi}(t)\) in an area by weighting the individual speeds by the size of their corresponding Voronoi polygon \(V_i\) inside the measurement area \(M\). The individual speeds are weighted by the proportion of their Voronoi cell \(V_i\) that intersects the measurement area (\(V_i \cap M\)). The Voronoi speed \(v_{voronoi}(t)\) is defined as \[ v_{voronoi}(t) = { \int\int v_{xy}(t) dxdy \over A(M)}, \] where \(v_{xy}(t) = v_i(t)\) is the individual speed of each pedestrian whose Voronoi cell \(V_i(t)\) intersects \(M\), and \(A(M)\) is the area of the measurement area. The Voronoi speed can only be computed when a speed \(v_i(t)\) is available for every pedestrian inside the measurement area; when using the exclude or adaptive approach this might not be the case. Some extra processing steps are then needed; to avoid this, use the single-sided approach.
This can be done in PedPy with:
from pedpy import compute_voronoi_speed
voronoi_speed = compute_voronoi_speed(
import matplotlib.pyplot as plt
from pedpy import PEDPY_ORANGE, plot_speed
title="Voronoi speed in front of the bottleneck",

Analogously, this can be done with the speed in a specific direction:
voronoi_speed_direction = compute_voronoi_speed(
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREY, plot_speed
title="Voronoi velocity in specific direction in front of the bottleneck",

Comparison mean speed vs Voronoi speed#
We have now computed the speed with different methods; this plot shows how the different results compare to each other:
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, PEDPY_GREY, PEDPY_ORANGE, PEDPY_RED
plt.figure(figsize=(8, 6))
plt.title("Comparison of different speed methods")
label="Voronoi direction",
label="classic direction",
plt.ylabel("v / m/s")

Passing speed (individual)#
With the same principles as described for the passing density, the individual passing speed \(v^i_{passing}\) is defined as \[ v^i_{passing} = \frac{d}{t_{out}-t_{in}}, \] where \(d\) is the distance between the two measurement lines. In PedPy this can be done with:
from pedpy import compute_frame_range_in_area, compute_passing_speed
passing_offset = 1.0
frames_in_area, _ = compute_frame_range_in_area(
    traj_data=traj, measurement_line=measurement_line, width=passing_offset
)
passing_speed = compute_passing_speed(
import matplotlib.pyplot as plt
from pedpy import plot_speed_distribution
speed=passing_speed, title="Individual speed in bottleneck"

Another important metric when analyzing pedestrian movement is the flow itself. It describes how many persons cross a line in a given time; from it, potential bottlenecks or congestion can be detected.
N-t diagram at bottleneck#
To get a first impression of the flow at the bottleneck, we look at the N-t diagram, which shows how many pedestrians have crossed the measurement line at a specific time.
from pedpy import compute_n_t
nt, crossing = compute_n_t(
import matplotlib.pyplot as plt
from pedpy import plot_nt
plot_nt(nt=nt, title="N-t at bottleneck")

Flow at bottleneck#
From the N-t data we can then compute the flow at the bottleneck. For the computation of the flow we look at frame intervals \(\Delta frame\) in which the flow is computed. The first interval starts when the first person crosses the measurement line. Each following interval starts at the time when the last person in the previous frame interval crossed the line. In each frame interval it is checked whether any person has crossed the line; if so, a flow \(J\) can be computed. From the first frame in which the line was crossed, \(f^{\Delta frame}_1\), and the last frame in which someone crossed the line, \(f^{\Delta frame}_N\), the length of the frame interval \(\Delta f\) can be computed: \[ \Delta f = f^{\Delta frame}_N - f^{\Delta frame}_1 \] Together with the frame rate of the trajectory \(fps\), this directly gives the time interval \(\Delta t\): \[ \Delta t = \Delta f / fps \] With the number of pedestrians crossing the line given by \(N^{\Delta frame}\), the flow \(J\) becomes: \[ J = \frac{N^{\Delta frame}}{\Delta t} \] At the same time, the mean speed of the pedestrians when crossing the line is given by: \[ v_{crossing} = {1 \over N^{\Delta frame} } \sum^{N^{\Delta frame}}_{i=1} v_i(t) \] To compute the flow and the mean speed when passing the line with PedPy, use:
from pedpy import compute_flow
delta_frame = 100
flow = compute_flow(
import matplotlib.pyplot as plt
from pedpy import plot_flow
title="Crossing velocities at the corresponding flow at bottleneck",

Individual acceleration#
Compute the individual acceleration for each
pedestrian. For computing the individual acceleration at a specific frame \(a_i(t_k)\), a specific frame step (\(n\)) is needed. Together with the frame rate \(fps\) of the trajectory data, the time frame \(\Delta t\) for computing the acceleration becomes: \[ \Delta t = 2 n / fps \] This time step describes how many frames before and after the current position \(X(t_k)\) are used to compute the movement. These positions are called \(X(t_{k+n})\) and \(X(t_{k-n})\), respectively. In order to compute the acceleration at time \(t_k\), we first calculate the displacements \(\bar{X}\) around \(t_{k+n}\) and \(t_{k-n}\): \[ \bar{X}(t_{k+n}) = X(t_{k+2n}) - X(t_{k}) \] \[ \bar{X}(t_{k-n}) = X(t_{k}) - X(t_{k-2n}) \] The acceleration is then calculated from the difference of the displacements \[ \Delta\bar{X}(t_k) = \bar{X}(t_{k+n}) - \bar{X}(t_{k-n}) \] divided by the square of the time interval \(\Delta t\): \[ a_i(t_k) = \Delta\bar{X}(t_k) / \Delta t^{2} \] Close to the start or end of the trajectory data, it is not possible to use the full range of the frame interval for computing the acceleration. For these cases PedPy offers one method to compute the acceleration: Exclude border: when not enough frames are available to compute the acceleration at the borders, no acceleration can be computed for these parts and they are ignored.
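Substituting the two displacements into \(a_i(t_k)\) gives the central second difference \((X(t_{k+2n}) - 2X(t_k) + X(t_{k-2n}))/\Delta t^2\), which is exact for a quadratic trajectory. A sketch with synthetic 1D data (plain Python, not PedPy code):

```python
# Synthetic 1D trajectory with constant acceleration 2 m/s^2:
# x(t) = t^2, sampled at fps = 10, so x_k = (k / fps)^2.
fps = 10
positions = [(k / fps) ** 2 for k in range(11)]
n = 2
dt = 2 * n / fps  # Δt

def acceleration(x, k, n, dt):
    """a_i(t_k) = (X(t_{k+2n}) - 2 X(t_k) + X(t_{k-2n})) / Δt²,
    i.e. the difference of the two displacements divided by Δt²."""
    disp_future = x[k + 2 * n] - x[k]  # X̄(t_{k+n})
    disp_past = x[k] - x[k - 2 * n]    # X̄(t_{k-n})
    return (disp_future - disp_past) / dt**2

a = acceleration(positions, 5, n, dt)  # recovers the constant acceleration
```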
Use acceleration_calculation=
from pedpy import AccelerationCalculation, compute_individual_acceleration
frame_step = 25
individual_acceleration_exclude = compute_individual_acceleration(
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREEN
ped_id = 50
f"Acceleration time-series of a pedestrian {ped_id} (border excluded)"
single_individual_acceleration = individual_acceleration_exclude[
    individual_acceleration_exclude.id == ped_id
plt.ylabel("a / $m/s^2$")

Individual acceleration in specific movement direction#
It is also possible to compute the individual acceleration in a specific direction \(d\); for this, the movement \(\Delta\bar{X}\) is projected onto the desired movement direction. \(\Delta\bar{X}\) and \(\Delta t\) are computed as described above. Hence, the acceleration becomes: \[ a_i(t) = {{|\boldsymbol{proj}_d\; \Delta\bar{X}|} \over {\Delta t^{2}}} \] If compute_acceleration_components is True, \(\Delta\bar{X}\) is also returned.
individual_acceleration_direction = compute_individual_acceleration(
    movement_direction=np.array([0, -1]),
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, PEDPY_GREEN, PEDPY_GREY, PEDPY_RED
colors = [PEDPY_BLUE, PEDPY_GREY, PEDPY_RED, PEDPY_GREEN]
ped_ids = [10, 20, 17, 70]
fig = plt.figure()
"Acceleration time-series of an excerpt of the pedestrians in a specific direction"
for color, ped_id in zip(colors, ped_ids):
    single_individual_acceleration = individual_acceleration_direction[
        individual_acceleration_direction.id == ped_id
plt.ylabel("a / $m/s^2$")

Mean acceleration#
Compute the mean acceleration per frame inside a given measurement area. This computes the mean acceleration \(a_{mean}(t)\) inside the measurement area from the given individual acceleration data \(a_i(t)\) (see compute_individual_acceleration for details of the computation).
The mean acceleration \(a_{mean}\) is defined as \[ a_{mean}(t) = {{1} \over {N}} \sum_{i \in P_M} a_i(t), \] where \(P_M\) are all pedestrians inside the measurement area, and \(N\) is the number of pedestrians inside the measurement area (\(|P_M|\)). The mean acceleration can only be computed when an acceleration \(a_i(t)\) is available for every pedestrian inside the measurement area; when using the exclude or adaptive approach this might not be the case. Therefore, further processing steps are needed to ensure that the trajectory data and the individual acceleration data overlap, e.g. as in the example below:
traj_idx = pd.MultiIndex.from_frame(traj.data[["id", "frame"]])
acc_idx = pd.MultiIndex.from_frame(
    individual_acceleration_exclude[["id", "frame"]]
)
# get intersecting rows in id and frame
common_idx = traj_idx.intersection(acc_idx)
traj_common = (
    traj.data.set_index(["id", "frame"]).reindex(common_idx).reset_index()
)
fr = traj.frame_rate
traj_exclude = TrajectoryData(data=traj_common, frame_rate=fr)

Now the mean acceleration can be computed using PedPy with:
from pedpy import compute_mean_acceleration_per_frame
mean_acceleration = compute_mean_acceleration_per_frame(
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, plot_acceleration
title="Mean acceleration in front of the bottleneck",

The same can now be computed using the acceleration in a movement direction as basis:
mean_acceleration_direction = compute_mean_acceleration_per_frame(
import matplotlib.pyplot as plt
from pedpy import PEDPY_RED, plot_acceleration
title="Mean acceleration in specific direction in front of the bottleneck",

Voronoi acceleration#
Compute the Voronoi acceleration per frame inside the measurement area.
This computes the Voronoi acceleration \(a_{voronoi}(t)\) inside the measurement area \(M\) from the given individual acceleration data \(a_i(t)\) (see compute_individual_acceleration for details of the computation) and their individual Voronoi intersection data (from compute_voronoi_density). The individual accelerations are weighted by the proportion of their Voronoi cell \(V_i\) that intersects the measurement area (\(V_i \cap M\)). The Voronoi acceleration \(a_{voronoi}(t)\) is defined as \[ a_{voronoi}(t) = { \int\int a_{xy}(t) dxdy \over A(M)}, \] where \(a_{xy}(t) = a_i(t)\) is the individual acceleration of each pedestrian whose Voronoi cell \(V_i(t)\) intersects \(M\), and \(A(M)\) is the area of the measurement area. The Voronoi acceleration can only be computed when an acceleration \(a_i(t)\) is available for every pedestrian inside the measurement area; when using the exclude or adaptive approach this might not be the case. Therefore, further processing steps are needed to ensure that the trajectory data, the individual acceleration data, and the intersecting Voronoi cell data overlap. We assume that the trajectory data is already overlapping (see example above).
The intersecting Voronoi cells can be filtered, e.g., as in the example below:
intersecting_idx = pd.MultiIndex.from_frame(intersecting[["id", "frame"]])
acc_idx = pd.MultiIndex.from_frame(
    individual_acceleration_exclude[["id", "frame"]]
)
# get intersecting rows in id and frame
common_idx = intersecting_idx.intersection(acc_idx)
intersecting_exclude = (
    intersecting.set_index(["id", "frame"]).reindex(common_idx).reset_index()
)

Now the Voronoi acceleration can be computed using PedPy with:
from pedpy import compute_voronoi_acceleration
voronoi_acceleration = compute_voronoi_acceleration(
import matplotlib.pyplot as plt
from pedpy import PEDPY_ORANGE, plot_acceleration
title="Voronoi acceleration in front of the bottleneck",

Analogously, this can be done with the acceleration in a specific direction:
voronoi_acceleration_direction = compute_voronoi_acceleration(
import matplotlib.pyplot as plt
from pedpy import PEDPY_GREY, plot_acceleration
title="Voronoi acceleration in specific direction in front of the bottleneck",

Comparison mean acceleration vs Voronoi acceleration#
We have now computed the acceleration with different methods; this plot shows how the different results compare to each other:
import matplotlib.pyplot as plt
from pedpy import PEDPY_BLUE, PEDPY_GREY, PEDPY_ORANGE, PEDPY_RED
plt.figure(figsize=(8, 6))
plt.title("Comparison of different acceleration methods")
label="Voronoi direction",
label="classic direction",
plt.ylabel("a / $m/s^2$")

To analyze which pedestrians are close to each other, it is possible to compute the neighbors of each pedestrian. We define two pedestrians as neighbors if their Voronoi polygons (\(V_i\), \(V_j\)) touch at some point; in PedPy they are considered touching if their distance is below 1 mm. As the basis for the computation, one can use either the uncut or the cut Voronoi polygons.
When using the uncut Voronoi polygons, pedestrians may be detected as neighbors even when their distance is quite large in low-density situations. Therefore, it is recommended to use the cut Voronoi polygons, where the cut-off radius defines a maximal distance between neighboring pedestrians. To compute the neighbors in PedPy, use:
from pedpy import compute_neighbors
neighbors = compute_neighbors(individual_cutoff)
import matplotlib.pyplot as plt
from pedpy import plot_neighborhood

Distance to entrance/Time to entrance#
An indicator for detecting congestion or jams in trajectory data is the distance/time to crossing. It shows how much time remains until a pedestrian crosses the measurement line and how far away from that line they are. In PedPy this can be done with:
from pedpy import compute_time_distance_line
df_time_distance = compute_time_distance_line(
    traj_data=traj, measurement_line=measurement_line
)
import matplotlib.pyplot as plt
from pedpy import plot_time_distance
title="Distance to entrance/Time to entrance",

You can also color the lines based on speed values. To achieve this, simply provide a speed DataFrame as an argument to the function:
speed = compute_individual_speed(traj_data=traj, frame_step=5)
title="Distance to entrance/Time to entrance - colored",

For the computation of the profiles, the given walkable area is divided into square grid cells. Each of these grid cells is then used as a measurement area to compute the density and speed. As this is a quite compute-heavy operation, it is suggested to reduce the trajectories to the important areas and to limit the input data to the most relevant frame interval.
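Covering the walkable area with square cells can be sketched as follows. This illustrates the idea only and is not PedPy's get_grid_cells implementation; the bounding-box values are hypothetical:

```python
import math

# Hypothetical bounding box of the walkable area, and the grid size
# used later in the guide.
x_min, y_min, x_max, y_max = 0.0, 0.0, 4.0, 5.0
grid_size = 0.4

# The area is covered with square cells; each cell later serves as its
# own small measurement area for density and speed.
nx = math.ceil((x_max - x_min) / grid_size)
ny = math.ceil((y_max - y_min) / grid_size)
cells = [
    (x_min + i * grid_size, y_min + j * grid_size)  # lower-left corner
    for j in range(ny)
    for i in range(nx)
]
```

Cells that stick out past the bounding box are kept here for simplicity; in practice only their intersection with the walkable area matters.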
from pedpy import (
individual_cutoff = compute_individual_voronoi_polygons(
    cut_off=Cutoff(radius=0.8, quad_segments=3),
individual_speed = compute_individual_speed(
profile_data = individual_speed.merge(individual_cutoff, on=[ID_COL, FRAME_COL])
profile_data = profile_data.merge(traj.data, on=[ID_COL, FRAME_COL])
from pedpy import (
grid_size = 0.4
grid_cells, _, _ = get_grid_cells(
    walkable_area=walkable_area, grid_size=grid_size
)
min_frame_profiles = 250  # We use here just an excerpt of the
max_frame_profiles = 400  # trajectory data to reduce compute time
profile_data = profile_data[
    profile_data.frame.between(min_frame_profiles, max_frame_profiles)
]
# Compute the grid-cell intersection area for the resorted profile data (they have the same sorting)
# for usage in multiple calls, to not run the compute-heavy operation multiple times
) = compute_grid_cell_polygon_intersection_area(
    data=profile_data, grid_cells=grid_cells
)

Speed Profiles#
This documentation describes the methods available for computing speed profiles within a specified area, focusing on pedestrian movements. Four distinct methods are detailed: Voronoi, arithmetic, mean, and Gauss speed profiles. Voronoi speed profile The Voronoi speed profile \(v_{\text{voronoi}}\) is computed from a weighted mean of pedestrian speeds. It uses the area overlap between a pedestrian’s Voronoi cell (\(V_i\)) and the grid cell (\(c\)). The weight corresponds to the fraction of the Voronoi cell residing within the grid cell, thereby integrating speed across this intersection: \[ v_{\text{voronoi}} = { \int\int v_{xy} dxdy \over A(c)}, \] where \(A(c)\) represents the area of the grid cell \(c\).
Arithmetic Voronoi speed profile The arithmetic Voronoi speed \(v_{\text{arithmetic}}\) is computed as the mean of the speeds \(v_i\) of all pedestrians whose Voronoi cell \(V_i\) intersects grid cell \(c\): \[ v_{\text{arithmetic}} = \frac{1}{N} \sum_{i \in P_c} v_i, \] with \(P_c\) the set of these pedestrians and \(N\) their total number. Mean speed profile The mean speed profile is computed as the average speed of all pedestrians \(P_c\) present within a grid cell \(c\): \[ v_{\text{mean}} = \frac{1}{N} \sum_{i \in P_c} v_i \] where \(N\) denotes the number of pedestrians in grid cell \(c\). Gauss speed profiles This calculates a speed profile based on Gaussian weights for an array of pedestrian locations and speeds. The weighted speed at a grid cell \(c\), considering the distance \(\delta = \boldsymbol{r}_i - \boldsymbol{c}\) of each agent from the cell, is determined as follows: \[ v_{\text{gauss}} = \frac{\sum_{i=1}^{N}{\big(w_i\cdot v_i\big)}}{\sum_{i=1}^{N} w_i}, \] with \(w_i = \frac{1}{\sigma \cdot \sqrt{2\pi}} \exp\big(-\frac{\delta^2}{2\sigma^2}\big)\) and \(\sigma = \frac{FWHM}{2\sqrt{2\ln(2)}}\).
from pedpy import SpeedMethod, compute_speed_profile
voronoi_speed_profile = compute_speed_profile(
arithmetic_speed_profile = compute_speed_profile(
mean_speed_profile = compute_speed_profile(
gauss_speed_profile = compute_speed_profile(
import matplotlib.pyplot as plt
from pedpy import plot_profiles
fig, ((ax0, ax1), (ax2, ax3)) = plt.subplots(nrows=2, ncols=2)
fig.suptitle("Speed profile")
cm = plot_profiles(
label="v / m/s",
cm = plot_profiles(
label="v / m/s",
cm = plot_profiles(
label="v / m/s",
cm = plot_profiles(
label="v / m/s",
left=None, bottom=None, right=None, top=None, wspace=0, hspace=0.6

Density Profiles#
Currently, it is possible to compute either the Voronoi, classic or Gaussian density profiles.
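The σ-from-FWHM conversion used by the Gauss speed profile above can be checked with a short sketch: with \(\sigma = FWHM/(2\sqrt{2\ln 2})\), the weight at a distance of half the FWHM is exactly half the weight at the centre. The distances, speeds, and FWHM value below are hypothetical:

```python
import math

def gauss_weight(distance: float, fwhm: float) -> float:
    """Gaussian weight w_i with σ derived from the full width at half maximum."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-(distance**2) / (2.0 * sigma**2)) / (
        sigma * math.sqrt(2.0 * math.pi)
    )

# Hypothetical distances of pedestrians to a grid-cell centre and speeds.
distances = [0.2, 0.5, 1.1]
speeds = [1.4, 1.2, 0.6]
weights = [gauss_weight(d, fwhm=0.8) for d in distances]

# Weighted mean: nearby pedestrians dominate the cell's speed value.
v_gauss = sum(w * v for w, v in zip(weights, speeds)) / sum(weights)
```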
Voronoi density profile In each cell the Voronoi density \(\rho_{voronoi}\) is defined as \[ \rho_{voronoi} = { \int\int \rho_{xy} dxdy \over A(M)}, \] where \(\rho_{xy} = 1/A(V_i)\) is the individual density of each pedestrian whose Voronoi cell \(V_i\) intersects the grid cell \(M\), and \(A(M)\) is the area of the grid cell. Classic density profile In each cell the density \(\rho_{classic}\) is defined by \[ \rho_{classic} = {N \over A(M)}, \] where \(N\) is the number of pedestrians inside the grid cell \(M\) and \(A(M)\) is the area of that grid cell. Gaussian density profile In each cell the density \(\rho_{gaussian}\) is defined by \[ \rho_{gaussian} = \sum_{i=1}^{N}{\delta (\boldsymbol{r}_i - \boldsymbol{c})}, \] where \(\boldsymbol{r}_i\) is the position of a pedestrian and \(\boldsymbol{c}\) is the center of the grid cell. Finally, \(\delta(x)\) is approximated by a Gaussian \[ \delta(x) = \frac{1}{\sqrt{\pi}a}\exp[-x^2/a^2] \]
from pedpy import DensityMethod, compute_density_profile
# here it is important to use the resorted data, as it needs to be in the same ordering as "grid_cell_intersection_area"
voronoi_density_profile = compute_density_profile(
# here the unsorted data can be used
classic_density_profile = compute_density_profile(
gaussian_density_profile = compute_density_profile(
import matplotlib.pyplot as plt
from pedpy import plot_profiles
fig, (ax0, ax1, ax2) = plt.subplots(nrows=1, ncols=3, layout="constrained")
fig.set_size_inches(12, 5)
fig.suptitle("Density profile")
cm = plot_profiles(
label="$\\rho$ / 1/$m^2$",
cm = plot_profiles(
label="$\\rho$ / 1/$m^2$",
cm = plot_profiles(
label="$\\rho$ / 1/$m^2$",

Density and Speed Profiles#
Another option is to compute both kinds of profiles at the same time:
from pedpy import compute_profiles
min_frame_profiles = 250  # We use here just an excerpt of the
max_frame_profiles = 400  # trajectory data to reduce compute time
grid_size = 0.4
density_profiles, speed_profiles = compute_profiles( min_frame_profiles,
max_frame_profiles min_frame_profiles, max_frame_profiles on=[ID_COL, FRAME_COL], Show code cell source Hide code cell source import matplotlib.pyplot as plt from pedpy import plot_profiles fig, (ax0, ax1) = plt.subplots(nrows=1, ncols=2) cm = plot_profiles( label="$\\rho$ / 1/$m^2$", cm = plot_profiles( label="v / m/s", Pedestrian Dynamics : Spatial Analysis# This section corresponds to analysis method which can be used to characterise different crowds or group formations. These methods may include measurement of the time-to-collision, pair-distribution function and measurement of crowd polarization. Pair-distribution function (PDF)# This method is inspired from condensed matter description and used in the work of Cordes et al. (2023) following Karamousas et al. (2014). The pair-distribution function (PDF): “Quantifies the probability that two interacting pedestrians are found a given distance r apart, renormalized by the probability \(P_{Ni}\) of measuring this distance for pedestrians that do not In this method, “interacting pedestrians” are defined as pedestrians that are present in the same spatial domain at the same time. One should also keep in mind that in its current implementation, the method does not take into account walls and corners, which should in theory block any “interaction” between pedestrians on opposite sides of the obstacles. The probability \(P_{Ni}\) is approximated here by time randomising the original trajectory file. For this randomisation process, only the frame numbers of the trajectory file are shuffled. The created “randomised trajectories” contain random pedestrian positions, composed only of positions present in the original trajectory file. This method helps account for pedestrians’ preferred space utilisation, which can be due to terrain features or social behaviours. One should note that the number of positions selected for each frame is also random during the creation of the randomised trajectory file. 
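The frame-shuffling randomisation described above can be sketched with pandas (a sketch of the idea only — the helper and column names are illustrative, not PedPy's implementation):

```python
import numpy as np
import pandas as pd

def randomise_frames(traj_df, seed=None):
    """Time-randomise a trajectory by shuffling only the frame column.

    Positions stay exactly as observed, so the preferred space utilisation
    is preserved while pedestrian interactions are destroyed.
    """
    rng = np.random.default_rng(seed)
    shuffled = traj_df.copy()
    shuffled["frame"] = rng.permutation(shuffled["frame"].to_numpy())
    return shuffled

# Toy trajectory with two pedestrians over two frames.
toy = pd.DataFrame(
    {"id": [1, 1, 2, 2], "frame": [0, 1, 0, 1],
     "x": [0.0, 0.1, 1.0, 1.1], "y": [0.0, 0.0, 0.0, 0.0]}
)
randomised = randomise_frames(toy, seed=0)
```

The multiset of frame numbers and the set of observed positions are unchanged; only the pairing between them is scrambled.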
The random process should ensure a uniform distribution of positions for each frame. However, to smooth any noise that this method may induce, we recommend using a higher randomisation_stacking number (see details in the next section). The pair-distribution function of a given crowd recording can be computed using the following instructions:

from pedpy import compute_pair_distribution_function

# Compute pair distribution function
radius_bins, pair_distribution = compute_pair_distribution_function(
    traj_data=traj,
    radius_bin_size=0.1,
    randomisation_stacking=1,
)

import matplotlib.pyplot as plt

fig, ax1 = plt.subplots(figsize=(5, 5))
ax1.plot(radius_bins, pair_distribution)
ax1.set_title("Pair Distribution Function")
ax1.set_xlabel("$r$", fontsize=16)
ax1.set_ylabel("$g(r)$", fontsize=16)
ax1.grid(True, alpha=0.3)

Parameters of the PDF#
The function compute_pair_distribution_function has two main parameters:
• radius_bin_size is the size of the radius bins for which the probability will be computed. On one hand, a larger bin size results in a smoother PDF but decreases the accuracy of the description, as more individuals can be detected in each bin. On the other hand, a smaller bin size increases the accuracy of the description but may lead to noisy or NaN values, as some bins may not be populated (leading to invalid divisions). We suggest using a bin size between 0.1 and 0.3 m, as these values are close to the order of magnitude of a chest depth.
• randomisation_stacking is the number of times the data is stacked before being shuffled in order to compute the probability \(P_{Ni}\) of measuring given pair-wise distances for pedestrians that do not interact. Stacking the data multiple times helps harmonize the random positions more effectively, ensuring that the PDF converges to results that are independent of the randomization method.
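The binning trade-off can be made concrete by sketching the underlying step — histogramming all pairwise distances in one frame (an illustrative helper, not the PedPy API; the full PDF additionally normalises these counts by the same histogram computed on the randomised trajectories):

```python
import numpy as np

def pairwise_distance_hist(positions, radius_bin_size, r_max):
    """Histogram of all pairwise pedestrian distances in one frame.

    Illustrative helper, not the PedPy API.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Keep each unordered pair once: upper triangle without the diagonal.
    dist = dist[np.triu_indices(len(positions), k=1)]
    bins = np.arange(0.0, r_max + radius_bin_size, radius_bin_size)
    counts, edges = np.histogram(dist, bins=bins)
    return counts, edges

# Three pedestrians on a line: the pair distances are 1, 1 and 2.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
counts, edges = pairwise_distance_hist(pts, radius_bin_size=0.5, r_max=3.0)
```

With a coarse bin size, neighbouring distances merge into one bin (smoother but less detailed); with a fine one, some bins stay empty, which is where the NaN values from 0/0 normalisation come from.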
First we show the influence of varying radius_bin_size on the result:

from pedpy import compute_pair_distribution_function

radius_bin_sizes = [0.05, 0.1, 0.25, 0.5]
varying_radius_bin_sizes = [
for i, radius_bin_size in enumerate(radius_bin_sizes)

And now how randomisation_stacking influences the result:

from time import time
from pedpy import (

randomisation_stackings = [1, 3, 5]
varying_randomisation_stacking = []
for i, randomisation_stacking in enumerate(randomisation_stackings):
    begin_time = time()
    pdf = compute_pair_distribution_function(
    end_time = time()
    varying_randomisation_stacking.append((i, pdf, end_time - begin_time))

These variations generate the following result:

[Figure: two panels — "Effect of `radius_bin_size`" (legend "Bin sizes") and "Effect of 'randomisation_stacking'" (legend "Nb of stacks: Execution time") — each plotting \(g(r)\) against \(r\).]

Preprocess the data#
Until now we used complete trajectories, but sometimes not all of the data is relevant for the analysis. If the data comes from a larger simulation or experiment, you may only be interested in data close to your region of interest, or in a specific time range.
As PedPy builds on Pandas as its data container, the filtering methods from Pandas can also be used here. More information on filtering and merging with Pandas can be found here: filtering & merging.

Geometric filtering#
First, we want to filter the data by geometric criteria; for this we combine the capabilities of Pandas and Shapely.

Data inside Polygon#
In the first case, we are only interested in trajectory data inside the known measurement area.

import shapely

bottleneck = shapely.Polygon([(0.25, 0), (0.25, -1), (-0.25, -1), (-0.25, 0)])
leaving_area = shapely.Polygon([(-3, -1), (-3, -2), (3, -2), (3, -1)])
data_inside_ma = traj.data[shapely.within(traj.data.point, bottleneck)]

[Figure: the measurement setup showing only the data points inside the bottleneck polygon, x in [-0.75, 0.75], y in [-1.5, 0.5].]

Data outside Polygon#
Secondly, we want to filter the data such that the result contains only data outside a given area. In our case we want to remove all data behind the bottleneck, here called the leaving area:

data_outside_leaving_area = traj.data[
    ~shapely.within(traj.data.point, leaving_area)
]

[Figure: the measurement setup showing only the data points outside the leaving area.]

Data close to line#
It is not only possible to check whether a point is within a given polygon; it is also possible to check whether the distance to a given geometrical object is below a given threshold.
Here we want all the data that is within 1 m of the measurement line at the entrance of the bottleneck:

data_close_ma = traj.data[
    shapely.dwithin(traj.data.point, measurement_line.line, 1)
]

[Figure: the measurement setup showing only the data points within 1 m of the measurement line, x in [-1.5, 1.5], y in [-2.5, 1.5].]

Time based filtering#
It is not only possible to filter the data by geometric means, but also by time information. In experiments, you might only be interested in steady-state data; this can be achieved by slicing the data:

traj_data_frame_range = traj[300:600]

Alternatively, you could also utilise the Pandas filtering methods:

data_frame_range = traj.data[
    traj.data.frame.between(300, 600, inclusive="both")
]

ID based filtering#
It is also possible to filter the data in a way that only specific pedestrians are contained:

data_id = traj.data[traj.data.id == 20]

It is also possible to filter for multiple ids:

ids = [10, 20, 30, 40]
data_id = traj.data[traj.data.id.isin(ids)]

What to do with the results?#
Now that we have completed our analysis, what can be done with the results?
• access specific columns
• combine for further analysis
• save results

Access specific columns#
In order to
access a specific column of the computed results or the trajectory data, PedPy provides identifiers for all column names. It is highly encouraged to use these identifiers instead of the column names directly, as the identifiers will be updated if the column names ever change. A list of all identifiers can be found in the API reference.

from pedpy import FRAME_COL, ID_COL, POINT_COL

# Access with column identifier
traj.data[[ID_COL, FRAME_COL, POINT_COL]]

│ │id │frame│ point │
│ 0 │1 │0 │POINT (2.1569 2.659) │
│ 1 │1 │1 │POINT (2.1498 2.6653) │
│ 2 │1 │2 │POINT (2.1532 2.6705) │
│ 3 │1 │3 │POINT (2.1557 2.6496) │
│ 4 │1 │4 │POINT (2.1583 2.6551) │
│ ... │...│... │... │
│63105│75 │492 │POINT (0.1539 -1.5997) │
│63106│75 │493 │POINT (0.1792 -1.6365) │
│63107│75 │494 │POINT (0.2262 -1.704) │
│63108│75 │495 │POINT (0.2575 -1.7516) │
│63109│75 │496 │POINT (0.2885 -1.8041) │
63110 rows × 3 columns

# Access with column names
traj.data[["id", "frame", "point"]]

│ │id │frame│ point │
│ 0 │1 │0 │POINT (2.1569 2.659) │
│ 1 │1 │1 │POINT (2.1498 2.6653) │
│ 2 │1 │2 │POINT (2.1532 2.6705) │
│ 3 │1 │3 │POINT (2.1557 2.6496) │
│ 4 │1 │4 │POINT (2.1583 2.6551) │
│ ... │...│... │... │
│63105│75 │492 │POINT (0.1539 -1.5997) │
│63106│75 │493 │POINT (0.1792 -1.6365) │
│63107│75 │494 │POINT (0.2262 -1.704) │
│63108│75 │495 │POINT (0.2575 -1.7516) │
│63109│75 │496 │POINT (0.2885 -1.8041) │
63110 rows × 3 columns

Combine multiple DataFrames#
From the analysis we have one DataFrame containing the trajectory data:

│ │id │frame│ x │ y │ point │
│ 0 │1 │0 │2.1569│2.6590 │POINT (2.1569 2.659) │
│ 1 │1 │1 │2.1498│2.6653 │POINT (2.1498 2.6653) │
│ 2 │1 │2 │2.1532│2.6705 │POINT (2.1532 2.6705) │
│ 3 │1 │3 │2.1557│2.6496 │POINT (2.1557 2.6496) │
│ 4 │1 │4 │2.1583│2.6551 │POINT (2.1583 2.6551) │
│ ... │...│... │... │... │...
│ │63105│75 │492 │0.1539│-1.5997 │POINT (0.1539 -1.5997) │ │63106│75 │493 │0.1792│-1.6365 │POINT (0.1792 -1.6365) │ │63107│75 │494 │0.2262│-1.7040 │POINT (0.2262 -1.704) │ │63108│75 │495 │0.2575│-1.7516 │POINT (0.2575 -1.7516) │ │63109│75 │496 │0.2885│-1.8041 │POINT (0.2885 -1.8041) │ 63110 rows × 5 columns and one containing the individual Voronoi data: │ │id │frame│ polygon │density │ │ 0 │1 │0 │POLYGON ((1.7245255133043016 2.849134813766198... │1.323393│ │ 979 │2 │0 │POLYGON ((1.5217781622975501 1.153757147573504... │1.212123│ │1347 │3 │0 │POLYGON ((1.7425304074633392 1.895950902917607... │2.046083│ │2011 │4 │0 │POLYGON ((1.7425304074633392 1.895950902917607... │1.969356│ │2707 │5 │0 │POLYGON ((1.2090226182277728 1.025737783544312... │1.094197│ │ ... │...│... │... │... │ │57481│69 │1652 │POLYGON ((-3.5 -2, -3.5 8, 3.5 8, 3.5 -2, -3.5... │0.015559│ │57482│69 │1653 │POLYGON ((-3.5 -2, -3.5 8, 3.5 8, 3.5 -2, -3.5... │0.015559│ │57483│69 │1654 │POLYGON ((-3.5 -2, -3.5 8, 3.5 8, 3.5 -2, -3.5... │0.015559│ │57484│69 │1655 │POLYGON ((-3.5 -2, -3.5 8, 3.5 8, 3.5 -2, -3.5... │0.015559│ │57485│69 │1656 │POLYGON ((-3.5 -2, -3.5 8, 3.5 8, 3.5 -2, -3.5... │0.015559│ 63110 rows × 4 columns As both have the columns ‘id’ and ‘frame’ in common, these can be used to merge the dataframes. See pandas.merge() and pandas.DataFrame.merge() for more information on how DataFrames can be merged. data_with_voronoi_cells = traj.data.merge(intersecting, on=[ID_COL, FRAME_COL]) │ │id │frame│ x │ y │ point │ polygon │density │intersection │ │ 0 │1 │0 │2.1569│2.6590 │POINT (2.1569 2.659) │POLYGON ((1.7245255133043016 2.849134813766198... │1.323393│POLYGON EMPTY│ │ 1 │1 │1 │2.1498│2.6653 │POINT (2.1498 2.6653) │POLYGON ((1.7360824366848313 2.862174491143732... │1.334515│POLYGON EMPTY│ │ 2 │1 │2 │2.1532│2.6705 │POINT (2.1532 2.6705) │POLYGON ((1.8536919501780398 2.459236372880033... 
│1.328666│POLYGON EMPTY│ │ 3 │1 │3 │2.1557│2.6496 │POINT (2.1557 2.6496) │POLYGON ((1.7354793571480895 2.853885114270424... │1.344024│POLYGON EMPTY│ │ 4 │1 │4 │2.1583│2.6551 │POINT (2.1583 2.6551) │POLYGON ((1.7364457387060401 2.854024613400598... │1.349853│POLYGON EMPTY│ │ ... │...│... │... │... │... │... │... │... │ │63105│75 │492 │0.1539│-1.5997│POINT (0.1539 -1.5997)│POLYGON ((3.5 -1.55952287010524, 3.5 -2, -3.18... │0.198106│POLYGON EMPTY│ │63106│75 │493 │0.1792│-1.6365│POINT (0.1792 -1.6365)│POLYGON ((1.4972019470076683 -0.88399685153128... │0.204326│POLYGON EMPTY│ │63107│75 │494 │0.2262│-1.7040│POINT (0.2262 -1.704) │POLYGON ((1.5304646340900363 -0.88904050056830... │0.216082│POLYGON EMPTY│ │63108│75 │495 │0.2575│-1.7516│POINT (0.2575 -1.7516)│POLYGON ((1.5619810613615523 -0.89649921050952... │0.227548│POLYGON EMPTY│ │63109│75 │496 │0.2885│-1.8041│POINT (0.2885 -1.8041)│POLYGON ((1.5948468195234426 -0.90689232729486... │0.241937│POLYGON EMPTY│ 63110 rows × 8 columns from pedpy import SPEED_COL data_with_voronoi_cells_speed = data_with_voronoi_cells.merge( individual_speed[[ID_COL, FRAME_COL, SPEED_COL]], on=[ID_COL, FRAME_COL] │ │id │frame│ x │ y │ point │ polygon │density │intersection │ speed │ │ 0 │1 │0 │2.1569│2.6590 │POINT (2.1569 2.659) │POLYGON ((1.7245255133043016 2.849134813766198... │1.323393│POLYGON EMPTY│0.055227│ │ 1 │1 │1 │2.1498│2.6653 │POINT (2.1498 2.6653) │POLYGON ((1.7360824366848313 2.862174491143732... │1.334515│POLYGON EMPTY│0.142829│ │ 2 │1 │2 │2.1532│2.6705 │POINT (2.1532 2.6705) │POLYGON ((1.8536919501780398 2.459236372880033... │1.328666│POLYGON EMPTY│0.191291│ │ 3 │1 │3 │2.1557│2.6496 │POINT (2.1557 2.6496) │POLYGON ((1.7354793571480895 2.853885114270424... │1.344024│POLYGON EMPTY│0.215515│ │ 4 │1 │4 │2.1583│2.6551 │POINT (2.1583 2.6551) │POLYGON ((1.7364457387060401 2.854024613400598... │1.349853│POLYGON EMPTY│0.277554│ │ ... │...│... │... │... │... │... │... │... │... 
│ │63105│75 │492 │0.1539│-1.5997│POINT (0.1539 -1.5997)│POLYGON ((3.5 -1.55952287010524, 3.5 -2, -3.18... │0.198106│POLYGON EMPTY│0.895094│
│63106│75 │493 │0.1792│-1.6365│POINT (0.1792 -1.6365)│POLYGON ((1.4972019470076683 -0.88399685153128... │0.204326│POLYGON EMPTY│0.947931│
│63107│75 │494 │0.2262│-1.7040│POINT (0.2262 -1.704) │POLYGON ((1.5304646340900363 -0.88904050056830... │0.216082│POLYGON EMPTY│1.192062│
│63108│75 │495 │0.2575│-1.7516│POINT (0.2575 -1.7516)│POLYGON ((1.5619810613615523 -0.89649921050952... │0.227548│POLYGON EMPTY│1.306230│
│63109│75 │496 │0.2885│-1.8041│POINT (0.2885 -1.8041)│POLYGON ((1.5948468195234426 -0.90689232729486... │0.241937│POLYGON EMPTY│1.426141│
63110 rows × 9 columns

Save in files#
To preserve the results, they need to be saved to disk. This can be done with the built-in functions from Pandas.

Create directories to store the results#
First we create a directory where we want to save the results. This step is optional!

import pathlib

results_directory = pathlib.Path("results_introduction")
results_directory.mkdir(parents=True, exist_ok=True)

Save Pandas DataFrame (result from everything but profiles) as csv#
Now, a DataFrame can be saved as csv with:

import csv
from pedpy import (

with open(
    results_directory / "individual_result.csv", "w"
) as individual_output_file:
    individual_output_file.write(f"#framerate: {traj.frame_rate}\n\n")

Save numpy arrays (result from profiles) as txt#
The profiles are returned as NumPy arrays, which also provide a built-in save function that allows saving the arrays in txt format:

results_directory_density = results_directory / "profiles/density"
results_directory_speed = results_directory / "profiles/speed"
for i in range(len(range(min_frame_profiles, min_frame_profiles + 10))):
    frame = min_frame_profiles + i
    results_directory_density / f"density_frame_{frame:05d}.txt",
    results_directory_speed / f"speed_frame_{frame:05d}.txt",
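The per-frame saving step can be completed with numpy's np.savetxt and read back with np.loadtxt. A minimal sketch with a placeholder array (the directory layout follows the guide, but the array contents are made up for illustration):

```python
import pathlib
import numpy as np

results_directory = pathlib.Path("results_introduction")
density_dir = results_directory / "profiles/density"
density_dir.mkdir(parents=True, exist_ok=True)

# Placeholder array standing in for one frame of the density profile.
density_profile = np.random.default_rng(0).random((10, 10))

frame = 250
out_file = density_dir / f"density_frame_{frame:05d}.txt"
np.savetxt(out_file, density_profile)   # one text file per frame

restored = np.loadtxt(out_file)         # round-trips back to the same array
```

The zero-padded frame number in the file name keeps the files in the correct order when listed alphabetically.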
Time of Flight Calculator – Projectile Motion

With this time of flight calculator, you can easily calculate the time the projectile remains in the air. To solve this particular case of projectile motion, all you need to enter are the initial velocity, angle, and height. Read on to learn more about the time of flight equations, or simply play with the calculator instead!

Time of flight equation
If you throw a ball or shoot an arrow in the air, it will follow a parabolic path before hitting the ground. Visit the projectile motion calculator if you'd like to learn about it in more detail. Below we'll explain how to work out how long this motion lasts. To define the time of flight equation, we should split the formulas into two cases:

1. Launching the projectile from the ground (initial height = 0).

Let's start with the equation of motion:
$y = V_{0}\,t\sin(\alpha) - \frac{1}{2}gt^2,$
where:
• $V_0$ – Initial velocity;
• $t$ – Time since start of flight;
• $\alpha$ – Angle of the initial flight path; and
• $g$ – Acceleration due to gravity.

The flight ends when the projectile hits the ground (y = 0):
$0 = V_0\,t\sin(\alpha) - \frac{1}{2}gt^2$
Factoring out $t$ gives $t\,(V_0\sin(\alpha) - \frac{1}{2}gt) = 0$; discarding the trivial root $t = 0$, the remaining root is the time of flight – the total time of the whole journey:
$t = \frac{2V_0\,\sin(\alpha)}{g}$
Note that the air resistance is neglected.

Which angle of launch causes the projectile to spend the most time in the air? Let's have a look at the final time of flight equation again: the higher the value of the sine, the longer the time in the air. The maximum value of sine occurs at an angle of 90°. So if you throw an object straight upwards, it will keep moving for the longest time. Additionally, if the velocity is equal to 0, then it's the case of a free fall. You can explore the concept of free fall further in our free fall calculator.

2. Launching the projectile from some height (initial height > 0).
However, if the projectile is thrown from some elevation $h$, the formula is a bit more complicated:
$\footnotesize t = \frac{V_0\sin(\alpha) + \sqrt{(V_0 \sin(\alpha))^2 + 2gh}}{g}$
You can also estimate the path the projectile will follow using the trajectory calculator. Not only that, but there's also the projectile range calculator, allowing you to see how far it would go!

Time of flight exemplary calculations
Let's use this time of flight calculator to find out how long it takes for a pebble thrown from the edge of the Grand Canyon to hit the ground.
1. Type in the velocity value. Assume it's 16 ft/s.
2. Enter the angle – for example, 20°. If you choose angle = 0°, it would be an example of horizontal projectile motion.
3. Finally, type in the initial height. Let's take the deepest point of the Canyon. It's a 6,000 ft difference – over a mile! Type this value into our tool.
4. The time of flight calculator will show you the total time the pebble remains in the air. It's 19.5 seconds for our example.
All the calculations are made without taking air resistance into account. In reality, the trajectory would differ from the ideal parabola, and the range would be shorter than the result obtained from the calculations.

How can I calculate time of flight?
You may calculate the time of flight of a projectile using the formula:
t = 2 × V₀ × sin(α) / g
where:
• t – Time of flight;
• V₀ – Initial velocity;
• α – Angle of launch; and
• g – Gravitational acceleration.

What is the time of flight equation?
You can measure the time of flight in two instances: first, when the object is launched from the ground, i.e., the height equals zero; second, when the object is launched from a certain height.
• Height, h = 0: t = 2 × V₀ × sin(α) / g
• h > 0: t = [V₀ × sin(α) + √((V₀ × sin(α))² + 2 × g × h)] / g
where:
• t – Time of flight;
• V₀ – Initial velocity;
• α – Angle of launch;
• h – Height; and
• g – Gravitational acceleration.

What is time of flight in physics?
A projectile launched into the air stays in the air for some time before coming to the ground. The time it remains in the air is called the time of flight. The object could be in the form of light or sound waves as well. Interestingly, the time of flight principle also helps determine the distance between objects in sensitive environments. What is a time of flight sensor? The time of flight (ToF) sensor measures the distance between two points using the ToF principle using light or sound. A signal, usually light photons, is sent from the sensor's emitter to the target and then received back at the sensor receiver. The time taken by the signal helps measure the distance. Typical applications include robot navigation, obstacle detection, and vehicle monitoring.
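Both formulas above fit into one short function; the sketch below (the helper name is ours, not the site's) reproduces the Grand Canyon example, using g in ft/s² to match the ft-based inputs:

```python
import math

def time_of_flight(v0, angle_deg, h=0.0, g=9.81):
    """Time of flight of a projectile launched with speed v0 at angle_deg
    from height h, neglecting air resistance."""
    vy = v0 * math.sin(math.radians(angle_deg))     # V0 * sin(alpha)
    if h == 0.0:
        return 2.0 * vy / g                         # t = 2 * V0 * sin(a) / g
    return (vy + math.sqrt(vy ** 2 + 2.0 * g * h)) / g

# The article's example: 16 ft/s at 20 degrees from a 6,000 ft drop.
# Everything is in feet, so g = 32.17 ft/s^2 (assumed value) is used.
t = time_of_flight(16.0, 20.0, h=6000.0, g=32.17)   # about 19.5 s
```

The h = 0 branch is just the h > 0 formula with the square root collapsing to V₀ sin(α), so both cases agree at the boundary.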
How do you find an equation of the line containing the given pair of points (-5,0) and (0,9)? | HIX Tutor

How do you find an equation of the line containing the given pair of points (-5,0) and (0,9)?

Answer 1
I found: $9x - 5y = -45$
I would try using the two-point form of the line:
$\frac{x - x_2}{x_2 - x_1} = \frac{y - y_2}{y_2 - y_1}$
where you use the coordinates of your points $A(-5,0)$ and $B(0,9)$ as:
$\frac{x - 0}{0 - (-5)} = \frac{y - 9}{9 - 0}$
rearranging: $9x = 5y - 45$
giving: $9x - 5y = -45$

Answer 2
$y = \frac{9}{5}x + 9$
You are searching for the equation of the straight line (= linear equation) which contains $A(-5,0)$ and $B(0,9)$.
A linear equation has the form $y = ax + b$, and here we will try to find the numbers $a$ and $b$.

Find $a$: The number $a$ represents the slope of the line.
$a = \frac{y_b - y_a}{x_b - x_a} = \frac{\Delta_y}{\Delta_x}$
with $x_a$ representing the abscissa of the point $A$ and $y_a$ the ordinate of the point $A$. Here, $a = \frac{9 - 0}{0 - (-5)} = \frac{9}{5}$.
Now our equation is: $y = \frac{9}{5}x + b$

Find $b$: Take one of the given points, replace $x$ and $y$ by the coordinates of this point, and solve for $b$. We are lucky to have one point with abscissa $0$, which makes the resolution easier:
$y_b = \frac{9}{5}x_b + b$
$9 = \frac{9}{5} \cdot 0 + b$
$b = 9$
Therefore, we have the equation of the line!
$y = \frac{9}{5}x + 9$

Answer 3
To find the equation of the line containing the given pair of points (-5,0) and (0,9), you can use the point-slope form of a linear equation.
1. Calculate the slope using the formula: $m = \frac{y_2 - y_1}{x_2 - x_1}$.
2. Choose one of the points and plug the slope into the point-slope form equation: $y - y_1 = m(x - x_1)$.
Let's calculate the slope: $m = \frac{9 - 0}{0 - (-5)} = \frac{9}{5}$.
Now, we'll choose one of the points, let's say (-5,0), and plug the values into the point-slope form equation: $y - 0 = \frac{9}{5}(x - (-5))$.
$y = \frac{9}{5}(x + 5)$.
Distributing gives the slope-intercept form $y = \frac{9}{5}x + 9$, which can be rearranged into the standard form $9x - 5y = -45$ if needed.
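The slope and intercept computed in the answers above can be checked with a short script (the helper name is ours, for illustration):

```python
from fractions import Fraction

def line_through(p1, p2):
    """Slope-intercept form y = a*x + b of the line through two points
    (assumes the line is not vertical)."""
    (x1, y1), (x2, y2) = p1, p2
    a = Fraction(y2 - y1, x2 - x1)   # slope = delta y / delta x
    b = y1 - a * x1                  # from y1 = a*x1 + b
    return a, b

a, b = line_through((-5, 0), (0, 9))  # a = 9/5, b = 9
# Both points also satisfy the standard form 9x - 5y = -45.
```

Using Fraction keeps the slope exact, so the 9/5 from the answers comes out without floating-point rounding.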
Factor Theorem Archives - A Plus Topper
How Do You Use The Factor Theorem
Factor Theorem
Theorem: If p(x) is a polynomial of degree n ≥ 1 and a is any real number, then (i) x – a is a factor of p(x) if p(a) = 0, and (ii) p(a) = 0 if x – a is a factor of p(x).
Proof: By the Remainder Theorem, p(x) = (x – a) q(x) + p(a). (i) I…
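The test the theorem describes — x – a divides p(x) exactly when p(a) = 0 — can be sketched in Python (an illustrative example, not from the original page):

```python
def eval_poly(coeffs, x):
    """Evaluate a polynomial with coefficients [c_n, ..., c_1, c_0] at x
    using Horner's rule."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def is_factor(coeffs, a):
    """Factor Theorem: x - a is a factor of p(x) exactly when p(a) = 0."""
    return eval_poly(coeffs, a) == 0

# p(x) = x^2 - 5x + 6 = (x - 2)(x - 3), so 2 and 3 are roots but 1 is not.
p = [1, -5, 6]
```

This works because, by the Remainder Theorem, eval_poly(coeffs, a) is exactly the remainder of dividing p(x) by x – a.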
5.9: Exponential Fluid Flow
A Leaking Container
The scenario shown below in Figure 5.9.1 is a cylinder that is initially filled with a liquid. There is an opening toward the bottom of the container which is initially closed with a valve. When the valve is opened at some later time, the liquid becomes exposed to the atmosphere and starts to leak out of the container. The rate at which the fluid flows, the current, depends on the total head gradient between the bottom and the top of the container. We will assume that there is an overall resistance to the flow. Once the valve is open, the pressure difference between locations 1 and 2 is zero since both are at atmospheric pressure. The gravitational potential energy-density gradient is not zero, on the other hand, and is proportional to the height of the fluid inside the container. However, this height decreases as the liquid flows out of the cylinder, resulting in a decrease of current. Thus, we have an example of a fluid flow system where the flow is no longer steady-state. In fact, the current, which is the rate of change of volume, is proportional to the amount of volume present. This is exactly how we described a system which behaves exponentially in Section 5.8.

Figure 5.9.1: Non Steady-State Fluid System: leaking container

When the valve is open to the atmosphere, the pressure difference between points 1 and 2 is zero, but there is a change in gravitational potential energy-density.
Applying the Bernoulli equation between these two points in the direction of current we get: \[-\rho gh=-IR\label{cylinder-leak}\] This equation may seem very familiar from examples in previous sections, but the current is no longer a constant here, since the height, \(h\), depends on time. At t=0 the height is \(h_0\), resulting in an initial current \(I_0\) of: \[I_0=\dfrac{\rho gh_0}{R}\label{I0}\] Current is the rate of change of volume. In the Bernoulli equation the current must be a positive quantity in order for the \("-IR"\) term to be negative and represent a loss of mechanical energy to thermal energy. In this case the volume is decreasing as a function of time, so the quantity \(\dfrac{dV}{dt}\) is negative, and a minus sign is needed to make the current positive: \[I=-\dfrac{dV}{dt}\label{It1}\] We can represent height in terms of volume by using \(V=Ah\), where \(A\) is the area of the cylinder as shown in Figure 5.9.1. Plugging Equation \ref{It1} into Equation \ref{cylinder-leak} and solving for the rate of change of volume we arrive at: \[\dfrac{dV}{dt}=-\dfrac{\rho g}{AR}V\label{Vt}\] If you refer back to Section 5.8 you will see that the equation above has the exact form as the equation in the derivation of exponential decay, where \(\lambda=\dfrac{\rho g}{AR}\). Using the result from that derivation we find that the volume changes with time as: \[V(t)=V_0\exp\Big({-\dfrac{\rho g}{AR}t}\Big),\label{ht}\] where \(\exp(x)\equiv e^x\) and \(V_0=Ah_0\). Using Equation \ref{It1} we can solve for the current by taking the derivative of Equation \ref{ht} and multiplying by a minus sign: \[I(t)=\dfrac{\rho gh_0}{R}\exp\Big({-\dfrac{\rho g}{AR}t}\Big)=I_0\exp\Big({-\dfrac{\rho g}{AR}t}\Big)\] We see that we recover exactly the initial current we found in Equation \ref{I0}. The current and the volume both decay exponentially to zero.

Leaking Between Two Containers

Let us look at another similar but slightly modified situation shown below in Figure 5.9.2.
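To make the decay concrete, here is a small numerical check of the result above. All parameter values are made up for illustration (the text does not specify any); the script integrates \(dV/dt=-(\rho g/AR)V\) with a simple Euler scheme and compares against the closed-form solution.

```python
import math

# Illustrative (assumed) parameters for the leaking cylinder.
rho, g = 1000.0, 9.8      # density (kg/m^3), gravitational acceleration (m/s^2)
A, R = 0.05, 2.0e6        # cross-section area (m^2), flow resistance (J*s/m^6)
h0 = 1.0                  # initial height (m)
V0 = A * h0               # initial volume, using V = A*h
lam = rho * g / (A * R)   # decay rate lambda = rho*g/(A*R)

def V_analytic(t):
    """Closed-form result from the text: V(t) = V0 * exp(-rho*g*t/(A*R))."""
    return V0 * math.exp(-lam * t)

# Forward-Euler integration of dV/dt = -lam * V as a numerical check.
dt, V, t = 0.01, V0, 0.0
while t < 3.0 / lam:            # integrate for three time constants
    V += dt * (-lam * V)
    t += dt

print(V_analytic(t), V)         # the two values should agree closely
```

The numerical and analytic volumes match to well under a percent, confirming the exponential form.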
Here there are two cylinders connected by a thin pipe. We start with all the fluid in the left cylinder, held in place by a closed valve. Once the valve is open the liquid is allowed to flow. The current is determined by the pressure gradient between points 1 and 2, which is proportional to the height difference of the liquid in the two containers. As more fluid flows to the right cylinder the pressure difference decreases, and so does the current. Eventually the system approaches equilibrium as the water levels in the cylinders approach the same value, resulting in equal pressures and zero current.

Figure 5.9.2: Non Steady-State Fluid System: two cylinder system

The rate of change of volume is proportional to the volume that is present, which means that the behavior will be exponential once again. You can arrive at these conclusions without doing all the math that we previously showed. Here we see an example where the quantity decaying exponentially does not approach zero. The volume in the right cylinder starts at zero and approaches half the total volume exponentially. In the left cylinder the volume starts at the total volume and approaches half the total volume, again exponentially and with the same rate. The volume in both cylinders displays exponential decay, even though the volume is increasing in one cylinder and decreasing in the other. The plots of volume as a function of time are shown in Figure 5.9.3 below. Exponential decay does not mean that the value decreases from some initial value to zero or a smaller value. "Exponential decay" implies that the parameter approaches another value in an exponential way. That value can be larger than the initial value.

Figure 5.9.3: Volume as a function of time: two cylinder system

You can follow these basic procedures to conceptually arrive at the exponential change of the system without going through all the mathematics: 1.
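The two-container behavior can be checked the same way. This sketch (again with hypothetical parameter values) integrates \(dV_R/dt=(\rho g/AR)(V_t-2V_R)\), the rate equation implied by the discussion above, and confirms that the right-hand volume approaches half the total volume exponentially.

```python
import math

# Hypothetical parameters for two identical connected cylinders.
rho, g, A, R = 1000.0, 9.8, 0.05, 2.0e6
Vt = 0.05                      # total volume (m^3); all in the left cylinder at t=0
k = rho * g / (A * R)          # rate coefficient

# dV_R/dt = k*(V_L - V_R) = k*(Vt - 2*V_R): the rate of change is proportional
# to how far V_R still is from the equilibrium value Vt/2.
dt, VR, t = 0.01, 0.0, 0.0
while t < 5.0 / (2 * k):       # five time constants (tau = 1/(2k))
    VR += dt * k * (Vt - 2 * VR)
    t += dt

# Exponential form implied by the text: V_R decays toward Vt/2.
VR_analytic = Vt / 2 * (1 - math.exp(-2 * k * t))
print(VR, VR_analytic, Vt / 2)
```

Both volumes end up close to \(V_t/2\), as the text argues, without ever overshooting.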
Establish that the system will change exponentially by arguing that the rate of change of some parameter is proportional to the value of that parameter. 2. Determine the value of the parameter at the initial time. 3. Determine the value of the parameter as the system approaches steady-state or equilibrium. 4. Connect the initial and final values with an exponential function using any information you have about the decay rate, such as the time constant or half-life.

A Pump Moving Liquid Between Two Containers

Imagine another example depicted below in Figure 5.9.4. There are two cylinders that initially contain the same amount of water. Thus, the heights of the water levels in each cylinder are equal. This means that the pressures at points marked 1 and 2 are equal as well. The pump shown in the figure is initially turned off. Since there is no other source of potential gradient, the change in total head between points 1 and 2 is zero, resulting in zero current at t=0. Suddenly, the pump is turned on, pumping fluid to the right, which results in more fluid accumulating in the right cylinder and fluid draining from the left cylinder. There is now a pressure gradient between points 1 and 2, resulting in a current which is proportional to this pressure difference. However, as the relative water levels continue to change, so does the pressure gradient, resulting in a changing non steady-state current.

Figure 5.9.4: Non Steady-State Fluid System: filling up a container with a pump

We can use reasoning similar to that described above to figure out how the current and the volume change as a function of time. But we do require slightly more mathematical thinking here to find out the initial value of the current, the final height and volume in each cylinder, and the decay rate. We will go through this exercise here. Initially, at t=0, the pump is turned on and water starts to flow to the right.
We can assume that the instant the pump is turned on the water levels are still equal, resulting in \(\Delta(\text{total head})=0\) between points 1 and 2. The complete Bernoulli equation at t=0 becomes: \[0=\tilde{E}_{\text{pump}}-I_0R\] where we defined \(\tilde{E}_{\text{pump}}\equiv\dfrac{E_{\text{pump}}}{V}\) for simplicity. Solving for the current we find: \[I_0=\dfrac{\tilde{E}_{\text{pump}}}{R}\label{I-t0}\] At some time later, as the pump continues to work at constant strength, the two heights are no longer equal, as shown in Figure 5.9.4 at \(t>0\). Assuming both cylinders are open to the atmosphere, the pressures at points 1 and 2 are given by: \[\text{point 1: }P_1=P_{atm}+\rho g h_L\] \[\text{point 2: }P_2=P_{atm}+\rho g h_R\] Subtracting the two equations we can find the pressure difference between the two points and write down the Bernoulli equation between points 1 and 2: \[P_2-P_1=\rho g (h_R-h_L)=\tilde{E}_{\text{pump}}-IR\] Solving for \(I\) and expressing heights in terms of volumes we get: \[I=\dfrac{\tilde{E}_{\text{pump}}}{R}-\dfrac{\rho g}{AR}(V_R-V_L)\label{It}\] The current depends on time since the volume difference, \(V_R-V_L\), changes with time. The current decreases from its initial value at t=0 in Equation \ref{I-t0}, since the volume on the right is greater than the volume on the left, which means that the second term on the right-hand side of Equation \ref{It} is negative. As the volume difference increases, the current continues to decrease, until the pressure difference is too large for the pump to be able to continue pumping the water upward. Mathematically, we can see that at \(t\gg 0\) the current in Equation \ref{It} goes to zero, giving: \[\tilde{E}_{\text{pump}}=\dfrac{\rho g}{A}(V_R-V_L)\label{equil}\] We would like to find a general expression for current as a function of time, so we can describe the behavior of the current at all three times we analyzed (t=0, intermediate time t, and \(t\gg 0\)) in one equation. This will require a little calculus, as it did in the general derivation in Section 5.8.
First, let us represent Equation \ref{It} in a form which is similar to Equation \ref{Vt}. The total volume, which is the sum of the volumes in the right and left cylinders, \(V_t=V_R+V_L\), is conserved. No water flows in from the outside or leaks out anywhere; the volume only redistributes itself from the left to the right cylinder. Defining the current as the rate of change of volume in the right cylinder, Equation \ref{It} for the current becomes: \[\dfrac{dV_R}{dt}=\dfrac{\tilde{E}_{\text{pump}}}{R}-\dfrac{\rho g}{AR}(V_R-V_L)\] Since the volumes in both the right and left cylinders are changing with time, we want to express everything in terms of just one of the changing volumes. Let us choose the volume in the right cylinder, \(V_R\). Using the definition of total volume and substituting \(V_L=V_t-V_R\) for the volume in the left cylinder, the above equation becomes: \[\dfrac{dV_R}{dt}=\dfrac{\tilde{E}_{\text{pump}}}{R}-\dfrac{\rho g}{AR}(2V_R-V_t)\label{Vr-eq1}\] Comparing the equation above with Equation \ref{Vt}, a similar structure appears up to some additive constants. The rate of change of volume is proportional to the volume itself; thus, we expect an exponential result for volume as a function of time. The exact mathematics of solving for the volume is cumbersome, so we leave it as an aside derivation below. You can move directly to the result in Equation \ref{Vr}, since our main focus should be on interpreting the physical behavior rather than the mathematics of obtaining the result. The derivation to follow is very similar to the one we did at the beginning of this section, except with the addition of more constants.
To simplify the expression in Equation \ref{Vr-eq1}, let us separate the terms on the right-hand side that depend on \(V_R\) from terms that do not, and define some variables:

\(a\equiv\dfrac{\tilde{E}_{\text{pump}}}{R}+\dfrac{\rho g}{AR}V_t\)

\(b\equiv\dfrac{2\rho g}{AR}\)

so that Equation \ref{Vr-eq1} reads \(\dfrac{dV_R}{dt}=a-bV_R\). Our next step is to separate the volume and time variables and integrate each side from the initial value to some arbitrary volume \(V^{\prime}_R\) and time \(t^{\prime}\). The initial volume is \(\dfrac{V_t}{2}\) since the volumes are equal on both sides.

\(\int_{V_t/2}^{V^{\prime}_R}\dfrac{dV_R}{a-bV_R}=\int_0^{t^{\prime}} dt\)

Taking the integral of both sides and dropping the "prime", we get:

\(-\dfrac{1}{b}\ln\Big(\dfrac{a-bV_R}{a-b\frac{V_t}{2}}\Big)=t\)

Taking the exponential of each side and using the relationship \(e^{\ln(x)}=x\), and then solving for \(V_R\), we get:

\(V_R=\dfrac{a}{b}-\Big(\dfrac{a}{b}-\dfrac{V_t}{2}\Big)e^{-bt}\)

Plugging back the expressions for \(a\) and \(b\) and rearranging, we finally arrive at:

\(V_R(t)=\dfrac{V_t}{2}+\dfrac{A\tilde{E}_{\text{pump}}}{2\rho g}\Big[1-\exp{\Big(-\dfrac{2\rho g}{AR}t\Big)}\Big]\).

The derivation above gives us the following result for volume as a function of time: \[V_R(t)=\dfrac{V_t}{2}+\dfrac{A\tilde{E}_{\text{pump}}}{2\rho g}\Big[1-\exp{\Big(-\dfrac{2\rho g}{AR}t\Big)}\Big]\label{Vr}\] We can check this result by using what we know about what happens at \(t=0\) and \(t\gg 0\). Initially, each side has half the total volume. Plugging \(t=0\) into Equation \ref{Vr} we get exactly this result, \(V_R(0)=\dfrac{V_t}{2}\). When \(t\gg 0\), we get: \[V_R(t\gg 0)=\dfrac{V_t}{2}+\dfrac{A\tilde{E}_{\text{pump}}}{2\rho g},\] which is the result you can obtain from Equation \ref{equil}, which states the relationship between the pump strength and height as the system approaches an equilibrium steady state.
To solve for the current we can take the derivative of the above equation and arrive at: \[I(t)=\dfrac{dV_R}{dt}=\dfrac{\tilde{E}_{\text{pump}}}{R}\exp{\Big(-\dfrac{2\rho g}{AR}t\Big)}\label{Ipump}\] The current starts at its initial maximum value of \(\dfrac{\tilde{E}_{\text{pump}}}{R}\) and decays to zero exponentially. Using Equation \ref{Vr} we can plot the volume as a function of time for the right cylinder as it fills up, and for the left cylinder as the water depletes. To find the equation for \(V_L\) we just need to use the relationship \(V_L=V_t-V_R\). Figure 5.9.5 below displays the change of volume as a function of time. In both cylinders the volume starts out at half the total volume. In the right cylinder the volume increases to its maximum value, while in the left cylinder the volume decreases by the same amount, always keeping the combined volume in the two cylinders constant.

Figure 5.9.5: Non Steady-State Fluid System.

Also, since the two plots represent changes in the same system, both cylinders will have the same decay rate, which can be obtained from the time constant in Equation \ref{Vr}: \[\tau=\dfrac{AR}{2\rho g}\label{pump-tau}\] Recall that the larger the time constant, the slower the rate of decay. It is reasonable that the time constant is proportional to \(A\) and \(R\). A larger area means it takes longer for the cylinder to fill up to the maximum height. A larger resistance slows down the current, and thus increases the time it takes to reach the equilibrium value. The time constant is inversely proportional to density because, with the same pump, the maximum height at equilibrium will be reduced for a higher density, since the gravitational energy-density increases with density. All three scenarios described in this section decay at an analogous rate.
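As a sanity check on the closed-form result for \(V_R(t)\), the short script below (with hypothetical parameter values) integrates the pump rate equation numerically and compares against the analytic expression.

```python
import math

# Assumed illustrative parameters for the pump-driven two-cylinder system.
rho, g, A, R = 1000.0, 9.8, 0.05, 2.0e6
E_pump = 2.0e3                 # pump energy density, J/m^3 (hypothetical)
Vt = 0.05                      # total volume, m^3
b = 2 * rho * g / (A * R)      # decay rate; the time constant is tau = 1/b

def VR_analytic(t):
    # V_R(t) = Vt/2 + (A*E_pump)/(2*rho*g) * (1 - exp(-b*t)) from the text.
    return Vt / 2 + A * E_pump / (2 * rho * g) * (1 - math.exp(-b * t))

# Numerically integrate dV_R/dt = E_pump/R - (rho*g/(A*R))*(2*V_R - Vt).
dt, VR, t = 0.01, Vt / 2, 0.0
while t < 5.0 / b:             # five time constants
    VR += dt * (E_pump / R - rho * g / (A * R) * (2 * VR - Vt))
    t += dt

print(VR, VR_analytic(t))
```

The integration starts at \(V_t/2\) and approaches \(V_t/2 + A\tilde{E}_{\text{pump}}/(2\rho g)\), matching the limits discussed above.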
Example \(\PageIndex{2}\) Two cylinders below have different cross-sectional areas, such that the area of the cylinder on the right is nine times greater than the area of the cylinder on the left. Initially the water levels are at 2m in both cylinders and the pump is turned off, such that \(h_L=h_R=2m\) at \(t=0\) in the figure below. The pump in this system can support a maximum height difference of 10 meters. When the pump is turned on, it pumps water from the right to the left cylinder. The main source of resistance is in the thin connecting pipe, which has a resistance of \(1.6\times 10^4\dfrac{Js}{m^6}\). The density of water is \(1000\dfrac{kg}{m^3}\).

a) Make a plot of the water levels, \(h_R\) and \(h_L\), as a function of time from the moment the pump is turned on until the system reaches equilibrium. Make sure to mark numerical values for initial and final heights. (You do not need to derive any equations here.)

b) Likewise, make a plot of current as a function of time. Make sure to mark numerical values for initial and final currents.

c) What will be the height in the left and the right cylinders after 2 half-lives?

The flow rate \(I\) is related to the rate of change of height, since \(I=\dfrac{dV}{dt}=A\dfrac{dh}{dt}\) because \(V=Ah\). In this system the rate of change of the height is proportional to the height, so the change is exponential, approaching a total height difference of 10m as stated in the problem. The magnitude of the current flowing out of the right standpipe is equal to the current flowing into the left standpipe, except the volume is increasing in the left cylinder and decreasing in the right one. This is accounted for by the minus sign shown here: \(I=A_L\dfrac{dh_L}{dt}=-A_R\dfrac{dh_R}{dt}\) Plugging in \(A_R=9A_L\): \(\dfrac{dh_L}{dt}=-9\dfrac{dh_R}{dt}\) We can rewrite the above equation in terms of changes in height, starting at the initial height and going to a maximum value: \(\Delta h_L=-9\Delta h_R\) Thus, the height in the left cylinder will increase nine times more than the height in the right one will decrease.
Also, once the system reaches equilibrium: \(h_L-h_R=10m\) Using the two previous results and setting \(h_0=2m\) we get: \((h_0+9\Delta h)-(h_0-\Delta h)=10\Delta h=10m\Rightarrow\Delta h=1m\), so \(h_L=2m+9m=11m\) and \(h_R=2m-1m=1m\). The plot for \(h_L\) and \(h_R\) is shown below.

b) When the pump is first turned on, this sets the initial current. Initially, there is no change in head across the system since the water levels are the same in both cylinders. Applying the Bernoulli equation across the thin pipe we get: \(0=\tilde{E}_{\text{pump}}-I_0R\) We can also solve for \(\tilde{E}_{\text{pump}}\), since when the system reaches its equilibrium state the current goes to zero, and the height difference between the two pipes approaches 10m. Applying the Bernoulli equation at equilibrium we get: \(\tilde{E}_{\text{pump}}=\rho g(h_L-h_R)=1000\dfrac{kg}{m^3}\times 10\dfrac{m}{s^2}\times 10m=1.0\times 10^5 \dfrac{J}{m^3}\) Solving for the initial current we find: \(I_0=\dfrac{\tilde{E}_{\text{pump}}}{R}=\dfrac{1.0\times 10^5 \dfrac{J}{m^3}}{1.6\times 10^4\dfrac{Js}{m^6}}=6.25\dfrac{m^3}{s}\) Thus the current starts at its initial value of \(6.25\dfrac{m^3}{s}\) and approaches zero exponentially as shown below.

c) After one half-life each height will be halfway between its initial and final values. The left cylinder starts at 2m and ends at 11m. The height at one half-life is: \(h_L=\dfrac{2m+11m}{2}=6.5m\) For the right cylinder, which starts at 2m and ends at 1m, after one half-life it will be at \(2m-0.5m=1.5m\). After another half-life the height for the left cylinder has to be halfway between 6.5m and 11m: \(h_L=\dfrac{6.5m+11m}{2}=8.75m\) For the right cylinder the height is halfway between 1.5m and 1m, which is: \(h_R=\dfrac{1.5m+1m}{2}=1.25m\) We saw two specific examples above of fluid flow exhibiting exponential change behavior. In one situation a pump was used to store water in one cylinder, and in another the stored water flowed from one cylinder to another. We will now introduce analogous situations for electric charge flow and make connections to these two fluid examples.
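The half-life arithmetic in part (c) is easy to verify in a few lines: each half-life moves a height halfway from its current value toward its equilibrium value.

```python
# Reproducing the half-life arithmetic from part (c): each half-life moves the
# height halfway from its current value to its final (equilibrium) value.
def after_half_lives(h_initial, h_final, n):
    h = h_initial
    for _ in range(n):
        h = (h + h_final) / 2
    return h

# Left cylinder: starts at 2 m, approaches 11 m.
print(after_half_lives(2.0, 11.0, 1))   # 6.5
print(after_half_lives(2.0, 11.0, 2))   # 8.75
# Right cylinder: starts at 2 m, approaches 1 m.
print(after_half_lives(2.0, 1.0, 1))    # 1.5
print(after_half_lives(2.0, 1.0, 2))    # 1.25
```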
Fundamental Frequency of the Lateral Vibration of a Shaft Carrying One Mass Using the Abaqus Software and Analytical Solution

In this illustration, we calculate the fundamental frequency of the lateral vibration of a shaft carrying one mass using the Abaqus software and an analytical solution. As you can see in the picture, the model consists of a beam and a concentrated mass located exactly at the center of the beam. The beam is one meter in length, and its cross-section is a circle with a radius of 0.01 m. A hinged support is used on both sides of the beam. The purpose of this training is to calculate the fundamental frequency of the lateral vibration of the shaft with the concentrated mass using the Abaqus software, and then to compare the results obtained from Abaqus with those obtained from the analytical solution. The problem is therefore solved twice: once using the analytical solution, and a second time using Abaqus. The natural frequency obtained from the analytical solution is 6.806 Hz. The natural frequency obtained from the Abaqus software is also 6.806 Hz. As you can see, the results of the analytical solution agree exactly with those of the Abaqus software.
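A hedged sketch of the analytical estimate: for a simply supported beam with a concentrated mass at midspan (beam mass neglected), the effective stiffness is \(k = 48EI/L^3\) and \(f = \frac{1}{2\pi}\sqrt{k/m}\). The source does not state the material constants or the mass, so the values of E and m below are purely illustrative assumptions; they will not reproduce the quoted 6.806 Hz unless the actual mass happens to match.

```python
import math

# Assumed material and mass values (the source does not provide them).
E = 2.07e11                    # Young's modulus of steel, Pa (assumed)
L = 1.0                        # beam length, m (from the text)
r = 0.01                       # cross-section radius, m (from the text)
m = 40.0                       # concentrated mass, kg (hypothetical)

I = math.pi * r**4 / 4         # second moment of area of a circular section
k = 48 * E * I / L**3          # midspan stiffness of a simply supported beam
f = math.sqrt(k / m) / (2 * math.pi)   # natural frequency, Hz
print(f"I = {I:.3e} m^4, k = {k:.1f} N/m, f = {f:.2f} Hz")
```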
How to Solve Ratios in Mathematics: A Step-by-Step Tutorial

Mathematically, when we need to compare two or more numbers, we can use ratios: ratios can compare quantitative amounts, or compare parts of larger numbers. Data analysts use ratios as one of their tools. Many people have difficulty solving ratios, which is why they are constantly looking for ways to solve them. In case you are having difficulty, this article explains the concept of ratios and provides a method for solving them easily.

Ratio as a Concept

A person must be familiar with ratios before learning how to solve them. Mathematics can't be mugged up; you must understand the concepts in order to solve ratios. It is important to know that ratios are not only used in academics, but also by data analysts in the form of comparisons to analyze data. Generally, we think of ratios as comparing only two numbers, but they can equally well compare three or four numbers. A second thing you should know about ratios is that they show the relationship between two or more numbers. For example, if you have 15 male and 10 female members in your class, you would write them out as 15:10, which simplifies to 3:2. The colon is read as "is to," so this ratio is read as "3 is to 2." The division form of the ratio is 3/2. Ratios can, therefore, be written in three different ways. The easiest way to work with a ratio is to convert it to a fraction: the first number goes on top, and the second number on the bottom. This is the key to the question of how to solve ratios.
Using ratios to solve problems can be accomplished by setting up proportion equations, in other words, equations involving two ratios.

A Step-by-Step Guide on How to Solve Ratios

Writing the values you want to compare is the first step in solving a ratio. You can write these values in any of the given forms: using a colon, a division sign, or the words "is to." Let's see how the steps work through an example. Say you want to compare your math marks with your physics marks. Your math marks are 90 and your physics marks are 70. First, write the ratio: the ratio of 90 to 70 is 90:70 or 90/70. Reducing the values to their simplest form is the second step. Identify the common factors in the numbers; once the numbers are divided by such a common factor, they will be in their simplest form. In the example of 90:70, after writing it in ratio format, you have to identify the common factor between the terms. In this example, the common factor is 10. Dividing both 90 and 70 by 10, you get 90/10 : 70/10 = 9:7 in simplest form. The ratio is therefore 9:7. Take another example with three numbers: 75 points in biology, 25 points in physics, and 100 points in math. In ratio form, these three numbers are 75 : 25 : 100. We now solve the ratio by following the next step. Determine the common factor among the numbers and divide each number by it. As 25 is the common factor, divide each number by 25: 75/25 : 25/25 : 100/25 = 3 : 1 : 4. Therefore, the answer is 3:1:4. The most important thing to know about a ratio is that it does not change when every term is multiplied or divided by the same number. Using the above numbers, multiply them by 2 to obtain 75 x 2 : 25 x 2 : 100 x 2 = 150 : 50 : 200.
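The simplification steps above (find the greatest common factor, then divide every term by it) can be written as a short function:

```python
from math import gcd
from functools import reduce

def simplify_ratio(*terms):
    """Divide every term of a ratio by the greatest common factor."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

print(simplify_ratio(90, 70))       # (9, 7)
print(simplify_ratio(75, 25, 100))  # (3, 1, 4)
# Multiplying every term by the same number leaves the ratio unchanged:
print(simplify_ratio(150, 50, 200)) # (3, 1, 4)
```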
Also, the ratio remains the same if you divide the results by 50, which gives the same outcome, 3:1:4. For a better understanding, let's look at an example of finding the value of a variable when two ratios are equal. Suppose we have two ratios, 3:2 and 3:x, and these ratios are equal. We can find the value of x as follows: 3/2 = 3/x; cross-multiplying gives 3x = 3 x 2, so x = 6/3 = 2.

Formula for Ratio

Whenever we need to compare two numbers or quantities, the ratio formula can be used. A ratio defines the relationship between two quantities and shows how much of one is contained in the other. When two quantities, say a and b, are represented as a ratio, it takes the form a:b, where a is called the antecedent and b the consequent.

Final words

When we need to compare two or more numbers, ratios let us compare quantities directly or compare parts of larger wholes. Understanding the concept, rather than memorizing procedures, is the key: write the values in ratio form, find the common factor, and divide every term by it to reach the simplest form. Ratios are used not only in academics but also by data analysts as a comparison tool.
A Software for Prediction of Periodic Response of Non-linear Multi Degree of Freedom Rotors Based on Harmonic Balances

It is the purpose of this paper to introduce computer software developed for the analysis of general multi degree of freedom rotor bearing systems with non-linear support elements. A numerical-analytical method for the prediction of the steady state periodic response of large order nonlinear rotor dynamic systems is addressed, based on the harmonic balance technique. By utilizing harmonic balance with appropriate condensation, it is possible to considerably reduce the number of simultaneous nonlinear equations inherent to this approach. Using this method, the set of nonlinear differential equations governing the motion of the rotor system is transformed into a set of nonlinear algebraic equations. A condensation technique is also used to reduce the nonlinear algebraic equations to those related only to the physical coordinates associated with the nonlinear components. The (linear) stability of the equilibrium solutions may be conveniently evaluated using Floquet theory, particularly if the damper force components are evaluated in fixed, rather than rotating, reference frames. The versatility of this technique is illustrated on systems of increasing complexity, with and without damper centralizing springs.

Study type: | Article subject: and related subjects | Received: 1389/4/28 | Published: 1387/4/25
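The core idea of the abstract, harmonic balance turning differential equations into algebraic ones, can be illustrated on a far simpler system than the paper's multi-DOF rotors. The sketch below applies a one-term harmonic balance to a single Duffing oscillator with made-up parameters; it is not the paper's method or software, just the underlying technique in miniature.

```python
# One-harmonic balance for the Duffing oscillator x'' + w0^2*x + beta*x^3 = F*cos(w*t).
# Substituting x(t) = A*cos(w*t) and balancing the cos(w*t) terms gives the
# algebraic equation (w0^2 - w^2)*A + (3/4)*beta*A^3 = F, i.e. the ODE has been
# transformed into a nonlinear algebraic problem, as described in the abstract.
w0, beta, F, w = 1.0, 0.5, 0.2, 1.2   # illustrative (hypothetical) parameters

def residual(A):
    return (w0**2 - w**2) * A + 0.75 * beta * A**3 - F

# Solve residual(A) = 0 by bisection on a bracketing interval [0, 5].
lo, hi = 0.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
A = (lo + hi) / 2
print(A, residual(A))                 # steady-state amplitude and its residual
```

Real harmonic-balance codes keep many harmonics and many degrees of freedom, then condense the resulting algebraic system, but the amplitude equation above is the single-term, single-DOF version of the same step.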
Tutor Daniel K. id:31428 - Mathematics, Physics, Chemistry, Programming, Web Designing, Further Mathematics - UpSkillsTutor «I am a graduate from the department of mathematics and computer science at Benue State University, Makurdi.» Daniel K. id: 31428 Benue State University, Makurdi About me I have a degree in mathematics and computer science from Benue State University, Makurdi. Since 2015, I have been a physics and math instructor at various institutions, secondary schools, and centres. I've been providing private tutoring on a freelance platform for more than a year, and during that time all of my students have given me incredible feedback about my teaching style, because they have all exceeded expectations and set new records. I encourage my students to express their creativity naturally by teaching them to become experts in these fields. Any student I teach will eventually become an expert in the subject or skill and will be very comfortable with it. Any student will be fortunate to have me, regardless of category. Mathematics - details about teaching experience: I create a virtual classroom for my students using tools like Zoom and Google Meet, as well as other programmes, to facilitate effective learning, simple communication, and student feedback. Physics - details about teaching experience: I bring my students into a virtual classroom using modern video conferencing applications such as Google Meet and Zoom, in conjunction with other tools, to facilitate efficient learning, simple communication, and the provision of feedback during our classes. Further Mathematics - details about teaching experience: I create a whiteboard classroom for my students using programs and tools such as Zoom, Google Sheets, and Google Meet, in order to facilitate efficient learning, straightforward communication, and the collection of student feedback.
Hausa - details about teaching experience: During our classes, I make use of modern video conferencing applications such as Google Meet and Zoom, in conjunction with other tools, to facilitate efficient learning, simple communication in Hausa, and the provision of feedback. Tutor's summary Daniel K. is an experienced tutor and has helped many students over 5 years. The tutor lives in Ile-Ife and tutors the following subjects: 1. Mathematics 2. Physics 3. Chemistry 4. Programming 5. Web Designing 6. Further Mathematics You may learn online; the lesson price is 2000 NGN per hour.
Eigendecomposition - (Big Data Analytics and Visualization) - Vocab, Definition, Explanations | Fiveable from class: Big Data Analytics and Visualization Eigendecomposition is a matrix factorization technique that expresses a square matrix in terms of its eigenvalues and eigenvectors. This powerful mathematical method is essential for various applications, including dimensionality reduction techniques, where it helps in simplifying complex data sets while retaining their essential characteristics. By transforming data into a lower-dimensional space, eigendecomposition facilitates efficient processing and analysis, making it a cornerstone in fields like machine learning and statistics. 5 Must Know Facts For Your Next Test 1. Eigendecomposition is applicable only to square matrices, and not all matrices can be decomposed; those that can are termed diagonalizable. 2. The eigenvalues obtained from eigendecomposition indicate the variance captured by each eigenvector when used in dimensionality reduction. 3. In PCA, eigendecomposition helps determine the principal components, which are the new axes that maximize the variance in the data. 4. Computationally, eigendecomposition can be performed using algorithms like the QR algorithm or the Jacobi method, which are important for handling large data sets. 5. Understanding eigendecomposition is crucial for tasks like image compression and feature extraction, where reducing dimensions while preserving information is key. Review Questions • How does eigendecomposition contribute to dimensionality reduction techniques like PCA? □ Eigendecomposition plays a vital role in PCA by allowing us to decompose the covariance matrix of the data into its eigenvalues and eigenvectors. The eigenvectors represent the directions of maximum variance, while the eigenvalues indicate the amount of variance captured by each direction.
By selecting the top eigenvectors corresponding to the largest eigenvalues, we can project the original data onto a lower-dimensional space that retains the most important features. • Discuss the implications of using eigendecomposition on non-diagonalizable matrices in dimensionality reduction. □ Using eigendecomposition on non-diagonalizable matrices can lead to challenges in dimensionality reduction because these matrices cannot be expressed as a product of their eigenvectors and eigenvalues in a straightforward manner. This limitation means alternative approaches may be needed, such as Singular Value Decomposition (SVD), which can handle any matrix and still achieve effective dimensionality reduction. Understanding this distinction is critical for ensuring accurate results in data analysis. • Evaluate the overall significance of eigendecomposition in modern data analytics and visualization. □ Eigendecomposition significantly impacts modern data analytics and visualization by providing foundational methods for simplifying complex datasets. Its ability to reduce dimensionality while preserving variance allows analysts to uncover patterns and relationships that may be obscured in high-dimensional spaces. Furthermore, techniques such as PCA rely on eigendecomposition to enhance computational efficiency and interpretability, making it indispensable for exploratory data analysis, machine learning algorithms, and visual representation of high-dimensional data.
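The decompose-then-project pipeline described in these answers can be sketched in a few lines of NumPy. The data here is synthetic, generated purely for illustration:

```python
import numpy as np

# Synthetic data: 200 samples in 3-D whose variance is concentrated
# along the first axis, so the principal component is easy to spot.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.2])

# Eigendecomposition of the covariance matrix, as in PCA.  eigh is the
# right routine here because a covariance matrix is always symmetric.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# eigh returns eigenvalues in ascending order; reverse to the usual
# PCA convention of largest variance first.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the top principal component: 3-D down to 1-D while
# keeping the direction of maximum variance.
projected = data @ eigvecs[:, :1]
```

For inputs that are not square or not diagonalizable, `np.linalg.svd` covers the SVD alternative the second answer mentions.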
{"url":"https://library.fiveable.me/key-terms/big-data-analytics-and-visualization/eigendecomposition","timestamp":"2024-11-05T09:23:35Z","content_type":"text/html","content_length":"157093","record_id":"<urn:uuid:790d4627-bdce-4e49-b4f1-f126a692794b>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00409.warc.gz"}
Basic calculations Basic calculations are crucial for everyday tasks. They help us manage budgets and make informed decisions. Understanding addition, subtraction, multiplication, and division is fundamental. These operations form the backbone of more advanced math skills. Practicing these skills regularly enhances our numerical fluency. Simple mental math exercises can be beneficial for sharpening our minds and improving cognitive abilities. By mastering basic calculations, we gain confidence in handling mathematical problems and challenges. These skills are applicable in various real-life situations, from grocery shopping to managing personal finances. Embracing the importance of basic calculations is key to building a strong foundation in mathematics. Basic calculations are essential for daily life. Addition, subtraction, multiplication, and division are fundamental operations. Addition combines numbers to find the total. For example, 2 + 3 equals 5. Subtraction takes away one number from another. In 5 – 2, you get 3. Multiplication involves repeated addition. For instance, 2 x 4 equals 8. Division splits a number into equal parts. With 10 ÷ 2, you get 5. Calculations help with budgeting, shopping, and cooking. Whether at home or work, basic math skills are valuable. Practicing math sharpens your mind and boosts problem-solving abilities. Overcoming math challenges builds confidence and improves mental agility. In everyday situations, like splitting a bill at a restaurant, math is significant. Understanding basic calculations empowers you to make informed decisions. It’s never too late to enhance your math skills. Start with simple exercises and gradually progress to more complex problems. Embrace the world of numbers and watch how it enhances your life in various ways. Adding numbers is a fundamental mathematical operation. It involves combining two or more values to find their total sum.
Addition is widely used in daily life for various purposes, such as calculating expenses, measuring lengths or weights, and solving puzzles. To perform addition, align the numbers in columns according to place value and start adding from the rightmost digits. If the sum of two digits is less than 10, write it below the line. If it is 10 or greater, carry over the extra amount to the next column. Repeat this process for each column until all digits are added. Practicing addition helps improve mental math skills and enhances problem-solving abilities. Learning to add efficiently is crucial for building a strong foundation in mathematics. It is essential to grasp the concept of regrouping and carrying over when adding larger numbers. Addition is commutative, meaning the order of numbers does not affect the result; for example, 5 + 3 is the same as 3 + 5, both equal 8. Understanding addition and its properties lays the groundwork for more advanced math concepts. Teaching addition to children can be made fun using visual aids, games, and hands-on activities to engage and motivate them. Building confidence in adding numbers at an early age sets the stage for mathematical success in the future. Practice addition regularly to maintain and sharpen your mental math skills. By mastering addition, you will become more proficient in mathematics and better equipped to tackle complex problems with ease. Embrace the simplicity and beauty of addition as a fundamental operation that underpins various mathematical concepts and real-world applications. Let the joy of adding numbers inspire you to explore the endless possibilities of mathematical expressions and calculations. Basic calculations are essential in everyday life, and division is a fundamental mathematical operation. Division involves splitting a quantity into equal parts. When dividing, the dividend is the total amount being divided. The divisor is the number you are dividing by, indicating the size of each part. 
As you divide, the quotient is the result of the division, representing the equal parts. Division helps in sharing items equally among people, such as pizzas or candies. It is also crucial in determining average values or rates, like miles per gallon. Understanding division is beneficial for budgeting and portion control. For instance, dividing a sum of money equally among friends requires division skills. Division is used in recipes to adjust ingredient quantities. When dividing fractions, you multiply by the reciprocal of the second fraction. Long division is a method used for dividing large numbers. It involves systematically dividing the digits of the dividend by the divisor. Division is the inverse of multiplication, and both operations are interconnected. The division symbol, ÷, is used to represent this mathematical operation. Practice with division helps improve mental math skills. Division problems can be solved with different strategies, such as repeated subtraction. Division is used in various fields like engineering, finance, and science. Understanding division is crucial for academic success and problem-solving abilities. Overall, mastering division is essential for navigating daily tasks and achieving success in many areas of life. Multiplication is a fundamental operation in mathematics that involves repeated addition. It is a way to calculate the total when you have groups of the same size. Understanding multiplication is crucial in everyday life, from basic tasks to complex problem-solving. It allows us to quickly determine quantities, sizes, and measurements in a more efficient and systematic manner. By mastering multiplication, one can enhance their numeracy skills and build a strong foundation for more advanced mathematical concepts. It is like building blocks, each multiplication fact learned is a step towards mathematical fluency. Multiplication tables are a valuable resource in memorizing and recalling multiplication facts. 
These tables display the product of two numbers, making it easier for individuals to learn and practice multiplication. Knowing multiplication tables can improve mental arithmetic and help in solving math problems quickly. Multiplication is not just about numbers; it also fosters critical thinking and problem-solving skills. It encourages logical reasoning and develops a strategic approach to mathematical challenges. Moreover, multiplication can be represented visually through arrays, diagrams, and other models. Visual representations make multiplication more tangible and easier to grasp, especially for visual learners. Through these visual aids, individuals can better understand the concept of multiplication and its application in real-life situations. Understanding the basics of multiplication is essential for academic success and everyday problem-solving. It lays the groundwork for higher-level math skills and prepares individuals for more complex mathematical operations. By gaining proficiency in multiplication, one can navigate the world with confidence and analytical skills. Overall, multiplication is a powerful tool that empowers individuals to quantify, calculate, and analyze information effectively. It is a key aspect of mathematics that influences various domains of knowledge and contributes to intellectual growth and development. Order of Operations When performing basic calculations, understanding the order of operations is essential. The order of operations guides which calculations to solve first in a mathematical expression. This rule helps prevent confusion and ensures consistent results when solving equations. The acronym PEMDAS is a helpful tool to remember the sequence of operations: Parentheses, Exponents, Multiplication and Division (from left to right), and Addition and Subtraction (from left to right). Following PEMDAS ensures accurate and standardized computation outcomes across different mathematical problems. 
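Python applies the same precedence rules the PEMDAS paragraph describes, so the order of operations can be checked directly:

```python
# Multiplication binds tighter than addition, so this is 2 + (3 * 4).
no_parens = 2 + 3 * 4

# Parentheses override the default order: (2 + 3) * 4.
with_parens = (2 + 3) * 4

# Same-precedence operators evaluate left to right: (8 / 4) * 2,
# not 8 / (4 * 2).
left_to_right = 8 / 4 * 2

# Exponents come before multiplication: 2 * (3 ** 2).
exponent_first = 2 * 3 ** 2
```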
In practice, start by solving operations in parentheses, moving to exponents, then performing multiplication and division before addition and subtraction. This order maintains consistency in calculations, reducing errors and simplifying the solving process. Failing to adhere to the order of operations may lead to incorrect solutions and confusion in mathematical equations. By meticulously following the steps, you ensure correct results and build a solid foundation for more complex mathematical concepts. Understanding the order of operations is fundamental in mastering basic calculations and laying the groundwork for higher-level math studies. When working through problems, remember to break down equations into smaller parts and solve them systematically according to PEMDAS. Practicing the order of operations enhances mathematical skills and boosts problem-solving abilities. Embrace the order of operations as a valuable tool in your mathematical toolkit, guiding you through calculations with precision and clarity. Engaging with this fundamental concept fosters a deep understanding of mathematics and fosters confidence in tackling numerical challenges. In conclusion, mastering the order of operations is key to achieving accurate and efficient results in basic calculations. By following PEMDAS diligently, you empower yourself to approach mathematical problems with confidence and skill. Stay consistent in applying the order of operations, and watch your mathematical proficiency grow. Subtraction is a fundamental arithmetic operation that involves taking away one quantity from another. It is commonly used in everyday life, from simple tasks like calculating change at a store to more complex mathematical problems. Understanding subtraction is essential for building a strong foundation in mathematics. When you subtract, you are essentially finding the difference between two numbers. This process is often represented using the minus sign (-). 
For example, when you subtract 5 from 10, you are left with 5 because 10 – 5 equals 5. Subtraction is the opposite of addition and is an important concept in mathematics. It helps to develop critical thinking skills and problem-solving abilities. By practicing subtraction, you can improve your mental math skills and enhance your overall numeracy. One common method for performing subtraction is the column method. This involves aligning the numbers one on top of the other and subtracting digits in each column, starting from the right. Borrowing (regrouping) is required when the digit on top is smaller than the digit below it. Subtraction can also be visualized using counters or number lines. This hands-on approach can help make the concept more concrete and easier to understand, especially for young learners. By using manipulatives, students can see the concept of taking away in action. In real-world scenarios, subtraction is used to solve a variety of problems, such as calculating discounts, measuring distances, and managing finances. Being proficient in subtraction is crucial for many professions, including accounting, engineering, and science. Overall, subtraction plays a significant role in mathematics and in our daily lives. By mastering this fundamental operation, you can improve your problem-solving skills and better understand the relationships between numbers. Practice makes perfect, so keep sharpening your subtraction skills to become a confident mathematician.
{"url":"https://info.3diamonds.biz/basic-calculations/","timestamp":"2024-11-08T01:39:02Z","content_type":"text/html","content_length":"97280","record_id":"<urn:uuid:b33a8e0c-69ba-4225-8a74-c4ec0f4939b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00330.warc.gz"}
Lesson 5 Points, Segments, and Zigzags • Let’s figure out when segments are congruent. 5.1: What's the Point? If \(A\) is a point on the plane and \(B\) is a point on the plane, then \(A\) is congruent to \(B\). Try to prove this claim by explaining why you can be certain the claim must be true, or try to disprove this claim by explaining why the claim cannot be true. If you can find a counterexample in which the “if” part (hypothesis) is true, but the “then” part (conclusion) is false, you have disproved the claim. 5.2: What's the Segment? Prove the conjecture: If \(AB\) is a segment in the plane and \(CD\) is a segment in the plane with the same length as \(AB\), then \(AB\) is congruent to \(CD\). Prove or disprove the following claim: “If \(EF\) is a piece of string in the plane, and \(GH\) is a piece of string in the plane with the same length as \(EF\), then \(EF\) is congruent to \(GH\).” 5.3: Zig Then Zag \(\overline{QR} \cong \overline{XY}, \overline{RS} \cong \overline{YZ}, \angle R \cong \angle Y\) 1. Here are some statements about 2 zigzags. Put them in order to write a proof about figures \(QRS\) and \(XYZ\). □ 1: Therefore, figure \(QRS\) is congruent to figure \(XYZ\). □ 2: \(S'\) must be on ray \(YZ\) since both \(S'\) and \(Z\) are on the same side of \(XY\) and make the same angle with it at \(Y\). □ 3: Segments \(QR\) and \(XY\) are the same length, so they are congruent. Therefore, there is a rigid motion that takes \(QR\) to \(XY\). Apply that rigid motion to figure \(QRS\). □ 4: Since points \(S'\) and \(Z\) are the same distance along the same ray from \(Y\), they have to be in the same place. □ 5: If necessary, reflect the image of figure \(QRS\) across \(XY\) to be sure the image of \(S\), which we will call \(S'\), is on the same side of \(XY\) as \(Z\). 2. Take turns with your partner stating steps in the proof that figure \(ABCD\) is congruent to figure \(EFGH\). 
If 2 figures are congruent, then there is a sequence of rigid motions that takes one figure onto the other. We can use this fact to prove that any point is congruent to another point. We can also prove segments of the same length are congruent. Finally, we can put together arguments to prove entire figures are congruent. These statements prove \(ABC\) is congruent to \(XYZ\). • Segments \(AB\) and \(XY\) are the same length, so they are congruent. Therefore, there is a rigid motion that takes \(AB\) to \(XY\). Apply that rigid motion to figure \(ABC\). • If necessary, reflect the image of figure \(ABC\) across \(XY\) to be sure the image of \(C\), which we will call \(C'\), is on the same side of \(XY\) as \(Z\). • \(C'\) must be on ray \(YZ\) since both \(C'\) and \(Z\) are on the same side of \(XY\) and make the same angle with it at \(Y\). • Since points \(C'\) and \(Z\) are the same distance along the same ray from \(Y\), they have to be in the same place. • Therefore, figure \(ABC\) is congruent to figure \(XYZ\). • corresponding For a rigid transformation that takes one figure onto another, a part of the first figure and its image in the second figure are called corresponding parts. We also talk about corresponding parts when we are trying to prove two figures are congruent and set up a correspondence between the parts to see if the parts are congruent. In the figure, segment \(AB\) corresponds to segment \(DE\), and angle \(BCA\) corresponds to angle \(EFD\).
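The key step in the proof above is the existence of a rigid motion taking segment \(AB\) to \(XY\) whenever the lengths match. In coordinates, one such motion (a rotation combined with translations; this construction is an illustration, not part of the lesson) can be written down directly:

```python
import math

def rigid_motion_taking(a, b, x, y):
    # Given segments AB and XY of equal length, build the
    # rotation-plus-translation that sends A to X and B to Y.
    # (A reflection step, as in the proof, is not needed for this case.)
    ang = (math.atan2(y[1] - x[1], y[0] - x[0])
           - math.atan2(b[1] - a[1], b[0] - a[0]))
    c, s = math.cos(ang), math.sin(ang)

    def motion(p):
        px, py = p[0] - a[0], p[1] - a[1]          # translate A to origin
        rx, ry = c * px - s * py, s * px + c * py  # rotate by ang
        return rx + x[0], ry + x[1]                # translate origin to X

    return motion
```

Because rotations and translations preserve distance, the image of any third point keeps its distances to the segment's endpoints, which is exactly what the zigzag argument exploits.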
{"url":"https://im-beta.kendallhunt.com/HS/students/2/2/5/index.html","timestamp":"2024-11-04T18:54:31Z","content_type":"text/html","content_length":"85082","record_id":"<urn:uuid:9fd8e9cd-2aa3-4d35-8014-aec334f09b64>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00775.warc.gz"}
The Calibration Process, In General Since I finally understand it (or think I do), I thought I’d make a wiki post explaining, at a high level, the current calibration process. This post is not intended to criticize the current calibration process, just explain it and what assumptions are made. It’s a clever piece of work and is a significant improvement over previous methods. The process starts by making two sets of vertical cuts, one at the top of the board and one at the bottom, and a horizontal cut in the top right corner. The cuts are made using predefined coordinates 10 inches in from the sides. The program then calculates how long the chains were when the cuts were made, using the same algorithm as used by the controller. You are then asked to measure the distances between the vertical cuts and the distance between the top of the board and the top of the horizontal cut. The program assumes that the two sets of vertical cuts are perfectly centered (i.e., equidistant from the center of the board). It determines the vertical center of the two top cuts based upon the distance you measured to the horizontal cut. If your top beam is tilted, or your chains have worn unevenly, it may make for a bad assumption. It also assumes the bottom vertical cuts are 28 inches below the top vertical cuts. This may also be a bad assumption. Assuming the assumptions are correct, the program calculates the x,y position of the centers of each of the four vertical cuts.
With the centers of the four vertical cuts and knowledge of how long the chains were to make those four cuts, the program iterates through multiple variations of the following parameters until it finds the values for which, when the positions of the four cuts are entered into the algorithm, the calculated chain lengths match those determined at the beginning of the process:

• Rotational Radius
• Height of Motors Above Workarea
• Chain Sag Correction Factor

There’s also a Cut34YOffset variable that gets adjusted, but the goal is for this number to be zero since it isn’t used by the algorithm. In essence, it appears to be a fudge factor so that the routine ultimately arrives at a solution. If it’s not zero, then it represents a measure of error or an indicator that an assumption was bad, or both. This number, however, doesn’t get presented to the user at this time.
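The parameter search the post describes can be pictured as a brute-force fit: propose candidate values, predict chain lengths with the forward kinematics, and keep whichever combination best reproduces the recorded lengths. The forward model below is a made-up stand-in (the real Maslow kinematics involve the actual rotational-radius and chain-sag equations); only the shape of the search is the point:

```python
import itertools
import math

def chain_lengths(params, point):
    # Hypothetical forward model with motors at (+/-1000, height).
    # It is NOT Maslow's kinematics -- just a placeholder with the same
    # three parameters the calibration routine adjusts.
    radius, height, sag = params
    x, y = point
    left = math.hypot(x + 1000, height - y) - radius + sag * x * x * 1e-6
    right = math.hypot(1000 - x, height - y) - radius + sag * x * x * 1e-6
    return left, right

def calibrate(points, measured, grids):
    # Try every combination of candidate parameter values, keeping the
    # one whose predicted chain lengths best match the measured ones.
    best, best_err = None, float("inf")
    for params in itertools.product(*grids):
        err = sum((pred - meas) ** 2
                  for pt, meas_pair in zip(points, measured)
                  for pred, meas in zip(chain_lengths(params, pt), meas_pair))
        if err < best_err:
            best, best_err = params, err
    return best
```

The real routine refines the parameters iteratively rather than exhaustively, but the objective is the same: drive the disagreement between calculated and recorded chain lengths to zero.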
{"url":"https://forums.maslowcnc.com/t/the-calibration-process-in-general/6427","timestamp":"2024-11-13T04:43:38Z","content_type":"text/html","content_length":"23404","record_id":"<urn:uuid:8167f911-d4f3-43bf-9751-8817ccd46aff>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00081.warc.gz"}
What is Intersect Operator in Excel and How to Use it Intersect Operator in Excel can be used to find the intersecting value(s) of two lists/ranges. This is an unusual operator, as it is represented by a space character (yes, that’s right). If you use a space character in between two ranges, it becomes the Intersect operator in Excel. Intersect Operator in Excel You can use Intersect Operator in Excel to find: • The intersection of a single row and column. • The intersection of multiple rows and columns. • The intersection of Named Ranges. Intersection of a Single Row and Column Suppose there is a data set as shown below: Now if you use =C2:C13 B5:D5 [Note there is a single space in between the ranges, which is also our intersect operator in Excel], it will return 523 (the value in cell C5), which is the intersection of these 2 ranges. Intersection of Multiple Rows and Columns You can also use the same technique to find the intersection of ranges that span more than one row or column. For example, with the same data set as shown above, you can get the intersection of Product 1 and Product 2 in April. Here is the formula that can do that: =B2:C13 B5:D5 Note that the result of this formula would display a #VALUE! error; however, when you select the formula and press F9, it will show the result as {649,523}. This formula returns an array of the intersection values. You can use this within formulas, such as SUM (to get the total of the intersection values) or MAX (to get the maximum of the intersection values). Intersection of Named Ranges You can also use named ranges to find the intersection using the Intersect Operator in Excel. Here is an example where I have named the Product 1 values as Prdt1, Product 2 values as Prdt2 and April values as Apr. Now you can use the formula =Prdt1 Apr to get the intersection of these 2 ranges.
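Under the hood, the space operator amounts to set intersection on cell coordinates. A small model outside Excel (the sheet contents and ranges here are invented for illustration) makes that concrete:

```python
# Model a worksheet as a dict from (row, column) to value; a range is
# then a set of (row, column) cells, and Excel's space operator becomes
# ordinary set intersection.
sheet = {(row, col): row * 10 + col for row in range(2, 6) for col in range(1, 4)}

column_range = {(row, 2) for row in range(2, 6)}   # analogous to C2:C5
row_range = {(5, col) for col in range(1, 4)}      # analogous to B5:D5

# The cells common to both ranges, and their values.
common_cells = column_range & row_range
values = sorted(sheet[cell] for cell in common_cells)
```

When the ranges overlap in exactly one cell, the result is a single value, which is why the single-row/single-column case above behaves like a plain lookup.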
Similarly, you can use =Prdt1:Prdt2 Apr to get the intersection of Product 1, Product 2 and April. A Practical Example of Using Intersect Operator in Excel Here is a situation where this trick might come in handy. I have a data-set of Sales Rep and the sales they made in each month in 2012. I have also created a drop-down list with Sales Rep Name in one cell and Month name in another, and I want to extract the sales that the Rep did in that month. Something as shown below: How to create this: 1. Select the entire data set (B3:N13) and press Control + Shift + F3 to create named ranges (it can also be done through Formula –> Defined Names –> Create from Selection). This will open a ‘Create Names from Selection’ dialogue box. 2. Select the ‘Top Row’ and ‘Left Column’ options and click OK. 3. This will create named ranges for all the Sales Reps and all the Months. 4. Now go to cell B16 and create a drop-down list for all the sales reps. 5. Similarly, go to cell C15 and create a drop-down list for all the months. 6. Now in cell C16, use the following formula: =INDIRECT(B16) INDIRECT(C15) How does it work? Notice that there is a space in between the two INDIRECT formulas. The INDIRECT function returns the range for the named ranges – Sales Rep and Month – and the space between them works as an intersect operator and returns the intersecting value. Note: This invisible intersect operator gets precedence over other operators. So in this case, if you use =INDIRECT(B16) INDIRECT(C15)>5000, it will return TRUE or FALSE based on the intersecting value. 7 thoughts on “What is Intersect Operator in Excel and How to Use it” 1. Very good book 2. Excellent and extremely useful. Thanks a lot. 3. How to use this operator in a Visual Foxpro Program with named ranges in excel? e.g., there are 2 ranges : RNAME AND TNAME. I want the intersect value. I entered as .range(tname rname).value. Error message recd. what is the correct coding? 4.
this doesn’t work if you have spaces in the name (say “First” “Last”) because the named ranges put an Underscore where the space is, and the Indirect function doesn’t 5. Instead of pressing F9 what can I click on? 6. Is there any way to find the set difference between 2 ranges (i.e. start with one range, and remove all cells from the other range)? 7. From your above sales rep data table, is there any way to find the max value, which I believe is 9437, and then extrapolate backwards to find out that it was Rachael for the month of May?
{"url":"https://trumpexcel.com/intersect-operator-in-excel/","timestamp":"2024-11-12T17:01:17Z","content_type":"text/html","content_length":"402613","record_id":"<urn:uuid:ecda6599-d920-4eed-8ce8-e56157327a58>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00880.warc.gz"}
Python Motion CGAL 4.2: Package Overview. Algebraic Foundations Michael Hemmer This package defines what algebra means for CGAL, in terms of concepts, classes and functions. The main features are: (i) explicit concepts for interoperability of types (ii) separation between algebraic types (not necessarily embeddable into the reals), and number types (embeddable into the reals). Number Types Michael Hemmer, Susan Hert, Sylvain Pion, and Stefan Schirra This package provides number type concepts as well as number type classes and wrapper classes for third party number type libraries. Modular Arithmetic Michael Hemmer and Sylvain Pion This package provides arithmetic over finite fields. Polynomial This package introduces a concept for univariate and multivariate polynomials in d variables. Algebraic Kernel Eric Berberich, Michael Hemmer, Michael Kerber, Sylvain Lazard, Luis Peñaranda, and Monique Teillaud. Vectors. This is a vector: A vector has magnitude (how long it is) and direction: The length of the line shows its magnitude and the arrowhead points in the direction. We can add two vectors by simply joining them head-to-tail: And it doesn't matter which order we add them, we get the same result: Example: A plane is flying along, pointing North, but there is a wind coming from the North-West. The two vectors (the velocity caused by the propeller, and the velocity of the wind) result in a slightly slower ground speed heading a little East of North. If you watched the plane from the ground it would seem to be slipping sideways a little. Have you ever seen that happen? Subtracting. Perlin Noise. Many people have used random number generators in their programs to create unpredictability, make the motion and behavior of objects appear more natural, or generate textures. Random number generators certainly have their uses, but at times their output can be too harsh to appear natural. 
This article will present a function which has a very wide range of uses, more than I can think of, but basically anywhere where you need something to look natural in origin. What's more, its output can easily be tailored to suit your needs. If you look at many things in nature, you will notice that they are fractal. They have various levels of detail. To create a Perlin noise function, you will need two things, a Noise Function, and an Interpolation Function. Introduction To Noise Functions. Simplexnoise.py - battlestar-tux - 3D Top-Down Scrolling Shooter with 4X Elements. Python: Mathematics and Stock/Forex/Futures indicators. Blender Python: Matrix. The code is a little lengthy to post in its entirety, but you can download it here def get_correct_verts(arc_centre, arc_start, arc_end, NUM_VERTS, context): Understanding Python Decorators Step by Step (part 1) Python functions are objects To understand decorators, you must first understand that functions are objects in Python. This has important consequences: def crier(mot="yes"): return mot.capitalize() + "! " print(crier()) # output : 'Yes! ' # Since functions are objects, # we can assign them to variables hurler = crier # Note that we do not use parentheses: # the function is not called. Keep that in mind; we will come back to it. Another interesting property of Python functions is that they can be defined inside… another function. UI Python Property Definitions (bpy.props) — Blender 2.70.0 19e627c. This module defines properties to extend Blender's internal data; the result of these functions is used to assign properties to classes registered with Blender and can't be used directly. Assigning to Existing Classes Custom properties can be added to any subclass of an ID, Bone and PoseBone. These properties can be animated, accessed by the user interface and Python, like Blender's existing properties.
import bpy # Assign a custom property to an existing type. bpy.types.Material.custom_float = bpy.props.FloatProperty(name="Test Prob") # Test the property is there. bpy.data.materials[0].custom_float = 5.0 Operator Example A common use of custom properties is for Python-based Operator classes. PropertyGroup Example PropertyGroups can be used for collecting custom settings into one value, to avoid many individual settings mixed in together. Collection Example Update Example. Blender Python: Sorting. Mostly rewritten; this is currently a prototype, so clunky/verbose code, but it works! :) It assumes the polyline (edge-based) object is 1) not closed, 2) not interrupted, 3) not already correctly sorted. import bpy print("\n") print("="*50) Property Definitions (bpy.props) — Blender 2.70.0 19e627c. Animation - Convert particle system to animated meshes - Blender Stack Exchange. Animation node. Python Sort Examples: Sorted List, Dictionary. Sort. Everything is ordered. Even a random sequence has an order, one based on its randomness. But often in programs, we use a sort method to order data from low to high, or high to low. Searching and Sorting — Problem Solving with Algorithms and Data Structures.
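The noise bookmarks above all circle one recipe from the Perlin article: a deterministic per-lattice noise function plus a smooth interpolation function. A minimal 1-D value-noise sketch (simpler than Perlin's gradient noise, but the same skeleton):

```python
import math
import random

def value_noise_1d(x, seed=0):
    # A deterministic pseudo-random value in [-1, 1] at each integer
    # lattice point, derived from the point index and a seed.
    def lattice(i):
        return random.Random(i * 1000003 + seed).uniform(-1.0, 1.0)

    i0 = math.floor(x)
    t = x - i0
    # Smoothstep interpolation -- this is what removes the "harsh" look
    # of raw random numbers that the article mentions.
    s = t * t * (3.0 - 2.0 * t)
    return lattice(i0) * (1.0 - s) + lattice(i0 + 1) * s
```

Summing several copies of this at different frequencies and amplitudes (octaves) gives the fractal, multi-level-of-detail character the article attributes to natural phenomena.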
{"url":"http://www.pearltrees.com/ptiroi/python-motion/id13805621","timestamp":"2024-11-11T08:14:42Z","content_type":"text/html","content_length":"87214","record_id":"<urn:uuid:40317267-2799-431d-8831-39d9a06e375d>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00196.warc.gz"}
States with Strictest Gun Laws 2024 The intricate terrain of gun laws in the United States is a vast patchwork—complex, inconsistent, and often, controversial. Given the federal government's limited role in gun regulation, it is primarily the responsibility of individual states to control the sale and usage of firearms. In such a context, some states have chosen stricter paths, instituting extensive background checks and mandatory training. On the other hand, others have adopted a more lenient approach, maintaining the principle of the Second Amendment, which they interpret as the fundamental right of citizens to keep and bear arms. These discrepancies have led to wide variances in firearm safety and violence rates across the country, underscoring the need for examining gun laws. Given this complex scenario, our analysis aims to shed light on the states with the strictest gun laws, exemplified by our 'Gun Laws Strength Rank' metric, which grades states from 'A' to 'F', based on the rigor of their laws. This ranking considers several factors, including but not limited to the type of firearms allowed, the process for obtaining a gun, and the rules for carrying firearms in public. Key findings from the data include: • California emerges as the state with the most stringent gun laws, earning an 'A' grade, followed closely by New Jersey. Both these states have strong measures in place like mandatory background checks, restrictions on certain types of firearms, and stringent regulations on open and concealed carrying. • However, as we traverse towards the lower end of the list, we observe a stark contrast. States like Wyoming, Arkansas, and Missouri, among others, receive an 'F' - showcasing significantly relaxed gun laws, often allowing open-carry and featuring less stringent procedures for procuring firearms. • There's a noteworthy regional pattern visible, with Northeastern states generally demonstrating stricter gun laws than Southern or Midwestern states. 
The state with the strictest gun laws is California. The state receives an 'A' grade in 'Gun Laws Strength Rank,' given its comprehensive background checks, mandatory firearm safety training, and stringent regulations on open and concealed carrying. Indeed, California's stringent measures highlight a proactive approach towards gun control that has reduced firearm violence rates over the years. New Jersey follows closely with an 'A' grade, reflecting similarly strict gun laws. The state requires universal background checks for purchasing a gun and issues permits based on a 'justifiable need.' Connecticut, New York, Hawaii, Massachusetts, Maryland, and Illinois share a grade of 'A-', filling slots three through eight. These states have firm gun laws, which include a combination of mandatory background checks, restrictions on assault weapons and large-capacity magazines, as well as permit and licensing requirements. Barely missing the 'A' grade, Rhode Island and Washington lay claim to the ninth and tenth place, both securing a 'B+'. These states, while having robust gun laws and regulations, fall slightly short in terms of mandate stringency compared to the preceding states.

States with the Strictest Gun Laws ('Gun Laws Strength Rank'):

1. California - 'A'
2. New Jersey - 'A'
3. Connecticut - 'A-'
4. New York - 'A-'
5. Hawaii - 'A-'
6. Massachusetts - 'A-'
7. Maryland - 'A-'
8. Illinois - 'A-'
9. Rhode Island - 'B+'
10. Washington - 'B+'
Rearranging Numbers Worksheet - 15 Worksheets.com

Worksheet Description

This worksheet provides a set of exercises where students are tasked with arranging given digits to form the smallest and largest numbers possible. The sheet is divided into two sections, one for creating the smallest numbers and the other for the largest, with several items listed under each. Students are given three or four incomplete numbers and must determine the correct placement of the missing digits to achieve the objective. Blank lines next to each set of digits are provided for students to record their answers.

The worksheet is designed to enhance students’ number sense, particularly their understanding of the value of digits depending on their placement within a number. It teaches students how to compare and order numbers by recognizing the significance of digit positioning. The exercise also fosters critical thinking as students must decide where to place a given digit to maximize or minimize the number’s value. Overall, this activity aims to solidify foundational math concepts that are vital for their mathematical progression.
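The core task, arranging a set of digits into the smallest and largest possible numbers, can be solved mechanically by sorting the digits. This illustrative sketch (not part of the worksheet; the choice to forbid a leading zero in the smallest number is mine) shows one way to do it:

```python
def smallest_and_largest(digits):
    """Arrange the given digits into the smallest and largest numbers.

    The smallest number avoids a leading zero by swapping the first
    nonzero digit to the front (assumes at least one nonzero digit).
    """
    asc = sorted(str(d) for d in digits)
    if asc[0] == "0":
        k = next(i for i, ch in enumerate(asc) if ch != "0")
        asc[0], asc[k] = asc[k], asc[0]
    smallest = int("".join(asc))
    largest = int("".join(sorted((str(d) for d in digits), reverse=True)))
    return smallest, largest

assert smallest_and_largest([4, 1, 7]) == (147, 741)
assert smallest_and_largest([9, 0, 5, 2]) == (2059, 9520)
```

Sorting ascending minimizes the value because it places the smallest digits in the highest place-value positions, which is exactly the digit-positioning insight the worksheet is exercising.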
What Is the True Value of Forecasts?

1. Introduction

The construction of models that attempt to ascertain the economic value of weather and climate forecasts has a long history in meteorology and allied fields (Katz and Murphy 1997). Such valuation models are necessary if we are to understand when a particular set of forecasts might be favorably applied to a given decision problem, and they also play an important role in legitimizing meteorological research in wider society, particularly to funding bodies (Pielke and Carbone 2002). The dual motivations of forecast producers (a wish to provide weather and climate information that is useful to society and a simultaneous desire to pursue scientific research for its own sake) are not always in consonance. The scientific community has evolved a set of metrics by which it measures the performance of its forecasts (see, e.g., Wilks 2006) that are at best only a partial indication of whether they will be useful to actual forecast users. This is easily demonstrated by including even the most crude representation of a user’s decision problem into the verification exercise. Indeed, Murphy and Ehrendorfer (1987) show that increases in forecast accuracy can actually decrease forecast value. The inadequacy of standard forecast verification measures as indicators of forecast value is well known (Richardson 2001), and models of forecast value based on decision theory are a significant step toward obtaining measures of forecast performance that are relevant to real decision makers. However, even those decision models that have attempted to resolve some of the parameters that might be relevant to actual forecast users have invariably focused on a best-case scenario, in which forecast users are assumed to be hyperrational decision makers who process their perfect knowledge of the forecasting products in statistically sophisticated ways.
Such models are normative in flavor, prescribing forecast use strategies that are optimal with respect to a stated decision metric. As such, these valuation models tend to overstate the value that actual users extract from forecasts. This has the dual effect of giving the scientific community a sense of legitimacy that is not necessarily mirrored on the ground while leaving real opportunities for gains in the uptake of objectively valuable forecasts relatively unexplored. Empirical studies over temporal scales ranging from days to seasons (Stewart et al. 2004; Rayner et al. 2005; Vogel and O’Brien 2006; Patt et al. 2005) suggest that the forecast user’s behavior is often not accurately predicted by such normative valuation models and that a variety of economic, institutional, and behavioral factors can contribute to the forecast use choices of individuals (Patt and Gwata 2002; Roncoli 2006). The purpose of this paper is to demonstrate that even modest changes to the behavioral assumptions of normative valuation models can lead to a substantially different picture of the forecast user’s behavior and hence the de facto value of forecasting information. Such a behavioral model may not only provide a more accurate picture of realized forecast value for certain user groups but also highlight situations in which interventions such as user education programs are most likely to be efficacious. The paper considers one of the simplest decision problems, the cost–loss scenario (see, e.g., Murphy 1977; Katz and Murphy 1997; Zhu et al. 2002; Smith and Roulston 2004), and compares the behavior of two types of stylized agents. 
The first type of agent is a rational decision maker who is statistically sophisticated (i.e., makes decisions based on knowledge of the statistical properties of the forecasts) and has perfect information about the forecasts she receives; I describe her behavior using a standard normative model used elsewhere in the literature (Richardson 2001; Wilks 2001; Katz and Ehrendorfer 2006). The second type of agent is also a rational decision maker but initially has limited information about the forecasting product, does not trust the forecasts completely, and is statistically unsophisticated; that is, he does not keep track of the consequences of his choices in a statistically consistent manner. Because he does not know that he will be better off using the forecasts, I model his behavior as a learning process in which his forecast use choices are guided by his past experience. As he decides whether or not to make use of the forecasts and experiences the consequences of his decisions, he learns from them in a manner consistent with a prominent psychological theory of learning behavior known as reinforcement learning. Reinforcement learning is a widely used paradigm for representing so-called operant conditioning, in which the consequences of a behavior modify its future likelihood of occurrence. It has a long history in psychology, going back at least as far as the statement of the Law of Effect by Thorndike (1911), which has been argued to be a necessary component of any theory of behavior (Dennett 1975). The framework is based on the empirical observation that the frequency of a given behavior increases when it leads to positive consequences and decreases when it leads to negative consequences (Thorndike 1933; Bush and Mosteller 1955; Cross 1973). Because of its emphasis on the frequencies of choices, the model is necessarily probabilistic in nature.
In addition, the model is forgetful, in that the only knowledge it has of past choices and their consequences is the agent’s current propensity for a given choice. Thus information about the entire historical sequence of choices and outcomes is compressed into a single value. This loss of information allows the model to represent statistically unsophisticated learning behavior. I construct a minimal behavioral model of the user’s learning process based on this framework. The model has been intentionally kept as basic as possible to facilitate comparisons with the normative paradigm. Yet even in this case, the introduction of learning dynamics has a marked effect on our picture of users’ rates of forecast usage and hence the value of forecasts. In the following section, a normative model of forecast value for the cost–loss decision problem is briefly introduced, and its assumptions are explained. Section 3 develops the behavioral model of statistically unsophisticated learning in detail and derives its properties. In section 4, a quantitative relationship between the normative and behavioral models of forecast value is established. It is shown that accounting for learning dynamics reduces the user’s realized value score by a factor that depends on his decision parameters (i.e., the cost–loss ratio), the climatological probability of the adverse event, and the forecast skill. The implications of this result are examined, and the general properties of the dependence of the deviation between the two models on these parameters are established. The paper concludes by commenting on the policy relevance of the results and suggesting a focus for future research.

2. A normative model of forecast value

The normative model of forecast value that I will use was developed by Richardson (2001), though it is very similar to Wilks (2001). I focus on probabilistic forecasts because they are widely used by operational forecast centers.
Indeed, Murphy (1977) shows that reliable probability forecasts are necessarily more valuable than categorical forecasts for the decision problem I consider. The model rests on a specification of the user’s decision problem, the assumption that users are rational decision makers with a perfect understanding of the forecasting product, and simplifying assumptions about the nature of the forecasts. I briefly develop the model below. Models of forecast value that are based on the rational actor assumption prescribe optimal user decision-making behavior. The notion of optimality requires that there be something to optimize. In the case of the model developed here, users are assumed to act so as to minimize their expected losses. In general, expected losses are dependent on the details of the user’s decision problem. To make the discussion concrete, I will consider the cost–loss problem, a staple of the forecast valuation literature (Katz and Murphy 1997). At each time step in the cost–loss scenario, users must decide whether or not to protect against the adverse effects of a weather event (for example, purchase hurricane insurance) that occurs with some stationary climatological probability p[c]. The insurance costs C, and the users have an exposure of L, where L > C. If the hurricane occurs, then the policy pays out an amount L so that the losses are completely covered. If we let a be the user’s action (a = 1 if she protects and a = 0 if she does not) and let e be the event (e = 1 if the event occurs and e = 0 if it does not), then we can represent this scenario with the loss matrix 𝗟(a, e), depicted in Table 1. A simple calculation shows that expected loss minimizing users will choose to protect themselves if they believe that the probability of the hurricane occurring satisfies p > C/L. I define z := C/L to be the cost–loss ratio. Thus the decision rule

a = 1 if and only if p > z (1)

defines rational behavior in our model.
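The cost–loss decision rule (protect whenever the believed probability of the event exceeds z = C/L) can be sketched in a few lines; the function names and parameter values below are illustrative, not taken from the paper:

```python
def loss(action, event, C, L):
    """Loss matrix L(a, e): protecting always costs C; an unprotected
    occurrence of the event costs L; otherwise nothing is lost."""
    if action == 1:
        return C                     # premium is paid whether or not e occurs
    return L if event == 1 else 0.0

def rational_action(p, C, L):
    """Expected-loss-minimising choice: C < p * L  <=>  p > z = C / L."""
    return 1 if p > C / L else 0

C, L_exposure = 10.0, 100.0          # cost-loss ratio z = 0.1
assert rational_action(0.05, C, L_exposure) == 0   # event too unlikely to insure
assert rational_action(0.30, C, L_exposure) == 1   # protect
assert loss(1, 1, C, L_exposure) == 10.0           # insured and event occurred
```

The comparison inside `rational_action` is exactly the expected-loss calculation in the text: protecting costs C for sure, while not protecting costs p times L on average.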
To compute the value of a forecasting product as a whole, it is necessary to find the average losses that the user sustains when making use of the forecasts. This is achieved by specifying the joint distribution of forecasts p and events e. This joint distribution can be decomposed by using the calibration–refinement factorization (Murphy and Winkler 1987; Wilks 2001). In this scheme, one writes the joint distribution as the product of the marginal distribution of the forecasts p, which I will call g(p), and the conditional distribution of the event given the forecast. Because there are only two events in the cost–loss scenario, only one such conditional distribution is needed, for example, Prob(e = 1|p), because Prob(e = 0|p) = 1 − Prob(e = 1|p). In what follows I define f[1](p) := Prob(e = 1|p). The function f[1](p), also known as the calibration function, determines the reliability of the forecasts. Perfectly reliable forecasts have f[1](p) = p. Such forecasts can be shown to be unconditionally unbiased (Jolliffe and Stephenson 2003). In what follows I assume that forecasts are perfectly reliable. Although an idealization of reality, the perfect reliability assumption is a good working hypothesis. Provided sufficient validation data are available, operational forecasts are calibrated using empirical calibration functions [i.e., an estimate of f[1](p)] so that the calibrated forecasts approximate perfect reliability. However, one should keep in mind that forecasts that are calibrated to the past are not necessarily calibrated to the true probability of the event occurring in the present (Oreskes et al. 1994). Although this point affects what we mean when we talk about calibrated forecasts, it has no bearing on the analysis that follows. The assumption of perfect reliability implies that ∫_0^1 p g(p) dp = p[c]. Thus the mean of g(p) is constrained to be equal to the climatological probability of the event.
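The pieces of this section can be assembled numerically. The sketch below draws the forecast distribution from a beta family with mean p[c], uses the identity BSS = 1/(1 + α + β), which holds for perfectly reliable beta-distributed forecasts, and computes the relative value score by simple quadrature. All names and parameter values are mine, for illustration only; this is not the paper's code:

```python
import math

def normative_value_score(z, p_c, bss, n=100_000):
    """Relative value score VS[N] = (E_C - E_F) / (E_C - E_P) for the
    cost-loss problem with perfectly reliable beta-distributed forecasts.
    Beta parameters follow from BSS = 1 / (1 + alpha + beta) and mean p_c;
    midpoint-rule quadrature, losses measured in units of L."""
    s = 1.0 / bss - 1.0
    alpha, beta = p_c * s, (1.0 - p_c) * s
    norm = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    G = M = 0.0                      # CDF and partial mean of g at z
    for i in range(n):
        p = (i + 0.5) / n
        mass = norm * p ** (alpha - 1.0) * (1.0 - p) ** (beta - 1.0) / n
        if p <= z:
            G += mass
            M += p * mass
    E_F = M + z * (1.0 - G)          # forecasts: lose p if p <= z, pay z if p > z
    E_C = min(z, p_c)                # climatology: best fixed action
    E_P = z * p_c                    # perfect information: pay z only when e = 1
    return (E_C - E_F) / (E_C - E_P)

low = normative_value_score(z=0.4, p_c=0.3, bss=0.2)
high = normative_value_score(z=0.4, p_c=0.3, bss=0.5)
assert 0.0 < low < high < 1.0        # more skill yields more normative value
```

With reliable forecasts the score always lands in [0, 1]: the forecasts can never do worse than climatology and never better than perfect information.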
With the reliability assumption and the distribution g(p), which describes the probability of receiving a forecast p, in hand, the expected losses that are sustained when the forecast product is used can be calculated. First, I define the cumulative distribution G(z) := ∫_0^z g(p) dp and the partial mean of g(p) at z, M(z) := ∫_0^z p g(p) dp. Then from the loss matrix in Table 1 and the decision rule in Eq. (1), we have that

E[F] = L[M(z) + z(1 − G(z))].

Using this expression, one can now derive the so-called relative value score of Wilks (2001), which measures the value of the forecasts relative to climatological information, normalized by the value of perfect information. Using only climatological information, that is, acting on the value p[c] alone, the user’s expected losses would be E[C] = L min(z, p[c]), whereas the expected losses of a user who has access to perfect information are E[P] = L z p[c]. Using these expressions, the normative relative value score is given by

VS[N] = (E[C] − E[F]) / (E[C] − E[P]). (8)

This quantity depends on C and L only through their ratio z. In what follows I will follow Richardson (2001), Wilks (2001), and Katz and Ehrendorfer (2006) and take the distribution of forecasts g(p) to be a beta distribution with (positive) parameters α and β,

g(p) = p^(α − 1) (1 − p)^(β − 1) Γ(α + β) / [Γ(α) Γ(β)],

where Γ is the gamma function. The beta distribution is very versatile and can be made to assume many different shapes by varying its parameters. Using this distribution, Richardson (2001) shows that an explicit formula for the Brier score (BS), a measure of forecast performance, can be calculated. The BS of the forecasts is defined as BS = ⟨(p − e)^2⟩, where ⟨·⟩ denotes an expectation over the joint distribution of p and e. It thus measures the average squared deviation between the forecasts and the realized events. Using this equation, the Brier skill score (BSS), a measure of forecast performance analogous to the relative value score specified earlier, can be shown to be given by

BSS = 1/(1 + α + β).

This result is derived in the appendix of Katz and Ehrendorfer (2006). Note that the assumption of perfect reliability ensures that BSS ∈ [0, 1]. In addition to this relation, the requirement that ∫_0^1 p g(p) dp = p[c]
and the fact that the mean of g(p) is equal to α/(α + β) imply that α/(α + β) = p[c]. Using these relations, the parameters of the beta distribution that defines g(p) can be determined in terms of the Brier skill score and the climatological probability of the event:

α = p[c](1/BSS − 1), β = (1 − p[c])(1/BSS − 1).

Thus to calculate a normative relative value score, the parameters one needs to specify are z, p[c], and BSS. These three parameters capture the details of the decision problem, the environment, and the forecast performance, respectively.

3. A behavioral model: Learning from experience

The normative model described above is an elegant extension of the forecast validation literature to include a representation of a decision structure that agents might face when applying forecast information in their daily lives. Although the cost–loss decision problem is by no means universal, it is an intuitive and straightforward example and general enough to be useful in a wide range of applied settings (Stewart et al. 2004; Katz and Murphy 1997). The inclusion of a decision structure into the validation and valuation exercise is a vital and necessary step; however, it falls short of providing an indication of how much value actual decision makers extract from forecasts. This should be no surprise because the model is intentionally normative, rather than positive, in its design. Thus it is perhaps best thought of as providing an upper bound on forecast value. Behavioral deviations from the normative framework are among the main reasons for expecting real forecast users to extract less value from forecasts than normative models say they should. There is abundant evidence in both the applied meteorology and psychology literatures that suggests that psychological factors can affect people’s forecast use behavior and thus the realized value of forecasts. A number of studies have focused on the difficulties of communicating probabilistic forecasts (Gigerenzer et al.
2005; Roncoli 2006; National Research Council 2006), while others (Nicholls 1999), inspired by the seminal work of Kahneman and Tversky (2000), have emphasized the importance of cognitive heuristics and biases as explanations of suboptimal behavior. The perceived trustworthiness of forecasts is also a key limitation on their uptake. In the context of the cost–loss scenario, the theoretical analysis of Millner (2008) suggests that perceived forecast value is nonlinearly related to perceived accuracy and that a critical trust threshold must be crossed before forecasts are believed to have any value at all. The normative model’s representation of user behavior assumes that users are rational, in the sense of the decision rule given in (1), and they have perfect knowledge of the properties of the forecasts; that is, they understand that the forecasts are perfectly reliable, and hence that they are better off using them than resorting to their climatological information. Thus perfectly knowledgeable users have complete trust in the forecasts, by definition. Although the heuristics and biases literature interrogates the rationality assumption in the context of decision making under uncertainty, there has been rather less formal treatment of the implications of the perfect knowledge assumption for users of weather and climate forecasts. When this assumption is relaxed in the normative model developed earlier, instead of using the forecasts all the time, the user is faced with a choice between using forecasts and other information sources. How this choice is made depends crucially on the representation of the user’s learning about forecast performance. If we assume that the user is statistically sophisticated—that is, he eventually deduces the conditional distribution f[1](p) after long-term exposure to the forecasts—then we can expect him to ultimately converge to normative behavior. 
This assumption may be justified for a subset of forecast users—for example, energy traders and sophisticated commercial farmers. If on the other hand he is statistically unsophisticated, does not trust the forecasts completely, or just does not pay close attention to forecast performance, his forecast use choices are likely to be dictated by other, more informal, learning processes. This case is likely to be more appropriate for people who lack the requisite statistical training, or the will, to understand objective quantifications of forecast performance and instead form opinions about the benefits of forecast use based on their experience. Put another way, their learning process is based on a response to the stimulus provided by the consequences of their forecast use choices, rather than cognitive reflection on the problem. In the remainder of this section, I propose a model of how a user who engages in such a learning process might differ from normative behavior. The model I will develop is a very basic version of reinforcement learning—one of the most prevalent psychological frameworks for understanding learning processes. Theories of learning based on a notion of reinforcement go back as far as Thorndike (1911) and Pavlov (1928), who studied associative learning in animals. The theory was later refined and developed by Thorndike (1933) and Skinner (1933) and became one of the theoretical pillars of the so-called behaviorist school of psychology. Although by no means unchallenged, reinforcement learning still underpins a vast swathe of behavioral research today. [Refer to Lieberman 1999 and Mazur 2006 for book-length treatments that contextualize reinforcement in the wider literature on learning.] 
In its simplest form, reinforcement learning suggests that the frequency of a given behavior is “reinforced” by its consequences; that is, choices that lead to positive consequences increase in frequency and those that lead to negative consequences decrease in frequency. The emphasis on the frequencies of choices, which necessitates a probabilistic description of choice behavior rather than a deterministic choice rule, is one of the fundamental differences between reinforcement learning and normative paradigms based on decision theory. Cross (1973) explains that, “[I]f we repeatedly reward a certain action, we in no sense guarantee that the action will be taken on a subsequent occasion, even in a ceteris paribus sense; only that the likelihood of its future selection will be increased. The vast body of experimental material on learning that has been accumulated provides convincing evidence that this interpretation is a good one.” As Cross suggests, reinforcement learning–type models have been remarkably successful at reproducing learning behavior in a variety of contexts, including more complex scenarios than ours in which agents engage in strategic interactions (Erev and Roth 1998; Arthur 1991). The particular formal model of choice behavior I will adopt itself has a long history in mathematical psychology and economics. The first of this class of models (Bush and Mosteller 1955) considered the case where reinforcing stimuli were either present or absent and modeled the change in the frequencies of choices based on these binary outcomes. This model was later extended by Cross (1973) to include the effect of positive payoffs of different magnitudes on reinforcement. The version of the model I employ here is a slightly extended modern incarnation taken from Brenner (2006), which allows payoffs to be either positive or negative, and hence can represent negative reinforcement as well. 
The model also captures what psychologists refer to as spontaneous recovery (Thorndike 1932), in which actions that are nearly abandoned because of a series of negative consequences quickly increase in frequency if they result in positive outcomes. The model has the advantage of being intuitive and analytically tractable, so that the behavioral modification to the normative relative value score in Eq. (8) can be computed explicitly as a function of the model parameters. Imagine a hypothetical decision maker, call him Burrhus.^1 Burrhus is not as certain about whether or not to use the forecasts as the users in the normative model. He has two sources of information: the forecasts and his knowledge of the climatological probability of the adverse event. Occasions will arise in which these two information sources will be in conflict, and Burrhus will be forced to choose between them. It is this choice that I wish to model. Burrhus suffers from informational constraints, in that initially he has no knowledge of the performance of a given forecast system, and he is statistically unsophisticated, in that even after long-term exposure to the forecasting service, he does not base his decision on the conditional distribution f[1](p). Instead, let us suppose that Burrhus bases his forecast use decisions on his experience of the consequences of his choices, and that he learns from the outcomes of those choices in a manner consistent with reinforcement learning. I thus assume that at each time step, Burrhus makes a probabilistic choice between using the forecasts or the climatology. We are interested in how the probability of his making use of the forecasts might evolve over time as he makes decisions and receives feedback on their consequences. The first step of the model is to isolate those events that require Burrhus to make a choice between the forecasts and the climatology.
Intuitively, one would expect that learning only occurs in situations in which the forecast and the climatology disagree in their recommended actions because when the forecasts and the climatology agree, no choice between the two information sources is necessary. Define the set of events in which the forecast and climatology disagree as D. In what follows I restrict attention to this set in which learning takes place. Let t be an index of the events in the set D. Given that there is disagreement, let c(t) be Burrhus’s forecast use choice at t, where c(t) = 1 if he chooses to make use of the forecasts and c(t) = 0 if he chooses to resort to his climatological information. I assume that once his choice is made, Burrhus acts rationally; that is, he acts in accordance with the decision rule (1) and follows the recommendation of his chosen information source. Once a choice is made and a consequence is realized, Burrhus learns from his experience in such a way that a positive outcome at t increases his chance of making choice c(t) at t + 1, whereas a negative outcome reduces it. Suppose that P[c](t) is Burrhus’s probability of making choice c at t. The Brenner (2006) model of how these probabilities evolve is

P[c](t + 1) = P[c](t) + η r(t)[1 − P[c](t)] if r(t) ≥ 0,
P[c](t + 1) = P[c](t) + η r(t) P[c](t) if r(t) < 0,

where r(t) is the reinforcement strength and η is the learning rate, and where we require η|r(t)| ≤ 1 to ensure that the probabilities remain in [0, 1]. Positive outcomes [r(t) > 0] reinforce the choice that led to them by increasing the probability of that choice being made on the next occasion of disagreement. The increase in probability is proportional to the strength of the reinforcement and the probability of the alternative choice. The proportionality constant is the learning rate and determines how quickly Burrhus responds to new information. Similarly, negative outcomes [r(t) < 0] are negatively reinforced, in that the probability of making the choice that led to them at t + 1 is decreased by an amount proportional to the reinforcement strength and to the probability of the choice c(t).
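A minimal sketch of this two-branch update (my own function and variable names; the rule itself is the Bush–Mosteller/Cross-style scheme described above):

```python
def reinforce(prob, r, eta):
    """One reinforcement step: a positive payoff r pushes the probability
    of the reinforced choice toward 1, a negative payoff pushes it toward 0.
    Requires eta * abs(r) <= 1 so the result stays in [0, 1]."""
    if r >= 0:
        return prob + eta * r * (1.0 - prob)
    return prob + eta * r * prob

q = 0.5
q = reinforce(q, r=0.9, eta=0.5)     # choice paid off: probability rises
assert abs(q - 0.725) < 1e-9
q = reinforce(q, r=-0.9, eta=0.5)    # bad outcome: probability falls
assert 0.0 <= q < 0.725
```

Note the probabilistic character: a rewarded choice becomes more likely, but is never guaranteed, which is exactly the distinction Cross draws between reinforcement learning and deterministic decision rules.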
Notice that this rule only tells us how to update the probability of making the choice c(t); however, because there are only two possible values of c and the sum of the probabilities of the choices must equal one, we have that P[1−c](t + 1) = 1 − P[c](t + 1). This allows us to express this updating procedure in terms of just one of the probabilities, say, q[t], the probability of following the forecasts. What is the reinforcement strength? In general, this could be any function of the losses sustained at t. I will assume that the reinforcement strength is determined as follows: Suppose the loss associated with the action^2 a[c](t) at t, given that event e(t) occurred, is 𝗟(a[c](t), e(t)), where 𝗟 is the loss matrix in Table 1. Then the reinforcement strength is given by

r(t) = 𝗟(a[1−c](t), e(t)) − 𝗟(a[c](t), e(t)). (19)

That is, the reinforcement strength is the difference between the loss that would have been sustained had Burrhus used the other information source at t and the loss that was actually realized given that he followed his choice c(t). Thus r(t) is a measure of the regret or happiness that Burrhus feels from his choice. For example, suppose that at t Burrhus chose to make use of the forecasts. Suppose that p[c] < z, so that when the forecasts and the climatology disagree, the forecasts necessarily suggest that Burrhus should protect, that is, a[1](t) = 1. Assume that the adverse event was subsequently realized [e(t) = 1]. Then the reinforcement strength used to update the probability of using the forecasts is r(t) = L − C > 0. In this case Burrhus will be more likely to use the forecast on the next occasion of disagreement because it led him to take the correct action. It is important to keep in mind that the choice of reinforcement strength in Eq. (19), although intuitive, is not necessarily an accurate representation of human choice behavior. In general, reinforcement may be moderated by the variability in the rewards obtained from a given choice (Behrens et al.
2007), nonlinear responses to rewards of different magnitudes, and asymmetries between the reinforcing effects of regret and happiness (Kahneman and Tversky 1979). The motivation for my choice is a desire to keep the model simple and tractable and as close as possible to its normative analog. Thus the model should be interpreted as a stylized example of the behavioral modeling paradigm, rather than an empirically substantiated predictive model. Assuming the reinforcement strength is given by Eq. (19) means that all possible values of r(t) are just linear combinations of C and L. The update rule can thus be written as a function of z only by defining a rescaled learning rate λ := ηL, where η is the learning rate of the reinforcement rule. By using this definition and the fact that P[0](t) = 1 − P[1](t), a complete list of possible outcomes from the reinforcement rule can be generated. This list is reproduced in appendix A. One finds that, when p[c] < z (so that the forecasts recommend protecting whenever the two sources disagree), the learning rule for q[t], the probability of following the forecasts, reduces to

q[t+1] = q[t] + λ(1 − z)(1 − q[t]) if e(t) = 1,
q[t+1] = q[t] − λ z q[t] if e(t) = 0, (20)

and the constraint η|r(t)| ≤ 1, when applied to Eq. (20) and the definition of λ, becomes λ max(z, 1 − z) ≤ 1. The fact that the reinforcement depends only on the realized event is due to the symmetry of the reinforcement strength [the value of r(t) for one choice is minus its value for the other] and the fact that the two information sources necessarily disagree when learning occurs. At face value, these equations seem to suggest that the learning process is independent of forecast performance; however, this is not the case. The reason is that because attention is restricted to the set D, the probability of the event e = 1 is no longer given by p[c], the climatological probability. In fact if we let p[D] be the probability of e = 1 given that we are in D, then we have that p[D] depends implicitly on the forecast performance through the distribution g(p) and also on the value of z. A representative trajectory of the sequence q[t] is plotted in Fig. 1. Notice that although a statistically sophisticated forecast user would eventually converge to the value q[t] = 1 because forecasts are more valuable on average than the climatology, Burrhus’s tendency toward a particular choice is constantly in flux.
Because the past is represented only by the current value q[t], he does not have access to a complete picture of the historical consequences of his actions. His tendencies thus change over time as he makes successive choices and receives feedback on their consequences. The trajectory illustrated in Fig. 1 is only one of an infinite number. To draw some general conclusions about Burrhus’s behavior for particular parameter values, we need to understand the statistics of the updating process given by Eq. (20). To do this I will make use of the following results:

Theorem 1: Let m[t] be the expected value of q[t]. Then m[t+1] = A m[t] + B, where A and B are constants given by

A = 1 − λ[p[D](1 − z) + (1 − p[D])z], B = λ p[D](1 − z).

Theorem 2: If λ ∈ (0, 1), then eventually (t → ∞) the update rule in Eq. (20) gives rise to a distribution P(q) of the values of q[t] that is independent of the initial value q[0] and asymptotically stationary.

A proof of theorem 1 is given in appendix B, and a sketch of the proof of theorem 2 is given in appendix C. Figure 2 illustrates theorem 2 by simulating the long-run distribution P(q) for fixed values of z, p[c], BSS, and different values of λ. Notice that while the distributions P(q) shift when λ is changed, the figure suggests that the long-run mean value remains unchanged. With theorem 1 in hand, it is possible to calculate an explicit formula for the long-run mean value of q[t] that demonstrates this fact. Define q := lim[t→∞] m[t]. Then q must be a fixed point of the linear recurrence relation in theorem 1, that is, q = B/(1 − A), so that

q = p[D](1 − z) / [p[D](1 − z) + (1 − p[D])z]. (26)

Because q is independent of λ and the initial value q[0], the long-run average dynamics of the learning model are completely specified by z, p[c], and BSS. The sequence of expected values is guaranteed to converge to q, provided that |A| < 1.
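The fixed point of the mean recurrence can be checked by simulating the reduced rule (20) directly. The sketch below assumes the p[c] < z case, treats the within-disagreement event probability p[D] as a free input, and uses illustrative parameter values of my choosing:

```python
import random

def long_run_q(p_D, z, lam, steps=200_000, seed=0):
    """Simulate q[t] under the reduced learning rule, with e = 1 occurring
    with probability p_D on each occasion of disagreement, and return the
    time-averaged probability of using the forecasts."""
    rng = random.Random(seed)
    q, total = 0.5, 0.0
    for _ in range(steps):
        if rng.random() < p_D:
            q += lam * (1.0 - z) * (1.0 - q)   # event occurred: forecasts vindicated
        else:
            q -= lam * z * q                    # false alarm: forecasts punished
        total += q
    return total / steps

p_D, z, lam = 0.6, 0.3, 0.05
q_limit = p_D * (1 - z) / (p_D * (1 - z) + (1 - p_D) * z)   # B / (1 - A)
assert abs(long_run_q(p_D, z, lam) - q_limit) < 0.02
```

Because the update is linear in q[t], the expectation obeys the same recurrence exactly, which is why the simulated time average matches the closed-form fixed point regardless of the learning rate.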
This requirement translates into a condition on λ that ensures convergence; however, it can be shown that this condition is not as restrictive as that in Eq. (21). Furthermore, restricting λ to be less than 1 ensures that the entire distribution of values (not just the mean) converges to a long-run distribution that is independent of the initial value q[0]. Thus the distributions plotted in Fig. 2 are valid for all initial values. Theorem 1 captures an additional piece of useful information, in that it also specifies how the sequence of expected values converges to its long-run value. The recurrence implies that, provided A > 0, convergence to the long-run mean is monotonic. Because A > 0 when , I will focus on this case. Thus Burrhus’s average tendency to follow the forecasts is always less than its long-run value and increases toward this asymptotic value. The converse holds in the opposite case; however, because Burrhus initially has no information about forecast performance, it is likely that the former case applies. Furthermore, a straightforward calculation shows that the quantity A gives the rate of convergence of the sequence of mean values, with values close to zero (one) corresponding to fast (slow) convergence. Inspecting the expression for A in Eq. (22), it is clear that the larger λ is, the faster the convergence. Moreover, one can show that lim [z→0] A = lim [z→1] A = 1. Thus high or low values of z exhibit slow convergence to the long-run mean, with intermediate values converging faster. Figure 3 illustrates the dependence of A on z for a sample case. In the next section, I make use of these results to establish the relationship between the behavioral learning model and the normative model of section 2.

4. Relationship between normative and behavioral models

The fact that the long-run behavior of the learning model can be summarized in a single long-run value leads to a very uncomplicated relationship between normative and behavioral measures of forecast value.
To demonstrate this I will calculate an equivalent of the relative value score for the behavioral model, VS[B], defined in terms of the expected loss that the behavioral learner sustains in the long run. To calculate VS[B], begin by defining the complement of D—that is, the set of events in which the forecasts and the climatology agree with one another—and let Prob(D) be the probability of the forecasts and the climatology disagreeing. In addition, let the expected losses from using the climatology and the forecasts, respectively, be defined conditional on the fact that the two information sources disagree, with similar definitions holding for the case where they do agree. Then the average losses that the behavioral learner sustains can be written in terms of these quantities, and one finds that the behavioral relative value score VS[B] is related to the normative relative value score VS[N] by VS[B] = qVS[N]. I plot the normative and behavioral relative value scores as a function of z for several values of the Brier skill score in Fig. 4. The relationship (34) and the expression for q in Eq. (26) constitute the main results of the analysis. Their implications are discussed next. First, notice that Eq. (34) implies that the behavioral relative value score is always less than the normative relative value score—statistically unsophisticated forecast users do not attain the normative ideal even after long-term exposure to the forecasting product. In fact, for fixed parameter values, VS[B] is a constant fraction q of VS[N]. Thus the key to understanding the relationship between the two models is to understand this quantity, which depends on z, BSS, and p[c]. Consider the effect of the decision variable z on reinforcement. For large (small) values of z, the reinforcement strength will be large in absolute magnitude when e = 0 (e = 1). However, using Eq.
one can show that, although reinforcement is strong for e = 0 (e = 1) when z is large (small), the probability of e = 0 (e = 1) occurring is relatively low. It turns out that the low probability dominates the strength of the reinforcement for extreme values of z. Extensive numerical simulations for a variety of parameter values show that either z = 0 or z = 1 gives rise to the lowest value of q, depending on whether the relevant probability lies above or below 0.5. Thus users with values of z close to 0 or 1 exhibit the greatest relative reduction in their achieved value scores, with intermediate values of z corresponding to higher values of q. To see how these relative differences in value score translate into differences between the losses realized by normative and behavioral agents, notice from the definitions that, because VS[N] tends to zero as z tends to 0 or 1, the difference between the losses realized by normative and behavioral agents is small for extreme values of z. The behavior of the loss difference for intermediate values of z is as follows: below a threshold of 0.5, the difference between losses is increasing in z, whereas above it the difference either decreases monotonically, or it has a single maximum before descending back to zero at z = 1. This behavior is reversed for values above 0.5, with the loss difference increasing monotonically or exhibiting a single maximum in one regime and decreasing in the other. This behavior is illustrated in Fig. 5. Now consider the dependence of q on BSS, the forecast skill. As one might expect, q is always an increasing function of BSS. If, however, we focus on the difference between the losses realized by the two agents, the dependence on BSS becomes more interesting. As Eq. (37) shows, the difference VS[N] − VS[B] is proportional to V[B] − V[F] when we hold z and p[c] fixed and vary BSS. I plot the difference between the value scores as a function of BSS in Fig. 6.
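The unimodal dependence of the value-score gap on skill can be illustrated with toy functional forms. In the sketch below, both the fraction q of normative value attained and the normative score VS_N are taken to grow linearly with BSS; these forms are purely illustrative assumptions, not the paper's Eq. (26). Since VS_B = q·VS_N, the gap (1 − q)·VS_N then vanishes at both extremes of skill and peaks at intermediate skill.

```python
# Toy illustration of why VS_N - VS_B is unimodal in skill.
# Assumed monotone forms (illustrative only):
#   q(BSS) = BSS       increases from 0 to 1 with skill,
#   VS_N(BSS) = BSS    increases from 0 with skill.
# Then VS_N - VS_B = (1 - q) * VS_N = BSS * (1 - BSS).

def value_gap(bss):
    q = bss        # fraction of normative value attained (toy form)
    vs_n = bss     # normative relative value score (toy form)
    return (1 - q) * vs_n

grid = [i / 100 for i in range(101)]
gaps = [value_gap(b) for b in grid]

# Zero at the endpoints, single interior maximum at moderate skill:
print(gaps[0], gaps[-1], grid[gaps.index(max(gaps))])  # 0.0 0.0 0.5
```

Any pair of forms in which q rises to 1 while VS_N rises from 0 produces the same inverted-U shape, which is the competition the text describes.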
The figure suggests that the difference between the losses realized by behavioral and normative agents is always a unimodal function of BSS, with the largest deviations occurring for intermediate values of BSS, and the losses sustained converging for extreme values of BSS. The reason for this is simple—the factor (1 − q) in Eq. (37) is decreasing in BSS, whereas VS[N] is increasing in BSS. The competition between these two effects leads to the inverted U shape in Fig. 6. In addition, because VS[N] = 0 when BSS = 0 and q = 1 when BSS = 1, the two value scores converge at BSS = 0, 1. Thus statistically unsophisticated learning has the most pronounced effect on realized value for forecasts of moderate skill. The fact that the difference between the normative and behavioral models has a characteristic dependence on the decision parameter z and the forecast skill BSS allows predictions to be made about when the effect of statistically unsophisticated learning on user behavior and realized value is likely to be most important. In general, the model suggests that for users with intermediate values of z and for forecasts of moderate skill, behavioral deviations from the normative ideal are likely to be significant. This provides a direct justification for the model, because it may not only provide a more realistic representation of the behavior of certain kinds of forecast users but it can also suggest which users stand to gain the most from an increased knowledge and understanding of forecasts and their performance.

5. Conclusions

The analysis presented in this paper offers an alternative modeling paradigm to the normative framework for assessing forecast value in the case of cost–loss decisions.
The model is designed to incorporate a specific behavioral effect—learning dynamics based on the assumption that the forecast user is statistically unsophisticated; that is, the forecast user does not deduce the statistical properties of the forecasts after long-term exposure to them. Instead, the user reacts to forecasts in a manner consistent with the theory of reinforcement learning, so that the frequencies of forecast use choices are positively or negatively reinforced depending on their consequences. A simple model of this process based on existing models of reinforcement in the psychology and economics literature was proposed, its consequences deduced, and its deviation from the normative model analyzed. It was demonstrated that accounting for statistically unsophisticated learning reduces the relative value score that the forecast user achieves by a multiplicative factor that depends on the user’s cost–loss ratio, the forecast skill, and the climatological probability of the adverse event. An analytical expression for this factor was derived, and its properties analyzed. It was shown that differences between the losses sustained by normative and statistically unsophisticated users are greatest for users with intermediate cost–loss ratios and when forecasts are of intermediate skill. These predictions of the model are empirically testable, and if verified in the field or in laboratory experiments, they could act as a useful heuristic for directing interventions [such as those described by Patt et al. (2007)] aimed at educating users, thus increasing the value users realize from forecasts. If we accept the assertion of Pielke and Carbone (2002) that it is in the interests of the scientific community to take responsibility not only for the production of forecasts but to follow them through all the way to end users’ decisions, then it is vital to attempt a systematic scientific study of the user’s forecast use behavior. 
In pursuing this, we would do well to learn from and collaborate with colleagues in neuroscience and experimental psychology, and experimental and behavioral economics. These disciplines have evolved powerful tools for understanding the neural processes involved in decision making (Yu and Dayan 2005; Behrens et al. 2007; Platt and Huettel 2008; Rushworth and Behrens 2008), patterns and biases in decision making (Kahneman and Tversky 2000; Nicholls 1999), and sophisticated models of learning and trust (Camerer 2003; Erev et al. 2008). The learning model presented here is a stylized example of the behavioral modeling paradigm that was designed to emphasize the importance of behavioral factors in determining user behavior and thus de facto forecast value. It should certainly not be used for policy recommendations in the absence of empirical data; such data come in two varieties—descriptive field studies and laboratory experiments. Descriptive studies of forecast value (Patt et al. 2005; Stewart et al. 2004; Stewart 1997), in which researchers attempt to analyze the behavior of forecast users in the field, have found it difficult to make coherent measurements of forecast value and its determinants, owing largely to a wide range of circumstantial variables that are beyond the observer’s control. For this reason, more effort and resources should be directed toward running controlled laboratory experiments in which test subjects interact with forecasting products in simulated decision scenarios [see Sonka et al. (1988) for an early attempt]. Such experiments allow for a much greater degree of controllability than field studies, enabling the experimenter to build testable models of decision-making behavior that stand a greater chance of being generalizable than any single field study. It is not clear a priori that results obtained in the laboratory will necessarily be applicable in more complex real-world decision environments. 
Laboratory investigations can, however, serve to highlight key behavioral factors that influence decision making even in controlled situations and thus suggest causative hypotheses that can be empirically tested in the field. Hopefully, this will not only allow us to make more credible estimates of the true value of forecasts but will also suggest tangible, behaviorally sound methods of increasing their value for the users that are their raison d’être. I thank Christopher Honey, Gareth Boxall, and Rachel Denison for their helpful suggestions. The comments of the two anonymous reviewers were very useful. The financial support of the Commonwealth Scholarship Commission and the NRF is gratefully acknowledged.
• Arthur, W., 1991: Designing economic agents that act like human agents: A behavioral approach to bounded rationality. Amer. Econ. Rev., 81, 353–359.
• Behrens, T., Woolrich, M., Walton, M., and Rushworth, M., 2007: Learning the value of information in an uncertain world. Nat. Neurosci., 10, 1214–1221.
• Brenner, T., 2006: Agent learning representation: Advice on modelling economic learning. Agent-Based Computational Economics, L. Tesfatsion and K. Judd, Eds., Vol. 2, Handbook of Computational Economics, Elsevier, 895–947.
• Bush, R., and Mosteller, F., 1955: Stochastic Models for Learning. Wiley, 365 pp.
• Camerer, C., 2003: Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press, 550 pp.
• Cross, J., 1973: A stochastic learning model of economic behavior. Quart. J. Econ., 87, 239–266.
• Dennett, D., 1975: Why the law of effect will not go away. J. Theory Soc. Behav., 5, 169–188.
• Erev, I., and Roth, A., 1998: Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. Amer. Econ. Rev., 88, 848–881.
• Erev, I., Ert, E., and Yechiam, E., 2008: Loss aversion, diminishing sensitivity, and the effect of experience on repeated decisions. J. Behav. Decis.
Making, 21, 575–597.
• Gigerenzer, G., Hertwig, R., van den Broek, E., Fasolo, B., and Katsikopoulos, K., 2005: “A 30% chance of rain tomorrow”: How does the public understand probabilistic weather forecasts? Risk Anal., 25, 623–629.
• Jolliffe, I., and Stephenson, D., 2003: Forecast Verification: A Practitioner’s Guide in Atmospheric Science. Wiley, 240 pp.
• Kahneman, D., and Tversky, A., 1979: Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–292.
• Kahneman, D., and Tversky, A., 2000: Choices, Values, and Frames. Cambridge University Press, 840 pp.
• Katz, R., and Murphy, A., 1997: Economic Value of Weather and Climate Forecasts. Cambridge University Press, 222 pp.
• Katz, R., and Ehrendorfer, M., 2006: Bayesian approach to decision making using ensemble weather forecasts. Wea. Forecasting, 21, 220–223.
• Khamsi, M., and Kirk, W., 2001: An Introduction to Metric Spaces and Fixed Point Theory. Wiley, 302 pp.
• Lieberman, D., 1999: Learning: Behavior and Cognition. 3rd ed. Wadsworth Publishing, 595 pp.
• Mazur, J., 2006: Learning and Behavior. 6th ed. Prentice Hall, 444 pp.
• Millner, A., 2008: Getting the most out of ensemble forecasts: A valuation model based on user–forecast interactions. J. Appl. Meteor. Climatol., 47, 2561–2571.
• Murphy, A., 1977: The value of climatological, categorical and probabilistic forecasts in the cost–loss ratio situation. Mon. Wea. Rev., 105, 803–816.
• Murphy, A., and Ehrendorfer, M., 1987: On the relationship between the accuracy and value of forecasts in the cost–loss ratio situation. Wea. Forecasting, 2, 243–251.
• Murphy, A., and Winkler, R., 1987: A general framework for forecast verification. Mon. Wea. Rev., 115, 1330–1338.
• National Research Council, 2006: Completing the Forecast: Characterizing and Communicating Uncertainty for Better Decisions Using Weather and Climate Forecasts. National Academies Press, 112 pp.
• Nicholls, N., 1999: Cognitive illusions, heuristics, and climate prediction. Bull. Amer. Meteor. Soc., 80, 1385–1397.
• Norman, M., 1968: Some convergence theorems for stochastic learning models with distance diminishing operators. J. Math. Psychol., 5, 61–101.
• Oreskes, N., Shrader-Frechette, K., and Belitz, K., 1994: Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263, 641–646.
• Patt, A., and Gwata, C., 2002: Effective seasonal climate forecast applications: Examining constraints for subsistence farmers in Zimbabwe. Global Environ. Change, 12, 185–195.
• Patt, A., Suarez, P., and Gwata, C., 2005: Effects of seasonal climate forecasts and participatory workshops among subsistence farmers in Zimbabwe. Proc. Natl. Acad. Sci. USA, 102, 12673–12678.
• Patt, A., Ogallo, L., and Hellmuth, M., 2007: Learning from 10 years of climate outlook forums in Africa. Science, 318, 49–50.
• Pavlov, I., 1928: Lectures on Conditioned Reflexes: Twenty-Five Years of Objective Study of the Higher Nervous Activity (Behavior) of Animals. International Publishers, 414 pp.
• Pielke, R., Jr., and Carbone, R. E., 2002: Weather impacts, forecasts, and policy: An integrated perspective. Bull. Amer. Meteor. Soc., 83, 393–403.
• Platt, M., and Huettel, S., 2008: Risky business: The neuroeconomics of decision making under uncertainty. Nat. Neurosci., 11, 398–403.
• Rayner, S., Lach, D., and Ingram, H., 2005: Weather forecasts are for wimps: Why water resource managers do not use climate forecasts. Climatic Change, 69, 197–227.
• Richardson, D., 2001: Measures of skill and value of ensemble prediction systems, their interrelationship and the effect of ensemble size. Quart. J. Roy. Meteor. Soc., 127, 2473.
• Roncoli, C., 2006: Ethnographic and participatory approaches to research on farmer responses to climate predictions. Climate Res., 33, 81–99.
• Roulston, M. S., and Smith, L. A.
, 2004: The boy who cried wolf revisited: The impact of false alarm intolerance on cost–loss scenarios. Wea. Forecasting, 19, 391–397.
• Rushworth, M., and Behrens, T., 2008: Choice, uncertainty and value in prefrontal and cingulate cortex. Nat. Neurosci., 11, 389–397.
• Skinner, B., 1933: The rate of establishment of a discrimination. J. Gen. Psychol., 9, 302–350.
• Sonka, S., Changnon, S., and Hofing, S., 1988: Assessing climate information use in agribusiness. Part II: Decision experiments to estimate economic value. J. Climate, 1, 766–774.
• Stewart, T., 1997: Forecast value: Descriptive decision studies. Economic Value of Weather and Climate Forecasts, R. Katz and A. Murphy, Eds., Cambridge University Press, 147–181.
• Stewart, T., Pielke, R., Jr., and Nath, R., 2004: Understanding user decision making and the value of improved precipitation forecasts: Lessons from a case study. Bull. Amer. Meteor. Soc., 85, 223
• Thorndike, E., 1911: Animal Intelligence: Experimental Studies. Macmillan, 297 pp.
• Thorndike, E., 1932: The Fundamentals of Learning. AMS Press, 638 pp.
• Thorndike, E., 1933: A theory of the action of the after-effects of a connection upon it. Psychol. Rev., 40, 434–439.
• Vogel, C., and O’Brien, K., 2006: Who can eat information? Examining the effectiveness of seasonal climate forecasts and regional climate-risk management strategies. Climate Res., 33, 111–122.
• Wilks, D., 2001: A skill score based on economic value for probability forecasts. Meteor. Appl., 8, 209–219.
• Wilks, D., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. Academic Press, 627 pp.
• Yu, A., and Dayan, P., 2005: Uncertainty, neuromodulation, and attention. Neuron, 46, 681–692.
• Zhu, Y., Toth, Z., Wobus, R., Richardson, D., and Mylne, K., 2002: The economic value of ensemble-based weather forecasts. Bull. Amer. Meteor. Soc., 83, 73–83.
Reinforcement Learning Rule
Table 2. Reinforcement learning rule.
Proof of Theorem 1

To establish the theorem, I write the updating rule as follows: i ∈ {0, 1} indexes a set of two updating functions h[i]—for example, ) = (1 − ) = [1 − (1 − (1 − ) when . At each step t, one of the updating functions is chosen to update the value of q[t] with a known probability, the two selection probabilities summing to one. Systems of functions such as this, in which a function is applied with a known probability at each iteration, are known as iterated function systems (IFS). Define S to be the set of values that q[t] takes with nonzero probability given the initial value q[0] and the updating rule above. The mean value of q[t] is then given by a sum over S, weighted by the probability of being at each value at time t. Now each element of S arises from some element of the previous set under one of the two updating functions, and the probability of this occurring satisfies the corresponding recursion. Using the fact that the functions are all linear and that for a linear function h, E[h(q)] = h(E[q]), where E is the expectation operator, substituting the expressions for the updating functions into this equation and collecting terms gives the result.

Proof of Theorem 2

The theorem follows as a direct consequence of theorem 2.2 in Norman (1968). To establish it, I will work with the case z > p[c]. The analysis presented below follows through in an exactly analogous manner for the case z ≤ p[c]. To prove the theorem, I will make use of several definitions. Let ([0, 1], d) be a metric space on the unit interval, where d(x, y) = |x − y| is the Euclidean distance between x and y. A contraction mapping is a function f that satisfies d(f(x), f(y)) ≤ k·d(x, y) for some constant k < 1. Intuitively, applying the map shrinks the distance between the initial points by at least a factor of k with each iteration. Readers interested in the general properties of such maps and the spaces they act on are referred to Khamsi and Kirk (2001). If each of the functions h[i] in the IFS is a contraction mapping, then the IFS is said to be distance diminishing. Finally, define the distance between two sets as the minimum distance between their elements. Now let K[t](q, q[0]) be the probability of being in state q at time t given an initial value q[0].
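The contraction property can be illustrated numerically: when two trajectories are driven by the same random sequence of linear maps whose slopes have magnitude less than one, the distance between them shrinks geometrically, regardless of the starting values. The maps below are illustrative stand-ins with a common slope, not the paper's actual h[i]:

```python
import random

# Two illustrative linear maps with slope of magnitude < 1 (contractions on [0, 1]):
lam = 0.4
h = [lambda q: (1 - lam) * q,            # slope 1 - lam
     lambda q: (1 - lam) * q + lam]      # slope 1 - lam, shifted intercept

random.seed(1)
q_a, q_b = 0.0, 1.0              # two maximally separated starting values
for _ in range(50):
    f = random.choice(h)         # the SAME map is applied to both trajectories
    q_a, q_b = f(q_a), f(q_b)

# The distance contracts by the slope at every step:
# |q_a - q_b| = (initial distance) * (1 - lam)^50, which is tiny.
print(abs(q_a - q_b) <= (1 - lam) ** 50 + 1e-12)
```

This is exactly the mechanism the proof sketch exploits: the set of reachable values forgets its starting point, so the long-run distribution is independent of q[0].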
Theorem 2.2 of Norman (1968): Suppose that an IFS of the form above is distance diminishing and has no absorbing states. Suppose also that the following condition is satisfied: That is, the probability density of q[t] converges (uniformly) to the stationary asymptotic distribution P(q) for any initial value q[0]. Thus to apply the theorem to the system defined by the learning rule, I need to show the following:
• (i) The system has no absorbing states; that is, there is no value of q[t] for which q[t+1] = q[t] with probability 1.
• (ii) Each of the functions h[i] is a contraction mapping.
• (iii) The condition (C3) is satisfied.
Condition (i) is easily verified by inspecting the learning rule in Eq. (20). To discuss conditions (ii) and (iii) I exploit the fact that the functions h[i] are linear. They may thus be written as h[i](q) = k[i]q + c[i], where k[i] and c[i] are constants. One can verify using the definition that a linear function on [0, 1] is a contraction mapping if it has a slope of absolute magnitude less than 1. Thus we require |k[i]| < 1 for the system to be distance diminishing. This translates into conditions on λ; these inequalities are satisfied for λ ∈ (0, 1). Finally, I must verify that for these values of λ, condition (C3) is satisfied. To do this, let q be in S. This means that there is a sequence of updating functions whose application to q[0] gives rise to q. Now consider the result q′ of the same sequence of functions applied to a different starting value q′[0]. The distance between q and q′ after t steps is at most k^t times the initial distance, where k = max[|1 − (1 − )|, |1 − |] < 1 for λ ∈ (0, 1). The minimum distance between the two sets must be less than or equal to the distance between q and q′, so it shrinks as well. Because this argument holds for all t, condition (C3) is satisfied and the theorem is established.
Fig. 1. Sample trajectory of the probability q[t] of following the forecast. Here z = 0.1, p[c] = 0.2, λ = 0.2, BSS = 0.5, and q[1] = 0.5. Citation: Weather, Climate, and Society 1, 1; 10.1175/2009WCAS1001.1
Fig. 2.
Long-run distributions of q values from the update rule (20); (left)–(right) λ = 0.2, 0.5, and 0.8. In all plots z = 0.1, p[c] = 0.2, and BSS = 0.5.
Fig. 3. Dependence of the rate of convergence of the sequence of expected values q[t] on z. In this example, λ = 0.5, p[c] = 0.2, and BSS = 0.5.
Fig. 4. Normative (solid line) and behavioral (dashed line) relative value scores; (left)–(right) BSS = 0.2, 0.5, 0.8, and p[c] = 0.2 in all plots.
Fig. 5. Difference between expected losses sustained by behavioral and normative agents as a function of z. In this example, p[c] = 0.2 and L = 1.
Fig. 6. Difference between normative and behavioral relative value scores as a function of forecast skill. In this example, p[c] = 0.2 and z = 0.1; however, the unimodal shape is preserved for all parameter values.
Table 1. Loss matrix 𝗟(a, e) for the cost–loss scenario.
Named in honor of B. F. Skinner. Notice that actions are determined by the information source selected because the climatology always prescribes the same action, and we know that the forecast disagrees with the climatology. Thus a = 1 when c = 1 and z > p[c], and a = 0 when c = 1 and z ≤ p[c]. To explain this further, notice that |A| < 1 implies that 0 < λ < 2/(z + p[1|D] − 2zp[1|D]). The expression on the right-hand side of this inequality is larger than or equal to 2; however, the weakest constraint on λ from the requirement (21) is that λ < max[z] {1/(max{z, 1 − z})} = 2. This follows from the definition of A, Eq. (22), and an application of l’Hôpital’s rule.
This implication is specific to our set S and depends critically on the fact that k is strictly less than one for all the functions h[i]. Refer to Norman (1968) for the full list of general conditions that the IFS must satisfy for it to be distance diminishing.
The Historical Development of Computing

The history of computability encompasses thousands of years, involving two semi-independent stories. On the one hand, the development of computational devices dates back to approximately 1000 to 500 B.C.E. On the other hand, the mathematical and logical history of the development of computation theory only reaches back about half as far, to around 400 B.C.E. This article discusses the history of computing devices. A separate article discusses the development of the general theory of computation. Readers with selective interests can use the links in the contents window. The article has a somewhat non-standard structure. A streamlined main narrative runs below. However, each of the links in the main article leads to an expansion of the main narrative to include significant detail. I recommend students read the streamlined narrative once, then read the narrative again including the linked content.

The Elements of Computational Devices

All computational devices share the same general four features: They all have (1) a physical structure, a medium that serves as the element of the system that can represent the objects, properties, events, relations, etc. for which the machine solves problems. For instance, the abacus pictured below has a series of rods and beads. These rods and beads constitute the physical structure serving as the representational medium for the abacus. The rods and beads can serve as the representational structure in the abacus since the rods and beads constitute distinct components having distinct relationships to one another. Of equal importance, these elements are capable of altering their relationships in systematic ways. (2) All computational devices have an interpretation function, a mapping that assigns elements of the structure in the device to elements of the problem domain that prove significant for the problem(s) the device solves.
The interpretation function maps the structure to the elements of the problem domain in a systematic fashion so as to preserve the structural relationships between significant elements of both the device and the problem domain. In the case of the abacus, the interpretation function maps the rods and beads to numbers and decimal places. (3) All computational devices have a set of structure-specific transformation operations that transform the structure of the device in a fashion that mirrors the structural changes in the domain appropriate to the solution of the problem. In the abacus case, the rules for moving beads on rods and across rods constitute a set of transformation rules that preserve the mapping of beads and rods to numbers and decimal places in a manner that mirrors numeric functions like addition. For additional details, follow the abacus link below. (4) Finally, all computational devices include a control structure. Control structures determine the order of the transformation operations for any given computation. In our abacus case, the human operator must supply the control structure. To use an abacus, therefore, one must learn how to move the beads on the rods in order to perform a calculation. As we will see, an important part of the history of the development of computing devices involves the incremental automation of computing through the incorporation of the control structure into the device itself.

[Figure: an abacus. The abacus represents the earliest known computational device in that it incorporates the four central components of computational devices. (1) A structure serving as the representational medium; in the abacus the rods and beads serve as the structure. (2) An interpretation function that maps the structure to the elements of the problem domain in a systematic fashion so as to preserve the structural relationships between significant elements of the domain. (3) A set of structure-specific transformation operations that transform the structure of the device in a fashion that mirrors the structural changes in the domain appropriate to the solution. In the abacus case the operations are the rules for moving beads on rods and across rods so as to mirror numeric functions. (4) A control structure which selects among the transformation operations and determines their order of execution during computation.]

Early Beginnings: Manual Computing Devices

The first known device for numerical calculating is the abacus. Its invention in Asia Minor dates to approximately 1000 to 500 B.C.E. Abacus users compute by moving a system of sliding beads arranged in columns on a rack. Merchants of the time used the abacus to keep track of trading transactions until the use of paper and pencil undermined its importance, especially in Europe. One can use the abacus to add, subtract, multiply, and divide. In fact, mathematicians have demonstrated that all Turing computable functions are abacus computable. However, the abacus requires the user to directly manipulate the device for each step in a given computation. For this reason it has little in common with modern computers that perform many or most operations without the direct manipulation of the user (i.e., automatically).

During what scholars call the Modern Horizon period, approximately 600–1000 C.E., the Incas employed a device called the Quipu or khipu. This device consists of a series of cotton cords. There is a main cord from which many "pendant" cords hang. While Quipus seem to have been a general data-bearing device (for instance, colors of cords were used to represent different materials and states of affairs), one well-documented use was to represent quantities and calculate numbers. Knots in the pendant cords and their relative position on the cord allowed the Incas to represent numbers using a decimal system (1s, 10s, 100s, etc.).

The 15th and 16th Centuries

In 1967 American researchers discovered two unknown (or lost) notebooks of Leonardo da Vinci (1452-1519) in the National Library of Spain in Madrid. Written between 1503 and 1505, these works, called the "Codex Madrid," were examined shortly after their discovery by Dr. Roberto Guatelli. Guatelli had an international reputation as an expert on Leonardo da Vinci, with a specialty of building working replicas of da Vinci's inventions. Guatelli recalled a drawing in the "Codex Atlanticus" (1480-1518) that was similar to the "Codex Madrid" calculator. Using both manuscripts, Dr. Guatelli built a controversial (and now lost) replica of the da Vinci machine in 1968. In Guatelli's replica, da Vinci's machine operates in a manner similar in principle to Pascal's later machine (1642). Most historians do not believe that da Vinci intended to design a calculator. Da Vinci himself probably could not have built the machine, as the frictional resistance generated by the machine would have been excessive for the materials of the time.

The 16th and 17th Centuries

The use of such semi-automatic or automatic machines as the one envisioned by da Vinci to solve mathematical problems developed primarily during the early 17th century. Early machines were designed and even built by mathematicians. These machines were calculators of sorts, capable of basic arithmetical operations like addition, subtraction, multiplication, and division. Among the early creators of such devices were John Napier, Wilhelm Schickard, Blaise Pascal, and Gottfried Leibniz. The Scottish mathematician John Napier (1550-1617) invented several devices for multiplication. The best known of his devices, the "bones," consisted of a set of rods and a rack.
Napier labeled the side of the rack from 2 to 9 so that the user could combine the rods and rack to create a 2-to-9 multiplication table for any number. One selects the rods corresponding to the digits in the number and places them together in the rack. To multiply the number by 4, for example, one proceeds along the row marked 4 from right to left, adding the numbers in each parallelogram to give the next digit.

The first slide rule appeared, depending on whom you consult, at some point between 1622 and 1625. The modern slide rule is generally thought to be primarily the result of the insights of four men: John Napier, the English astronomer Edmund Gunter, the English mathematician the Reverend William Oughtred, and a French artillery officer and geometry professor, Amédée Mannheim. The slide rule is based on Napier's discovery of logarithms. Gunter's contribution was to draw a two-foot-long line on which he placed whole numbers spaced at intervals proportionate to their log values. Prior to Gunter, to find the logarithm of a number you either calculated it yourself or looked it up in one of the standardized tables. The former was time consuming, while the latter suffered from the many errors introduced in the calculation and reproduction of the tables. Using Gunter's line, one can find one's values simply by measuring distances between numbers. Oughtred took the process a step further by opposing two of Gunter's lines and showing how one can perform calculations by moving the two lines relative to one another. Mannheim introduced the ten-inch design with a movable double-sided cursor while a student in Paris. Since the slide rule works by manipulating distances to perform calculations, it is probably the first analog computer to have widespread use. In fact, it was used in engineering, science, and math as late as 1972.
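The slide rule's trick is that adding lengths proportional to logarithms multiplies the underlying numbers, since log(a) + log(b) = log(ab). A minimal sketch of the principle in Python, with rounding to a few significant figures to mimic the limited precision of reading distances off a physical scale:

```python
import math

def slide_rule_multiply(a, b, digits=3):
    """Multiply two positive numbers by adding logarithmic distances,
    as a slide rule does, then round to `digits` significant figures."""
    distance = math.log10(a) + math.log10(b)  # slide the two log scales together
    product = 10 ** distance                  # read the answer back off the scale
    exponent = math.floor(math.log10(product))
    factor = 10 ** (digits - 1 - exponent)
    return round(product * factor) / factor

print(slide_rule_multiply(2, 3))        # 6.0
print(slide_rule_multiply(3.14, 2.72))  # three significant figures, as on an early rule
```

Like the physical device, this only works for positive operands, and its answer is only as good as the precision of the "scale" — here simulated by the rounding step.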
Since the slide rule is an analog device, its accuracy (and usefulness) depended upon the limitations of the technology used in its manufacture. Early slide rules had an accuracy of only three digits. This proved sufficient precision for most work, but was not suited to situations where greater accuracy was needed.

As with da Vinci, a chance discovery in 1935, and again in 1956, of some of the German astronomer and mathematician Wilhelm Schickard's (1592-1635) letters to his friend Johannes Kepler showed that Schickard devised a mechanical calculator in 1623. Schickard's invention was described to Kepler as a mechanical means for calculating ephemerides. Only two prototypes (now lost) were ever built at the time, one of which was used by Kepler. Schickard's machine has been reconstructed (1960) based upon his diagrams.

Blaise Pascal (1623-1662), an 18-year-old in 1642, invented what has come to be known as the "Pascaline" (his name for it was the "numerical wheel calculator") to facilitate his father's work as a French tax collector based in Paris. The numerical wheel calculator, or Pascaline, consisted of a rectangular box employing eight movable cogwheels or dials exploiting base ten to perform addition of sums up to eight figures long. Specifically, as the dial for the ones column completed one revolution (moved ten notches), it moved the next wheel, representing the tens column, one place. A complete revolution of the tens dial increased the hundreds dial one notch, and so on through the entire eight wheels. To add with the Pascaline one moved the cogwheels to the first number, followed by each of the other numbers to be added. The Pascaline, though clever in design, had two shortcomings: (1) the user had to configure the wheels manually, and (2) its straightforward computational abilities extended only to addition. The Pascaline could be used to subtract and multiply (by successive addition), though each operation required much more from the user.
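The carry behavior described above — each full revolution of one dial advancing the next — can be modeled as a row of base-ten counters. This sketch captures only the carry logic, not the actual gearing:

```python
def pascaline_add(dials, number, width=8):
    """Add `number` into a list of base-ten dials (least significant first),
    carrying each overflow into the next dial, as the Pascaline's wheels did."""
    digits = [int(d) for d in reversed(str(number))]
    carry = 0
    for i in range(width):
        incoming = digits[i] if i < len(digits) else 0
        total = dials[i] + incoming + carry
        dials[i] = total % 10   # the dial's new position
        carry = total // 10     # a full revolution advances the next wheel
    return dials

dials = [0] * 8
pascaline_add(dials, 1942)
pascaline_add(dials, 58)
print(int("".join(map(str, reversed(dials)))))  # 2000
```

Note that, as on the real machine, results beyond the eighth wheel are simply lost — there is nowhere for the final carry to go.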
Gottfried Wilhelm von Leibniz (1646-1716), a German mathematician and philosopher, studied Pascal's original notes and drawings to create a machine that improved upon the Pascaline. Leibniz's machine could add, subtract, multiply, and divide. Leibniz modified the machine to include a stepped-drum gear design, called the Leibniz wheel. The Leibniz wheel was a movable carriage connecting pin wheels like Pascal's via stepped cylinders containing ridge-like teeth of different lengths corresponding to the digits 1 through 9. Turning the crank that connected the cylinders engaged the smaller gears above the cylinders, and these in turn engaged the adding section. The adding section consisted of a cylinder on which gearing teeth were set at varying lengths, which functioned as a combined series of simple flat gears. Leibniz called his final creation, commissioned in 1674, the Stepped Reckoner. The Reckoner, however, required some user manipulation for carry-overs and often gave wrong answers: a design error in the carrying mechanism caused the machine to fail to carry tens correctly when the multiplier was a two- or three-digit number. Both Charles, third Earl Stanhope (English, 1775), and Matthäus Hahn (German; started 1770, finished 1776) did build their own successful multiplying calculators similar to Leibniz's.

The calculators of the 16th and 17th centuries provided (to a limited extent) a proof of concept that mechanical methods embodied in machines could perform lengthy and involved numerical calculations. These machines represent the basic insights used in constructing mechanical calculators until the middle of the 20th century. Nevertheless, the invention and use of devices capable of lengthy computations still required the development of several key elements. First, the machines of the 17th and 18th centuries operated at best semi-automatically. At each new stage of a calculation the user had to intervene manually.
Second, every machine was a special-purpose machine designed and constructed to perform a single task or a very small number of tasks. Third, each individual calculation required the user to configure the machine. There was no notion of a program, i.e., a set of instructions written in terms of a set of basic operations, which would allow the machine to perform a wide array of tasks by utilizing different sequences of its basic operations. Fourth, with the exception of the representation of input/output, no element of these machines could serve as memory, either for a program or for intermediate or partial results. In certain cases, users wrote down partial results and later re-entered them when they were needed to finish a calculation. Finally, since these machines operated by mechanical means, they were limited in complexity and speed. The history of calculating machines from Leibniz to ENIAC and ACE is largely one of the ideological and technological advances that culminate in the construction of general-purpose programmable computers.

The 18th and 19th Centuries

The development of more sophisticated computing machines in the 19th century was marked by more failure than success. In part the failures were due to the sheer complexity of the task. In part they were due to funding problems caused by the inability to envision the full impact of such machines upon diverse human activities. At the end of the 18th century, in 1786, J. H. Mueller, a Hessian army officer, conceived the idea of what Babbage later called the Difference Engine. Specifically, Mueller envisioned a mechanical calculator for determining polynomial values using Newton's method of differences. The method exploits the fact that, for a polynomial of degree n evaluated at fixed intervals, the nth differences between successive values are constant; once that constant is known, further values of the polynomial can be generated by repeated addition alone.
Such a machine, though seemingly as specialized as the Pascaline, can be used to calculate values for any function that one can approximate over suitable intervals by a polynomial. Mueller's fund-raising efforts proved fruitless, and the project was forgotten. The next significant development in computing did not occur until the 1820s. Charles Xavier Thomas de Colmar (1785-1870), a French industrialist, constructed and mass-produced the first commercially successful calculator. Like Mueller, de Colmar began developing his idea while in the army. De Colmar's "Arithmometer" employed the same stepped-cylinder approach as Leibniz's calculator. In addition to multiplication, the Arithmometer could also perform division with user assistance.

In 1811 a young Charles Babbage, the son of a banker and a gifted mathematician, entered Cambridge. According to Babbage's account in his autobiography, Passages from the Life of a Philosopher, his attention was first drawn to computing machinery in 1812 when

... I was sitting in the rooms of the Analytical Society, at Cambridge, my head leaning forward on the table in a kind of dreamy mood, with a table of logarithms lying open before me. Another member, coming into the room, and seeing me half asleep, called out, "Well, Babbage, what are you dreaming about?" to which I replied "I am thinking that all these tables (pointing to the logarithms) might be calculated by machinery."

Some doubt the veracity of Babbage's above account. Babbage definitely did not act on his ideas until 1819, in connection with checking tables for the Royal Astronomical Society. The astronomical data, values for logarithms and trigonometric functions, as well as various physical constants encoded in the tables, were heavily and extensively employed for scientific experimentation and nautical navigation. The standard government tables for navigation, for instance, were known to have in excess of 1,000 errors. Corrections for the navigation tables encompassed seven volumes.
Babbage knew that the sources of the errors were the humans who had produced the tables. The tables had been produced manually, and in some cases the measurements dated back over two centuries. In such an exhaustive compendium, compiled over such a long expanse of time, human calculating errors compounded by copyist mistakes had infected the tables like a virus. Since the calculations for the tables were to a large extent tedious and mechanical, Babbage realized that a machine that could produce the tables would eliminate calculating and transcription errors, as well as being incapable of suffering from the tedium of the task. Babbage's first important step, and the only one which he fully realized, was the conception and construction of a prototype for his Difference Engine. The Difference Engine, had Babbage completed it, would have evaluated polynomials using the method of differences. Babbage began work on the prototype machine in 1819 and successfully demonstrated it (without the ability to print its answers) for the Royal Astronomical Society in 1822. The function Babbage computed for the Society was n^2 + n + 41. At his demonstration Babbage proposed building a version of the machine that could calculate the necessary values and print these scientific tables. Impressed, the Society awarded him a gold medal and supported Babbage's proposal to build a full-scale difference engine with an accuracy of 20 decimal places. In 1823, with an initial (and historic) grant of 1,500 English pounds, Babbage set to work.

In addition to winning Babbage a grant to produce the full-scale Difference Engine, his prototype also brought him into contact with Ada, Countess of Lovelace. Ada was the only legitimate daughter of the poet Lord Byron (though she never lived with him). The teenaged Ada Byron encountered the prototype and Babbage at a society function intended to show off new inventions.
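The method of differences that the Difference Engine mechanized can be sketched in a few lines of Python. For the demonstration polynomial n^2 + n + 41, the second differences are a constant 2, so every further value comes from addition alone — exactly the operation the machine's wheels performed:

```python
def difference_engine(initial_value, differences, steps):
    """Tabulate a polynomial by repeated addition (Newton's method of differences).

    `initial_value` is p(0); `differences` holds the first, second, ...
    differences at n = 0, the last of which is constant for a polynomial.
    """
    value = initial_value
    diffs = list(differences)
    table = [value]
    for _ in range(steps):
        value += diffs[0]                     # add the first difference
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]          # update each difference from the one below
        table.append(value)
    return table

# p(n) = n^2 + n + 41: p(0) = 41, first difference p(1) - p(0) = 2,
# second difference constantly 2.
print(difference_engine(41, [2, 2], 5))  # [41, 43, 47, 53, 61, 71]
```

No multiplication appears anywhere — which is precisely why the method suited a machine built from adding wheels.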
Miss Byron, who had been tutored by a family friend, the great logician Augustus De Morgan, showed considerable intelligence and mathematical and logical ability. She immediately grasped the workings of the machine and its potential. In fact, Babbage once commented that she understood it better than he did and explained its functioning far better than he could. She and Babbage maintained constant contact for the rest of her life.

By 1840 Babbage had long since (about seven years earlier) abandoned work on the Difference Engine, with only half of its 25,000 parts completed and only a single fragment assembled. He had suffered endless struggles for funding, accusations of fraud, and controversies with his academic peers, while spending 34,000 pounds of his own and the British Government's money. In 1840 Babbage had begun touring the continent lecturing upon his new invention (which the British government refused to fund), the Analytical Engine. Babbage's design for the Analytical Engine represents the first design for a computer in the modern sense. It had a memory, a processor, and a program. In devising the Analytical Engine Babbage utilized Joseph-Marie Jacquard's 1801 technology of encoding data on punch cards. Jacquard used pasteboard punch cards to encode patterns that could then guide the behavior of looms. The Analytical Engine had two pasteboard memory stores. One store held the "operation cards" specifying what Babbage called the formula (the program). The other store held the "variable cards," which determined the variables upon which the formula would operate as well as any intermediate values. The two stores fed into the mill, which then carried out the specified operations.

The Countess of Lovelace played an extremely important role in the development of the Analytical Engine. She translated a French publication of notes on Babbage's lectures on the Analytical Engine into English, adding an addendum that was longer than the article, but so insightful that Babbage urged its publication in toto.
Her translation with addendum appeared in 1843, under the initials AAL, in the September 1843 edition of Taylor's Scientific Memoirs. Though Babbage may have written algorithms for the Difference Engine in earlier notes, the algorithm in Lovelace's 1843 article makes her the first person to publish an algorithm intended to be carried out by such a machine. As a result, she is often regarded as one of the first computer programmers. The countess also developed the programming techniques of subroutines, loops, and jumps. In addition, she meticulously documented the design and logic of the engine, providing the only clear records now available. She was, likewise, the first to recognize the engine's potential for applications beyond numerical calculation. A few years before his death Babbage began to fabricate the mill of the Analytical Engine. After Babbage's death the British Association for the Advancement of Science submitted a report (1878) recommending against construction of the Analytical Engine. In 1888 his son had completed the mill for the engine to a great enough extent that he used it to calculate to its 44th place. By 1906 the mill was fully completed.

Though Babbage failed to produce a working difference engine, in 1834 Georg Scheutz, a Swedish printer, publicist, writer, Shakespeare translator, and engineer, read of Babbage's difference engine in an article in the Edinburgh Review written by Dionysius Lardner. Working with his son Edvard, Georg Scheutz began to build a smaller version of the difference engine. Edvard was still in high school, and the two made their first engine in their kitchen from wood, using hand tools and a makeshift lathe. Utilizing slightly different principles, the Scheutzes constructed a working difference engine capable of storing 15-digit numbers and calculating fourth-order differences. Father and son demonstrated their difference engine for Babbage in 1854, who received them warmly.
At the Exhibition of Paris in 1855, their machine won the gold medal. Ultimately, they sold it to the Dudley Observatory in Albany, New York. The observatory calculated the orbit of Mars with the Scheutz machine. Despite their success at making working engines, the father-and-son team's effort was a financial failure.

While Babbage and the Scheutzes labored to implement digital computing instruments, James Thomson developed an analog computer in the form of a mechanical integrator to predict the tides using harmonic analysis. Thomson completed his work between 1861 and 1864. Thomson's brother William, Lord Kelvin, combined several of Thomson's integrators to develop the actual tidal analyzer/predictor. Kelvin published papers in 1876 outlining the utilization of integrators to build a device for solving differential equations. The machines built and envisioned by the Thomson brothers were analog devices, which operated by creating mechanical relationships that were isomorphic to (had the same structure as) the equation to be solved. The solution is computed by running the machine and recording what happens to the quantity of interest.

The Late 19th and 20th Centuries

Between 1888 and the creation of IAS in 1952, scientists and mathematicians developed the basic components and design innovations taken for granted in contemporary computers. The main innovations of this period involved increases in speed, the development of internalized read/write memories, the adoption of more efficient and general representations and processing elements, and design choices that favored simplicity over speed. Most if not all of these innovations were made possible by the development of the next significant component in digital computing instrumentation: the binary switching unit (first the relay and the vacuum tube, and eventually the transistor). The development of binary switching components and their integration into computing machinery made modern digital computers possible.
The switch to electrical, as opposed to mechanical, instruments would eventually prove the key to developing computing machinery with vastly improved speed, enhanced memory functions, easier use of recursive functions, and general programmability. The development of electronic components would allow for dramatically increased processing speeds. The idea of a program stored internally in a readable and writable memory allowed for programs with far greater complexity, as well as self-structuring programs. The conception and technological implementation of large read/write electronic memories allowed for large programs that could operate on large amounts of stored data, as well as storage of intermediate results, all at high speed. The utilization of binary as well as Boolean logic elements as the basis for machine operations eliminated many of the design problems faced by Babbage and others. Finally, the idea of a central serial processor sacrificed speed in favor of simplicity, which was a necessary tradeoff given that complexity was the limiting factor of the electronic engineering of the time (just as it was in mechanical engineering). Also of note was the development of analog computing devices. Scientists of this time rarely employed digital computing methods. Analog devices were in wide use, especially in engineering calculations, where the slide rule was indispensable.

The first truly significant development of the era was the construction in 1930 of a large-scale differential analyzer by Vannevar Bush at MIT, funded by the Rockefeller Foundation. Based upon work during the 1920s (independent of Kelvin's insights), the machine, which was the largest computational device in the world when built, could perform integration and differentiation. The next innovation chronologically was Konrad Zuse's early prototype, originally named the "V1".
Renamed the "Z1" after WWII, it was the first in a series of four mechanical binary programmable calculators developed by Zuse implementing the same abstract design concept. Zuse, an engineer, conceived the idea of mechanizing the calculations for his studies in the mid-1930s. Zuse wanted as general a computing machine as possible, one that could compute a series of equations. Zuse envisioned a design quite similar in approach to Babbage's Analytical Engine, despite having no contact with Babbage's ideas until 1939. Zuse's machine would have a data-recording memory, a basic arithmetic unit, a unit to coordinate operations and data (control), a program unit to enter programs and data, and a printer to record the results. Unlike Babbage's machine, the Z1 used binary representations of data and Boolean algebra to describe and implement the operations of the machine. The switch to binary and Boolean algebra represented a significant advance, as it avoided many of the design and engineering problems faced by Babbage. The Z1 used a memory of sliding metal parts to store up to sixteen numbers. Computations remained largely mechanical. The program was input on punched tape made of recycled 35mm movie film. The data values were entered through a keyboard, and outputs were displayed on electric lamps.

In 1939 Zuse completed the Z2 (formerly V2). Electromechanical relays replaced the mechanical calculating machinery at the suggestion of Helmut Schreyer, a friend and electrical engineer. The memory design remained sliding metal parts. The Z2 worked, and worked extremely quickly for the time. Zuse would go on, after his conscription, to develop the Z3 and Z4 for the German Aerodynamics Research Institute. An incomplete Z4 survived the war in a basement. Zuse reconstructed the Z4 some time in the late 1940s in Zurich, where it would reign as the most powerful calculating machine on mainland Europe for several years.
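The advantage Zuse gained from binary and Boolean algebra shows in how little machinery addition requires: one bit of a sum needs only three Boolean operations, chained into a ripple-carry adder. A sketch in Python of the generic full-adder logic (not Zuse's specific circuit):

```python
def full_adder(a, b, carry_in):
    """One bit of binary addition, built only from Boolean operations —
    the kind of logic relays (and later vacuum tubes) implemented directly."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_binary(x, y, width=8):
    """Ripple-carry addition of two non-negative integers, bit by bit."""
    result, carry = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_binary(22, 19))  # 41
```

In base ten, a mechanical adder needs gearing that distinguishes ten positions per digit; in binary, every digit reduces to the same two-state switch, which is exactly the simplification the text describes.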
From Calculators to Computers: The Final Step

While Zuse labored on the Z2, across the Atlantic John Atanasoff and Clifford Berry (Atanasoff's graduate student) completed a prototype 16-bit adder employing vacuum tubes at Iowa State College (now Iowa State University) in 1939. Atanasoff and Berry designed a more complicated computer, the Atanasoff-Berry Computer (or ABC). Issues about patents and a need for more funding led the two to write Computing Machines for the Solution of Large Systems of Linear Algebraic Equations, which outlined their work in great detail. Iowa State University still defends the claim, accepted by a federal court in 1973, that the ABC marked the invention of the first general-purpose electronic computer. The ABC was never completed, though analysis in the 1960s showed that it would have worked. Atanasoff and Berry abandoned the project when both began to work for the US war effort.

The most widely known electromechanical programmable calculator was constructed by Howard Aiken and his group at Harvard in 1943. Named the "ASCC Mark I" ("Automatic Sequence-Controlled Calculator Mark I") or the "Harvard Mark I," it measured 51 feet in length, weighed 5 tons, employed electromechanical relays, and totaled three-quarters of a million parts. Its speed was comparable to the Z3, and it read programs from a tape. In addition to Aiken, Grace Hopper worked on the project for many years, both for the Navy and for Harvard.

To understand the British developments at this time, one must backtrack to 1938. At that time the British Intelligence Service acquired a working description of the German coding device, the ENIGMA machine. Later Gustave Bertrand delivered a working copy of an ENIGMA from France. In 1940 Alan Turing went to France to meet with Polish cryptanalysts. Turing returned with a knowledge of their bombes, machines used to aid in cracking the codes. ENIGMA operated by having a huge number of encryption schemes from which the user selected.
The encryption, the mapping of a letter to a coded symbol, changed in a determinate manner for each letter one entered. Messages typed on the properly configured ENIGMA would appear in an encrypted form to be transmitted to the receivers, whose knowledge of the settings allowed them to decipher the message. The German High Command used ENIGMA to communicate all of their important messages to troops in the field. When the British intercepted a German message, they could decode it provided that they knew the keyword used to encode it. Knowledge of the internal working of ENIGMA greatly reduced the number of candidate keys consistent with a particular intercepted message, but typically an enormous number of possible keys still had to be ruled out. The frequency of code changes (three times a day) and the labor involved in testing possible keys meant that manually decoding a message would prove too time consuming to allow the English to benefit from the time-sensitive information it contained. What was required was a means of rapidly exploring and eliminating possible keys.

Early in WWII the British started Project ULTRA at Bletchley Park (between Oxford and Cambridge). Researchers at Bletchley Park initially designed several small machines which, like their Polish counterparts, they called "bombes" to search through possible encryption schemes. Constructed using electromechanical relays, the bombes proved quite helpful in decoding more ordinary messages. However, the Germans used a very different code, and a machine built by the Lorenz Corporation, when encrypting high-level strategic commands. COLOSSUS, an electronic machine, was constructed in 1943 as a fully automatic decryption device for these high-level messages. COLOSSUS incorporated an estimated 1,800 to 2,400 vacuum tubes. COLOSSUS received input from 5 paper tape readers at a speed of approximately 5,000 characters a second.
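The search strategy the bombes mechanized — try every candidate key and discard those inconsistent with a "crib," a word expected to appear in the plaintext — can be illustrated with a deliberately tiny stand-in cipher. This sketch uses a simple Caesar shift, not ENIGMA's rotor machinery; only the elimination strategy is the point:

```python
def caesar_shift(text, key):
    """Shift each capital letter by `key` positions (a toy stand-in for ENIGMA)."""
    return "".join(chr((ord(c) - ord('A') + key) % 26 + ord('A')) for c in text)

def search_keys(ciphertext, crib):
    """Return the keys whose decryption contains the expected crib,
    eliminating all the others — the strategy the bombes mechanized."""
    return [k for k in range(26) if crib in caesar_shift(ciphertext, -k)]

ciphertext = caesar_shift("WEATHERREPORTFOLLOWS", 3)  # encrypted with key 3
print(search_keys(ciphertext, "WEATHER"))  # [3]
```

With 26 keys this search is trivial; ENIGMA's key space was astronomically larger, which is why the exhaustive testing had to be done by machine rather than by hand.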
The Dean of the Moore School of Electrical Engineering at the University of Pennsylvania, John Brainerd, supervised the construction of the last of the wartime programmable calculating devices, which was eventually named ENIAC: Electronic Numerical Integrator and Computer. ENIAC is often described as the first electronic programmable computer. In 1935 the US Army's Ballistic Research Laboratory had begun using a Bush differential analyzer to compute trajectory tables. However, because of the limitations of the Ballistic Research machine, the Army contracted with the Moore School in 1942 for exclusive use of their much better analyzer. Brainerd headed the project, which included engineer Dr. J. Presper Eckert and physicist Dr. John W. Mauchly. Eckert began his collaborations with Mauchly in 1941, when Eckert attended the Moore School and Mauchly began working there. Mauchly had familiarized himself with John Atanasoff's work on an electronic computer when visiting Iowa State prior to his arrival at the Moore School. In consultation with Eckert, Mauchly outlined his idea for an electronic computer in a 1942 memorandum. At that time the scientists at Ballistic Research and the Moore School found themselves struggling to keep pace with about six trajectory table requests a day. Over the next roughly six months, Mauchly and the Moore School group convinced the supervisor of computational training and activities, Dr. Herman H. Goldstine, that electronic devices could attain much higher computational speed. The U.S. government agreed to provide 61,700 dollars for Mauchly's and Eckert's plan to build an electronic calculator, ENIAC, in May of 1943. The project was dubbed "Project PX." Bureaucratic formalities meant that, ironically, Mauchly never officially held a position as a researcher on Project PX. As an instructor at Moore, Mauchly could only act in the capacity of a consultant to the project.
ENIAC was completed in 1945 and publicly demonstrated in February 1946, with a cost overrun of approximately 450,000 dollars. Its main designers and implementers were John Mauchly and J. Presper Eckert. Brainerd's team completed ENIAC too late to fulfill its original mission to create artillery tables for WWII, though it saw extensive use in the hydrogen bomb project. ENIAC distinguished itself from other calculating devices in two ways. First, it was extremely large (100 x 10 x 3 feet). Weighing 30 tons, it had over 100,000 components, approximately 18,000 of them vacuum tubes. Second, it was much faster than previous machines, multiplying in under .003 seconds. Though ENIAC had many of the qualities of modern computers, one reason not to call ENIAC the first general-purpose electronic computer is that it had no internal memory for storing programs. Setting up the machine for a calculation required manual configuration of all of the subunits: banks of switches located at various parts of the machine had to be set, connections between different subunits had to be made, the main programmer unit had to be set, and constants had to be input using switches. This was time consuming and also limited the unit's speed. Despite ENIAC's limitations, the work at the Moore School influenced all of the major post-war computing projects for years to come. One source of this influence was the summer school on computers hosted at the University of Pennsylvania in 1946. In attendance at the summer school were nearly all of the major figures in computing. The other major vehicle whereby the Moore School influenced future computing machines was also a source of controversy. In 1944 the famous Princeton mathematician John von Neumann joined the Moore School team as a consultant. Von Neumann had been on Alan Turing's dissertation committee at Princeton and had developed a strong interest in computing machines while working on the Los Alamos atomic bomb project.
The Moore School team was already developing designs for ENIAC's successor. Von Neumann compiled their ideas in a report, and in 1945 he began circulating it. Early versions of the report did not list Mauchly and Eckert as authors. Von Neumann explained this as an oversight, and later versions did include the two. However, by that time the report, which importantly reintroduced the idea of an internally stored program, was so strongly associated with von Neumann that he is generally credited with the insights it contained even today. In March 1946 Mauchly and Eckert left Moore to found the Electronic Control Company. At least part of the reason for their departure was disputes over whether they would retain the patent for ENIAC. ECC built BINAC for Northrop, which used magnetic tape for memory. The company then became the Eckert-Mauchly Computer Corporation, which built 46 UNIVACs, machines that could handle both alphabetical and numerical information.

Max Newman had contact with Bletchley Park, Turing, and the Moore School. Together with Freddie Williams and the Manchester University (Manchester, England) research team, Newman completed the "Mark I" or "Manchester Mark I" in June of 1948. This was the first machine with a true stored-program capability; its memory was developed by Williams. The memory exploits the somewhat unreliable mechanism of encoding data through the residual charge left on the surface of a cathode ray tube (CRT) as the result of firing an electron beam at it. Though limited in reliability, the memory was fast, cheap, and small. Data is stored in memory by firing the electron beam at the screen. Data is read from memory by firing another beam and measuring the resulting voltage with an electrode beyond the screen. Eventually, the Manchester machine also employed a primitive form of assembly language, developed by Turing, to replace the use of binary in input and output.
A year later (May 1949), Maurice Wilkes and his team at the Mathematical Laboratory at Cambridge University completed EDSAC (Electronic Delay Storage Automatic Computer). EDSAC was the first functional and practical electronic digital computer to utilize a stored program. Like the Manchester system, its memory, the "ultrasonic delay line," seems somewhat exotic today. It consisted of a set of mercury baths whereby data is represented through the continuous conversion of electric signals into acoustic pulses, which are sent across the bath and then reconverted at the other side. Wilkes had attended the 1946 summer school on computers at the University of Pennsylvania, returning with the goal of building a computer along the lines of von Neumann's outline.

Von Neumann's EDVAC and IAS computers were not completed until 1952. The IAS, built at Princeton's Institute for Advanced Studies, was von Neumann's realization of the design outlined in the EDVAC report. To a large extent, IAS served as the blueprint for most computers built since. At the same time, IBM introduced their first mass-production computer, the IBM 701. The 701 had 1 KB of RAM and could write to a tape drive. Of course, the years following 1952 have ushered in enormous innovations which go beyond the scope of this article.

General Computing History Links

Virginia Tech History Page
The Turing Archive for the History of Computing
IEEE Timeline of Computing History
Mike Muss History of Computing Information
The Computer History Museum
Hitmill's History of Computers
Alan Turing's Paper in Mind
Michelle Hoyle's History of Computing Science
Steven White's Brief History
Howard Rheingold's Site
Paul E. Dunne's Lectures
Jeffery Shallit's Brief History
The Virtual Museum of Computing
The History of Computing Project
Dr. Tony Pridmore's Lectures
Bebop Bytes Back
PBS Triumph of the Nerds
Bletchley Park Site
Juha Takinen's Famous Computer Scientists
Fortunecity Chronology
The FR Unit Fractions — or fractional units. FRs. These things are life-changing. Check this out the traditional way: We have four columns. Each set to 25% width. Making this parent grid a total of, spoiler alert: 100%. Want to change any one of them? You'll need calculus. Except you won't, but it's a pain. Why? If we make this one 50%, the others maintain their original sizing. And three 25s and a 50 is no longer 100%. It's 125%. Hence the overflow. And we know why this happens: the other three are still taking up 25% of the parent, and the new one hasn't subtracted from that at all. It gets crazier. What if you're mixing units? There are times when this happens. And following the math can be excruciating. Perhaps more importantly, it holds us back from design. Enter: CSS fractional units. Or, as Tolstoy called them, FRs. FRs do all the heavy lifting inside anything that’s a grid. And the math is super straightforward. It works like this: same four columns. All even. Each is now 1 FR (one fractional unit). So any one of these four is 1/4th that width. Want one of them to be twice as wide? Make it 2 FR. Notice how the others resized. Now it didn't do this randomly. They worked it out because the total is now 5 FRs. Now each of them takes 1/5th while our new one takes up 2/5ths. And notice how we can change our gap? No need to recalculate. Everything just works. But wait. It gets better. Think of the FR here (these fractional units) as two things: a maximum, and a minimum. The maximum is whatever you set. In FRs. We already know how that works. But what’s the minimum? What happens if we have content in here like a heading? The other columns will shrink proportionally. So with FRs, the minimum is automatic (or auto). FR automatically sets minimums which will respect the content inside. Now we can override this minimum. We can set a minimum which gives us control over everything.
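The FR arithmetic above can be sketched in a few lines. This is just an illustration of how fr tracks divide up space (the function name and the numbers are made up for the example), not how a browser actually implements grid sizing:

```python
# Sketch of the fr arithmetic: each track gets (its fr share / total fr)
# of the space left over after the gaps are subtracted.
def fr_track_widths(frs, container_px, gap_px=0):
    """Resolve fr-based grid tracks to pixel widths (illustrative only)."""
    free_space = container_px - gap_px * (len(frs) - 1)
    total_fr = sum(frs)
    return [fr / total_fr * free_space for fr in frs]

# Four even columns: each is 1/4 of the container.
print(fr_track_widths([1, 1, 1, 1], 1000))  # [250.0, 250.0, 250.0, 250.0]

# Make one column 2 FR: the total is now 5 FR, so the shares
# become 1/5, 1/5, 1/5 and 2/5 — no recalculating the others.
print(fr_track_widths([1, 1, 1, 2], 1000))

# A gap just shrinks the free space first; the fractions still work out.
print(fr_track_widths([1, 1], 110, gap_px=10))  # [50.0, 50.0]
```

In real CSS this corresponds to something like `grid-template-columns: 1fr 1fr 1fr 2fr`, with the browser doing the same division after subtracting any `gap`.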
With flexbox, this would require scratch paper, a calculator, and a subscription to Netflix, because, as we know, layout troubles are the third-leading cause of procrastination in web development. But that's the FR unit. FRs let you enter whole numbers, decimals — they're all relative to each other. Set your columns and rows to anything you want. And because this works like a fraction, you can put all sorts of values in here. It all works out, regardless of any gap in your grid. When should you use flexbox? When should you use CSS grid? In this video, we lay out the similarities and differences between the tools.
iCentre: Year 12 Mathematics: Home "Learn for free about maths, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more." "Interactive programs that allow students to manipulate virtual objects to help learn maths concepts from the National Library of Virtual Manipulatives (NLVM)." A collection of math and science puzzles. They are sorted into categories which you’ll find in the left-hand column. Use the links provided to explore and enjoy these puzzles. The Internet Archive is a non-profit that was founded to build an Internet library. Its purposes include offering permanent access to historical collections that exist in digital format. The purpose of this site is to provide a forum for mathematicians and computational scientists to study Planet Earth, its life-supporting capacity, and the impact of human activities. Links to sites with something immediately useful (and free, or free-to-try) for algebra students are listed. Check these reviews for sites containing lessons, tutoring forums, worksheets, articles on "how math is used in real life", and more. "Click on your year level to access skills covered in that year. These skills are organised into categories, and you can move your mouse over any skill name to view a sample question. To start practising, just click on any link. IXL will track your score, and the questions will automatically increase in difficulty as you improve!" "Plus is an internet magazine which aims to introduce readers to the beauty and the practical applications of mathematics. A lot of people don't have a very clear idea what "real" maths consists of, and often they don't realise how many things they take for granted only work because of a generous helping of it. Apparently, some people even have the idea that it's boring! Weird. 
Anyway, we hope that even if you're such a person now, you won't be after looking through one or two issues of Plus, and that you'll come back and read future issues as they come out." An online destination for the engineering community. Topics in the mathematics section include Arithmetic, Geometry, Calculus, Series, Elementary Functions, Special Functions, Statistics / Probability and more. The Australian Mathematical Sciences Institute (AMSI) website has a careers component. Maths Careers is an area for upper-primary to Year 10 students. Its main focus is to demonstrate how maths can enrich any career path. The NRICH Project aims to enrich the mathematical experiences of all learners. Use the search box - top right hand side of the screen.
Point Tools The Connected Collection of Points tool creates a Bound collection of Line Segments from a selected group of Points. It connects every Point to every other Point and will produce a shape like this: You have the option of binding the line segments using normal binding or Layer Binding. You also select how many Bezier Points each line segment will use and whether or not to delete the original [Read More] The Convex Hull tool creates a polygon essentially by wrapping a rubber band around a collection of paths. You have two construction options: Consider ALL Paths, and Points Only. The Points Only option will only use Points for the wrapping and ignore all other paths. The Consider ALL Paths option will use all the Snapping Points on every path within the selection rectangle. This can be very useful for creating hybrid curved/straight shapes. [Read More] The Mark Point Polygon tool is similar to the Point Trace Polygon tool except that instead of laying out Points in advance you “mark” the vertices with the Mark Button. Simply move your finger to a position and press the Mark Button to select the next vertex. You can use the Drag Constraint button to align the next vertex horizontally or vertically with the last vertex. [Read More] The Point tool creates a point on the page. It is created where your finger first touches the canvas. If you have Snapping enabled, the point may be drawn at a snapping point of another path if your initial touch is near enough to one. Points are useful as guides in the construction of other paths. For example, they can be used with the Point Trace Polygon tool to easily draw an irregularly shaped polygon. [Read More] The Point Along Line tool lets you position a point along a line defined by two points. Your initial touch defines the first point of the line. You use the Mark Button to define the second point. You can also use the Drag Constraint button to position the point horizontally or vertically relative to the initial point.
Here’s an example: The Point by Angle tool allows you to create a point which is positioned based on an angle that you define. Your initial touch defines the vertex of the angle. You then drag and use the Mark Button to define the two endpoints of the angle. This is different from how angles are usually defined in Doodleback (it’s usually endpoint-vertex-endpoint), but the reason is that the two endpoints actually define the line on which the point will lie so really what you’re defining is a vertex and then a guide segment. [Read More] The Point by Intersection tool allows you to construct a Point at the intersection of two lines. The initial touch defines one point of the first line. While keeping your finger on the screen, drag it to another location and tap the Mark Button to define the other end point for the first line. After the first line is defined, keep your finger on the Paper, drag it to another location, and tap the mark button to define the first point on the second line. [Read More] The Point on Each Vertex tool creates a collection of Points matching the vertices of a selected polygon. There’s also an option to include the center of the polygon in the collection and a construction option to Bind the Points together at the time of creation. These Points are completely independent of the original polygon. Here’s an example: The Point Trace Polygon tool allows you to trace out a polygon using Points laid out on the canvas. First layout the corners of some shape with the Point tool, and then simply trace the shape with your finger. You can easily remove the points after construction by using the Delete Path Collection tool. The only construction options available are the number of Bezier Points and whether or not the first vertex and last vertex are connected. [Read More]
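The Convex Hull tool described above wraps a "rubber band" around a set of points. As an illustration of the underlying idea (not the app's actual implementation), here is the standard monotone-chain convex hull algorithm in Python:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower boundary
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper boundary
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each list ends with the other's starting point, so drop the duplicates.
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is "wrapped" away by the rubber band.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

This corresponds to the Points Only construction option: only the point coordinates matter, and anything strictly inside the band is discarded.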
SUBROUTINE DHGEQZ( JOB, COMPQ, COMPZ, N, ILO, IHI, H, LDH, T, LDT, ALPHAR, ALPHAI, BETA, Q, LDQ, Z, LDZ, WORK, LWORK, INFO ) CHARACTER COMPQ, COMPZ, JOB INTEGER IHI, ILO, INFO, LDH, LDQ, LDT, LDZ, LWORK, N DOUBLE PRECISION ALPHAI( * ), ALPHAR( * ), BETA( * ), H( LDH, * ), Q( LDQ, * ), T( LDT, * ), WORK( * ), Z( LDZ, * ) DHGEQZ computes the eigenvalues of a real matrix pair (H,T), where H is an upper Hessenberg matrix and T is upper triangular, using the double-shift QZ method. Matrix pairs of this type are produced by the reduction to generalized upper Hessenberg form of a real matrix pair (A,B): A = Q1*H*Z1**T, B = Q1*T*Z1**T, as computed by DGGHRD. If JOB='S', then the Hessenberg-triangular pair (H,T) is also reduced to generalized Schur form, H = Q*S*Z**T, T = Q*P*Z**T, where Q and Z are orthogonal matrices, P is an upper triangular matrix, and S is a quasi-triangular matrix with 1-by-1 and 2-by-2 diagonal blocks. The 1-by-1 blocks correspond to real eigenvalues of the matrix pair (H,T) and the 2-by-2 blocks correspond to complex conjugate pairs of eigenvalues. Additionally, the 2-by-2 upper triangular diagonal blocks of P corresponding to 2-by-2 blocks of S are reduced to positive diagonal form, i.e., if S(j+1,j) is non-zero, then P(j+1,j) = P(j,j+1) = 0, P(j,j) > 0, and P(j+1,j+1) > 0. Optionally, the orthogonal matrix Q from the generalized Schur factorization may be postmultiplied into an input matrix Q1, and the orthogonal matrix Z may be postmultiplied into an input matrix Z1. If Q1 and Z1 are the orthogonal matrices from DGGHRD that reduced the matrix pair (A,B) to generalized upper Hessenberg form, then the output matrices Q1*Q and Z1*Z are the orthogonal factors from the generalized Schur factorization of (A,B): A = (Q1*Q)*S*(Z1*Z)**T, B = (Q1*Q)*P*(Z1*Z)**T. To avoid overflow, eigenvalues of the matrix pair (H,T) (equivalently, of (A,B)) are computed as a pair of values (alpha,beta), where alpha is complex and beta real.
If beta is nonzero, lambda = alpha / beta is an eigenvalue of the generalized nonsymmetric eigenvalue problem (GNEP) A*x = lambda*B*x and if alpha is nonzero, mu = beta / alpha is an eigenvalue of the alternate form of the GNEP mu*A*y = B*y. Real eigenvalues can be read directly from the generalized Schur form: alpha = S(i,i), beta = P(i,i). Ref: C.B. Moler & G.W. Stewart, "An Algorithm for Generalized Matrix Eigenvalue Problems", SIAM J. Numer. Anal., 10(1973), pp. 241--256. JOB (input) CHARACTER*1 = 'E': Compute eigenvalues only; = 'S': Compute eigenvalues and the Schur form. COMPQ (input) CHARACTER*1 = 'N': Left Schur vectors (Q) are not computed; = 'I': Q is initialized to the unit matrix and the matrix Q of left Schur vectors of (H,T) is returned; = 'V': Q must contain an orthogonal matrix Q1 on entry and the product Q1*Q is returned. COMPZ (input) CHARACTER*1 = 'N': Right Schur vectors (Z) are not computed; = 'I': Z is initialized to the unit matrix and the matrix Z of right Schur vectors of (H,T) is returned; = 'V': Z must contain an orthogonal matrix Z1 on entry and the product Z1*Z is returned. N (input) INTEGER The order of the matrices H, T, Q, and Z. N >= 0. ILO (input) INTEGER IHI (input) INTEGER ILO and IHI mark the rows and columns of H which are in Hessenberg form. It is assumed that A is already upper triangular in rows and columns 1:ILO-1 and IHI+1:N. If N > 0, 1 <= ILO <= IHI <= N; if N = 0, ILO=1 and IHI=0. H (input/output) DOUBLE PRECISION array, dimension (LDH, N) On entry, the N-by-N upper Hessenberg matrix H. On exit, if JOB = 'S', H contains the upper quasi-triangular matrix S from the generalized Schur factorization; 2-by-2 diagonal blocks (corresponding to complex conjugate pairs of eigenvalues) are returned in standard form, with H(i,i) = H(i+1,i+1) and H(i+1,i)*H(i,i+1) < 0. If JOB = 'E', the diagonal blocks of H match those of S, but the rest of H is unspecified. LDH (input) INTEGER The leading dimension of the array H. 
LDH >= max( 1, N ). T (input/output) DOUBLE PRECISION array, dimension (LDT, N) On entry, the N-by-N upper triangular matrix T. On exit, if JOB = 'S', T contains the upper triangular matrix P from the generalized Schur factorization; 2-by-2 diagonal blocks of P corresponding to 2-by-2 blocks of S are reduced to positive diagonal form, i.e., if H(j+1,j) is non-zero, then T(j+1,j) = T(j,j+1) = 0, T(j,j) > 0, and T(j+1,j+1) > 0. If JOB = 'E', the diagonal blocks of T match those of P, but the rest of T is unspecified. LDT (input) INTEGER The leading dimension of the array T. LDT >= max( 1, N ). ALPHAR (output) DOUBLE PRECISION array, dimension (N) The real parts of each scalar alpha defining an eigenvalue of GNEP. ALPHAI (output) DOUBLE PRECISION array, dimension (N) The imaginary parts of each scalar alpha defining an eigenvalue of GNEP. If ALPHAI(j) is zero, then the j-th eigenvalue is real; if positive, then the j-th and (j+1)-st eigenvalues are a complex conjugate pair, with ALPHAI(j+1) = -ALPHAI(j). BETA (output) DOUBLE PRECISION array, dimension (N) The scalars beta that define the eigenvalues of GNEP. Together, the quantities alpha = (ALPHAR(j),ALPHAI(j)) and beta = BETA(j) represent the j-th eigenvalue of the matrix pair (A,B), in one of the forms lambda = alpha/beta or mu = beta/alpha. Since either lambda or mu may overflow, they should not, in general, be computed. Q (input/output) DOUBLE PRECISION array, dimension (LDQ, N) On entry, if COMPQ = 'V', the orthogonal matrix Q1 used in the reduction of (A,B) to generalized Hessenberg form. On exit, if COMPQ = 'I', the orthogonal matrix of left Schur vectors of (H,T), and if COMPQ = 'V', the orthogonal matrix of left Schur vectors of (A,B). Not referenced if COMPQ = 'N'. LDQ (input) INTEGER The leading dimension of the array Q. LDQ >= 1. If COMPQ='V' or 'I', then LDQ >= N.
Z (input/output) DOUBLE PRECISION array, dimension (LDZ, N) On entry, if COMPZ = 'V', the orthogonal matrix Z1 used in the reduction of (A,B) to generalized Hessenberg form. On exit, if COMPZ = 'I', the orthogonal matrix of right Schur vectors of (H,T), and if COMPZ = 'V', the orthogonal matrix of right Schur vectors of (A,B). Not referenced if COMPZ = 'N'. LDZ (input) INTEGER The leading dimension of the array Z. LDZ >= 1. If COMPZ='V' or 'I', then LDZ >= N. WORK (workspace/output) DOUBLE PRECISION array, dimension (MAX(1,LWORK)) On exit, if INFO >= 0, WORK(1) returns the optimal LWORK. LWORK (input) INTEGER The dimension of the array WORK. LWORK >= max(1,N). If LWORK = -1, then a workspace query is assumed; the routine only calculates the optimal size of the WORK array, returns this value as the first entry of the WORK array, and no error message related to LWORK is issued by XERBLA. INFO (output) INTEGER = 0: successful exit < 0: if INFO = -i, the i-th argument had an illegal value = 1,...,N: the QZ iteration did not converge. (H,T) is not in Schur form, but ALPHAR(i), ALPHAI(i), and BETA(i), i=INFO+1,...,N should be correct. = N+1,...,2*N: the shift calculation failed. (H,T) is not in Schur form, but ALPHAR(i), ALPHAI(i), and BETA(i), i=INFO-N+1,...,N should be correct. Iteration counters: JITER -- counts iterations. IITER -- counts iterations run since ILAST was last changed. This is therefore reset only when a 1-by-1 or 2-by-2 block deflates off the bottom.
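The convention above — real eigenvalues read off the generalized Schur form as alpha = S(i,i), beta = P(i,i), with lambda = alpha/beta — can be illustrated numerically. The sketch below uses Python/NumPy rather than the Fortran interface itself, and the matrices are invented for the example:

```python
import numpy as np

# Hypothetical generalized Schur factors: S quasi-triangular, P upper
# triangular (here both are plain 2x2 triangles, so both eigenvalues are real).
S = np.array([[4.0, 1.0],
              [0.0, 3.0]])
P = np.array([[2.0, 0.5],
              [0.0, 1.0]])

def rot(t):
    """A 2x2 rotation, standing in for the orthogonal factors Q1*Q and Z1*Z."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

Q, Z = rot(0.3), rot(-0.7)

# Reconstruct the pair the factorization came from: A = Q*S*Z^T, B = Q*P*Z^T.
A = Q @ S @ Z.T
B = Q @ P @ Z.T

# Real eigenvalues are read from the Schur form: alpha = S(i,i), beta = P(i,i),
# and lambda = alpha / beta whenever beta is nonzero.
alpha, beta = np.diag(S), np.diag(P)
lam = alpha / beta  # eigenvalues 2.0 and 3.0

# Cross-check against a direct solve of A*x = lambda*B*x (B is nonsingular).
direct = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(lam, direct)
```

Note that DHGEQZ deliberately returns (alpha, beta) instead of the ratio, because either lambda = alpha/beta or mu = beta/alpha may overflow; the division here is safe only because the example's betas are well away from zero.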
Doc Madhattan About ten years ago Giovanni Guido proposed to me a new model to describe particles. In certain respects it reminds me of a string model: simplifying as much as possible, Giovanni's model supposes the presence of small quantum oscillators connected to each other by lines that run along a space-time lattice; these lines form geometric figures, golden triangles to be precise, which constitute the geometric structure of particles, elementary and otherwise. I have never had the opportunity to actively work on the model: my outreach commitments with INAF have always been somehow a priority, due to the type of contract that, in some way, pushes me to give precedence to those activities. However, despite everything, we have used his vision to describe a universe that is in a certain sense cyclical, which you can find in the following two articles: The Universe at Lattice-Fields Variational Principle in an Expanding Universe Working on Guido's particle model, however, has always been on my mind, so a couple of years ago I proposed to him that we try to develop a didactic formulation of the model that could be used to bring elementary particles not only to university, but also to high school. From that idea, although my contributions to the writing were minimal, a triptych of articles came out, whose links you can find below, and which received a review that made me very happy: The Authors propose a didactic model representative of the particles described of the Standard Model. In this approach, particles result to be geometric forms corresponding to geometric structures of coupled quantum oscillators. An in-depth phenomenology of particles surfaces and this seems fully compatible with that of the Standard Model. Consequently, it is possible to calculate the mass of Higgs's Boson and the mass of the pair "muon and muonic neutrino" in "geometrical" sense.
Via this geometric approach, it seems also possible to solve crucial aspects of the Standard Model, such as the neutrinos' oscillations and the intrinsic chirality of the neutrino and antineutrino. The paper is very interesting and deserves immediate publication in JHEPGC. I don't consider the work finished, and indeed I would like to be able to bring these ideas into practice in schools. For now I'm happy to share this happiness here on the blog. The Madrid Codices I and II are two collections of Leonardo da Vinci's manuscripts found in the National Library of Madrid at the end of the 1960s. In particular, the Madrid Codex I consists of 382 pages of notes accompanied by some 1600 sketches and drawings, and addresses a problem for which Leonardo is particularly known as an engineer and designer: gears. Leonardo's starting point is the study of friction. This is a force that opposes motion, but thanks to its opposition it is possible for us to walk without slipping or losing balance. As we all know today, the intensity of the frictional force depends on the surfaces that are in contact with each other, on how smooth or rough they are; it is independent of the area in contact, and it can be reduced by using, for example, a lubricant or cylinders. All this was already known to Leonardo, as can be seen from reading the Madrid Codex I. Furthermore, it was Leonardo who introduced the concept of the friction coefficient, defining it as the ratio between the force required to slide two surfaces horizontally over each other and the pressure between the two surfaces. Leonardo also estimated the value of this friction coefficient at 1/4, consistent with the materials best known to the Florentine and with which he could carry out experiments (wood on wood, bronze on steel, etc.)^(1).
At this point Leonardo is ready to develop a series of gears capable of carrying mechanical energy and producing motion, minimizing friction with the use of spheres and cylinders, as can be seen from his numerous drawings. In particular, however, it is Leonardo's mechanical use of two particular geometric shapes that is striking, because it anticipates their actual adoption by centuries: the epicycloidal teeth and the globoidal gear. Few days before the formal acceptance of this paper, an independent study about the architecture of the π Men planetary system was published^(1). The results of that work, based on public data and not including the ESPRESSO observations, confirm the high mutual inclination of the orbital planes of π Men b and c. Our results are in agreement with those of Xuan & Wyatt and are characterized by a better formal precision.^(2) Pi Mensae, or π Men, is a yellow dwarf star in the constellation of Mensa. We know that it has a small planetary system consisting of two planets (or, if you prefer, we have discovered only two planets orbiting Pi Mensae): Pi Mensae b, one of the most massive planets ever discovered, at about 14.1 times the mass of Jupiter, and Pi Mensae c, a super-Earth of about 4.5 times the mass of our planet. In 2020, an analysis with Gaia DR2 and Hipparcos astrometry showed that planets b and c are located on orbits mutually inclined by 49°-131°, which causes planet c to not transit most of the time, and to acquire large misalignments with its host star's spin axis^(1). This result was also obtained by an Italian team^(2), though published on arXiv just a few days later, using ESPRESSO (Echelle Spectrograph for Rocky Exoplanet- and Stable Spectroscopic Observations), a spectrograph designed and developed in Italy by researchers of Brera's Astronomical Observatory at the Merate laboratories.
The instrument, mounted on the Very Large Telescope, was in a certain sense put to the test with the planetary system of Pi Mensae; therefore, even though it came second by a few days, the result can be considered a success for the young ESPRESSO. 1. Xuan, J. W., & Wyatt, M. C. (2020). Evidence for a high mutual inclination between the cold Jupiter and transiting super Earth orbiting π Men. Monthly Notices of the Royal Astronomical Society, 497(2), 2096-2118. doi:10.1093/mnras/staa2033 (arXiv) 2. Damasso, M., Sozzetti, A., Lovis, C., Barros, S. C. C., Sousa, S. G., Demangeon, O. D. S., ... & Rebolo, R. (2020). A precise architecture characterization of the $\pi$ Men planetary system. A&A, Forthcoming article doi:10.1051/0004-6361/202038416 (arXiv) The Jovian moon Io, imaged by SHARK-VIS@LBT on January 10, 2024. The red, green, and blue channels of this tri-color image show the I (infrared), R (red), and V (green) spectral bands, respectively (corresponding to wavelengths of 755, 620 and 550 nanometers). This is the highest resolution image of Io ever obtained from a ground-based telescope. A research team led by INAF (Istituto Nazionale di Astrofisica) and the University of Trieste has once again observed the energetic relativistic winds generated by a distant but decidedly active quasar (one of the brightest discovered so far). A study published in The Astrophysical Journal reports the first observation at different wavelengths of the interaction between the quasar's black hole and the host galaxy J0923+0402 during the initial phases of the Universe, about 13 billion years ago (when the Universe was less than a billion years old). In addition to evidence of a gas storm generated by the black hole, experts have discovered for the first time a halo of gas extending well beyond the galaxy, suggesting the presence of material ejected from the galaxy itself via winds generated by the black hole.
Our study helps us understand how gas is expelled or captured by galaxies in the young Universe and how black holes grow and can impact the evolution of galaxies. We know that the fate of galaxies such as the Milky Way is closely linked to that of black holes, since these can generate galactic storms capable of extinguishing the formation of new stars. Studying the primordial eras allows us to understand the initial conditions of the Universe we see today. - Manuela Bischetti Unfortunately I heard of this news only now, through the Physics World newsletter, whose releases from the beginning of the year I am catching up on with guilty delay. On January 22, 2024 Arno Penzias left us. He was 90 years old and had been awarded the Nobel Prize for Physics in 1978 for the discovery, together with Robert Wilson, of the cosmic microwave background radiation. As the story goes, their discovery came by chance, while they were trying to eliminate background noise from the signals that the Bell Labs radio astronomy antenna was receiving. In fact, another group of astronomers, headed by Robert Dicke, was also busy working on the question, and in the end had to settle for correctly interpreting the origin of the signal measured by the two researchers. The two articles, the observational one and the interpretative one, were published in the same issue of the Astrophysical Journal. The story, as well as being told in the Physics World article linked at the beginning of this post, is also summarized in the video that you can see below: Alan Turing was fascinated by mathematical patterns found in nature. In particular, he noticed that the Fibonacci sequence often occurred in sunflower seed heads.
However, his theory that sunflower heads featured Fibonacci number sequences was left unfinished when he died in 1954; some years ago, though, a citizen science project led by the Museum of Science and Industry in Manchester and the Manchester Science Festival found examples of Fibonacci sequences and other mathematical sequences in more than 500 sunflowers. Inspired by this, I suggest a prompt to NightCafe, a text-to-image generator, to celebrate Turing and his unstoppable mind:
Turbulent flow 1. Turbulent Coherent Structures in Wall-Bounded Turbulent Flows Turbulent boundary layers (TBLs) are observed in many fluid dynamic engineering applications, such as automobiles, ships, airplanes and heat-exchangers, and the fundamental mechanisms of heat and momentum transfer are controlled by the dynamics of turbulent structures. In particular, it has been known that very-large-scale motions or superstructures observed in turbulent flows are prominent and these motions typically account for half of the streamwise turbulent kinetic energy and more than half of the Reynolds shear stress in canonical wall-bounded turbulent flows of pipes, channels and boundary layers. Thus, understanding the fundamental nature of the structures will improve modeling and control in these important applications. [Figure 1] Very large-scale motion in a turbulent pipe flow [Figure 2] Time evolution of a single vortical structure 2. Rough-Wall Turbulent Boundary Layer Flows Turbulent boundary layers (TBLs) are observed in numerous fluid dynamic engineering applications, and many experimental and numerical studies have examined spatial features of TBLs. In engineering applications involving wall-bounded boundary layer flow (e.g. automobiles, ships, airplanes and heat exchangers), the roughness of the wall surface is an important design parameter because it influences flow characteristics such as the transport of heat, mass and momentum. Although effects of surface roughness on a TBL have been examined in many experimental and numerical studies, knowledge of these effects remains incomplete. [Figure 3] Direct numerical simulation of a turbulent boundary layer with surface change from smooth to rough walls 3. Adverse-Pressure Gradient Turbulent Boundary Layer Flows Turbulent boundary layers (TBLs) are subjected to adverse pressure gradients (APGs) in numerous engineering applications, such as diffusers, turbine blades and the trailing edges of aerofoils.
Because the upper limit of the efficiency of such devices is almost always determined by the APGs, the behavior of the APG flow is of practical importance. A literature survey reveals many studies dealing with pressure gradient effects in turbulent boundary layers, but most of them have focused only on statistical properties, and little has been known about coherent structures in TBLs with APG. [Figure 4] Mean velocity and streamwise turbulent intensity profiles of turbulent boundary layers subjected to zero- and adverse-pressure gradients. m denotes the exponent of the APG. [Figure 5] Premultiplied spanwise energy spectrum maps of the streamwise velocity fluctuations (a) ZPG, (b) mild APG, (c) moderate APG and (d) strong APG 4. Temporally Decelerating Turbulent Pipe Flow 5. Turbulent Plane Couette-Poiseuille Flow For several decades, turbulent Couette or Couette-Poiseuille flows have received much attention in the area of fluid mechanics, because they are present whenever a wall moves in the flow direction (e.g., turbulent bearing films). These flows are known to provide more efficient diffusion, less resistance and greater turbulence kinetic energy than Poiseuille flows. Because the fundamental mechanisms of heat and mass transfer in turbulent Couette-like flows are mostly attributed to the dynamics of turbulent coherent structures, the study of turbulent structures in Couette-like flows with simple flow geometry will contribute to further advances in flow control, turbulence modeling and the understanding of turbulent structures in Poiseuille flows. [Figure 6] Schematic of turbulent Couette-Poiseuille flow with moving wall at the top. The bottom wall is stationary with no-slip condition. 6.
Temporally Accelerating Turbulent Pipe Flow Flow acceleration or deceleration in wall-bounded turbulent flows is frequently encountered not only in engineering applications (e.g., turbo-machinery and heat exchangers) but also in biomedical applications (e.g., airflow in human lungs and blood flow in large arteries). Earlier studies of unsteady and non-periodic turbulent flows (Kline et al. 1967; Narasimha & Sreenivasan 1973; Warnack & Fernholz 1998) have shown that decelerating the flow enhances turbulence with more frequent and violent bursting events, and large-scale structures emerge more prominently in the outer layer. In contrast, when the flow is accelerated, the bursting process ceases, and relaminarization or 'reverse transition' occurs, thereby resulting in skin friction drag reduction, although the kinetic energy of the mean flow is increased by the acceleration. [Figure 7] Temporal evolution of the streamwise velocity fluctuation on the horizontal plane in a temporally accelerating turbulent pipe flow 7. Super-Hydrophobic Drag Reduction in Turbulent Pipe and Channel Flows Super-hydrophobic surfaces are patterned rough surfaces covered with a hydrophobic coating with micro-scale structures and a large contact angle. Upon contact of liquids with these surfaces, small bubbles are created in between the surface roughness tips, producing a slip velocity over the gas/liquid menisci. This slippage generally leads to drag reduction, and it has received great attention for drag reduction in this field. [Figure 8] Schematic of turbulent channel and pipe flows over super-hydrophobic surface 8. Active Control of Turbulent Channel Flow using Wall Shear Free Control Over several decades, significant efforts have been devoted to the reduction of skin-friction drag in wall-bounded turbulent flows due to limited natural resources and environmental deterioration (Kasagi et al. 2009).
Because decreasing the drag also reduces the structural vibrations, noise and surface heat transfer generated by turbulent flows (Kim & Bewley 2007), it is desirable to develop effective and reliable flow control strategies for drag reduction in many engineering applications. Here, we have devised a new flow control concept for active drag reduction using a streamwise-mean-velocity-free condition. Because the method only requires velocity information at the wall and achieves a large drag reduction rate even over a limited area, the active flow control suggested here could be a more practical and efficient method in real applications. [Figure 9] Schematic of turbulent channel flow with spanwise alternating patterns. The black and white colors indicate no-control (no-slip) and control (slip) surfaces at the wall. 9. Shock-Turbulence Interaction in a Turbulent Channel Flow 10. Active Control of Pressure Fluctuations in Turbulent Cavity Flow Large-eddy simulations of turbulent boundary layer flows over an open cavity are conducted to investigate the effects of wall-normal steady blowing on the suppression of pressure fluctuations on the cavity walls. [Figure 10] Time evolution of the instantaneous spanwise vorticity with swirling strength on the xy-plane: (a) baseline, (b) Cμ = 0.015, (c) Cμ = 0.050 Deep Reinforcement Learning (DRL)-based Control of Turbulent Cavity Flows While numerous flow control methods have been explored for both turbulent and laminar flow scenarios, they often encounter limitations when dealing with chaotic flows characterized by nonlinearity and high dimensionality. With the advent of machine learning, Deep Reinforcement Learning (DRL) has emerged as a promising approach for addressing various flow control challenges.
[Figure 3] Schematic of the deep reinforcement learning (DRL)-based model for controlling turbulent cavity flow to reduce cavity oscillations Additionally, DRL-based control methods, particularly those using the PPO algorithm, have been successfully applied to active flow control in turbulent cavity flows under various conditions. The PPO algorithm, which consists of actor and critic neural networks, uses flow field data as states (inputs to both networks; in this case, pressure data within the recirculation zone and along the wall) to train these networks. It provides actions (specifically, the intensity of the synthetic jet upstream of the cavity) to the flow field as a form of control. Due to its efficiency in handling high-dimensional flows and its stability, the PPO algorithm shows great promise for controlling unsteady, high-dimensional flows (Vignon et al., 2023).
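The PPO algorithm mentioned above centers on a clipped surrogate objective that limits how far each policy update can move. The snippet below is a generic single-sample sketch of that objective, not code from this lab: `logp_new` / `logp_old` are the log-probabilities of the taken action (e.g., a jet intensity) under the new and old actor, the advantage comes from the critic, and `eps = 0.2` is a common default.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Negated clipped surrogate for one (state, action) sample.

    The clip keeps the probability ratio r = pi_new / pi_old within
    [1 - eps, 1 + eps], so a single update cannot move the policy
    too far from the one that collected the data.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # PPO maximizes the surrogate, so the loss is its negation.
    return -min(ratio * advantage, clipped * advantage)
```

With a positive advantage the loss bottoms out at -(1 + eps) * advantage no matter how large the ratio grows, which is exactly the stability property that makes PPO attractive for the high-dimensional flow states described above.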
What is Arithmetic Mean? Understanding Arithmetic Mean Arithmetic Mean is a fundamental concept in mathematics, commonly used to calculate the average of a set of numbers. This blog post will explain what the Arithmetic Mean is, how to compute it, and provide some examples to enhance your understanding. Stay tuned for more insights! What is Arithmetic Mean? In statistics, the Arithmetic Mean, often referred to as the mathematical average or simply the mean, is the sum of a collection of numbers divided by the number of elements in that collection. When people mention an average, they are typically referring to the arithmetic mean. Collection sizes can vary from very small, as in scientific studies, to quite large, as in census data. Regardless of the size, the arithmetic mean serves as a valuable tool for understanding the central tendency of a data set. To compute the arithmetic mean, sum all the values in the data set and then divide by the total number of values. How to Calculate Arithmetic Mean For example, consider a data set with five values: 2, 4, 6, 8, and 10. The arithmetic mean is calculated as follows: (2 + 4 + 6 + 8 + 10) / 5 = 30 / 5 = 6. So, the arithmetic mean of this data set is 6. Calculating the arithmetic mean is straightforward. However, it is important to note that the mean can be influenced by outliers, especially in small data sets. Outliers are values significantly different from the rest of the data, which can skew the result. Therefore, it is often helpful to also calculate other measures of central tendency, such as the median and mode, to get a more comprehensive understanding of your data set. Arithmetic Mean Formula The formula for calculating the arithmetic mean is: x̄ = (Σx) / n, where: • x̄ = Arithmetic Mean • n = Number of values • Σx = Sum of the values Arithmetic Mean of Grouped Data For grouped data, the arithmetic mean is a weighted average of the class midpoints: each class is represented by the midpoint of its interval, weighted by the class frequency.
To find the arithmetic mean of grouped data, first sum the weighted class midpoints (midpoint × frequency for each class), then divide by the total number of values. The arithmetic mean is typically used for quantitative data that is evenly distributed. For instance, if there are ten values in a set evenly distributed between 1 and 100, the arithmetic mean would be 50.5. However, if the data is not evenly distributed, the arithmetic mean may not accurately represent the data set. In such cases, other statistics, such as the median or mode, may be more appropriate. The arithmetic mean is a fundamental concept that frequently appears in various studies. Understanding how it works is crucial for correctly applying it in different situations. The Noon app is an excellent resource for students looking to learn from top educators worldwide. With over 10,000 lectures on various subjects, Noon offers something for everyone. Sign up today and start your learning journey!
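The two recipes above, the plain mean and the grouped-data mean, translate directly into code. A minimal Python sketch (function names are mine, not from the post):

```python
def arithmetic_mean(values):
    """Sum of the values divided by how many there are."""
    return sum(values) / len(values)

def grouped_mean(midpoints, frequencies):
    """Grouped-data mean: each class is represented by its interval
    midpoint, weighted by the class frequency."""
    weighted = sum(f * m for f, m in zip(frequencies, midpoints))
    return weighted / sum(frequencies)
```

With the post's data set, `arithmetic_mean([2, 4, 6, 8, 10])` gives 6.0; swapping the 10 for an outlier of 100 drags the mean up to 24.0, which is exactly the skew the post warns about.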
Times Table Chart 1 100 | Multiplication Chart Printable Times Table Chart 1 100 Multiplication Table 1 100 Free Printable Multiplication Chart 100X100 Times Table Chart 1 100 Times Table Chart 1 100 – A Multiplication Chart is a useful tool for kids learning how to multiply and divide. There are many uses for a Multiplication Chart. What is Multiplication Chart Printable? A multiplication chart can be used to help kids learn their multiplication facts. Multiplication charts come in many forms, from full-page times tables to single-page ones. While individual tables are useful for presenting chunks of information, a full-page chart makes it easier to review facts that have already been mastered. A multiplication chart will typically feature a top row and a left column, each listing a set of factors. When you want to find the product of two numbers, pick the first number from the left column and the second number from the top row. Once you have these numbers, move along the row and down the column until you reach the square where the two numbers meet. That square holds your product. Multiplication charts are handy learning tools for both kids and adults. Children can use them at home or at school. Times Table Chart 1 100 printables are available on the Internet and can be printed out and laminated for durability. They are a wonderful tool to use in math class or homeschooling, and will provide a visual reminder for children as they learn their multiplication facts. Why Do We Use a Multiplication Chart? A multiplication chart is a diagram that shows how to multiply two numbers. You select the first number in the left column and follow its row, then pick the second number from the top row and follow its column.
Multiplication charts are useful for several reasons, including helping children learn how to divide and simplify fractions. Multiplication charts can also be helpful as desk resources, since they serve as a constant reminder of the student's progress. Multiplication charts are additionally helpful for helping pupils memorize their times tables. As with any skill, memorizing multiplication tables takes time and practice. Times Table Chart 1 100 If you're looking for a Times Table Chart 1 100, you've come to the right place. Multiplication charts are available in different layouts, including full size, half size, and a range of cute designs. Some are vertical, while others feature a horizontal layout. You can also find worksheet printables that include multiplication equations and math facts. Multiplication charts and tables are essential tools for kids' education. These charts are great for use in homeschool math binders or as classroom posters. A Times Table Chart 1 100 is a useful tool to reinforce math facts and can help a child learn multiplication quickly. It's also a wonderful tool for skip counting and learning the times tables.
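The look-up procedure the page describes, first factor from the left column, second from the top row, product where row and column meet, is easy to mimic in code. A small illustrative sketch (function names invented):

```python
def times_table(n):
    """Build an n-by-n multiplication chart as nested lists.
    chart[i][j] holds (i + 1) * (j + 1), mirroring a printed chart
    whose top row and left column both run 1..n."""
    return [[(i + 1) * (j + 1) for j in range(n)] for i in range(n)]

def look_up(chart, a, b):
    """Find a * b the way the text describes: row for the first
    factor, column for the second."""
    return chart[a - 1][b - 1]
```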
Squaring / Square Roots / Radicals - ACT Math All ACT Math Resources Example Questions Example Question #3 : How To Multiply Complex Numbers Complex numbers take the form Correct answer: This equation can be solved very similarly to a binomial like Example Question #4 : How To Multiply Complex Numbers Complex numbers take the form Correct answer: This problem can be solved very similarly to a binomial like Example Question #22 : Complex Numbers Complex numbers take the form Which of the following is equivalent to Correct answer: When dealing with complex numbers, remember that If we square Yet another exponent gives us OR But when we hit Thus, we have a repeating pattern with powers of Since the remainder is 3, we know that Example Question #3 : How To Multiply Complex Numbers Correct answer: Begin by treating this just like any normal case of FOIL. Notice that this is really the form of a difference of squares. Therefore, the distribution is very simple. Thus: Now, recall that Example Question #8 : How To Multiply Complex Numbers Which of the following is equal to Correct answer: Remember that since Thus, we know that Example Question #41 : Squaring / Square Roots / Radicals Complex numbers take the form Simplify the following expression, leaving no complex numbers in the denominator. Correct answer: Solving this problem requires eliminating the nonreal term of the denominator. Our best bet for this is to cancel the nonreal term out by using the conjugate of the denominator. Remember that for all binomials This can also be applied to complex conjugates, which will eliminate the nonreal portion entirely (since Simplify. Note Combine and simplify. Simplify the fraction.
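The specific expressions in the questions above were lost in extraction, so the examples below use made-up numbers purely to illustrate the two facts the questions rely on: powers of i cycle with period 4 (since i² = -1), and multiplying by the conjugate clears i from a denominator.

```python
i = 1j
assert i * i == -1            # the defining identity
assert i * i * i * i == 1     # powers of i repeat every four steps

def rationalize(num, den):
    """Divide two complex numbers by multiplying through by conj(den),
    so the denominator (a + bi)(a - bi) = a^2 + b^2 becomes real."""
    conj = den.conjugate()
    return (num * conj) / (den * conj).real
```

For instance, (3 + 2i) / (1 - i) becomes (3 + 2i)(1 + i) / 2 = (1 + 5i) / 2, the difference-of-squares pattern the solutions mention.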
A Follow Up -- Explicit Error Bounds In my previous post I talked about how we can use probability theory to make certain scary looking limits obvious. I mentioned that I would be writing a follow up post actually doing one of the derivations formally, and for once in my life I’m going to get to it quickly! Since I still consider myself a bit of a computer scientist, it’s important to know the rate at which limits converge, and so we’ll keep track of our error bounds throughout the computation. Let’s get to it! Let $f : \mathbb{R} \to \mathbb{R}$ be bounded and locally lipschitz. Then for every $x \gt 0$, \(\displaystyle e^{-nx} \sum_{k=0}^\infty \frac{(nx)^k}{k!} f \left ( \frac{k}{n} \right ) = f(x) \pm \widetilde{O}_x \left ( \frac{1}{\sqrt{n}} \right )\) In the interest of exposition, I’ll show far more detail than I think is required for the problem. Hopefully this shows how you might come up with each step yourself. Let $X_n$ be poisson with parameter $nx$. Then notice \[e^{-nx} \sum_{k=0}^\infty \frac{(nx)^k}{k!} f \left ( \frac{k}{n} \right ) = \mathbb{E} \left [ f \left ( \frac{X_n}{n} \right ) \right ]\] Now we compute \[\begin{aligned} \left | \mathbb{E} \left [ f \left ( \frac{X_n}{n} \right ) \right ] - f(x) \right | &= \left | \int f \left ( \frac{X_n}{n} \right ) \ d \mathbb{P}(X_n) - \int f(x) \ d \mathbb{P}(X_n) \right | \\ &= \left | \int f \left ( \frac{X_n}{n} \right ) - f(x) \ d \mathbb{P}(X_n) \right | \end{aligned}\] Now we use an extremely useful trick in measure theory! We’ll break up this integral into two parts. One part where the integrand behaves well, and one part (of small measure) where the integrand behaves badly.
Using our intuition that $\frac{X_n}{n} \approx x$, we’ll partition into two parts: • $\left \lvert \frac{X_n}{n} - x \right \rvert \lt \delta_n$ (the “good” set) • $\left \lvert \frac{X_n}{n} - x \right \rvert \geq \delta_n$ (the “bad” set) We’ll be able to control the first part since there our integrand is $\approx 0$, and we’ll be able to control the second part since it doesn’t happen often. Precisely, we’ll use a chernoff bound. There’s a whole zoo of things called “chernoff bounds”, all subtly different, but any one of them will be good enough for our purposes here. I’m just using the first one that came up when I googled “poisson chernoff bounds” :P. So we split our integral: \[\begin{aligned} & \left | \int f \left ( \frac{X_n}{n} \right ) - f(x) \ d \mathbb{P}(X_n) \right | \\ &\leq \left | \int_{\left | \frac{X_n}{n} - x \right | \lt \delta_n} f \left ( \frac{X_n}{n} \right ) - f(x) \ d \mathbb{P}(X_n) \right | + \left | \int_{\left | \frac{X_n}{n} - x \right | \geq \delta_n} f \left ( \frac{X_n}{n} \right ) - f(x) \ d \mathbb{P}(X_n) \right | \\ &\leq \int_{\left | \frac{X_n}{n} - x \right | \lt \delta_n} \left | f \left ( \frac{X_n}{n} \right ) - f(x) \right | \ d \mathbb{P}(X_n) + \int_{\left | \frac{X_n}{n} - x \right | \geq \delta_n} \left | f \left ( \frac{X_n}{n} \right ) - f(x) \right | \ d \mathbb{P}(X_n) \end{aligned}\] Now since $f$ is locally lipschitz, we know that in a neighborhood of $x$, we must have \(|f(y) - f(x)| \leq M_x |y - x|\).
So for $\delta_n$ small enough, we’ll have (in the “good” set) \[\left | f \left ( \frac{X_n}{n} \right ) - f(x) \right | \leq M_x \left | \frac{X_n}{n} - x \right | \leq M_x \delta_n\] Since $f$ is bounded, say by $\lVert f \rVert_\infty$, we know that no matter what we’ll have \[\left | f \left ( \frac{X_n}{n} \right ) - f(x) \right | \leq \left | f \left ( \frac{X_n}{n} \right ) \right | + \left | f(x) \right | \leq 2 \lVert f \rVert_\infty\] Putting these estimates into the above integral, we find^1 \[\begin{aligned} &\leq \int_{\left | \frac{X_n}{n} - x \right | \lt \delta_n} M_x \delta_n \ d \mathbb{P} + \int_{\left | \frac{X_n}{n} - x \right | \geq \delta_n} 2 \lVert f \rVert_\infty \ d \mathbb{P} \\ &= (M_x \delta_n) \mathbb{P} \left ( \left | \frac{X_n}{n} - x \right | \lt \delta_n \right ) + 2 \lVert f \rVert_\infty \mathbb{P} \left ( \left | \frac{X_n}{n} - x \right | \geq \delta_n \right ) \end{aligned}\] since we can pull the constants out of the integral, then get the measures of the “good” and “bad” sets. Now the good set has measure at most $1$^2, and chernoff bounds show that the bad set has exponentially small measure: \[\begin{aligned} \mathbb{P} \left ( \left | \frac{X_n}{n} - x \right | \geq \delta_n \right ) &= \mathbb{P} \left ( \left | X_n - nx \right | \geq n\delta_n \right ) \\ &\leq 2 \exp \left ( \frac{-(n \delta_n)^2} {2(xn + n \delta_n)} \right ) \\ &= 2 \exp \left ( \frac{-n \delta_n^2}{2(x + \delta_n)} \right ) \end{aligned}\] So plugging back into our integral, we get \[\leq M_x \delta_n + 2 \lVert f \rVert_\infty \cdot 2 \exp \left ( \frac{-n \delta_n^2}{2(x + \delta_n)} \right )\] We want to show that this goes to $0$, and all we have is control over the sequence $\delta_n$. $M_x$ is a constant, so we have to have $\delta_n \to 0$. Likewise $4 \lVert f \rVert_\infty$ is a constant, so we need $\exp \left ( \frac{-n \delta_n^2}{2(x + \delta_n)} \right ) \to 0$.
Since the stuff in the exponent is negative and $2(x + \delta_n) \approx 2x$ is a constant, that roughly means we need $n \delta_n^2 \to \infty$. We can choose any $\delta_n$ we like which satisfies these two properties, but it’s somewhat difficult to find something that works. We notice that $\delta_n = n^{-\frac{1}{2}}$ just barely fails, since $n^{- \frac{1}{2}} \to 0$, but $n \left ( n^{- \frac{1}{2}} \right )^2 \to 1$. If we pick $\delta_n$ just a smidge bigger than $n^{- \frac{1}{2}}$, say, $n^{- \frac{1}{2}} \log(n)$, that should get the job done^3. Clearly $M_x n^{- \frac{1}{2}} \log(n) = \widetilde{O}\left ( n^{-\frac{1}{2}} \right )$, and moreover \[\exp \left ( \frac {-n \left ( n^{-\frac{1}{2}}\log(n) \right )^2} {2 \left ( x + n^{-\frac{1}{2}} \log(n) \right )} \right ) = n^{- \frac{1}{2} \frac{1}{\frac{x}{\log(n)} + \frac{1}{\sqrt{n}}}}\] and a tedious L’hospital rule computation shows that this is eventually less than $n^{- \frac{1}{2}} \log(n)$. So what have we done? We started with \[\left | e^{-nx} \sum_{k=0}^\infty \frac{(nx)^k}{k!} f \left ( \frac{k}{n} \right ) - f(x) \right | = \left | \mathbb{E} \left [ f \left ( \frac{X_n}{n} \right ) \right ] - f(x) \right |\] and we showed this is at most \[M_x \delta_n + 2 \lVert f \rVert_\infty \cdot 2 \exp \left ( \frac{-n \delta_n^2}{2(x + \delta_n)} \right ) = O_x \left ( \frac{\log(n)}{\sqrt{n}} \right ) = \widetilde{O}_x \left ( \frac{1}{\sqrt{n}} \right )\] So, overall, we have \[e^{-nx} \sum_{k=0}^\infty \frac{(nx)^k}{k!} f \left ( \frac{k}{n} \right ) = f(x) \pm \widetilde{O}_x \left ( \frac{1}{\sqrt{n}} \right )\] as promised. After I got through with this proof, I wanted to see if it agreed with some simple experiments. The last step of this, where we played around with the $\delta_n$s, was really difficult for me, and if your math is making a prediction, it’s a nice sanity check to go ahead and test it. So then, let’s try to approximate some functions!
I’ve precomputed these examples, but you can play around with the code at the bottom if you want to. First, let’s try to approximate $\sin(3/4)$ using this method. We get nice decay, but it’s much faster than $\frac{\log(n)}{\sqrt{n}}$. Indeed, sage guesses it’s closer to $\frac{\log(n)}{n^{1.6}}$. (Edit: I realized I never said what these pictures represent. The $x$ axis is the number of terms used, and the $y$ axis is the absolute error in the approximation. You can figure this out based on the code below, but you shouldn’t need to understand the code to understand the graphs so I’m adding this little update ^_^.) That’s fine, though. It’s more than possible that I was sloppy in this derivation, or that tighter error bounds are possible (especially because $\sin$ is much better behaved than your average bounded locally lipschitz function). So let’s try it on a more difficult function^4. What if we try to approximate $\sin(1/(0.001 + x))$ at $x = 0.1$. (We have to perturb things a little bit so the code doesn’t divide by $0$ anywhere). Now we get some oscillatory behavior, which seems to be stabilizing. At least the guess in this case is closer to what we computed, as the blue graph is roughly $1.23 \frac{\log n}{n^{0.56}}$. This obviously isn’t an upper bound for our data, but it’s only off by a translation upwards, and they do seem to be decaying at roughly the same rate. The fact that this example gives something fairly close to the bounds we ended up getting makes me feel better about things. In fact, after a bit of playing around, I can’t find anything where sage guesses the decay is slower than $\frac{\log(n)}{\sqrt{n}}$, which also bodes well. If you want to play around for yourself, I’ve left the sage code below! I’m pretty confident in what we’ve done today, but I can’t help but feel like there’s a better way, or that there’s slightly tighter bounds to be had. 
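The post's sage code did not survive extraction, so as a stand-in here is a rough Python sketch of the same experiment — approximating $f(x)$ by $\mathbb{E}[f(X_n/n)]$ for Poisson $X_n$ — that reproduces the shrinking-error behavior described above. The function name and stopping rule are mine, not the author's.

```python
import math

def poisson_mean_approx(f, x, n, tol=1e-15):
    """Approximate f(x) by E[f(X_n / n)] with X_n ~ Poisson(n * x).

    Computes e^{-nx} * sum_k (nx)^k / k! * f(k / n), building the
    Poisson pmf iteratively (pmf(k+1) = pmf(k) * nx / (k+1)) so that
    (nx)^k and k! never overflow on their own.
    """
    lam = n * x
    pmf = math.exp(-lam)          # P(X_n = 0); fine for n * x up to ~700
    total = pmf * f(0.0)
    k = 0
    # Sum well past the mean, and until the pmf tail is negligible.
    while k < lam + 12 * math.sqrt(lam) + 20 or pmf > tol:
        k += 1
        pmf *= lam / k
        total += pmf * f(k / n)
    return total

# The absolute error at x = 0.75 for f = sin shrinks as n grows,
# consistent with the bound derived above.
err_100 = abs(poisson_mean_approx(math.sin, 0.75, 100) - math.sin(0.75))
err_400 = abs(poisson_mean_approx(math.sin, 0.75, 400) - math.sin(0.75))
```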
If there’s anyone particularly skilled in “hard” analysis reading this, I would love to hear your thoughts ^_^. In any case, this has been fun to work through! I’ll see you all soon! 1. In the interest of continuing the quick analysis trick tradition of saying the obvious thing, notice what we’ve done here. We want to control an integral. How do we do it? We break it up into a “good” piece, which is big, and a “bad” piece, which is small. The “good” piece we can control because it’s good. For us, this means $f \left ( \frac{X_n}{n} \right ) \approx f(x)$, so their difference is $\approx 0$. The “bad” piece we can’t control in this way. Indeed it’s defined to be the piece that we can’t control in this way. Thankfully, if all is right with the world, the “bad” piece will be small! So we get some crummy (uniform) estimate on the bad piece, and then multiply by the size of the piece. Since the size is going to $0$ and our bound is uniform (even if it’s bad), we’ll win! ↩ 2. Notice just like we were content with a crummy bound on the integrand of the bad piece because we’re controlling the measure, we’re content with a potentially crummy bound on the measure of the good piece because we’re controlling the integrand! Of course, for our particular case, $1$ is actually a fairly good bound for the measure of the good piece, but we don’t need it to be. ↩ 3. In fact, $\delta_n = n^{- \frac{1}{2} + \epsilon}$ also gets the job done, but the computation showing \[\exp \left ( \frac{-n \delta_n^2}{2(x + \delta_n)} \right ) = O(\delta_n)\] is horrendous. ↩ 4. Interestingly, while trying to break things, I realized that I don’t really have a good understanding of “pathological” bounded lipschitz functions. Really the only tool in my belt is to make things highly oscillatory… Obviously we won’t have anything too pathological. 
After all, every (globally) lipschitz function is differentiable almost everywhere, and boundedness means we can’t just throw singularities around to cause havoc. But is oscillation really the only pathology left? It feels like there should be grosser things around, but it might be that they’re just hard to write down. Again, if you have any ideas about this, I would also love to hear about it! ↩
Article overview A Parallel P^3M Code for Very Large Scale Cosmological Simulations Tom MacFarland ; Jakob Pichlmeier ; Frazer Pearce ; Hugh Couchman ; Date: 7 Aug 1997 Subject: astro-ph Abstract: We have developed a parallel Particle-Particle, Particle-Mesh (P^3M) simulation code for the T3E well suited to studying the time evolution of systems of particles interacting via gravity and gas forces in cosmological contexts. The parallel code is based upon the public-domain serial Adaptive P^3M code of Couchman et al. (1). The algorithm resolves gravitational forces into a long range component computed by discretizing the mass distribution and solving Poisson’s equation on a grid using an FFT convolution method, and a short range component computed by direct force summation for sufficiently close particle pairs. The code consists primarily of a particle-particle computation parallelized by domain decomposition over blocks of neighbor-cells, a more regular mesh calculation distributed in planes along one dimension, and several transformations between the two distributions. Great care was taken throughout to make optimal use of the available memory, so that the current implementation is capable of simulating systems approaching 10^9 particles using a 1024^3 mesh for the long range force computation. These are thus among the largest N-body simulations ever carried out. We discuss these memory optimizations as well as those motivated by computational performance. Results from production runs have been very encouraging, and even prior to the implementation of the full adaptive scheme the code has been used effectively for simulations in which the particle distribution becomes highly clustered as well as for other non-uniform systems of astrophysical interest. Source: arXiv, astro-ph/9708066
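The force split the abstract describes — a long-range mesh (PM) component plus a short-range particle-particle (PP) component for close pairs — can be sketched in miniature. The 1-D toy below is illustrative only, not the paper's code; the cutoff, softening, and units are invented.

```python
def direct_short_range(positions, cutoff, G=1.0, m=1.0, soft=1e-3):
    """The PP half of a P^3M split: direct pair summation, but only
    for pairs closer than the cutoff. Distant pairs are left to the
    mesh (PM) step, i.e. an FFT solve of Poisson's equation.
    Returns one (softened) force per particle."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[j] - positions[i]
            r = abs(dx)
            if r < cutoff:                        # distant pairs -> mesh
                f = G * m * m * dx / (r * r + soft) ** 1.5
                forces[i] += f                    # equal and opposite,
                forces[j] -= f                    # so momentum is conserved
    return forces
```

The O(N²) pair loop is exactly why the cutoff matters: the expensive direct sum runs only over near neighbors, while the smooth long-range remainder is handled in O(N log N) by the FFT.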
On a separate sheet of paper, find the difference quotient | Quiz Lookup
On a separate sheet of paper, find the difference quotient
Which part of cellular respiration requires oxygen?
Consider the table below that summarizes the grade performance on a recent exam for two sections of the same course. One section is offered on Mondays and the other is offered on Tuesday. The values in the table represent the number of students who made a particular grade within each section and the totals (e.g., 8 students in the Monday section made an A grade, and 18 students total made an A grade).

                  A    B    C    TOTAL
Monday Section    8    18   13   39
Tuesday Section   10   4    12   26
TOTAL             18   22   25   65

PART A (5 points): Find the probability that a student chosen at random is from the Monday section.
PART B (5 points): Find the probability that TWO students chosen at random are from the Monday section. Assume that you can't draw the same student twice.
PART C (5 points): Find the probability that a student chosen at random is NOT from the Monday section.
PART D (5 points): Find the probability that a student chosen at random is from the Monday section OR earned a C grade.
PART E (5 points): Find the probability that a student chosen at random is from the Monday section GIVEN they earned a C grade.
DIRECTIONS for SHOWING WORK for ALL PARTS: Show all relevant steps, including any formulas or mathematical procedures you use, so it's clear how you obtained your answer. Use the EQUATION EDITOR to type all mathematical expressions, equations, and formulas. Write your final answer as a decimal rounded to the third decimal place as needed.
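For reference, the five parts can be computed mechanically from the table's totals with the standard probability rules. These answers are worked out here, not given in the source:

```python
from fractions import Fraction as F

total, monday, c_grade, monday_c = 65, 39, 25, 13

p_monday = F(monday, total)                                        # Part A
p_two_monday = F(monday, total) * F(monday - 1, total - 1)         # Part B: no replacement
p_not_monday = 1 - p_monday                                        # Part C: complement rule
p_monday_or_c = p_monday + F(c_grade, total) - F(monday_c, total)  # Part D: addition rule
p_monday_given_c = F(monday_c, c_grade)                            # Part E: conditional
```

Rounded to three decimals these come out to A 0.600, B 0.356, C 0.400, D 0.785, E 0.520.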
________________________________________________________
Chapter 4 Probability Formulas
P(A or B) = P(A) + P(B)
P(A or B) = P(A) + P(B) - P(A and B)
P(A and B) = P(A) * P(B)
P(A and B) = P(A) * P(B|A)
P(E) = 1 - P(Ē)
A client was admitted with major depression that was a single episode and moderate. During her stay, she was started on Prozac (fluoxetine) 40 mg orally every day. The nurse's discharge teaching should include all of the following except:
Casefinding is the process of identifying the problems that need to be reported to central cancer registries.
Adolescents' persistent arguments over rules are most likely a result of
Which of these is NOT an effect of PTH?
Which option is the best temperature for compost piles?
Select ONE of the following prompts. Begin your response by identifying the number of the prompt to which you are responding, then answer the prompt. 1) Explain the theories/concepts of "microaggressions" and "colorblind racism." Provide an example of how each may be seen in the real world. Provide two merits and two critiques of these theories. 2) Explain the following concepts and the relationship between them: prejudice, discrimination, skin-color stratification, racism, and colorism
When you exercise, both the rate and depth of your breathing increase. What change in gases occurs that results in these responses?
Which p&acy;rt &ocy;f cellul&acy;r respir&acy;ti&ocy;n requires &ocy;xygen? Which p&acy;rt &ocy;f cellul&acy;r respir&acy;ti&ocy;n requires &ocy;xygen? C&ocy;nsider the t&acy;ble bel&ocy;w th&acy;t summ&acy;rizes the grade perf&ocy;rmance on a recent exam for two sections of the same course. One section is offered on Mondays and the other is offered on Tuesday. The values in the table represent the number of students who made a particular grade within each section and the totals (e.g., 8 students in the Monday section made an A grade, and 18 students total made an A grade). A B C TOTAL Monday Section 8 18 13 39 Tuesday Section 10 4 12 26 TOTAL 18 22 25 65 PART A (5 points): Find the probability that a student chosen at random is from the Monday section. PART B (5 points): Find the probability that TWO students chosen at random are from the Monday section. Assume that you can't draw the same student twice. PART C (5 points): Find the probability that a student chosen at random is NOT from the Monday section. PART D (5 points): Find the probability that a student chosen at random is from the Monday section OR earned a C grade. PART E (5 points): Find the probability that a student chosen at random is from the Monday section GIVEN they earned a C grade. DIRECTIONS for SHOWING WORK for ALL PARTS: Show all relevant steps, including any formulas or mathematical procedures you use, so it's clear how you obtained your answer. Use the EQUATION EDITOR to type all mathematical expressions, equations, and formulas. Write your final answer as a decimal rounded to the third decimal place as needed. 
How to find the right eyepieces

Visually, you won't get far without eyepieces. But which ones are right for my telescope - and how many do I actually need?

Did you know that the French astronomer Adrien Auzout is said to have stated that he wanted to build a 300-metre-long telescope, because he hoped that the magnification would be large enough to see animals on the Moon? In the 17th century, it was common to think about telescopes with high magnification. This shows us that there has always been a desire to experience the objects in space up close.

But every day we meet them, hear from them and commiserate with them: people who have chosen the wrong eyepieces. However, with the following strategy, you'll make the correct decisions right away.

This article is for you if you have already wondered how to find the right eyepieces as quickly and easily as possible, without having to wade through many books or lose yourself in a forest of theory and formulae. Because when you pay attention to the following points, you'll find the right eyepieces for every telescope.

This is how you calculate magnification:

magnification = focal length of the telescope / focal length of the eyepiece

Eyepieces are like lenses; they enlarge the image produced by the telescope and offer us a visual experience.

First basic principle: the large ones, the small ones, and what they are used for

There are just a few things to pay attention to with eyepiece sizes. Eyepieces come in just two sizes, which are standardised for astronomical telescopes. This means that all you have to do is insert the eyepiece in your focuser and you're done! But is it really that simple?

The wider ones with the wow effect

Eyepiece diameters are usually stated in inches, not in millimetres. The large 2" eyepieces have a 50.8-mm diameter and offer a wonderful overview at small magnifications.
We select one of these when we are searching for objects, observing large objects, or want to enjoy a wide field of view. If you want to use such an eyepiece, you need to be aware that normally only telescopes with a diameter of 150-200 mm or more have a focuser that can take a 2" eyepiece.

The slim ones

The smaller 1.25" eyepieces with a diameter of 31.7 mm are the standard, and simple versions are usually included in the accessories that come with the telescope. 1.25" eyepieces are used for medium and high magnifications and are useful when observing lunar craters, planets or globular clusters. Each object requires different magnifications, but which magnification is best for which object?

Second basic principle: three magnifications that will allow you to view everything

When getting started, you should aim for a few quality eyepieces; around 3 to 4 is good in the beginning. And you should choose one with low magnification, one with medium and one slightly higher magnification. Generally speaking, with these you can cover the entire spectrum of astronomical objects. It's far better to go for three very good eyepieces which will give you sharp images and good contrast, instead of seven mediocre ones.

Third basic principle: why the exit pupil is so important

The exit pupil is the bundle of light that enters the eye from the eyepiece. You usually see it as a small, bright disc if you look into the eyepiece from a distance of thirty centimetres. The exit pupil, EP, becomes an important factor for us when we want to calculate which eyepieces we need for which object. Here's how you calculate it:

EP = aperture of the telescope / magnification

The larger the EP, the smaller the magnification. And the reverse is also true: the smaller the EP, the higher the magnification. We'll need the EP again later, so keep it in the back of your mind.

Fourth basic principle: minimum, optimal and maximum magnification

You don't need any more than these three values. Why?
Because with these three you cover a large range of magnifications and can view everything. With more eyepieces, you are only refining these values.

The minimum

A low magnification is more important than a high one. But there is a lower limit below which observation is not meaningful. This is the minimum magnification. For this, choose an eyepiece with as long a focal length as possible. If you have a 2" focuser, use a 2" eyepiece, which usually offers a large field of view. But you have a smaller focuser? Then go for a 1.25" eyepiece. This is how you calculate the minimum magnification:

minimum magnification = telescope aperture in mm / 7

Let's take a look at an example. If you have a telescope with an aperture of 200 mm, the minimum magnification is 28 times. Combined with the eyepiece, this produces an exit pupil (EP) of 7 mm. That's the diameter of the light that comes out of the eyepiece and into our eye. Important: seven millimetres is exactly the maximum aperture of the human eye's pupil. With a larger EP, our pupil would act as a baffle and the extra light would be lost.

Tip 1: You do not have to choose an eyepiece that delivers exactly the minimum magnification for your telescope. It is sufficient to use this as a guide and to choose an eyepiece with a low magnification.

Tip 2: Always use this eyepiece to locate an object, because a low magnification offers you a large field of view. With a wide-angle eyepiece, you can further expand your field of view. Low magnifications are most suitable for galaxies, open star clusters and hydrogen nebulae.

The optimum magnification

At medium to higher magnifications where the telescope's theoretical resolution is reached and fully utilized, we talk of the optimum magnification or maximum useful magnification. We reach this when a 0.7-mm to 0.8-mm diameter light bundle passes through the eyepiece. By definition, a star is then a minimally small disc. If we increase the magnification, we don't get any more details; the object simply grows in size.
Optimum magnification: aperture / 0.7 (or, more conservatively, aperture / 0.8)

A telescope with a 200 mm diameter has a useful magnification of about 285 times (250 times with the 0.8-mm exit pupil). You can use this as a guide. It is well suited to planets or planetary nebulae.

The maximum magnification

Opinions often differ here. How high can or should magnification be? Let's go back to the exit pupil: if it is 0.5 mm, we reach the magnification limit of a telescope. The rule of thumb for this is:

maximum magnification = lens aperture x 2

A 200-mm telescope would reach the maximum magnification of 400 times. This rarely works in practice, however. The image is darker than at low magnification, so this is suitable only for bright objects. And the seeing must be perfect. Unless you have these perfect conditions, the eyepiece will usually have to stay in its box.

Now we are ready to find the right eyepieces.

Why focal length and aperture are important

To choose the right eyepiece, we need something else: the telescope's aperture and focal length. These two values give us the aperture ratio. For example: a 200-mm telescope with a focal length of 1,000 mm has an aperture ratio of f/5. The calculation of the various magnifications - and in this case also the focal lengths - is brilliantly simple:

For the minimum focal length, you calculate 7 x f-ratio. For a 200-mm f/5 telescope, this would mean an eyepiece with a 35 mm focal length. In practice, almost no one has a 7-mm pupil aperture, so it's better to deduct 1-2 mm to get your focal length. Then we end up at 33 mm and an EP of 6.6 mm.

You calculate the optimum focal length by... Hang on a minute, you don't have to calculate this at all, because it corresponds exactly to the aperture ratio. So, at f/5, it's 5 mm.

You can find the eyepiece focal length for the maximum magnification with this simple formula: aperture ratio / 2. With the 200/1,000-mm telescope, we arrive at an eyepiece focal length of 2.5 mm.

Which eyepieces for which object?

It is important to know what you can observe with a particular focal length.
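The formulas above can be collected into a short script (a sketch only; the 200 mm f/5 telescope is the article's running example, and the exit-pupil values follow the rules of thumb given in the text):

```python
# Article's running example: 200 mm aperture, 1000 mm focal length (f/5).
aperture = 200.0        # mm
focal_length = 1000.0   # mm
f_ratio = focal_length / aperture            # 5.0

# magnification = telescope focal length / eyepiece focal length
def magnification(eyepiece_fl):
    return focal_length / eyepiece_fl

# exit pupil (EP) = aperture / magnification
def exit_pupil(eyepiece_fl):
    return aperture / magnification(eyepiece_fl)

# Minimum magnification: EP = 7 mm (the dark-adapted eye's pupil).
min_mag = aperture / 7                       # about 28x
# Optimum magnification: EP of 0.7-0.8 mm.
opt_mag = aperture / 0.8                     # 250x (about 285x at 0.7 mm)
# Magnification limit: aperture x 2.
max_mag = aperture * 2                       # 400x

# Eyepiece focal lengths via the f-ratio shortcuts from the text:
min_fl = 7 * f_ratio                         # 35 mm
opt_fl = f_ratio                             # 5 mm
max_fl = f_ratio / 2                         # 2.5 mm

print(min_fl, opt_fl, max_fl)                # 35.0 5.0 2.5
print(exit_pupil(min_fl))                    # 7.0 -- consistency check
```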
Small magnifications with an EP of 7-6 mm are suitable for large nebulae, or even an EP of 4-3.5 mm if the nebula is also very bright. Open star clusters and galaxies are best observed between 3.5 mm and 1.5 mm. In the case of star clusters, it can be a higher magnification with an EP between 1.5 and 1 mm. You can set the magnification really high for double stars, between 0.7 mm and 0.5 mm. An overview of the simple formulae for object and magnification is shown in the table.

Which eyepieces are best for my telescope?

Do not buy seven inferior quality eyepieces. It's much better to buy three to four good ones. Unfortunately, it really is the case that your telescope can only show its full capabilities with good eyepieces. Good eyepieces create a win-win situation for you and your telescope. Look for a sufficiently large eye relief, good edge sharpness, preferably a large field of view and high light transmission.

Start with a small magnification eyepiece at around the minimum range, a medium magnification one at an EP of about 1.5 mm, and one with a higher magnification at about 0.8 mm. Together with the 200-mm telescope, this gives 28-times, 133-times and 250-times magnification.

You won't be able to see animals on the moon, but if you don't want to be as angry as a bull, choose good to excellent eyepieces and avoid using the eyepieces that are supplied with the telescope. To do this, you need to invest at least fifty euros or more per eyepiece. But believe me: every night spent star-gazing will be an amazing experience that you will still be raving about the next day.

[Table: recommended eyepiece focal lengths and magnifications per object type - locating objects, small galaxies and globular clusters (f x 1 or f x 1.5, i.e. magnification aperture/1 or aperture/1.5), planetary nebulae and double stars.]

Author: Marcus Schenk

Marcus is a stargazer, content creator and book author.
He has been helping people to find the right telescope since 2006, nowadays through his writing and his videos. His book "Mein Weg zu den Sternen für dummies Junior" advises young people, and those who are still young at heart, what they can discover in the sky. As a coffee junkie, he would love to have his high-end espresso machine by his side under the starry sky.
Incidence and Intersection - Quantum Calculus

Barycentric and connection graphs

Barycentric graphs depend on incidence, connection graphs on intersection. Here are some examples from this blog. Both graphs have as the vertex set the complete subgraphs of the graph. In the connection graph, we take the intersection; in the Barycentric case, we take incidence. Here are two pictures illustrating this. Our initial graph is a 3-sphere, the join of two circular graphs $C_4$. The graph has the $f$-function $1 + 8x + 24x^2 + 32x^3 + 16x^4$, giving 80 complete subgraphs in total, so both derived graphs have 80 vertices. Here is the Hodge Laplacian and the connection Laplacian: the Hodge Laplacian H is a direct sum of form Laplacians; the connection Laplacian is invertible and has here determinant 1.

The Shannon product of two linear graphs

[Figure: the spectrum of the inverse of the connection Laplacian of this graph.]
[Figure: the spectrum of the connection Laplacian of this graph.]

The minimal absolute eigenvalue of the connection Laplacian is 0.11435, the maximum 52.0261. There is a spectral gap which does not get smaller even in the van Hove limit! Also for the infinite Laplacian, we have stability of the vacuum.

Apropos statistical mechanics: the Shannon product of two linear graphs is the most common neighborhood graph considered in statistical mechanics. The unit sphere is the set of lattice positions to which a king can move. It is technically a three-dimensional graph because of the presence of tetrahedral subgraphs (3-dimensional simplices), but the graph is a soft 2-manifold. All unit spheres are soft 1-spheres. Note that when taking the connection Laplacian of the graph L(20) x L(20), we already have a 3687 x 3687 matrix (one row and column for each complete subgraph). In that case, the minimal absolute eigenvalue is 0.11311 and the maximal absolute value 53.2093. There is a limit.

The Frenkel-Kontorova model in nonlinear physics is a general variational problem.
If $\Delta$ is a Laplacian on a graph and $U$ a real valued potential, one can look at the functional $E[(u,\Delta u)/2 + \epsilon U(u)]$, where $E$ is an expectation on some linear space of functions and $\epsilon$ is a perturbation parameter. Critical points satisfy the non-linear system $\Delta u + \epsilon V(u) = 0$, where $U' = V$. These are called the Euler equations. Unfortunately for physics, the Laplacians we usually deal with are singular in the sense that their inverses, the Green functions, are unbounded. The source of the problem exists already in the simplest cases, like when space is a finite graph, where Laplacians have a kernel; its elements are called harmonic forms. The Laplacian of a space determines the natural Newton potential on the space, and that is always singular. In the most famous case, when space is $\mathbb{R}^3$, we see the $1/r$ potential, like in gravity or in electromagnetism. But it is obvious that this cannot make sense physically: take two neutral elements like neutrons hitting each other head on; their gravitational energy would go to infinity. We know that there are quantum effects, but still, even relativistic mathematics breaks down. This touches of course on the fundamental question of how to combine gravity with quantum mechanics, but it is clear that classical force descriptions will change fundamentally in the very small. As mathematicians, we see the difficulties in mathematical variational problems, like how to describe critical points of the above variational problem once the non-linear perturbation has been turned on. This can already be subtle in the very simplest cases, like when space is the set of integers $\mathbb{Z}$, $\Delta u(n) = u(n+1) - 2u(n) + u(n-1)$ and $U(u) = \cos(u)$. That $\Delta$ is non-invertible can be seen from the fact that it has the constant functions in its kernel. Now, of course, the variational problem depends on the Banach space which is chosen.
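The singularity discussed above — constant functions lying in the kernel of $\Delta$ — can be verified directly on a finite lattice; the periodic boundary condition is an assumption made here only to keep the example finite. The sketch also checks that $u \equiv 0$ solves the Euler equation $\Delta u + \epsilon V(u) = 0$ for the Frenkel-Kontorova potential $U(u) = \cos(u)$, $V = U' = -\sin$:

```python
import numpy as np

N = 50
# Discrete Laplacian on Z/NZ: (Du)(n) = u(n+1) - 2u(n) + u(n-1), periodic boundary.
D = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D[0, -1] = D[-1, 0] = 1

# Constant functions lie in the kernel, so D is not invertible.
const = np.ones(N)
print(np.linalg.norm(D @ const))   # 0: D annihilates constants

# Euler equation residual: u = 0 solves Du + eps*V(u) = 0 since sin(0) = 0.
eps = 0.1
u = np.zeros(N)
residual = D @ u + eps * (-np.sin(u))
print(np.linalg.norm(residual))    # 0
```

Since the kernel is nontrivial, the Hessian at this equilibrium cannot be inverted, which is exactly the obstruction to the soft implicit function theorem described below.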
A good one is to fix some real $\alpha$, take $C(\mathbb{T})$, and associate to a function $q$ in that Banach space the random sequence $u(n) = q(x + n\alpha)$, where $x$ in the circle is an element of the probability space we integrate over to get the expectation. The functional to minimize is then $\int_{\mathbb{T}} (q(x+\alpha)-q(x))^2/2 + \epsilon \cos(q(x)) \; dx$, and critical points satisfy the nonlinear equation $q(x+\alpha) - 2 q(x) + q(x-\alpha) = \epsilon \sin(q(x))$. Critical points correspond to invariant curves (KAM tori) of the Standard map. Solving such nonlinear equations in a Banach space is a KAM problem. It needs a sophisticated implicit function theorem to be solved in a Banach space of real analytic functions, and solutions exist only if $\alpha$ is sufficiently Diophantine. It does not really matter much which Frenkel-Kontorova model we choose, nor whether it is higher dimensional or not. The important thing to realize is that the problem of continuing solutions from $\epsilon = 0$ to $\epsilon > 0$ is hard and only works for carefully crafted solutions at $\epsilon = 0$ (which can be established by looking at configurations $u(n)$ defined by a specific dynamical system like an irrational rotation). The source of the problem is that the Hessian, the linear operator obtained by linearizing near the equilibrium solution, is not invertible. If it were invertible, we could continue the solution using the soft implicit function theorem. For more about the calculus of variations, check out the book of Moser. The non-invertibility of the Laplacian $\Delta$ is thus the central difficulty when looking at non-linear equations $\Delta u + \epsilon V(u) = 0$.
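The Standard map mentioned above can be written as $(x, y) \mapsto (x + y + \epsilon \sin x,\; y + \epsilon \sin x)$. A quick numerical sketch confirms that its Jacobian determinant is exactly 1 along an orbit, i.e. the map preserves area; the initial point is arbitrary and $\epsilon = 0.97$ is chosen near the breakup threshold of the golden KAM torus, purely for illustration:

```python
import math

def standard_map(x, y, eps):
    """One step of the Chirikov standard map."""
    y_new = y + eps * math.sin(x)
    x_new = x + y_new
    return x_new, y_new

def jacobian_det(x, y, eps):
    # d(x', y')/d(x, y) for the map above, in closed form:
    a = 1 + eps * math.cos(x)  # dx'/dx
    b = 1.0                    # dx'/dy
    c = eps * math.cos(x)      # dy'/dx
    d = 1.0                    # dy'/dy
    return a * d - b * c

eps = 0.97
x, y = 0.5, 0.3
dets = []
for _ in range(100):
    dets.append(jacobian_det(x, y, eps))
    x, y = standard_map(x, y, eps)

print(max(abs(d - 1) for d in dets))   # essentially 0: the map is area-preserving
```

Area preservation is what makes the KAM question meaningful: invariant curves of an area-preserving twist map are exactly the objects that survive (or break) as $\epsilon$ grows.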
A Parametric Study of Bipolar Plate Structural Parameters on the Performance of Proton Exchange Membrane Fuel Cell

In this research, the impact of the structural parameters of bipolar plates on proton exchange membrane (PEM) fuel cell performance has been investigated using a numerical method. The model incorporates all the essential fundamental physical and electrochemical processes occurring in the membrane electrolyte, cathode catalyst layer, electrode backing, and flow channel, with some assumptions in each part. In the formulation of this model, the cell is assumed to work under steady state conditions. Also, since the thickness of the cell is negligible compared to its other dimensions, one-dimensional and isothermal approximations are used. The structural parameters considered in this paper are: the width of the channels (W[c]), the width of the supports (W[s]), the number of gas channels (n[g]), the height of the channels (h[c]), and the height of the supports (h[p]). The results show that the structural parameters of bipolar plates have a great impact on the outlet voltage at high current densities. Also, the number of gas channels, their surface area, and the contact area between the bipolar plates and the electrodes have a great effect on the rate of reaction and consequently on the outlet voltage. The model predictions have been compared with existing experimental results available in the literature, and excellent agreement has been demonstrated between the model results and the experimental data for the cell polarization curve.
Issue Section: Research Papers

Keywords: Catalysts, Current density, Electrodes, Flow (Dynamics), Fuel cells, Membranes, Oxygen, Plates (structures), Porosity, Proton exchange membrane fuel cells, Diffusion (Physics), Polarization (Electricity), Polarization (Light), Polarization (Waves), Temperature, Water, Boundary-value problems

The proton exchange membrane fuel cell (PEMFC) is an electrochemical energy converter that converts the chemical energy of fuel directly into DC electricity. Fuel cells using a very thin polymer membrane as an electrolyte have been considered promising candidates for future power sources, especially for transportation applications and residential power, due to their high efficiency, high power density, quick startup, quiet operation, and, most importantly, zero emissions, thereby reducing air pollution and greenhouse gas emissions. The first demonstration of a fuel cell was by lawyer and scientist William Grove in 1839 [1], although it appears that a Swiss scientist, Christian F. Schoenbein, independently discovered the very same effect at about the same time (or even a year before) [2]. Optimizing the geometrical parameters is one of the ways to minimize ohmic losses. Several studies have investigated the effect of different channel configurations on cell performance. Sun et al. [3] developed a numerical study, which suggested using a trapezoidal channel cross-section rather than a rectangular or square cross-section to improve cell performance. Chiang and Chu [4] conducted numerical simulations of a 3D isothermal model of a PEMFC for various channel configurations (channel length, height, and shoulder width), with particular emphasis on the effect of channel height on cell performance. They carried out simulations with different combinations of channel width and height, while maintaining the same channel cross-sectional area and a constant flow rate.
Their results indicated that flat channels (i.e., smaller height) gave better performance. Yi and Nguyen [5] developed an along-the-channel model for evaluating the effects of various design and operating parameters on the performance of a PEM fuel cell. The results showed that humidification of the anode gas is required to improve the conductivity of the membrane, and that liquid injection and a higher humidification temperature can enhance cell performance. Ge and Yi [6] developed a two-dimensional model to investigate the effects of operating conditions and membrane thickness on water transport. The results revealed that cell performance can be enhanced by increasing the cell temperature. Yi and Nguyen [7] employed a two-dimensional isothermal model of a porous electrode to simulate the hydrodynamics of gas flow through the pore volume of the electrode of a PEMFC. From the predictions for PEM fuel cells with interdigitated flow fields, it is concluded that a higher gas flow rate through the electrode improves electrode performance. He et al. [8] and Kumar and Reddy [9,10] studied the influence of electrode and flow field design on the performance of a PEM fuel cell with a half-cell model. It was found that a higher differential pressure between the inlet and outlet channels enhances electrode performance. Kim et al. [11] developed a curve-fitting scheme based on experimental data for the polarization curve of a PEM fuel cell. Ahmed and Sung [12] performed simulations of PEMFCs with a new design for the channel shoulder geometry, in which the membrane electrode assembly is deflected from shoulder to shoulder. Bernardi and Verbrugge [13,14] used an analytical approach. They developed a mathematical model of a PEMFC from fundamental transport properties.
In their model, the losses incurred by the activation over potential of the anode and cathode reactions, the ohmic losses incurred by the membrane, and the ohmic losses due to the electrodes are subtracted from the reversible cell voltage. Marr and Li [15,16] developed a simplified engineering model of a PEMFC based on the catalyst layer model of Weisbrod et al. [17] and the membrane model of Bernardi and Verbrugge [13,14]. This is the model that will be used to incorporate the effect of bipolar plate structural parameters on PEMFC performance. Marr and Li [15] investigated the composition and performance optimization of the cathode platinum catalyst and the catalyst layer structure in a proton exchange membrane fuel cell by including both the electrochemical reaction and the mass transport process. They found that the electrochemical reactions occur in a thin layer only a few micrometers thick, indicating ineffective catalyst utilization for the present catalyst layer design. Also, Baschuk and Li [18] investigated a polymer electrolyte membrane fuel cell with variable degrees of water flooding of the membrane. Therefore, the issue of bipolar plate structural parameters in a PEMFC has not been fully examined in previous work. In the following sections, the formulation of the model will be presented. The results are then compared to experimental data available in the literature for validation.

Model Formulation

The operation of a PEM fuel cell is based on converting the chemical energy of the fuel, such as hydrogen, and the oxidant, such as oxygen, into electrical energy, as shown in Fig. 1. The electrochemical reactions happening on the anode and the cathode are the basis of fuel cell operation [19]; more precisely, these reactions happen at the boundary between the ionically conductive electrolyte and the electrically conductive electrode.
The electrodes must be porous, allowing the gases to reach, and product water to leave, the reaction sites, because gases are involved in the fuel cell electrochemical reactions. In the formulation of this model, the cell is assumed to work under steady state conditions. Also, since the thickness of the cell is negligible compared to its other dimensions, one-dimensional and isothermal approximations are used. The membrane is assumed to be fully hydrated, according to Marr and Li [ ]. In addition, the anode reaction over potential is neglected in the present study, because the model of Marr and Li [ ] found the over potential due to the anode reaction to be negligible [ ]. Therefore, by calculating the reversible cell voltage and subtracting the losses from it, the overall cell voltage can be found, where E[r] is the reversible cell voltage and η[act] is the loss due to the mass transfer limitations and the electrochemical reactions in the cathode catalyst layer. η[ohmic,e], η[ohmic,p], and η[ohmic,m] are the voltage losses due to the ohmic resistance of the electrodes, flow channel plate, and membrane layer. E[r] is calculated from a modified version of the Nernst equation, which includes an extra term to account for changes in temperature from the standard reference temperature [ ]. In the Nernst equation, ΔG is the change in Gibbs free energy, F is the Faraday constant, ΔS is the change in entropy, and R is the universal gas constant, while P[H2] and P[O2] are the partial pressures of hydrogen and oxygen. Substituting the values of these parameters, the equation is simplified to the form below; Table 1 shows the operating and structural parameters.

$E_r = 1.229 - 0.85\times10^{-3}(T - 298.15) + 4.31\times10^{-5}\,T\,[\ln(P_{H_2}) + \tfrac{1}{2}\ln(P_{O_2})]$

Ohmic voltage losses are important factors in calculating the overall cell voltage; ohmic losses are due to the resistance of the electrodes and the plate, which are modeled as shown in Fig.
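The simplified Nernst equation above can be evaluated directly. The sketch below assumes, purely for illustration, that both partial pressures equal the 5 atm operating pressure used later in the base case:

```python
import math

def reversible_voltage(T, p_h2, p_o2):
    """Modified Nernst equation for the reversible cell voltage E_r (volts).

    T in kelvin, partial pressures in atm.
    """
    return (1.229
            - 0.85e-3 * (T - 298.15)
            + 4.31e-5 * T * (math.log(p_h2) + 0.5 * math.log(p_o2)))

# Base-case temperature from the paper; equal 5 atm partial pressures are an assumption.
E_r = reversible_voltage(T=353.15, p_h2=5.0, p_o2=5.0)
print(round(E_r, 4))   # ~1.219 V
```

At the reference state (298.15 K, 1 atm) the pressure and temperature corrections vanish and the function returns the familiar 1.229 V.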
This model assumes that the current density is constant at the border between the electrode and the catalyst layer, and that the electrons take the shortest path to and from the reaction sites. The total resistance due to the gas flow channels on the bipolar plate results from two equivalent resistances, one due to the solid portion of the plate and one due to the flow channel supports, where h[p] is the thickness of the solid portion of the plate and h[c] is the height of the flow channels and supports. W and L are the width and length of the cell, respectively, and w[s] is the width of the channel supports. Also, according to Fig. , the relation between w[s] (width of support) and w[c] (width of channel) can be determined by the equation below, where w[c] refers to the width of the flow channels and w[s] is the width of support, while n[g] is the number of flow channels in the cell. Thus, the total plate resistance R[p] can be determined by combining the two resistances. In this case, the cell voltage loss due to the resistance of the plate is

$\eta_{ohmic,p} = 2 R_p I_\delta W L$

where I[δ] is the cell current density. The total resistance for one electrode, R[e], following the series resistance network shown in Fig. , can be calculated from the corresponding equation, where δ is the thickness of the electrode. The effective resistivity of the electrode can be derived from the bulk resistivity of the electrode, ρ, and the void fraction of the electrode, ϕ, by Bruggeman's correction [ ]. The cathode and anode electrodes are assumed to be identical, so the voltage loss due to the resistance of the electrodes is

$\eta_{ohmic,e} = 2 R_e I_\delta W L$

The catalyst layer is modeled with a set of coupled differential equations. In modeling the cathode catalyst layer, it is assumed to be isothermal, one-dimensional, and uniformly distributed.
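The Bruggeman correction and the electrode ohmic term can be sketched as follows. The 1.5 exponent is the standard Bruggeman form; the electrode resistance is approximated here as simple through-plane conduction, which does not reproduce the paper's series resistance network, and all numbers passed in are placeholders rather than values from the paper:

```python
def effective_resistivity(rho_bulk, void_fraction):
    """Bruggeman correction: rho_eff = rho / (1 - phi)**1.5.

    Conduction happens only through the solid fraction (1 - phi); the 1.5
    exponent accounts for the tortuosity of the percolating solid phase.
    """
    return rho_bulk / (1.0 - void_fraction) ** 1.5

def electrode_ohmic_loss(rho_bulk, void_fraction, thickness, I_delta, W, L):
    """Ohmic loss of both electrodes, eta = 2 * R_e * I_delta * W * L (volts).

    R_e is taken as rho_eff * thickness / (W * L), i.e. uniform through-plane
    conduction -- a simplifying assumption, not the paper's Fig.-based network.
    """
    R_e = effective_resistivity(rho_bulk, void_fraction) * thickness / (W * L)
    return 2.0 * R_e * I_delta * W * L

# Illustrative carbon-paper-like numbers (not from the paper):
eta_e = electrode_ohmic_loss(rho_bulk=8e-5, void_fraction=0.4,
                             thickness=250e-6, I_delta=1e4, W=0.05, L=0.05)
print(eta_e)   # well under a millivolt for these inputs
```

Note that W and L cancel in this simplified form, so the loss reduces to 2·ρ_eff·δ·I[δ], which is why the void fraction and thickness dominate.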
In this model, the other important assumption is that the pore space within the catalyst layer is large enough that Knudsen diffusion is unimportant [18]. One of the processes modeled in the catalyst layer is the electrochemical reaction. The rate of this process, assuming a constant proton concentration, is given by the Butler-Volmer equation, where I is the protonic current density, C is the concentration of oxygen, η[act] is the over potential caused mostly by the resistance to the electrochemical reactions and the finite rate of mass diffusion, and z is the distance into the catalyst layer measured from the electrode/catalyst layer interface. i[o,ref] denotes the reference current density, which is a function of the cell temperature; it is an experimentally derived parameter and is given as a function of temperature for Nafion on platinum in Parthasarathy et al. [21]. The reaction order, γ, can be found analytically from the procedure in Newman [22], with a resulting value of 0.5. The reference oxygen concentration in this model has a value of 12 mol/m^3. α[c] and α[a] denote the cathodic and anodic transfer coefficients, which, in this model, have the values of 1 and 0.5, respectively. The specific reaction surface can be derived from m[pt], the catalyst mass loading per unit area of the cathode, δ[c], the thickness of the catalyst layer, and A[s], the catalyst surface area per unit mass of the catalyst. Another process modeled in the catalyst layer is the ohmic loss, which is found using Ohm's law; the resulting equation describes the ohmic losses through the catalyst layer and involves the effective conductivities of the membrane phase and of the catalyzed solid phase.
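The Butler-Volmer rate law named above can be sketched in its generic form. The document's transfer coefficients (α[a] = 0.5, α[c] = 1) and reaction order (γ = 0.5) are used, but the reference current density and concentration ratio passed in are placeholders, not the paper's fitted values:

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # universal gas constant, J/(mol K)

def butler_volmer(i0_ref, c_ratio, gamma, alpha_a, alpha_c, eta, T):
    """Net reaction current density from the generic Butler-Volmer equation.

    c_ratio = C / C_ref (local over reference oxygen concentration);
    gamma is the reaction order; eta is the over potential in volts.
    """
    return (i0_ref * c_ratio ** gamma
            * (math.exp(alpha_a * F * eta / (R * T))
               - math.exp(-alpha_c * F * eta / (R * T))))

# At zero over potential the forward and backward terms cancel: no net current.
print(butler_volmer(1e-4, 1.0, 0.5, 0.5, 1.0, 0.0, 353.15))   # 0.0
```

The exponential dependence on η is what makes the catalyst-layer equations stiff and motivates the boundary value problem treatment described later.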
These can be related to the bulk conductivities and corrected for the porosity of the catalyst layer. In these equations, the bulk conductivities of the membrane and of the solid catalyst appear, together with the fraction of membrane in the catalyst layer and ϕ, the void fraction of the catalyst layer, which can be calculated depending upon the type and thickness of the catalyst layer. ρ[pt] and ρ[c] are the densities of the platinum and of its carbon support, and f[pt] represents the amount of platinum catalyst on its carbon support. The bulk conductivity of the membrane depends on the membrane type and the temperature of the cell; for a Nafion membrane, the electric conductivity is given in [ ]. Also, the mass diffusion process is modeled using conservation of species and Fick's law of diffusion in the catalyst layer; the resulting equation describes the mass transport through the cathode catalyst layer in terms of the protonic current density and the effective diffusion coefficient of oxygen in the catalyst layer [ ]. In this model, the mass transfer of oxygen from the flow channel to the reaction sites of the cathode catalyst layer determines the shape of the polarization curve in the concentration over potential region [18]. The boundary conditions that apply to these equations require that, at the cathode electrode/catalyst layer interface, the protonic current density be equal to zero, because the electrode is ionically insulated. Also, at the other end of the catalyst layer, the protonic current density must be equal to the cell current density. Finally, the concentration profile of the oxygen at the catalyst layer must match the one calculated for the electrode at that point. These boundary conditions are summarized below [ ]. The membrane in this model is assumed to be fully hydrated and one-dimensional.
As a result, the ohmic loss incurred by the membrane is expressed in terms of δ[m], the thickness of the membrane; K[m], the electrical conductivity of the membrane, which is a function of the cell temperature; K[p], the hydraulic permeability; K[E], the electrokinetic permeability; ΔP[a-c], the pressure differential across the membrane; C[H]^+, the fixed charge concentration; and $\mu_{H_2O}$, the viscosity of liquid water. The values of K[m], K[p], C[H]^+, and K[E] for Nafion can be found in Bernardi and Verbrugge [13,14].

Bipolar Plate. In a typical PEM cell, the bipolar plate is a plate of graphite with a serpentine flow channel cut into it. These channels are used to deliver reactants to the electrode and to collect current from the electrode. Two different phenomena are modeled in the plate:

1. Mass transport in the flow channel
2. Electric resistance in the plate

Channel Flow. Many models [ ] use a plug flow assumption to model mass diffusion to the electrode surface. Making this assumption effectively ignores the convective effect on mass transport in the channel. In this model, mass transport is approximated using dimensionless parameters. A Sherwood number approximation [ ] is used to estimate the amount of reactant that migrates from the bulk of the stream to the surface of the electrode. This equation is valid for a binary system. Unmodified, a Sherwood number approximation would not be valid for a humidified air stream, because such a stream constitutes a ternary system. To overcome this complication, the air stream is reduced to a binary system of oxygen and a bulk mixture of H[2]O and N[2] by using a bulk diffusion correction. It is assumed that there is a constant concentration of O[2] at the electrode surface. Additionally, inlet effects and secondary flow patterns at the corners of the channel were ignored.
These assumptions give more conservative predictions, because adding correlations for the inlet and for flow around the corners would increase the mass transfer rate. In the mass transport expression, the molar flow rate and the logarithmic average of the concentration change between the bulk gas concentration and the electrode surface appear, together with the exposed area of the electrode on the channel, the bulk-corrected diffusion coefficient, and the hydraulic diameter of the channel. The Sherwood number, Sh, is a dimensionless parameter defined in terms of the mass transfer coefficient (mol/m^2 s) and the diffusion coefficient (m^2/s) of the diffusing species. For the flow channel, the Sherwood number was derived from the analogy between heat and mass transfer: the Nusselt number correlation [ ] for a three-sided adiabatic square duct with one constant heat flux wall was converted into a Sherwood number by using a conversion formula [ ], where Pr is the Prandtl number and Sc is the Schmidt number. The concentrations of reactants are calculated using logarithmic averages. A logarithmic average is defined as the equation below [ ]. In this equation, ΔC[1] = C[s] − C[inlet] represents the inlet concentration difference and ΔC[2] = C[s] − C[outlet] represents the outlet concentration difference. The figure shows how the channel is constructed in the cell. The area exposed to the electrode is defined as the length of the channel multiplied by the channel width. The length of the channel is derived by setting the length of the channel in the W direction equal to the total width of the end supports and the width of the channel; the length is measured from the center of the channel. For the distance in the W direction, W is the width of the cell and w[s] and w[c] are the widths of the support and channel, respectively.
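The logarithmic average defined above can be implemented with a guard for the degenerate case ΔC[1] = ΔC[2], where the ratio inside the logarithm is 1 and the formula's denominator vanishes; the limit is simply the common value:

```python
import math

def log_mean(dc1, dc2):
    """Logarithmic mean of the inlet/outlet concentration differences.

    dc1 = C_s - C_inlet, dc2 = C_s - C_outlet. Falls back to the arithmetic
    mean when the two differences are (nearly) equal, which is the limit of
    (dc1 - dc2) / ln(dc1 / dc2) as dc1 -> dc2.
    """
    if abs(dc1 - dc2) < 1e-12 * max(abs(dc1), abs(dc2), 1.0):
        return 0.5 * (dc1 + dc2)
    return (dc1 - dc2) / math.log(dc1 / dc2)

print(log_mean(2.0, 2.0))            # 2.0 (degenerate case)
print(round(log_mean(4.0, 1.0), 4))  # 2.164, between the two inputs
```

As with the log-mean temperature difference in heat exchanger analysis, the result always lies between the two endpoint differences and below their arithmetic mean.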
The channel length in the L direction is more complicated, because the inlets and outlets are of a different length than the inner channels. The length of an inlet or outlet is expressed in terms of L, the length of the cell; the length of an inner channel and the total distance in the L direction follow similarly. Summing up the totals for both width and length gives the total channel length. For the second group of terms, it is assumed that w[s] has the same value with respect to both length and width. Also, the ohmic resistance, which is calculated in Sec. 2.1, is assumed constant within the temperature range 300-370 K, and diffusion is assumed to spread O[2] evenly throughout the electrode surface such that there is a constant O[2] concentration at the electrode catalyst layer.

Boundary Conditions. In order to obtain a well-posed problem, the derived equations must be supplemented with a number of boundary conditions. At the gas diffusion layer (GDL)/catalyst layer (CL) interface, Z=0, oxygen enters the CL pores from the GDL, and, since the pores are assumed to be flooded with liquid water, the oxygen passes through the GDL/CL interface at Z=0 by first dissolving in water. The oxygen concentration at the interface is therefore determined using Henry's law, where the oxygen partial pressure is the product of the oxygen mole fraction and the gas mixture pressure at the GDL/CL interface. The Henry constant (in atm m^3) is typically assumed to be a function of temperature [ ], where T is measured in kelvin. Because the GDL is very thin, the pressure conditions at the GDL/CL interface are set to the cathode channel values, which, in this model, have been taken to be X[O2]=0.21 and P=5 atm.
The boundary condition for the protonic current density comes from the assumption that all protons are consumed before they reach the GDL/CL boundary. Also, at the CL/membrane interface, the protonic current density approaches its ultimate value (i.e., the cell current density, I[δ]), and oxygen cannot diffuse into the membrane. The procedure for solving the formulated model to yield a cell voltage for given structural and operational parameters is as follows: first, the reversible voltage of the cell is calculated for the reference temperature and pressure (T=60°C, P=5 atm) using Eq. (5). Then, the losses in all parts of the cell are calculated to obtain the overall cell voltage from Eq. (3). Also, to calculate the influence of the catalyst layer on the overall cell voltage, the ordinary differential equations consisting of Eqs. (14), (16), and (21), together with the boundary conditions stated for the catalyst layer in Eqs. (38), (41), and (43), must be solved; they represent a two-point boundary value problem. The built-in shooting method algorithm within the boundary value problem function of the Matlab software package has been used for the present computation [27]. The polarization curve obtained from the present calculations in the base case condition (T=353.15 K, P=5 atm), except with m[Pt]=0.35 mg/cm^2 (f=10 Pt/C Mass %) and R[Ohmic]=0.225 (Ω/cm^2), is compared with the experimental data of Ticianelli et al. [28]. As shown in Fig. 4, the results predicted by the numerical model are in good agreement with the experimental data. The study of the structural parameters of the bipolar plates, namely W[c], W[s], ϕ, n[g], h[p], and h[c], which represent the channel width, support width, void fraction, number of gas channels, height of support, and height of channel, respectively, has been carried out using the Marr [15] model (Fig. 2). The values of the other parameters are listed in Table 1.
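The catalyst-layer equations form a two-point boundary value problem solved by shooting. The sketch below illustrates the shooting idea itself on a toy problem, y'' = −y with y(0) = 0 and y(1) = sin(1), whose exact solution is y = sin(x), so the unknown initial slope is y'(0) = 1. It is not the fuel-cell model, and bisection stands in for Matlab's built-in machinery:

```python
import math

def integrate(slope0, n=1000):
    """RK4-integrate y'' = -y from x=0 with y(0)=0, y'(0)=slope0; return y(1)."""
    h = 1.0 / n
    y, v = 0.0, slope0
    f = lambda y, v: (v, -y)        # first-order system: (y', v') with v = y'
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return y

# Shooting: adjust the unknown initial slope until the far boundary condition holds.
target = math.sin(1.0)
lo, hi = 0.0, 2.0                   # bracket for the unknown y'(0)
for _ in range(60):                 # bisection on the boundary miss distance
    mid = 0.5 * (lo + hi)
    if integrate(mid) < target:
        lo = mid
    else:
        hi = mid

print(round(0.5 * (lo + hi), 6))    # 1.0, i.e. the exact slope cos(0)
```

In the actual model the "miss distance" would be the mismatch in the protonic current density boundary condition at the CL/membrane interface rather than a value of y at x = 1.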
Since any structural change in the channels alters the pressure losses, it was assumed that these losses are compensated by external means and that the pressure remains constant (5 bar). By comparing the results obtained in this research with other models and other sizes of PEM cells, it is revealed that these results are extendable to other fuel cells with any dimensions and structural and operational parameters.

Width of Channels (W[c]). According to the obtained results, when the other parameters are held constant and W[c] is changed, the outlet voltage changes according to Fig. 5. As shown in this diagram, when the number of channels or grooves in the plate is constant, the voltage increases as W[c] decreases. This change is very small and, according to Fig. 6, the changes in W[c] are more noticeable at high current densities, so reducing W[c] can be a good way to increase the total voltage at high current densities. Figure 5 shows the change of voltage for various channel widths at n[g]=12 and I[δ]=10 000 (A/m^2). Figure 6 shows the voltage and power difference when W[c] changes from 0.003 (m) to 0.00001 (m) while the other parameters are constant. More surface area is necessary at high current densities; however, when the width of the channels decreases, the support area increases. The increase of the support area changes the void fraction of the electrode, as compression decreases, and consequently changes the ability of the reactant to diffuse to the catalyst surface directly underneath the support. So, as W[c] decreases, the ohmic resistance decreases, and the change of the electrode void fraction also influences the cell voltage. The influence of the electrode void fraction on the cell voltage is discussed in Sec. 4.2.

Void Fraction. The electrode void fraction is one of the important structural parameters influencing the cell voltage.
According to Marr [20], when the void fraction of the electrode increases, mass transfer increases, and so does the electrode resistance. The corresponding increase in resistivity from the increase of the electrode void fraction is swamped by the performance gain from better gas diffusion. As shown in Fig. 7, increasing the void fraction leads to an enhancement of the cell voltage.

Width of Support (W[s]). According to Eq. (8), for a constant number of channels in a bipolar plate with defined dimensions, W[s] increases as W[c] decreases. The outlet voltage therefore increases with W[s]; this increase in voltage is due to the decrease of ohmic resistance, in addition to the changes in void fraction discussed in Sec. 4.2. Also, this increase of the outlet voltage with W[s] is more evident at high current densities. Figure 8 shows the change of voltage for various sizes of W[s] at n[g]=12 and I[δ]=10 000 (A/m^2). Figure 9 shows the voltage and power difference when W[s] changes from 0.003 (m) to 0.00001 (m) while the other parameters are constant.

Number of Channels (n[g]). It was seen that the outlet voltage increases as W[c] decreases. Also, at high current densities the effects of the structural parameters are more noticeable, and more reaction area is needed. Increasing n[g] increases the contact surface but, on the other hand, the width of the supports decreases as the number of channels increases. This changes the void fraction and, accordingly, the ohmic resistance of the plate. Therefore, the number of channels has an optimum value, as clearly shown in Fig. 10. According to the voltage-n[g] diagram, when n[g] increases at constant W[c] and I[δ], the voltage increases and reaches a maximum at a definite value of n[g] for each size of W[c].

Height of Channels (h[c]).
From the data obtained by the numerical solution method, and according to Fig. 11, it is clear that, as h[c] (the height of the channels, shown in Fig. 2) decreases, the voltage increases. This is due to the decrease of ohmic resistance. In this diagram, the number of gas channels, the width of the channels, and the mass flow rate of the channels are constant and h[c] is varied. Also, in Fig. 12, the polarization curves for two different h[c] demonstrate the increase of voltage at higher current densities due to the decreasing height of the channels; the change in h[c] has more impact on the outlet voltage at higher current densities.

Height of Support (h[p]). Studying the effect of the height of the supports, shown in Fig. 2, makes it evident that reducing h[p] increases the outlet voltage due to the reduction of ohmic resistance. Figure 13 shows the influence of h[p] on the outlet voltage; the other structural and operational parameters remain constant in this investigation. Figure 14 shows the voltage and power difference when h[p] changes from 0.009 (m) to 0.00001 (m) while the other parameters are constant. Furthermore, the effect of the channel height is more noticeable at high current densities, and, according to Fig. 15, the channel height has a stronger effect on the cell voltage than the support height.

The investigation of this isothermal, steady, one-dimensional PEM fuel cell model shows that the present model is in excellent agreement with experimental data. It was also found that the structural parameters of the bipolar plates, and the way they are designed and constructed, have an impact on the outlet voltage at high current densities. The results indicate that decreasing the width of the gas channels increases the outlet voltage. Similarly, it is evident that the height of the channels, the height of the supports, and the number of gas channels all influence the outlet voltage.
Also, the results show that the contact area between the bipolar plates and the electrodes and the surface area of the gas channels have a great effect on the rate of reaction and, consequently, on the cell voltage. The changes in the number of gas channels show that there is an optimum number of channels for each size of channel width. Likewise, increasing the number of bipolar plate channels is not automatically beneficial for reaching the maximum power density; it depends on other parameters such as the contact area, the width of the channels, and the width of the supports.

References

[1] W. R. , "On Voltaic Series and the Combination of Gases by Platinum," London Edinburgh Philos. Mag. J. Sci., Series 3, pp.
[2] , "The Birth of the Fuel Cell 1835-1845," European Fuel Cell Forum, Oberrohrdorf, Switzerland.
[3] P. H. , and K. B. , "A Numerical Study of Channel-to-Channel Flow Cross-Over Through the Gas Diffusion Layer in a PEM-Fuel-Cell Flow System Using a Serpentine Channel With a Trapezoidal Cross-Sectional Shape," Int. J. Therm. Sci. (in press).
[4] M. S. , and H. S. , "Numerical Investigation of Transport Component Design Effect on a Proton Exchange Membrane Fuel Cell," J. Power Sources (in press).
[5] J. S. , and T. V. , "An Along-the-Channel Model for Proton Exchange Membrane Fuel Cells," J. Electrochem. Soc., pp.
[6] S. H. , and B. L. , "A Mathematical Model for PEMFC in Different Flow Modes," J. Power Sources, pp.
[7] J. S. , and T. V. , "Multicomponent Transport in Porous Electrodes of Proton Exchange Membrane Fuel Cells Using the Interdigitated Gas Distributors," J. Electrochem. Sci., pp.
[8] J. S. , and T. V. , "Two-Phase Flow Model of the Cathode of PEM Fuel Cells Using Interdigitated Flow Fields," AIChE J., pp.
[9] , and R. G. , "Effect of Gas Flow-Field Design in the Bipolar/End Plates on the Steady and Transient State Performance of Polymer Electrolyte Membrane Fuel Cells," J. Power Sources, pp.
[10] , and R. G. , "Effect of Channel Dimensions and Shape in the Flowfield Distributor on the Performance of Polymer Electrolyte Membrane Fuel Cells," J. Power Sources, pp.
Understanding Data Structures in C#: A Comprehensive Guide - CODERZON

Data structures are fundamental to programming, enabling efficient data storage, manipulation, and retrieval. In C#, data structures are part of the System.Collections, System.Collections.Generic, and System.Collections.Concurrent namespaces, offering a rich set of tools for developers. This article will explore various data structures available in C#, their use cases, and implementation details.

Table of Contents
1. Introduction to Data Structures
2. Primitive Data Structures
3. Non-Primitive Data Structures
   • Lists • Linked Lists • Stacks • Queues • HashTables • Dictionaries • Sets • Trees • Graphs
4. Choosing the Right Data Structure
5. Conclusion

1. Introduction to Data Structures
Data structures are specialized formats for organizing, processing, retrieving, and storing data. They are essential for managing large amounts of data efficiently, which can be crucial for algorithm performance. In C#, data structures can be classified into two broad categories: Primitive and Non-Primitive.

2. Primitive Data Structures

2.1 Arrays
An array is a collection of elements, all of the same type, stored in contiguous memory locations.
• Fixed size
• Zero-based indexing
• Efficient access time (O(1) for read/write)

int[] numbers = new int[5] {1, 2, 3, 4, 5};

Use Cases:
• Storing a collection of data elements when the size is known and fixed.
• Iterating over elements quickly.
Pros:
• Fast access to elements.
• Simple to use.
Cons:
• Fixed size, which can be inefficient in terms of memory usage.
• Insertion and deletion of elements can be costly.

2.2 Strings
A string in C# is a sequence of characters, implemented internally as an array of char.
• Immutable (once created, cannot be changed)
• Unicode support
• Rich set of methods for manipulation (e.g., Substring, IndexOf, Replace)

string message = "Hello, World!";

Use Cases:
• Storing text data.
• Manipulating text data.
Pros:
• Rich set of built-in methods.
• Easy to use.
Cons:
• The immutable nature can lead to performance overhead if heavy string manipulation is required.

3. Non-Primitive Data Structures

3.1 Lists
A List<T> is a dynamic array that can grow as needed, part of the System.Collections.Generic namespace.
• Dynamic size
• Zero-based indexing
• Implements IList<T> and ICollection<T> interfaces

List<int> numbers = new List<int> {1, 2, 3, 4, 5};

Use Cases:
• When you need a dynamic array that can grow in size.
• Frequently adding and removing elements.
Pros:
• Dynamic size.
• Flexible and easy to use.
Cons:
• Slower access compared to arrays in some scenarios.
• Performance overhead when resizing the underlying array.

3.2 Linked Lists
A LinkedList<T> is a linear collection of nodes where each node points to the next node in the sequence. In C# it is a doubly linked list.
• Nodes contain data and references to the next and previous nodes.
• Efficient insertion and deletion (O(1) when performed at the ends).
• Not index-based.

LinkedList<int> linkedList = new LinkedList<int>();

Use Cases:
• When frequent insertions and deletions at both ends are required.
• When the overhead of resizing an array is a concern.
Pros:
• Efficient insertions/deletions.
• No need to resize.
Cons:
• No random access (O(n) to find an element).
• Higher memory usage due to the additional pointers.

3.3 Stacks
A Stack<T> is a LIFO (Last In, First Out) collection.
• Push (add) and Pop (remove) operations.
• Peek operation to view the top element without removing it.

Stack<int> stack = new Stack<int>();
stack.Push(1);          // push first, so that Pop has something to return
int top = stack.Pop();

Use Cases:
• Implementing undo features.
• Navigating through recursive algorithms.
Pros:
• Simple and efficient for LIFO operations.
• O(1) for push and pop operations.

3.4 Queues
A Queue<T> is a FIFO (First In, First Out) collection.
• Enqueue (add) and Dequeue (remove) operations.
• Peek operation to view the front element without removing it.
Queue<int> queue = new Queue<int>();
queue.Enqueue(1);       // enqueue first, so that Dequeue has something to return
int front = queue.Dequeue();

Use Cases:
• Implementing task scheduling.
• Managing requests in a buffer.
Pros:
• Simple and efficient for FIFO operations.
• O(1) for enqueue and dequeue operations.

3.5 HashTables
A Hashtable is a collection of key-value pairs, where the key is hashed to find the corresponding value.
• Fast lookup based on keys (O(1) average case).
• Allows different types for keys and values.

Hashtable hashtable = new Hashtable();
hashtable.Add("key1", "value1");
string value = (string)hashtable["key1"];

Use Cases:
• Fast lookups by unique keys.
• Implementing caches.
Pros:
• Fast lookups.
• Flexible key and value types.
Cons:
• Unordered collection.
• Slower performance on large collections compared to Dictionary<TKey, TValue>.

3.6 Dictionaries
A Dictionary<TKey, TValue> is a generic collection of key-value pairs.
• Strongly typed (both key and value).
• Fast lookup based on keys (O(1) average case).
• Enumeration order is not guaranteed; do not rely on insertion order.

Dictionary<string, int> dictionary = new Dictionary<string, int>();
dictionary.Add("key1", 1);
int value = dictionary["key1"];

Use Cases:
• Fast lookups by unique keys.
• Storing related data with a clear key-value relationship.
Pros:
• Strongly typed.
• Fast lookups.
Cons:
• Higher memory usage due to the underlying hash table.
• Keys must be unique; adding a duplicate key with Add throws an exception.

3.7 Sets
A HashSet<T> is a collection of unique elements.
• Does not allow duplicate elements.
• Based on hashing, providing O(1) average case time complexity for add, remove, and contains operations.

HashSet<int> set = new HashSet<int>();
set.Add(1);             // add first, so that Contains(1) is true
bool contains = set.Contains(1);

Use Cases:
• When uniqueness of elements is a requirement.
• Implementing algorithms that rely on unique collections.
Pros:
• No duplicates.
• Fast operations.
Cons:
• Unordered collection.
• Cannot access elements by index.

3.8 Trees
A tree is a hierarchical data structure where each node has a value and references to child nodes.
BinaryTree is a common implementation.
• Consists of nodes connected by edges.
• Each node has at most two children in a binary tree.
• Used for hierarchical data representation.

public class TreeNode<T>
{
    public T Value;
    public TreeNode<T> Left;
    public TreeNode<T> Right;
}

Use Cases:
• Representing hierarchical data, like file systems.
• Implementing binary search trees for fast searching.
Pros:
• Efficient searching (O(log n) for balanced trees).
• Clear hierarchical structure.
Cons:
• Can become unbalanced, leading to poor performance (O(n) in the worst case).
• More complex to implement and manage.

3.9 Graphs
A graph is a collection of nodes (vertices) and edges connecting them. It can be directed or undirected.
• Nodes represent entities, edges represent relationships.
• Can be cyclic or acyclic.
• Supports various traversal algorithms (BFS, DFS).

public class GraphNode<T>
{
    public T Value;
    public List<GraphNode<T>> Neighbors;
}

Use Cases:
• Representing networks, like social networks or transportation networks.
• Solving problems involving connectivity, like finding the shortest path.
Pros:
• Flexible representation of relationships.
• Supports complex algorithms for traversal and searching.
Cons:
• Complex implementation.
• High memory usage for large graphs.

4. Choosing the Right Data Structure
Selecting the appropriate data structure depends on the specific requirements of the task at hand. Considerations include:
• Access Speed: For fast access, arrays or dictionaries are ideal.
• Memory Usage: Linked lists or trees can be more memory efficient in certain scenarios.
• Insertion/Deletion: Stacks, queues, or linked lists are better for frequent insertions and deletions.
• Uniqueness: Use sets to ensure all elements are unique.
• Hierarchical Data: Trees are the go-to choice for hierarchical data representation.
• Complex Relationships: Graphs are suitable for representing complex networks of relationships.

5.
Conclusion Understanding and effectively utilizing data structures is crucial for writing efficient and scalable code in C#. Each data structure offers unique benefits and trade-offs, and the choice of which to use should be guided by the specific needs of your application. Whether you’re working with simple collections or complex networks, C# provides a rich set of data structures to meet your needs. By mastering these, you can optimize your applications for performance, memory usage, and readability. This comprehensive guide should give you a strong foundation in C# data structures, helping you choose the right one for your projects. Happy coding!
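To tie the sections above together, here is a minimal, self-contained C# sketch (class and variable names are invented for illustration; this is not from the original article) exercising several of the collections discussed in this guide:

```csharp
using System;
using System.Collections.Generic;

class CollectionsRecap
{
    static void Main()
    {
        // Stack: LIFO — the last value pushed is the first popped.
        var stack = new Stack<int>();
        stack.Push(1);
        stack.Push(2);
        Console.WriteLine(stack.Pop());      // 2

        // Queue: FIFO — the first value enqueued is the first dequeued.
        var queue = new Queue<int>();
        queue.Enqueue(1);
        queue.Enqueue(2);
        Console.WriteLine(queue.Dequeue());  // 1

        // Dictionary: strongly typed key-value lookup.
        var counts = new Dictionary<string, int> { ["apples"] = 3 };
        Console.WriteLine(counts["apples"]); // 3

        // HashSet: duplicates are silently dropped on insertion.
        var unique = new HashSet<int> { 1, 2, 2, 3 };
        Console.WriteLine(unique.Count);     // 3
    }
}
```

Each call here behaves exactly as described in the corresponding section above; swapping the Stack for the Queue in a program is often all it takes to change LIFO processing into FIFO processing.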
Specific Impulse Calculation in context of specific impulse to thrust 30 Aug 2024 Title: A Comprehensive Analysis of Specific Impulse Calculation for Thrust-to-Specific-Impulse Optimization In the realm of propulsion systems, specific impulse (Isp) is a crucial parameter that quantifies the efficiency of a thruster in converting its energy into thrust. The relationship between Isp and thrust is complex, making it essential to develop accurate calculation methods for optimal system design. This article presents a detailed analysis of specific impulse calculation, focusing on the context of thrust-to-specific-impulse optimization. Specific impulse (Isp) is defined as the total impulse per unit weight of propellant, typically measured in seconds (s). It represents the efficiency of a thruster in generating thrust while minimizing propellant consumption. The thrust-to-specific-impulse ratio is a critical performance metric for propulsion systems, as it directly affects the overall system efficiency and mission duration. Theoretical Background: The specific impulse (Isp) can be calculated from the thrust and the propellant mass flow rate: Isp = F / (m_dot * g_0) = V_e / g_0 where: Isp = specific impulse (s) F = thrust (N) V_e = exhaust velocity (m/s) g_0 = standard acceleration due to gravity (9.81 m/s^2) m_dot = mass flow rate of propellant (kg/s) Thrust Calculation: The thrust (F) generated by a thruster can be calculated using the following formula: F = m_dot * V_e where: F = thrust (N) m_dot = mass flow rate of propellant (kg/s) V_e = exhaust velocity (m/s) Relationship between Isp and Thrust: The relationship between specific impulse and thrust is complex, as it depends on the thruster's efficiency, nozzle design, and propellant characteristics. However, combining the two formulas above gives the direct relation: F = Isp * m_dot * g_0 This equation highlights the direct correlation between specific impulse and thrust.
Optimization Strategies: To optimize the thrust-to-specific-impulse ratio, designers can employ various strategies, such as: 1. Nozzle optimization: By optimizing the nozzle design, engineers can increase the exhaust velocity (V_e) and subsequently improve the specific impulse. 2. Propellant selection: Selecting a propellant with a higher specific impulse or a more efficient combustion process can enhance the overall thrust-to-specific-impulse ratio. 3. Thrust vectoring: Implementing thrust vectoring techniques, such as gimbal or vernier thrusters, can improve the system's overall efficiency and thrust-to-specific-impulse ratio. In conclusion, specific impulse calculation is a critical aspect of propulsion system design, particularly in the context of thrust-to-specific-impulse optimization. By understanding the relationships between Isp, thrust, and propellant characteristics, engineers can develop more efficient and effective propulsion systems for various applications.
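The relations above are easy to sanity-check numerically. The sketch below is a minimal illustration; the propellant numbers are made up for the example and do not describe any particular engine.

```python
G0 = 9.80665  # standard acceleration due to gravity, m/s^2

def specific_impulse(v_e):
    """Specific impulse in seconds from exhaust velocity: Isp = V_e / g0."""
    return v_e / G0

def thrust(m_dot, v_e):
    """Thrust in newtons: F = m_dot * V_e."""
    return m_dot * v_e

# Made-up example: 2 kg/s of propellant at 2941.995 m/s exhaust velocity.
m_dot, v_e = 2.0, 2941.995
isp = specific_impulse(v_e)   # 300 s
f = thrust(m_dot, v_e)        # 5883.99 N
# Consistency with the combined relation F = Isp * m_dot * g0:
assert abs(f - isp * m_dot * G0) < 1e-9
```

Note that an exhaust velocity of about 2942 m/s corresponds to an Isp of 300 s, a typical order of magnitude for chemical rockets, which makes the units easy to verify by eye.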
Dividing Fractions Worksheet Pdf Grade 7

These pdf worksheets include division of any two proper fractions. An unlimited supply of printable worksheets for addition, subtraction, multiplication and division of fractions and mixed numbers. These dividing fractions worksheets all come with a corresponding printable answer page. Dividing fractions by fractions. Divide proper fractions 2. Dividing fractions by types. This is a comprehensive collection of free printable math worksheets for grade 7 and for pre-algebra, organized by topics such as expressions, integers, one-step equations, rational numbers, multi-step equations, inequalities, speed-time-distance, graphing, slope, ratios, proportions, percent, geometry and pi. Teachers should also check out dividing fractions lesson plans. Teachers, parents and students can print these worksheets out and make copies. Change the improper fractions to mixed numbers. These dividing fractions worksheets are printable. Dividing fractions t1s1. The worksheets are available as pdf and html; worksheets are randomly generated and come with an answer key. This product is suitable for preschool, kindergarten and grade 1; the product is available for instant download after purchase. Dividing fractions worksheets printable. Mixed numbers: identify which of the following are improper fractions. Fraction worksheets pdf, preschool to 7th grade. Worksheets math grade 5 fractions multiplication division. These worksheets are pdf files. Dividing whole numbers by fractions; dividing mixed numbers by fractions. Multiplication and division of fractions and mixed numbers. Pre-k, kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade, 5th grade, 6th grade and 7th grade worksheets cover the following fraction topics. Divide improper fractions 1.
Math workbook 1 is a content-rich downloadable zip file with 100 printable math exercises and 100 pages of answer sheets attached to each exercise. The videos, games, quizzes and worksheets make excellent materials for math teachers, math educators and parents. Below are six versions of our grade 6 math worksheet on dividing proper and improper fractions by other fractions. Divide improper fractions 2. Introduction to fractions, fractions illustrated with These grade 5 worksheets begin with multiplying and dividing fractions by whole numbers and continue through mixed number operations; all worksheets are printable pdf documents. Worksheets based on dividing any two improper fractions. Fraction worksheets for children to practice: suitable pdf printable math fractions worksheets for children in the following grades. Students should simplify answers where possible. Change the mixed numbers to improper fractions. Divide proper fractions 1.
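For anyone who wants to check worksheet answers programmatically, Python's standard `fractions` module divides fractions exactly. This is a small illustrative sketch (the example problems are made up, not taken from any of the worksheets above):

```python
from fractions import Fraction

# Dividing by a fraction is multiplying by its reciprocal:
# (1/2) ÷ (3/4) = (1/2) × (4/3) = 2/3
quotient = Fraction(1, 2) / Fraction(3, 4)
print(quotient)  # 2/3

# An improper-fraction result can be rewritten as a mixed number,
# as the worksheets ask: 7/10 ÷ 2/5 = 7/4 = 1 3/4.
q = Fraction(7, 10) / Fraction(2, 5)
whole, rest = divmod(q.numerator, q.denominator)
print(whole, Fraction(rest, q.denominator))  # 1 3/4
```

Because `Fraction` keeps results in lowest terms automatically, the printed values already match the "simplify answers where possible" instruction.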
Download Capillarity and Wetting Phenomena by Pierre-Gilles de Gennes PDF
By Pierre-Gilles de Gennes
The study of capillarity is in the midst of a veritable explosion. What is presented here is not a comprehensive review of the latest research but rather a compendium of principles designed for the undergraduate student and for readers interested in the physics underlying these phenomena.
Read or Download Capillarity and Wetting Phenomena PDF
Similar thermodynamics and statistical mechanics books
The question of how reversible microscopic equations of motion can lead to irreversible macroscopic behaviour has been one of the central issues in statistical mechanics for more than a century. The basic issues were known to Gibbs. Boltzmann conducted a very public debate with Loschmidt and others without a satisfactory resolution.
Complex Dynamics of Glass-Forming Liquids: A Mode-Coupling Theory
The book contains the only available complete presentation of the mode-coupling theory (MCT) of the complex dynamics of glass-forming liquids, dense polymer melts, and colloidal suspensions. It describes in a self-contained manner the derivation of the MCT equations of motion and explains that the latter define a model for a statistical description of non-linear dynamics.
Statistical Thermodynamics and Microscale Thermophysics
Many exciting new developments in microscale engineering are based on the application of traditional principles of statistical thermodynamics. In this text Van Carey offers a modern view of thermodynamics, interweaving classical and statistical thermodynamic principles and applying them to current engineering systems.
Extra info for Capillarity and Wetting Phenomena
Sample text (the opening equations of this excerpt are OCR-damaged and are not reproduced here).
The legible portions of the excerpt include: a low-temperature phonon calculation in which, for fixed solid angle, the upper limit of the $x$-integration is set by the limiting surface in $q$-space, but at low enough temperatures the integration extends to $x = \infty$, so the temperature dependence is carried by the prefactor and the thermal energy scales as $E_{th} \propto T^{3/t+1}$; a barometer-formula problem — use the canonical distribution to show that the particle concentration at height $z$ is $n(z) \propto \exp(-mgz/kT)$, and that if $pV = NkT$ holds at all levels the pressure varies as $p(z) = p(0)\,\exp(-mgz/kT)$; and a virial-theorem problem — for momenta entering only through a kinetic energy quadratic in the $p_{ij}$, together with the ergodic hypothesis that ensemble and time averages yield identical results, one proves that the virial satisfies $C = \tfrac{3}{2}\,nkT$.
Mathematics 574 Information
Selected Topics in Number Theory
Information concerning the course Mathematics 574 (Selected Topics in Number Theory), to be taught in Fall 2004 by J. Tunnell, is kept here. The course will be an introduction to rational points on elliptic curves through examples of interesting elliptic curves. It will meet Monday and Wednesday 2:50-4:10 in Hill 425.
The following information is currently available about this course:
• Posted course announcement
• Weekly assignments: Week 1
Derivatives depend on index position
{"url":"https://cadabra.science/qa/1250/derivatives-depend-on-index-position","timestamp":"2024-11-07T19:07:23Z","content_type":"text/html","content_length":"13502","record_id":"<urn:uuid:bd191c7f-89cc-490f-b062-447ada3c4764>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00669.warc.gz"}
Electrostatics in the presence of dielectrics: The benefits of treating the induced surface charge density directly A new method is presented for solving electrostatic boundary value problems with dielectrics or conductors and is applied to systems with spherical geometry. The strategy of the method is to treat the induced surface charge density as the variable of the boundary value problem. Because the potential is expressed directly in terms of the induced surface charge density, the potential is automatically continuous at the boundary. The remaining boundary condition produces an algebraic equation for the surface charge density, which when solved leads to the potential. The surface charge method requires the enforcement of only one boundary condition, and produces the induced surface charge in addition to the potential with no additional labor. The surface charge method also can be applied in nonspherical geometries and provides a starting place for efficient numerical solutions. , “ On a class of integrals of Legendre polynomials with complicated arguments—with applications in electrostatics and biomolecular modeling Physica A The following is a selection of textbooks that discuss electrostatics, but do not mention the surface charge method. D. J. Griffiths, Introduction to Electrodynamics (Prentice Hall, Englewood Cliffs, NJ, 1981); J. D. Jackson, Classical Electrodynamics, 2nd ed. (Wiley, New York, 1975); P. Lorrain, D. R. Corson, and F. Lorrain, Electromagnetic Fields and Waves, 3rd ed. (W. H. Freeman, New York, 1988); G. L. Pollack and D. R. Stump, Electromagnetism (Addison Wesley, San Francisco, 2002); M. Schwartz, Principles of Electrodynamics (Dover, New York, 1987); O. D. Jefimenko, Electricity and Magnetism, 2nd ed. (Electret Scientific Co., Star City, WV, 1989); J. A. Stratton, Electromagnetic Theory (McGraw-Hill, New York, 1941); W. K. H. Panofsky and M. Phillips, Classical Electricity and Magnetism (Addison-Wesley, Reading, MA, 1955); L. D. 
Landau and E. M. Lifshitz, Electrodynamics of Continuous Media, 2nd ed. (Pergamon, New York, 1984).
"Method for solving electrostatic problems having a simple dielectric boundary," Am. J. Phys.
"Electrostatic potential inside ionic solutions confined by dielectrics: a variational approach," Phys. Chem. Chem. Phys.
In a uniform linear dielectric with dielectric constant $\varepsilon$ and free and total charge densities $\rho_f$ and $\rho$, we have $4\pi\rho_f = \nabla\cdot\vec{D} = \nabla\cdot\varepsilon\vec{E} = \varepsilon\nabla\cdot\vec{E} = \varepsilon 4\pi\rho$, the first and last members of which imply that $\rho_f = \varepsilon\rho$. The first equality is the differential form of Gauss's law for the displacement field $\vec{D}$, the second equality follows from $\vec{D} = \varepsilon\vec{E}$ for a linear dielectric, the third equality is a consequence of the spatial uniformity of $\varepsilon$, and the fourth equality is a result of the differential form of Gauss's law for the electric field $\vec{E}$. At the boundary between two distinct linear dielectric materials, the third equality does not hold because the dielectric constant is not locally constant there. Therefore, the total charge density may be nonzero at the boundary even if the free charge density vanishes there. In the special case where the free charge distribution is in the form of a point charge $q_f$, the total charge (except that at the boundary) is $q_f/\varepsilon$. The total free charge is $Q_0$. The monopole contribution to the electric displacement outside the void is $\vec{D}_{mp} = Q_0\hat{r}/r^2$. Because $\vec{D}_{mp} = \varepsilon_{ex}\vec{E}$ in the infinite dielectric, $\vec{E}_{mp} = Q_0\hat{r}/(\varepsilon_{ex}r^2)$ and consequently $\Phi_{mp} = Q_0/(\varepsilon_{ex}r)$. The monopole moment is identified as $Q_0/\varepsilon_{ex}$, which must be the total charge of the charge distribution. If we let either dielectric constant go to infinity, we recover the case of a conductor. If the other dielectric constant is set to unity, corresponding to vacuum, we recover a situation commonly discussed in textbooks. See, for example, Griffiths, Ref. , pp. 85–93.
Indeed, the dielectric constants of the various finite pieces of material need not all be the same, but in order to avoid excessively elaborate notation at this stage, attention will be restricted to the case indicated. The generalization is straightforward. In the degenerate case that the three points are collinear, the $z$ axis can be taken to be perpendicular to the line that includes all three points. In this case, $\theta_1 = \theta_2 = \pi/2$. © 2004 American Association of Physics Teachers.
Superdiffusion in the periodic Lorentz gas
We prove a superdiffusive central limit theorem for the displacement of a test particle in the periodic Lorentz gas in the limit of large times t and low scatterer densities (Boltzmann–Grad limit). The normalization factor is √(t log t), where t is measured in units of the mean collision time. This result holds in any dimension and for a general class of finite-range scattering potentials. We also establish the corresponding invariance principle, i.e., the weak convergence of the particle dynamics to Brownian motion.
Rediscovering Trigonometry Part 3: Half Angle Formulas
Click here to see part one of this four week series. Click here to see part two of this four week series.
Last week, we figured out a way to figure out trigonometric functions for twice a given angle. This week, we will do the same, but for determining the trigonometric functions for half a given angle. First, let's discuss what half of an angle means. Remember that there are 360° in a circle, or 360° in a full revolution. This means that a number like 370° can also be expressed as 10°. Though they are different measurements, plugging 370° into a trigonometric function will yield the same answer as 10°. It is also equivalent in most other situations in mathematics. For double angle formulas, we did not need to discuss this. This is because when doing double angle formula calculations with these measurements, there would be no issue.
sin(2 • 10°) = sin(20°)
sin(2 • 370°) = sin(740°) = sin(740° - 720°) = sin(20°)
Note that 720° is two full revolutions around a circle, and thus, it can be subtracted off when performing a trigonometric operation. But performing a half angle calculation will create more of an issue. Let's use 10° and 370° again.
sin(1/2 • 10°) = sin(5°)
sin(1/2 • 370°) = sin(185°)
These answers are not the same. Since they are both in the 0° - 360° interval, we cannot make any assumptions. We do know that the sine and cosine of 185° are the negative sine and negative cosine of 5° respectively, but this proves that they are not equal. If one were to go up to 730°, they would be back to normal, however.
sin(1/2 • 730°) = sin(365°) = sin(365° - 360°) = sin(5°)
This means that for every angle, the half sine and half cosine function should yield two answers. The two answers should have the same absolute value, but different signs (they are the same number, but one is negative and one is positive).
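The sign relations above are easy to check numerically. Here is a quick sketch in Python (the helper names are my own, not from the post):

```python
import math

# Work in degrees, as the post does.
def deg_sin(d):
    return math.sin(math.radians(d))

def deg_cos(d):
    return math.cos(math.radians(d))

# sin and cos of 185° are the negatives of those of 5°...
assert math.isclose(deg_sin(185), -deg_sin(5), abs_tol=1e-12)
assert math.isclose(deg_cos(185), -deg_cos(5), abs_tol=1e-12)

# ...while half of 730° lands on 365°, which is equivalent to 5°.
assert math.isclose(deg_sin(730 / 2), deg_sin(5), abs_tol=1e-12)
```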
You may already have a function in your head that can create this type of situation, but we will be able to derive it as well. Take a variation of the cosine double angle formula that we derived last week:
cos(2α) = 1 – 2sin^2α
Let's try to isolate sinα. We would first move the 2sin^2α term over to the left hand side and the cos(2α) term over to the right hand side.
2sin^2α = 1 – cos(2α)
Divide through by 2 to get:
sin^2α = (1 – cos(2α))/2
And square root both sides to get:
sinα = ±√((1 - cos(2α))/2)
Notice how there is a ± sign in front of the square root. This is because when one squares a positive or negative value, it becomes positive. For instance, the equation x^2 = 25 would be solved as x = ±5 because (5)(5) = 25 and (–5)(–5) = 25. The same thing happened here. But also remember what we found before. We proved through logic that the half sine and half cosine of an angle has two answers, one negative and one positive. With that in mind, we can see that this is the accurate way to write the square root (some derivations call for just a positive answer, such as the Distance Formula).
Let's replace angle α with α/2 to keep the half angle definition. This gives a formula of:
sin(α/2) = ±√((1 - cosα)/2)
Let's derive a cosine half angle formula. We can take another variation on the cosine double angle formula and go forward.
cos(2α) = 2cos^2α – 1
This time, we will only need to add one to both sides to isolate the cosα term. Let's also flip the equation around to make it simpler.
2cos^2α = 1 + cos(2α)
Divide through by 2 to get:
cos^2α = (1 + cos(2α))/2
And square rooting both sides yields:
cosα = ±√((1 + cos(2α))/2)
Again, we end up with a ± sign in the formula. This means that we probably did everything correctly, as logic shows we will need this sort of sign to create two answers.
Rewriting α as α/2 gives a final formula of:
cos(α/2) = ±√((1 + cosα)/2)
It is tough to see what these formulas actually look like in this formatting, so I have written them out in LaTeX so you can see what is going on more conveniently. These formulas themselves are pretty cool, but the logic involved in finding them is also very interesting. The fact that we could predict the nature of the function before we even found it is really helpful. This can be huge in trying to figure out the right way to go about solving a problem.
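For reference, the two half angle formulas derived above can be typeset as:

```latex
\sin\frac{\alpha}{2} = \pm\sqrt{\frac{1-\cos\alpha}{2}},
\qquad
\cos\frac{\alpha}{2} = \pm\sqrt{\frac{1+\cos\alpha}{2}}
```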
Solutions of Linear Programming - Class 10 Optional Math
Short Questions
Define Linear programming and Linear programming problem.
Linear programming: Linear programming is a mathematical technique of finding the maximum or minimum value of the objective function satisfying the given conditions.
Linear programming problem: The problem which has the object of finding the maximum or minimum value satisfying all the given conditions is called a linear programming problem (LPP).
Define linear inequality.
A relation represented by ax + by + c > 0, ax + by + c < 0, ax + by + c ≥ 0 or ax + by + c ≤ 0 is known as a linear inequality in two variables x and y.
Define boundary line with example.
The boundary line is the line given by the corresponding equation of the given inequality; it divides the coordinate plane into two halves. It is dashed for > and < and solid for ≥ and ≤. For example: the boundary line of the inequality 5x - 8y ≥ 10 is 5x - 8y = 10.
In which condition is the boundary line a dotted line?
The boundary line is a dotted line if the inequality contains the > or < symbol.
Define decision variables.
The non-negative independent variables involved in the L.P. problem are called decision variables. For example: in 2x + 3y = 7, x and y are decision variables.
Define constraints with an example.
The conditions satisfied by the decision variables are called constraints. For example: if x and y be the number of the first two kinds of articles produced, then x + y ≥ 1000, x ≥ 0, and y ≥ 0 are the constraints.
Define feasible region and feasible solution.
Feasible region: A closed plane region bounded by the intersection of a finite number of boundary lines is known as a feasible region. It is also called a convex polygonal region.
Feasible solution: The values of the decision variables x and y involved in the objective function satisfying all the given conditions are known as a feasible solution.
What do you mean by objective function? Also write an example.
The linear function whose value is to be maximized or minimized (optimized) is called an objective function. For example: F = 4x - y, D = 5x - 6y.
In an objective function (F) = 5x - 2y, one vertex of the feasible region is (5, 2); find the value of the objective function.
Objective function, F = 5x - 2y
Vertex (x, y) = (5, 2)
Hence, F = 5(5) - 2(2) = 21
Which quadrants do the inequalities x ≥ 0 and x ≤ 0 represent in a graph?
The inequality x ≥ 0 represents the first and fourth quadrants, and the inequality x ≤ 0 represents the second and third quadrants.
Which quadrants do the inequalities y ≥ 0 and y ≤ 0 represent in a graph?
The inequality y ≥ 0 represents the first and second quadrants, and the inequality y ≤ 0 represents the third and fourth quadrants.
From the given inequality 4x + 3y ≥ 10, write the boundary line equation.
The boundary line equation of the inequality 4x + 3y ≥ 10 is 4x + 3y = 10.
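The vertex-evaluation procedure behind the objective function example above can be sketched in Python; every vertex here other than (5, 2) is made up for illustration:

```python
# Evaluate the objective function F = 5x - 2y at each corner of a
# (hypothetical) feasible region; the optimum of a linear program
# is attained at one of these vertices.
def objective(x, y):
    return 5 * x - 2 * y

vertices = [(0, 0), (5, 2), (0, 4), (6, 0)]  # assumed corner points
values = {v: objective(*v) for v in vertices}

print(values[(5, 2)])        # the worked example above: 5(5) - 2(2) = 21
print(max(values.values()))  # maximum of F over these vertices: 30
```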
Machine Learning Techniques For Analyzing Inscriptions From Israel
DHQ: Digital Humanities Quarterly, Volume 17 Number 2
The date of artifacts is an important factor for scholars to get a further understanding of culture and society of the past. However, many artifacts are damaged over time, and we can often only get fragments of information regarding the original artifact. Here, we use the inscription data from Israel as a model dataset and compare the performances of eleven commonly used regression models. We find that the random forest model would be the optimal machine learning model to predict the year of inscriptions from tabular data. We further show how we can make interpretations from the machine learning prediction model through a variable importance plot. This research shows an overview of how machine learning techniques could be used to resolve digital humanities problems by using the Inscriptions of Israel/Palestine dataset as a model dataset.
1. Introduction
The study of antiquity is full of missing data. The evidence that does survive – whether texts on papyrus or parchment; inscriptions; coins; or archaeological – frequently survives only in damaged form. That problem, however, is compounded by two additional complications. First, many of these data have been unearthed in non-controlled excavations and have taken winding paths to libraries, museums, and the hands of private collectors, along the way losing valuable contextual information. Second, scholars have used a bewildering array of conflicting and often inherently vague reporting methods. My "Roman period," for example, might be your "Byzantine period." As a result of this situation, scholars in ancient studies frequently find themselves unable to place or date evidence that could be critical to our deeper understanding.
Given the paucity of our information, for example, dating a particularly revealing inscription to the fifth or third century BCE, or as originating from Athens or Asia Minor, could have serious scholarly ramifications. Traditionally, scholars have used their own experience and specialized training to supply these missing contextual data [ Emmanuel et al. 2021 ]. Recently, however, there has been increasing interest in using machine learning techniques to supplement, or even replace, subjective and idiosyncratic (although sometimes brilliant) evaluations. For example, Niculae et al. [ Niculae et al. 2014 ] used machine learning techniques to date a corpus of older texts. In 2022, Assael et al. [ Assael et al. 2022 ] published their research into developing a machine learning platform that would aid the automated reconstruction and adding of missing contextual information to ancient Greek inscriptions. This platform, which they call Ithaca, is based on a deep neural network model. They demonstrate that such a technique greatly enhances scholarly expertise, although it cannot substitute for it. As impressive as Ithaca is, deep neural network techniques presently have limited applicability to other digital humanities projects. One inherent problem with using them is that they are "black box" models; their processes remain opaque. Furthermore, they need both technical expertise to implement and a large sample size to train the algorithm [ LeCun, Bengio, and Hinton 2015 ]. Datasets from antiquity, particularly those that exist in high-quality structured form, are rarely large enough to make this approach suitable. In this paper, we explore the utility of other machine learning algorithms for predicting values in incomplete datasets. We have determined that a random forest model has the most potential to predict these values, particularly in smaller datasets with several categorical variables.
While our own work was based on one dataset, "Inscriptions of Israel/Palestine," we believe that our results are applicable to other datasets as well.
2. Methods
2.1 Inscription dataset
The "Inscriptions of Israel/Palestine" (IIP) dataset is an online database which seeks to make all of the previously published inscriptions of Israel/Palestine from the Persian period through the Islamic conquest (ca. 500 BCE - 640 CE) freely accessible [ Satlow 2022 ]. This database includes approximately 4,500 inscriptions, and they are written primarily in Hebrew, Aramaic, Greek and Latin, by Jews, Christians, Greeks, and Romans. Some of the examples include imperial declarations on monumental architecture, notices of donations in synagogues and humble names scratched on ossuaries [ Satlow 2022 ]. Each inscription exists as a single XML file structured according to EpiDoc conventions [ Elliott, Bodard, Cayless, et al. 2006 ].
2.2 Variable Explanation
We consider the following characteristics in the dataset:
• Terminus ante quem (the latest possible date)
• Terminus post quem (the earliest possible date)
• Text Genre
• Language
• Material
• Region
• Likely Religion
It is worth noting that language, material, and region are objectively determined in most cases. Dating, on the other hand, is often determined subjectively by scholars; relatively few contain dates or were found in carefully controlled archaeological excavations. Thus, we examine how machine learning models can accurately predict the date of inscriptions given the information of other variables in the dataset. All variables inside the dataset except date are categorical, as they are not quantifiable.
2.3 Data Preprocessing
The IIP dataset is converted into a single csv file by using the ElementTree XML API in Python.
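The XML-to-CSV conversion step can be sketched as follows; the element names below are invented stand-ins, not the actual EpiDoc schema:

```python
import csv
import io
import xml.etree.ElementTree as ET

# A tiny stand-in for one inscription file (hypothetical tags).
xml_doc = """<inscription>
  <region>Judea</region>
  <language>Greek</language>
</inscription>"""

# Parse the XML and flatten each child element into one CSV column.
root = ET.fromstring(xml_doc)
row = {child.tag: child.text for child in root}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["region", "language"])
writer.writeheader()
writer.writerow(row)
print(buf.getvalue().strip())
```

In the real pipeline this would loop over all ~4,500 files and append one row per inscription.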
One of the features of the IIP dataset, like many others in the humanities, is that it contains a number of categorical variables, that is, different phrases that occur within a single XML element. For example, there are many different cities in the location element, and over fifty different text genres (e.g., funerary, dedicatory, label, prayer) are found within the appropriate element. The result of this is an imbalanced dataset. Imbalanced datasets occur when the proportion of minority class is significantly low compared with other classes in the dataset, and creating an effective machine learning algorithm with imbalanced datasets is a very difficult problem [ Johnson and Khoshgoftaar 2019 ]. It is also important to make sure that all the possible categorical values are in the training dataset to create a good machine learning model. For example, if the training set does not include any inscriptions that has the city name “Jerusalem”, it would be difficult for the machine learning algorithm to use the Jerusalem information in the test dataset. Error messages can come out in many machine learning programs when they encounter some information that is not included in the training dataset. Splitting the dataset into training and test set is done randomly, so it is possible for the minority class to not appear in both training and test set if the number of observations from the minority class is small. To have enough observations in the minority class, we combine various unique terms and generate a dataset that is better suited for the machine learning algorithm. We first fix spelling mistakes inside the dataset. Afterwards, we combine words that describe the same concept. For instance, “Golan Heights” and “Golan” can be grouped together as “Golan” and there is no need for the machine learning algorithm to consider these elements separately. 
There are also some phrases such as “dedicatory quotation” and “dedicatory verse”, where they describe different objects but can be grouped together as “dedicatory” to reduce the number of variables inside the dataset. However, we take a different approach with the “City Name” variable. There are 244 unique city names inside the dataset, and many of them only include a few inscriptions. Since the location of inscription is already indicated in the “Region” variable, we only consider “Jerusalem” and “Other Cities” to make the prediction easier to interpret. We also do not consider all variables in the dataset, such as condition of artifacts and relief style, as they have a lot of missing values. We use a technique called one-hot encoding to convert the categorical features to numerical features. There are many machine learning algorithms that can only analyze numerical data, so analyzing categorical variables without one-hot encoding can cause some issues. In this technique, we create a binary column for each category, where we denote the output of the column to be 1 if the variable is present and 0 if the variable is not present. For example, we create a new variable called Language_Greek to describe if the inscription is written in Greek or not. The Language_Greek variable will be 1 if the inscription is written in Greek, but 0 otherwise. These inscriptions are used to examine the performance of machine learning models to predict the time periods. The time period distribution of inscription is shown in Figure 1. The mean date is 109.68 CE with standard deviation 311.47. The large standard deviation implies that the dataset is suitable for conducting this analysis, as it has inscriptions from a wide range of time periods. After we preprocess the dataset, we select 650 inscriptions that actually contain a certain date. The overview of the steps that are taken in this research project is shown in Figure 2. 
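The Language_Greek example above can be sketched in plain Python (a library routine such as pandas' get_dummies does the same job); the rows below are invented, not drawn from the IIP data:

```python
# One-hot encode the Language column: one binary column per category.
rows = [
    {"Language": "Greek", "Region": "Judea"},
    {"Language": "Aramaic", "Region": "Galilee"},
    {"Language": "Greek", "Region": "Galilee"},
]

languages = sorted({r["Language"] for r in rows})
encoded = []
for r in rows:
    out = dict(r)
    for lang in languages:
        # e.g. Language_Greek is 1 iff the inscription is in Greek
        out[f"Language_{lang}"] = 1 if r["Language"] == lang else 0
    del out["Language"]
    encoded.append(out)

print([e["Language_Greek"] for e in encoded])  # [1, 0, 1]
```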
2.4 Machine Learning Techniques
We compare the performances of eleven machine learning models: linear regression, ridge regression, lasso regression, elastic net, decision tree, random forest, neural network, XGBoost, and support vector regression with linear, radial and polynomial kernels. We select these algorithms, as they require minimal hyperparameter tuning and do not require data transformation. Hyperparameters determine the overall behavior of the machine learning model, and they must be set appropriately by the user before conducting the analysis [ Claesen and DeMoor 2015 ]. Hyperparameter tuning is often performed manually, but it is impractical when we have many hyperparameters, and technical expertise is required to correctly set the hyperparameters [ Claesen et al. 2014 ]. To create a simple and reproducible machine learning prediction model, we try to select models that do not require fine parameter tuning. We will provide a brief overview of these techniques with some examples of previous studies in digital humanities. Readers who are interested in further details of machine learning techniques should consult Hastie et al. [ Hastie et al. 2009 ].
2.4.1 Penalized Regression Techniques
Ordinary least squares (OLS) regression is a commonly used statistical technique in regression problems. It assumes that there is a linear relationship between the predictor and response variable. We can directly observe the regression coefficients in OLS regression, so we can understand how the model is making predictions. There are, however, some disadvantages to using OLS. When the number of predictors becomes large, a small change in the training dataset can cause a large change in the prediction model produced by the OLS model [ Hastie et al. 2009 ]. Thus, penalization techniques are often used to improve the predictability of OLS while retaining its linear model structure [ Zou and Hastie 2005 ].
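The shrinkage idea can be illustrated with a synthetic sketch (not the paper's code): ridge regression's closed-form solution pulls the estimated coefficients toward zero as the penalty grows.

```python
import numpy as np

# Synthetic data: 50 observations, 3 predictors, known coefficients.
rng = np.random.default_rng(42)
X = rng.normal(size=(50, 3))
y = X @ np.array([3.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

coef_ols = ridge(X, y, 0.0)       # lam = 0 recovers ordinary least squares
coef_shrunk = ridge(X, y, 1000.0)  # a heavy penalty shrinks the estimates
print(np.abs(coef_shrunk).sum() < np.abs(coef_ols).sum())  # True
```

Lasso and elastic net apply the same principle with different penalty terms, which is why all three appear together in the comparison.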
These methods impose a shrinkage penalty and bring the estimated coefficients closer to zero [ Hastie et al. 2009 ]. We will be examining ridge, lasso, and elastic net, as they are commonly used penalization techniques. Penalization techniques are frequently used in digital humanities research projects that contain datasets with many variables. For example, Finegold et al. [ Finegold et al. 2016 ] used Poisson Graphical Lasso to reconstruct the historical social network in early modern Britain. They imposed the penalization technique in statistical graph learning methods to find out the relationship between people's names inside the historical documents. Considering that the number of distinct names inside the historical documents is large, penalized regression worked well for their study.
2.4.2 Support Vector Regression
Support Vector Regression (SVR) uses the same principle as the Support Vector Machine (SVM), which is one of the most widely used supervised machine learning techniques [ Drucker et al. 1996 ]. It is frequently used in digital humanities, including a study by Argamon et al. [ Argamon et al. 2009 ], where they used SVM to classify an author's gender from literary texts. The SVM algorithm conducts regression based on kernel functions, which convert the lower dimensional data into a higher dimensional feature space. We consider the performances of three kernels, linear, radial and polynomial, as they are commonly used kernels. Detailed information about the SVR mechanism can be found in [ Drucker et al. 1996 ].
2.4.3 Neural Network
Deep learning algorithms make predictions based on a neural network structure, which is inspired by the human nervous system [ Goodfellow, Bengio, and Courville 2016 ]. It has been used in multiple algorithms in digital humanities studies, including a study by Assael et al. [ Assael et al. 2022 ] where they implemented a deep learning algorithm to predict contextual information based on the textual information in ancient Greek inscriptions. To examine the performance of the deep learning technique, we fit a single-hidden-layer neural network, as it has been shown that low complexity deep learning models perform better when the sample size is small [ Brigato and Iocchi 2021 ].
2.4.4 Tree Based Approach
We examine three different tree based machine learning techniques: decision tree, random forest and Extreme Gradient Boosting (XGBoost). Random forest and XGBoost are tree ensemble methods, and they are considered to be the recommended tools to analyze tabular datasets [ Borisov et al. 2022 ]. Ensembles are methods that combine multiple machine learning techniques to create more powerful models, and tree ensemble methods are used extensively in various digital humanities research. For example, a recent project by Baledent et al. [ Baledent, Hiebel, and Lejeune 2020 ] used decision trees and random forests to automatically date French documents with high predictability. Fragkiadakis et al. [ Fragkiadakis, Nyst, and Putten 2021 ] compared the performances of various machine learning techniques to annotate video data with sign languages, and showed that XGBoost was the optimal model to predict the begin and end frames of a sign sequence in a video. The decision tree is the foundation of the random forest and XGBoost models. It is considered to be one of the most interpretable machine learning methods for data analysis, as it can classify data based on a set of yes/no questions. However, decision trees can be very non-robust, and a minor change in the training data can result in a large change in the final tree [ Hastie et al. 2009 ].
XGBoost is a tree ensemble machine learning algorithm that uses gradient boosted decision trees.
It has a tree learning algorithm that enables it to learn from sparse data, and it can analyze data faster than other popular machine learning techniques [ Chen and Guestrin 2016 ]. The gradient boosting algorithm generates one tree at a time based on the previous model's residuals, and then they are combined to make the final prediction. In our analysis, we generate 150 trees in the final model, where the maximum depth of each tree is three. The random forest algorithm is another tree ensemble machine learning algorithm that generates hundreds of decision trees by using a random subset of predictors in the bootstrapped samples. Bootstrapping is a statistical technique that repeatedly draws samples from the data with replacement [ Hastie et al. 2009 ]. Since the same element can appear multiple times in the new sample, this technique generates a large number of new datasets that are not exactly the same as the original model. The average of the decision trees generated from the bootstrapped samples is examined to make the final prediction. Random forest can also be used to rank the predictor variables based on their ability to decrease the sum of squared errors when they are chosen to split the data [ Breiman 2001 ]. This is an important aspect of random forest, as we can understand which variables are important in the regression model to predict the criterion variable. Due to these advantages, multiple studies highlight that random forests have emerged as serious competitors to other machine learning models for predicting numerical and categorical variables [ Belgiu and Drăguţ 2016 ].
To implement random forests, we only need to specify the number of trees and the number of features in each split. In terms of the number of trees, it has been shown that implementing many trees will provide a stable result of variable importance [ Liaw and Wiener 2002 ] and using more than the required number of trees does not harm the model [ Breiman 2001 ].
Many studies use p/3 features in each split for regression problems, where p is the number of predictor variables [ Liaw and Wiener 2002 ].
2.5 Metric
Metrics are used to quantify the accuracy of the machine learning model once we obtain the machine learning models. We will examine three commonly used metrics to evaluate the machine learning algorithm: root mean square error (RMSE), mean absolute error (MAE), and R-squared. 10-fold cross-validation is performed to compare the performances of machine learning algorithms. In k-fold cross validation, we split the dataset into k smaller sets with an equal number of elements and use k-1 sets to train the model, while the remaining set is used to evaluate the model [ Hastie et al. 2009 ]. We repeat the above iteration thirty times and compute the mean value of the determined metrics in cross validation to determine the optimal machine learning model for predicting the date.
2.6 Programming
We use R version 4.1.3 to perform the data analysis and Python version 3.8.3 to obtain the XML dataset from the IIP database [ Satlow 2022 ]. The dataset and the codes that we use to obtain the dataset are openly available to the public.
3. Results
3.1 Variable Relationship
It is important to understand the relationship between the predictor variables in the dataset before conducting machine learning analysis, as we can understand the issues behind effectively analyzing the dataset. Considering that all predictor variables are categorical, we use Pearson's chi-squared test of independence to examine the association between the variables in the dataset. We have examined the association between:
1. Language and Location
2. Religion and Location
3. Religion and Language
4. Religion and Text Genre
The residual plots of the chi-squared test are shown in Figure 3. Results from the chi-squared test indicate that there is a significant relationship between all the examined combinations (p<0.001).
The results imply that the predictor variables are correlated with each other, which raises the problem of multicollinearity. Multicollinearity occurs when independent variables in the regression model are correlated with each other [ Alin 2010 ]. One of the key assumptions of the linear regression model is that the predictor variables are uncorrelated. Thus, multicollinearity can undermine the statistical significance of an independent variable and can give inaccurate coefficient estimates when traditional statistical techniques are used [ Allen 1997 ]. In contrast, due to recent advances in machine learning techniques, it has been reported that machine learning can better analyze data with multicollinearity than traditional statistical techniques [ Chan et al. 2022 ]. The presence of multicollinearity suggests the usage of machine learning techniques to effectively analyze the dataset.
3.2 Machine Learning Model Comparison
A total of eleven regression models are compared: linear regression, ridge regression, lasso regression, elastic net, decision tree, random forest, neural network, SVR linear, SVR radial, SVR polynomial, and XGBoost. The optimal hyperparameters of the machine learning algorithms are determined through 10-fold cross-validation. The cross-validation procedure is repeated 3 times, and the machine learning model is tested by using a total of 30 different datasets, each of which is generated through cross-validation. The evaluation results are shown in Figure 4, where Figure 4 (a) shows the distribution of RMSE from 30 different datasets and Figure 4 (b) shows the mean values of MAE, RMSE and R-squared. MAE and RMSE measure the error of the machine learning model and R-squared is a goodness of fit measure. The random forest model has the lowest value for MAE and RMSE, and has the highest value for R-squared among all models that are examined. This implies that the random forest model is the optimal model for predicting the date of inscriptions.
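The three metrics used in the comparison above can be written out directly; a small sketch with made-up predicted dates:

```python
import math

# Hand-rolled versions of the three evaluation metrics (a sketch,
# not the paper's R code).
def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [100, 200, 300, 400]  # made-up inscription dates (CE)
y_pred = [110, 190, 310, 390]  # made-up model predictions

print(mae(y_true, y_pred))                  # 10.0
print(rmse(y_true, y_pred))                 # 10.0
print(round(r_squared(y_true, y_pred), 3))  # 0.992
```

MAE and RMSE report error in the units of the response (here, years), while R-squared is unitless, which is why the paper reports all three side by side.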
3.3 Random Forest Model

We describe the random forest model in detail, as it is the best model implemented in the previous section. We initially vary the number of trees that the random forest model generates from 100 to 1000 to determine the optimal number of trees for the algorithm, but we do not observe any significant differences. Thus, we select 500 trees, as that is the default number of trees in R's randomForest package [Liaw and Wiener 2002]. Figure 5 (a) shows the variable importance plot of the random forest model. Variable importance is based upon the mean increase in mean squared error that results from permuting a given variable [Liaw and Wiener 2002]. The plot suggests that the material of the inscription is the most important variable in the prediction model. The prediction plot is shown in Figure 5 (b). The machine learning model is trained on the training dataset, which includes 70% of the randomly chosen inscriptions in the data. The prediction plot is created using the test dataset, which is not used to train the machine learning model.

4. Discussions

4.1 Categorical Analysis

Every region and community in the Mediterranean in antiquity had its own epigraphic characteristics. The statistical analysis reveals some features of different communities within Judea/Roman Palestine. From the residual plot in Figure 3 (a), we can infer that inscriptions in Judea have a higher probability of being written in Hebrew and inscriptions in the Negev have a higher probability of being written in Aramaic. There is also a very strong positive association between Aramaic and Samaria. However, there is a lower probability of Aramaic inscriptions being found in the Coastal Plain. When we examine the residual plot in Figure 3 (b), there is a higher possibility of discovering Christian inscriptions in the Coastal Plain, Galilee, and the Negev. There is a higher possibility of discovering Jewish inscriptions in Judea and Galilee.
However, the probability of finding Christian and Jewish inscriptions in Samaria is lower than in other regions, and there are many inscriptions from other religions. These results are consistent with what we would expect from other historical sources. The residual plot of language and religion is shown in Figure 3 (c). Christian inscriptions have a strong positive association with Greek but a negative association with other languages, specifically Aramaic. Inscriptions written in Aramaic and Hebrew are more likely to be Jewish inscriptions. The relationship between text genre and religion is shown in Figure 3 (d). The plot implies that Christian inscriptions tend to be funerary or invocation related compared with those of other religions, but the probability of a Christian inscription being a document or a legal/economic text is lower. It seems that Christian inscriptions tend to be more religious and less administrative.

4.2 Machine Learning Model

We are able to conclude that the random forest model is the optimal machine learning model for predicting the time periods of inscriptions. This is consistent with previous research, as it has been reported that tree-ensemble algorithms like random forests are better suited to analyzing tabular data [Shwartz-Ziv and Armon 2022], which is the data type that we use in our project. If we examine the metric values in Figure 4 (b), we see that random forest, XGBoost, and decision tree perform better than the other models that we have examined. This highlights the importance of using tree-based machine learning algorithms to analyze tabular datasets. Our study also shows that linear models do not perform well compared with other methods when analyzing tabular datasets that consist of only categorical variables. This might be due to nonlinear interactions between the variables in the dataset. In terms of SVM, it is important to select the appropriate kernel for each dataset.
In our example, we see that SVR with a linear kernel performs the worst of the three kernels that we have examined. However, there are many kinds of kernels in SVM, and selecting the optimal kernel for a dataset is a challenging problem. The results also suggest that the predictive ability of neural networks is not high compared with tree-based algorithms when analyzing tabular datasets. Many digital humanities datasets are tabular, and they are rarely large enough to train deep learning algorithms effectively. Our results are consistent with the previous study by Shwartz-Ziv & Armon [Shwartz-Ziv and Armon 2022], which also showed that tree-ensemble methods are better than deep learning techniques for analyzing tabular data. In contrast, a random forest model can easily be implemented by specifying two hyperparameters of the model, and it is possible for a random forest model to capture the nonlinear interactions inside the dataset. This is a major advantage of random forests, as most machine learning models require fine tuning of hyperparameters [Hastie et al. 2009].

In spite of advances in machine learning techniques, it is still necessary for epigraphers to analyze inscriptions. According to the variable importance plot of the random forest model (Figure 5 (a)), material is the most important variable in making predictions, but we cannot ignore the effects of other variables, including religion and text genre. These variables are subjective, which underscores the importance of having humans classify the inscriptions as well. Even in the research project conducted by Assael et al. [Assael et al. 2022], accuracy was best when the deep learning algorithm was paired with historians. Machine learning algorithms are still not perfect, so it is important to incorporate knowledge from both human scholars and computers to analyze the dataset effectively.

5. Conclusions

We show how machine learning techniques can be used to make predictions from a tabular dataset comprised of categorical variables. It is uncommon for humanities data to include all elements of a dataset. This can be due to the damage artifacts suffer over time and to many texts being available only in fragments. Instead of using only one element of the dataset to make predictions, it is important to incorporate the other elements of the dataset to date the artifacts effectively. As a next step, we plan to integrate a deep learning framework into the machine learning model that we have created, so that we can incorporate both the textual data and the tabular data of inscriptions in the prediction model to achieve better accuracy. The results of our work indicate that computers can successfully be taught to predict missing characteristics of historical artifacts. The widespread use of machine learning techniques offers exciting prospects in epigraphy and related fields. Even though the dataset is not large, we provide an example in which machine learning techniques can be used effectively to make predictions. Beyond the inscription dataset, our research shows that the machine learning model could be used to analyze other digital humanities datasets that include a wide range of categorical variables.

We acknowledge computing resources from Columbia University's Shared Research Computing Facility project, which is supported by NIH Research Facility Improvement Grant 1G20RR030893-01, and associated funds from the New York State Empire State Development, Division of Science Technology and Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010. We thank Brown University’s Center for Digital Scholarship for providing the valuable dataset for our research. We would also like to thank the anonymous reviewers, as their suggestions and comments have significantly improved the content and presentation of this paper.
Works Cited
Alin 2010 Alin, A. (2010) “Multicollinearity.” Wiley Interdisciplinary Reviews: Computational Statistics, 2(3), pp. 370–374.
Allen 1997 Allen, M. P. (1997) “The problem of multicollinearity.” In Allen, M. P. Understanding Regression Analysis. Berlin: Springer, pp. 176–180.
Argamon et al. 2009 Argamon, S., Goulain, J.-B., Horton, R., and Olsen, M. (2009) “Vive la différence! Text mining gender difference in French literature.” Digital Humanities Quarterly, 3(2).
Assael et al. 2022 Assael, Y., Sommerschield, T., Shillingford, B., Bordbar, M., Pavlopoulos, J., Chatzipanagiotou, M., Androutsopoulos, I., Prag, J., and de Freitas, N. (2022) “Restoring and attributing ancient texts using deep neural networks.” Nature, 603(7900), pp. 280–283.
Baledent, Hiebel, and Lejeune 2020 Baledent, A., Hiebel, N., and Lejeune, G. (2020) “Dating ancient texts: An approach for noisy French documents.” Language Resources and Evaluation Conference (LREC).
Belgiu and Drăguţ 2016 Belgiu, M., and Drăguţ, L. (2016) “Random forest in remote sensing: A review of applications and future directions.” ISPRS Journal of Photogrammetry and Remote Sensing, 114, pp.
Borisov et al. 2022 Borisov, V., Leemann, T., Seßler, K., Haug, J., Pawelczyk, M., and Kasneci, G. (2022) “Deep Neural Networks and Tabular Data: A Survey.” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–21. https://doi.org/10.1109/TNNLS.2022.3229161
Breiman 2001 Breiman, L. (2001) “Random forests.” Machine Learning, 45(1), pp. 5–32.
Brigato and Iocchi 2021 Brigato, L., and Iocchi, L. (2021) “A Close Look at Deep Learning with Small Data.” 2020 25th International Conference on Pattern Recognition (ICPR), pp. 2490–2497.
Chan et al. 2022 Chan, J. Y. L., Leow, S. M. H., Bea, K. T., Cheng, W. K., Phoong, S. W., Hong, Z. W., and Chen, Y. L. (2022) “Mitigating the multicollinearity problem and its machine learning approach: a review.” Mathematics, 10(8), 1283.
Chen and Guestrin 2016 Chen, T., and Guestrin, C. (2016) “XGBoost: A Scalable Tree Boosting System.” Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. https://doi.org/10.1145/2939672.2939785
Claesen and DeMoor 2015 Claesen, M., and De Moor, B. (2015) Hyperparameter Search in Machine Learning (arXiv:1502.02127). arXiv. https://doi.org/10.48550/arXiv.1502.02127
Claesen et al. 2014 Claesen, M., Simm, J., Popovic, D., Moreau, Y., and De Moor, B. (2014) Easy Hyperparameter Search Using Optunity (arXiv:1412.1114). arXiv. https://doi.org/10.48550/arXiv.1412.1114
Drucker et al. 1996 Drucker, H., Burges, C. J., Kaufman, L., Smola, A., and Vapnik, V. (1996) “Support vector regression machines.” Advances in Neural Information Processing Systems, 9.
Elliott, Bodard, Cayless, et al. 2006 Elliott, Tom, Bodard, Gabriel, and Cayless, Hugh, et al. (2006, 2022) EpiDoc: Epigraphic Documents in TEI XML.
Emmanuel et al. 2021 Emmanuel, T., Maupong, T., Mpoeleng, D., Semong, T., Mphago, B., and Tabona, O. (2021) “A survey on missing data in machine learning.” Journal of Big Data, 8(1), pp. 1–37.
Finegold et al. 2016 Finegold, M., Otis, J., Shalizi, C., Shore, D., Wang, L., and Warren, C. (2016) “Six degrees of Francis Bacon: A statistical method for reconstructing large historical social networks.” Digital Humanities Quarterly, 10(3).
Fragkiadakis, Nyst, and Putten 2021 Fragkiadakis, M., Nyst, V., and Putten, P. van der. (2021) “Towards a User-Friendly Tool for Automated Sign Annotation: Identification and Annotation of Time Slots, Number of Hands, and Handshape.” Digital Humanities Quarterly, 15(1).
Goodfellow, Bengio, and Courville 2016 Goodfellow, I., Bengio, Y., and Courville, A. (2016) Deep Learning. MIT Press.
Hastie et al. 2009 Hastie, T., Tibshirani, R., Friedman, J. H., and Friedman, J. H. (2009) The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Vol. 2). Springer.
Johnson and Khoshgoftaar 2019 Johnson, J. M., and Khoshgoftaar, T. M. (2019) “Survey on deep learning with class imbalance.” Journal of Big Data, 6(1), 27. https://doi.org/10.1186/s40537-019-0192-5
LeCun, Bengio, and Hinton 2015 LeCun, Y., Bengio, Y., and Hinton, G. (2015) “Deep learning.” Nature, 521(7553), pp. 436–444.
Liaw and Wiener 2002 Liaw, A., and Wiener, M. (2002) “Classification and regression by randomForest.” R News, 2(3), pp. 18–22.
Niculae et al. 2014 Niculae, V., Zampieri, M., Dinu, L., and Ciobanu, A. M. (2014) “Temporal Text Ranking and Automatic Dating of Texts.” Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Volume 2: Short Papers, pp. 17–21. https://doi.org/10.3115/v1/E14-4004
Satlow 2022 Satlow, M. L. (2022) “Inscriptions of Israel/Palestine.” Jewish Studies Quarterly (JSQ), 29(4), pp. 349–369. https://doi.org/10.1628/jsq-2022-0021
Shwartz-Ziv and Armon 2022 Shwartz-Ziv, R., and Armon, A. (2022) “Tabular data: Deep learning is not all you need.” Information Fusion, 81, pp. 84–90.
Zhitomirsky-Geffet et al. 2020 Zhitomirsky-Geffet, Maayan, Gila Prebor, and Isaac Miller. (2020) “Ontology-based analysis of the large collection of historical Hebrew manuscripts.” Digital Scholarship in the Humanities, 35(3), pp. 688–719. https://doi.org/10.1093/llc/fqz058
Zou and Hastie 2005 Zou, H., and Hastie, T. (2005) “Regularization and variable selection via the elastic net.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), pp.
The package ZLAvian tests for patterns consistent with Zipf’s Law of Abbreviation (ZLA) in animal communication, following the methods described in Lewis et al. (2023) and Gilman et al. (2023). The testZLA function measures and tests the statistical significance of the concordance between note duration and frequency of use in a sample of animal communication represented in a dataframe that must include columns with the following names and information:
• A column “note” (factor/character) that describes the type to which each note, phrase, or syllable in the sample has been assigned;
• A column “duration” (numeric) that describes the duration of each note, phrase, or syllable; and
• A column “ID” (factor) that identifies the individual animal that produced each note, phrase, or syllable.
Other columns in the dataframe are ignored in the analysis. Youngblood (2024) observed that the column duration might alternatively include data on any other numerical measure that estimates the effort involved in producing a note type. testZLA computes the mean concordance (i.e., Kendall’s tau) between note duration and frequency of use within individuals, averages across all individuals in the data set, and compares this to the expectation under the null hypothesis that note duration and frequency of use are unrelated. The null distribution is computed by permutation while constraining for the observed similarity of note repertoires among individuals. This controls for the possibility that individuals in the population learn their repertoires from others. The significance test is one-tailed, so p-values close to 1 suggest evidence for a positive concordance, contrary to ZLA. See Lewis et al. (2023) for the formal computation of the null distribution and Gilman et al. (2023) for discussion. Users can control the following parameters in testZLA:
• minimum: the minimum number of times a note type must appear in the data set to be included in the analysis.
All note types that appear fewer than this number of times are removed from the sample before analysis. minimum must be a positive integer. The default value is 1.
• null: the number of permutations used to estimate the null distribution. This must be a positive integer of 99 or greater. The default value is 999.
• est: takes values “mixed,” “mean,” or “median.” If est = “mixed”, then the expected logged duration for each note type in the population is computed as the intercept from an intercept-only mixed effects model (fit using the lmer() function of lme4) that includes a random effect of individual ID. This computes a weighted mean across individuals and accords greater weight to individuals that produce the note type more frequently. If est = “mean”, then the expected logged duration for each note type in the population is computed as the mean of the means for the individuals, with each individual weighted equally. If est = “median”, then the expected logged duration for each note type within individuals is taken to be the median logged duration of the note type when produced by that individual, and the expected logged duration for each note type in the population is taken to be the median of the medians for the individuals that produced that note type. The expected durations for note types are used in the permutation algorithm. Estimation using the “mixed” approach is more precise - it gives greater weight to means based on more notes, because we can estimate those means more accurately. However, estimation using the “mean” approach is faster.
• cores: The permutation process in testZLA is computationally expensive. To make simulating the null distribution faster, testZLA allows users to assign the task to multiple cores (i.e., parallelization). cores must be an integer between 1 and the number of cores available on the user’s machine, inclusive. Users can find the number of cores on their machines using the function detectCores() in the package parallel.
The default value is 2.
• transform: takes values “log” or “none.” Indicates how duration data should be transformed prior to analysis. Defaults to “log.” Gilman and colleagues (2023) argue that log transformation is often appropriate for duration data, but other measures might be better analysed as untransformed data.

test.ZLA.output = testZLA(data, minimum = 1, null = 999, est = "mixed", cores = 2)
#>                          tau         p
#> individual level -0.03745038 0.4440000
#> population level -0.15000000 0.2088536

testZLA prints a table that reports concordances (tau) and p-values at the individual and population levels. Results at the individual level are obtained using the method described in Lewis et al. (2023) and Gilman et al. (2023). Results at the population level report the concordance between note type duration and frequency of use in the full dataset, without considering which individuals produced which notes. Population-level concordances may be problematic when studying ZLA in animal communication (see Gilman et al. 2023 for discussion) but have been widely used to study ZLA in human language. Further information can be extracted from the function:
• $stats: a matrix that reports Kendall’s tau and the p-value associated with Kendall’s tau, computed at both the population and individual levels.
• $unweighted: in stats, the population mean value of Kendall’s tau is computed with the value of tau for each individual weighted by its inverse variance. The inverse variance depends on the individual’s repertoire size. In unweighted, Kendall’s tau and the p-value associated with tau are computed with tau for each individual weighted equally, regardless of repertoire size.
• $overview: a matrix that reports the total number of notes, total number of individuals, total number of note types, mean number of notes per individual, and mean number of note types produced by each individual in the dataset.
• $shannon: a matrix that reports the Shannon diversity of note classes in the population and the mean Shannon diversity of note classes used by individuals in the dataset.
• $plotObject: a list containing data used by the plotZLA function to produce a web plot illustrating the within-individual concordance in the data set.
• $thresholds: a matrix that reports the 90% inclusion interval of the null distribution for the mean value of Kendall’s tau in the population, both with and without weighting individual taus by their inverse variances. The lower bound represents the least negative concordance that would be inferred to be evidence for ZLA, and is thus a measure of the power of the analysis.

plotZLA uses the output of testZLA to create a web plot illustrating the concordance between note class duration and frequency of use within individuals in the population. Users can control the following parameters in plotZLA:
• title: a title for the webplot.
• ylab: a label for the y-axis of the webplot. Defaults to “duration (s).”
• x.scale: takes values “log” or “linear.” Indicates how the x-axis should be scaled. Defaults to “log.”
• y.base: takes values 2 or 10. Controls tick mark positions on the y-axis. When set to 2, tick marks indicate that values differ by a factor of 2. When set to 10, tick marks indicate that values differ by a factor of 10. If the data were not log transformed in the analysis being illustrated, then the y-axis is linear and this argument is ignored. Defaults to 2.

store <- testZLA(data, minimum = 1, null = 999, est = "mixed", cores = 2)
#>                          tau         p
#> individual level -0.03745038 0.4320000
#> population level -0.15000000 0.2088536
plotZLA(store, ylab = "duration (ms)", x.scale = "linear")

In the figure produced by plotZLA, each point represents a note or phrase type in the population repertoire. Note types are joined by a line if at least one individual produces both note types.
The weight of the line is proportional to the number of individuals that produce both note types. The color of the lines indicates whether there is a positive (blue) or negative (red) concordance between the duration and frequency of use of the joined note types. Negative concordances are consistent with Zipf’s law of abbreviation. Shades between blue and red indicate that the concordance is positive in some individuals and negative in others. For example, this can happen if some individuals use the note types more frequently than others, such that the rank order of frequency of use varies among individuals. Grey crosses centered on each point show the longest and shortest durations of the note type (vertical) and the highest and lowest frequencies of use (horizontal) in the population.

Gilman, R. T., Durrant, C. D., Malpas, L., and Lewis, R. N. (2023) Does Zipf’s law of abbreviation shape birdsong? bioRxiv (doi.org/10.1101/2023.12.06.569773).
Lewis, R. N., Kwong, A., Soma, M., de Kort, S. R., and Gilman, R. T. (2023) Java sparrow song conforms to Menzerath’s law but not to Zipf’s law of abbreviation. bioRxiv (doi.org/10.1101/
Youngblood, M. (2024) Language-like efficiency and structure in house finch song. Proceedings of the Royal Society B - Biological Sciences, 291:20240250 (doi.org/10.1098/rspb.2024.0250).