Longest Common Substring
In this article I will describe an algorithm for solving the longest common substring problem, applied here to find the longest repeated substring of a single string. Suppose we are trying to decipher encrypted binary data; a natural first step is to look for common patterns by searching for the largest repeated substring.
Take this input string as an example: adasDATAHEADER??jpjjwerthhkjbcvkDATAHEADER??kkasdf
We are looking for the string that occurs twice: DATAHEADER??
To begin, write a method that compares the prefixes of two strings and returns the longest prefix the two share.
For example, for strings:
val lhs = "asdfWUKI"
val rhs = "asdfIKUW"
The resulting string – asdf
Example in Kotlin:
import kotlin.math.min

fun longestPrefix(lhs: String, rhs: String): String {
    val maximalLength = min(lhs.length, rhs.length)
    for (i in 0 until maximalLength) {
        if (lhs[i] != rhs[i]) {
            return lhs.substring(0, i)
        }
    }
    return lhs.substring(0, maximalLength)
}
Brute Force
When nothing better is possible, resort to brute force. Using the longestPrefix method, walk the string with two nested loops: the outer loop takes the substring from index x to the end, the inner from y = x + 1 to the end, and passes the pair to the prefix search. Since each prefix comparison is itself up to linear in the string length, the time complexity lies between O(n^2) and O(n^3).
Example in Kotlin:
fun searchLongestRepeatedSubstring(searchString: String): String {
    var longestRepeatedSubstring = ""
    for (x in 0 until searchString.length) {
        val lhs = searchString.substring(x)
        for (y in x + 1 until searchString.length) {
            val rhs = searchString.substring(y)
            val longestPrefix = longestPrefix(lhs, rhs)
            if (longestRepeatedSubstring.length < longestPrefix.length) {
                longestRepeatedSubstring = longestPrefix
            }
        }
    }
    return longestRepeatedSubstring
}
Suffix array
For a more elegant solution we need a tool: a data structure called a "suffix array". Here it is built as an array of substrings, filled in a loop, where each entry starts one character further into the string and runs to the end. For a string s, the array contains s itself, then s without its first character, and so on down to the final single character.
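A minimal builder for such an array can be sketched as follows. This is my own illustration rather than code from the original post; the name suffixArray matches the function used later.

```kotlin
// Builds the array of suffixes of a string: each entry drops one more
// leading character. A production suffix array would store starting
// indices instead of copying substrings.
fun suffixArray(searchString: String): List<String> {
    val suffixes = mutableListOf<String>()
    for (i in searchString.indices) {
        suffixes.add(searchString.substring(i))
    }
    return suffixes
}
```

For "banana" this yields banana, anana, nana, ana, na, a.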
Solving with sorting
Sort the suffix array, then walk through adjacent elements in a loop: the left-hand side (lhs) is the current item, the right-hand side (rhs) is the next one, and compute their longest common prefix using longestPrefix. This works because any repeated substring is a prefix of two different suffixes, and sorting places those suffixes next to each other.
Example in Kotlin:
fun searchLongestRepeatedSubstring(searchString: String): String {
    val suffixes = suffixArray(searchString)
    val sortedSuffixes = suffixes.sorted()
    var longestRepeatedSubstring = ""
    for (i in 0..sortedSuffixes.count() - 2) {
        val lhs = sortedSuffixes[i]
        val rhs = sortedSuffixes[i + 1]
        val longestPrefix = longestPrefix(lhs, rhs)
        if (longestRepeatedSubstring.length < longestPrefix.length) {
            longestRepeatedSubstring = longestPrefix
        }
    }
    return longestRepeatedSubstring
}
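Put together end to end (repeating the helpers so the sketch is self-contained), the suffix-array version recovers the repeated pattern from the article's example input:

```kotlin
import kotlin.math.min

// Longest shared prefix of two strings, compared character by character.
fun longestPrefix(lhs: String, rhs: String): String {
    val maximalLength = min(lhs.length, rhs.length)
    for (i in 0 until maximalLength) {
        if (lhs[i] != rhs[i]) return lhs.substring(0, i)
    }
    return lhs.substring(0, maximalLength)
}

// All suffixes of the string, one per starting index.
fun suffixArray(searchString: String): List<String> =
    searchString.indices.map { searchString.substring(it) }

// Sort the suffixes, then compare only adjacent pairs.
fun searchLongestRepeatedSubstring(searchString: String): String {
    val sortedSuffixes = suffixArray(searchString).sorted()
    var longest = ""
    for (i in 0 until sortedSuffixes.size - 1) {
        val prefix = longestPrefix(sortedSuffixes[i], sortedSuffixes[i + 1])
        if (prefix.length > longest.length) longest = prefix
    }
    return longest
}

fun main() {
    // prints DATAHEADER??
    println(searchLongestRepeatedSubstring("adasDATAHEADER??jpjjwerthhkjbcvkDATAHEADER??kkasdf"))
}
```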
The sort dominates, giving a time complexity of roughly O(N log N), which is much better than the brute-force algorithm.
Source Code
Published by demensdeum
Relational Algebra Tree Generator
An SQL query is first translated into an equivalent extended relational algebra expression, represented as a query tree data structure, which is then optimized. Relational algebra, first created by Edgar F. Codd while at IBM, is a family of algebras with a well-founded semantics used for modelling the data stored in relational databases and for defining queries on it. It is a procedural query language: it takes instances of relations as input and yields instances of relations as output.
An execution plan is an ordered set of steps to execute a query. During SQL processing, the row source generator receives the optimal execution plan from the optimizer and produces an iterative plan, called the query plan, that is usable by the rest of the database; the iterative plan is a binary program that, when executed by the SQL virtual machine, produces the result set.
RAT (Relational Algebra Translator) allows students to write statements in relational algebra which are translated to SQL, in order to verify the correct syntax of these expressions.
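To make the query-tree idea concrete, here is a small illustrative sketch of my own (not code from any of the tools mentioned): input relations are leaves, algebra operations are internal nodes, and the tree can be rendered back into algebra notation.

```kotlin
// Hypothetical query-tree model: leaves are relations, internal nodes
// are relational algebra operations. All names here are illustrative.
sealed class RaNode
data class RelationLeaf(val name: String) : RaNode()
data class Select(val condition: String, val child: RaNode) : RaNode()
data class Project(val columns: List<String>, val child: RaNode) : RaNode()
data class Join(val left: RaNode, val right: RaNode) : RaNode()

// Render the tree as nested algebra notation, e.g. π[name](σ[city='Leeds'](Guest)).
fun render(n: RaNode): String = when (n) {
    is RelationLeaf -> n.name
    is Select -> "σ[${n.condition}](${render(n.child)})"
    is Project -> "π[${n.columns.joinToString(",")}](${render(n.child)})"
    is Join -> "(${render(n.left)} ⋈ ${render(n.right)})"
}

fun main() {
    val tree = Project(listOf("name"), Select("city='Leeds'", RelationLeaf("Guest")))
    println(render(tree))  // prints π[name](σ[city='Leeds'](Guest))
}
```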
This article considers a lab experience to integrate the learning of these two important topics, SQL and relational algebra. In practice, SQL is the query language used in most commercial RDBMSs. If you want to learn SQL, you take a database system and try some queries; unfortunately, the same cannot be said of relational algebra. If you want to learn relational algebra, what do you use? Pen and paper?
Several tools fill this gap. Relational is an educational tool that provides a workspace for experimenting with relational algebra, an offshoot of first-order logic; it offers a GUI for executing relational queries as well as a command line interface and a Python library. SQLToAlgebra is a Java-based utility that translates SQL queries directly into relational algebra and exports the results for further use in other applications or projects. A student project in the same spirit, Relational-Algebra-Calculator, is a relational database query processor implementing select, project, union, difference, rename and cartesian product, with support for nested queries; it is built and run from the terminal:
1. open the terminal
2. change the directory to Relational-Algebra-Calculator
3. g++ -std=c++11 nested_query.cpp -o output.out
4. ./output.out
The relational algebra calculator helps you learn relational algebra (RelAlg) by executing any relational algebra statement. It was created by Johannes Kessler BSc at the Databases and Information Systems Group of the Institute of Computer Science at the University of Innsbruck, under the supervision of Michael Tschuggnall PhD and Prof. Dr. Günther Specht. The tool is not meant to be a full database system; the goal of the implementation was to support people learning RelAlg. It was not written from scratch: many external resources, frameworks and libraries are used, and they are credited so that anyone interested can find them without having to look through the code. Literature used includes Database Systems: The Complete Book, 2nd edition, by Hector Garcia-Molina, Jeff Ullman and Jennifer Widom, and Datenbanksysteme: Eine Einführung, 8. Auflage, by Alfons Kemper and André Eickler.
Features of the calculator:
- lets you write RelAlg as easily as SQL
- code editor with syntax highlighting and code completion
- plain text alternatives for special symbols like σ or π
- variables can be used to simplify expressions
- new temporal relations can be declared in the statement
- arbitrary boolean expressions in conditions
- operations keep their original order for better traceability
- translates simple SQL statements to RelAlg
- no plugins needed: a text-based approach that runs in any modern browser
Changelog highlights:
- added a datepicker to quickly insert a date literal
- added translation support for the calculator
- ported the project to ES2015 (it now gets transpiled to ES5 and packed)
- fixed bug: inline-table-editor not working
- fixed bug: formula for !a was not working
- disallow relational algebra keywords as column-/relation-names
- improved error messages for theta-joins with conflicting columns and for assignments without a query
- bugfix: the calculator tour did not work correctly for Edge on Windows 10
- updated to CodeMirror version 5.1 with "experimental mobile support"
- added support for complex UNION/INTERSECT/EXCEPT statements and for more complex FROM clauses in SQL
- simplified and linked the syntax diagrams on the help page
- added a tour explaining the main features of the tool to new users
- added support for the USING clause in joins and for the FETCH FIRST syntax (SQL:2008)
- bugfix: HAVING should be allowed without GROUP BY if aggregation is used
- show warnings instead of errors when not using DISTINCT, or when using ALL, on set operators
- added support for arithmetic operators and functions
- new braces handling in formula generation (braces are only placed if necessary)
- bugfix: renaming a nonexistent column was silently ignored
- changed the basic structure of the editors (internally)
Relational algebra basics
Relational algebra was defined by E. F. Codd in 1971 and is the core of any relational query language; its main application is providing a theoretical foundation for relational databases, particularly their query languages, chief among which is SQL. An algebra is a formal structure that contains sets and operations performed on those sets. Relational algebra is performed recursively on relations, and intermediate results are also considered relations. It eases the task of reasoning about queries, and its operations have counterparts in SQL. The basic operators, which can be applied to relations to produce the required results, are:
- Selection (σ): picking certain rows.
- Projection (π): picking certain columns.
- Union, intersection, and set difference: the usual set operations, but both operands must have the same relation schema.
- Cartesian products and joins: compositions of relations.
- Rename (ρ): renaming of relations and attributes.
- Natural join (⋈): a special case of equijoin in which the equality condition holds on all attributes that have the same name in relations R and S, so there is no need to write the condition explicitly.
- Generalized projection: allows functions of attributes to be included in the projection list.
- Aggregate functions and grouping: common functions applied to collections of numeric values.
There is also a duplicate-elimination operator; in Ullman's Database Systems: The Complete Book it appears in section 5.2.1, "Duplicate Elimination". Renaming combined with a cartesian product lets you express self-join queries, such as finding all customer pairs (C1, C2) where C1 likes some pizza that C2 does not like.
Query trees
A query tree is a data structure that corresponds to a relational algebra expression: the input relations of the query are the leaf nodes, and relational algebra operations are the internal nodes. A logical query tree (logical plan) is the parsed query translated into relational algebra, and an execution of the query tree consists of executing the internal node operations. To process a query, a DBMS translates SQL into a notation similar to relational algebra; translating SQL to a relational algebra expression is an early step in the query-processing pipeline, whose input is the logical query plan (an expression in extended relational algebra) and whose output is an optimized logical query plan, also in relational algebra. Place the σ and π operators in the order that minimizes the amount of data the system must process: many orders of execution reach the same result by forming and combining intermediate results in different ways, but their costs vary, because the number of records at each step depends on the joins and filters used and on the algorithms that evaluate them. For example, one tree may leave a huge number of records to filter that an equivalent tree avoids.
Typical exercises ask you to translate such queries by hand. For a hotel schema: (a) list all hotels; (b) list all single rooms with a price below £20 per night; (c) list the names and cities of all guests. Or, given the schema Product(pid, name, price), Purchase(pid, cid, store), Customer(cid, name, city), draw the logical query plan for a set of SQL queries.
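As a hedged illustration of the basic operators themselves, here is a toy of my own (a relation modelled as a list of rows), not code from any of the tools discussed:

```kotlin
// Toy model: a relation is a list of rows; a row maps column names to values.
typealias Row = Map<String, String>
typealias Relation = List<Row>

// Selection (σ): keep the rows that satisfy a predicate.
fun select(r: Relation, p: (Row) -> Boolean): Relation = r.filter(p)

// Projection (π): keep only the named columns, eliminating duplicate rows.
fun project(r: Relation, cols: Set<String>): Relation =
    r.map { row -> row.filterKeys { it in cols } }.distinct()

fun main() {
    val guest: Relation = listOf(
        mapOf("name" to "Ann", "city" to "Leeds"),
        mapOf("name" to "Bob", "city" to "York"),
        mapOf("name" to "Cat", "city" to "Leeds"),
    )
    // π[name](σ[city='Leeds'](Guest))
    println(project(select(guest) { it["city"] == "Leeds" }, setOf("name")))
}
```

The nesting order mirrors the query tree: the selection runs first on the leaf relation, and the projection consumes its output.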
The Starbucks Impact on the Real Estate Market - Free Essay Example, 3265 Words - TopEssayWriting.org
The aim of this research paper is to determine whether Starbucks coffee shops have an impact on the selling price of condominiums in North York, Toronto. The issue is that as Starbucks expands, land and even houses gain in value as long as they are in the same area as a coffee shop. This is known as the Starbucks effect: according to it, the mere sight of the distinctive green Starbucks sign is enough to guarantee that local prices will rise. The concern is whether this is justifiable, and if not, how far it has gone. In this paper we use regression analysis models as the main analysis tool. Stepwise multiple regression may be used together with residual analysis to assess the fit of the models as well as their complexity, and the impact of each variable will be studied using correlation analysis.
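As a hedged illustration of the correlation step, a minimal sketch on made-up numbers (my own example, not the study's data or model):

```kotlin
import kotlin.math.sqrt

// Pearson correlation coefficient between two equally sized samples.
fun pearson(x: DoubleArray, y: DoubleArray): Double {
    val mx = x.average()
    val my = y.average()
    var cov = 0.0
    var vx = 0.0
    var vy = 0.0
    for (i in x.indices) {
        cov += (x[i] - mx) * (y[i] - my)
        vx += (x[i] - mx) * (x[i] - mx)
        vy += (y[i] - my) * (y[i] - my)
    }
    return cov / sqrt(vx * vy)
}

fun main() {
    // Made-up figures: distance to the nearest Starbucks (km) vs condo price ($k).
    val distanceKm = doubleArrayOf(0.2, 0.5, 1.0, 2.0, 3.5)
    val priceK = doubleArrayOf(620.0, 600.0, 560.0, 530.0, 480.0)
    println(pearson(distanceKm, priceK))  // strongly negative, close to -1
}
```

A strongly negative coefficient would be consistent with the hypothesis that condos closer to a Starbucks sell for more; the real analysis would of course control for other variables through the regression models.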
Starbucks Corporation is an American coffee company with a chain of coffee shops all over America. Starbucks was founded in Seattle, Washington, in 1971. It has since expanded to other regions of the world, with almost 23,768 locations. Starbucks distinguishes itself from other big coffee companies through quality, taste and customer experience. It uses up-to-date coffee-making machines which keep the quality of the coffee consistent, offers top-notch customer service, and serves coffee both hot and cold. It has gone on to introduce fresh juices and even pastries to broaden its customer network (Garthwaite et al., 2017).
In 1984, the original owner of Starbucks decided to buy Peet's. The coffee business in America at that time was not doing well, but specialty coffee, interestingly, was. The company made a profit and decided to expand while still in Seattle, opening five more shops for a total of six. Jerry Baldwin, the original owner of Starbucks, sold the enterprise to Howard Schultz, who rebranded his own coffee outlets as Starbucks. Schultz expanded too, eventually opening the first shop outside Seattle, at Waterfront Station in Vancouver. By 1989, he had opened a total of 46 stores across the North and Mid-West of America. Starbucks made it to the stock markets in 1992; the company's market value rose, producing revenue of up to 3.5 million US dollars. Management, confident in the rising value, opened more outlets, and the share price subsequently rose to 100 times its initial level. The company expanded while keeping pace with technology, even developing a Starbucks app which succeeded in driving more in-store purchases.
Starbucks has had its share of successes and failures too. Recently, Starbucks celebrated the fifteenth anniversary of its Frappuccino line-up. Starbucks also wants to introduce blended yoghurt, a twist for the existing market; it will come in two flavors, banana and red berry, and Starbucks chose a good time for the launch, during summer. The success of Starbucks restaurants has rested on good coffee and an efficient distribution system across the United States. It is good business for coffee distributors, since Starbucks has given coffee a new cachet; it has done this for all the coffee in its own shops and for about 80% of the coffee sold in supermarkets. Starbucks's creativity is a gold mine in a dormant industry: Kraft can now sell new coffee brands such as Espresso, Master Blend, Colombian Extreme and Rich French Toast. This has allowed the coffee house to prosper and to upgrade its mail-order business. In the end, the new cachet of coffee has had a big effect: ten years ago, 3% of all coffee sold in the United States was sold at a premium price; today, 40% is. The Starbucks effect is real, and it has been tracked across 39 categories of fast-moving goods, measured as the percentage of products sold at a premium. The evidence is considerable: whenever a company adds a premium to a particular product, or to its delivery system, the entire category earns huge profits and high margins.
Yoghurt is another example. In the 1980s, yoghurt lost its reputation as a healthy product. Dannon introduced innovations in yoghurt making and packaging, raising both the price and its market share; its profit margin rose by 5% from 1990 to 1997. The creativity was a huge investment for the yoghurt company, and it paid off.
According to research on the Starbucks effect, between 1997 and 2014 homes within a quarter mile of a Starbucks outlet increased in value by 96% (Garthwaite et al., 2017). This suggests an undeniable correlation between Starbucks cafe locations and neighboring home appreciation: properties tend to become expensive once they are located near a Starbucks outlet, appreciating at a much faster pace than the national average. The average American home appreciated by 65% over the period, but a house next to a Starbucks store appreciated by almost 96%. Homes and properties appreciate every day, so how do we know this has anything to do with Starbucks? Take a look at Dunkin' Donuts, another prominent coffee chain. Homes near Dunkin' Donuts show the same historical trend: they appreciate faster than the national average, but not as fast as homes near Starbucks. According to Zillow, between 1997 and 2012 homes near Dunkin' Donuts appreciated by 80%, while those near Starbucks appreciated by 96%, almost doubling in value. The basic explanation for this finding is that people genuinely like drinking coffee and see Starbucks as a proxy for gentrification, hence they pay large premiums for homes near Starbucks.
The major modeling problem is to check whether there is a Starbucks effect in North York, Toronto: it is likely that the rise in land and house prices there is due to the expansion of Starbucks into the area. A second objective is to show that the Starbucks effect has had an influence on real estate pricing. Lastly, we want to ensure that all the variables in our data are statistically significant; if they are, the regression model results should be accurate. Regression analysis is our main data analysis method, so we want clear results from it.
This type of information is very important. Real estate is a major contributor to the country's revenue, so the results can help in implementing better guidelines for Starbucks and for the real estate sector. The findings from this report can also be used by the government for planning and budgeting purposes. This is why the data needs to be valid and clean.
We are going to use an economic model for this paper. This type of model concentrates on the economic value of both sectors: real estate and the Starbucks coffee company. Financial considerations will be included, and natural resource production will be considered, as well as the economic sustainability of the two ventures. We will define competitive advantage and try to find out how each of the sectors competes with the other in terms of the advantageous inputs they provide (Conroy, Narwold, & Sandy, 2013).
Data Collection
In this study I am going to use secondary data, since it allowed me to obtain accurate information. I chose data from a realtor's website because it belongs to a top real estate company that is very competitive in the real estate market. Secondary data analysis should be undertaken carefully and with due diligence, but the method is cost-effective since one does not have to go to the field. I got the data from the company's main website, so I was sure I had the right information. I also checked the initial purpose of the data: it had been used in laying down strategies for the company, and if it was useful to the company, I could be confident it was credible and accurate. I also checked the date when the data was collected; I was looking for something collected over the last three months (Conroy, Narwold, & Sandy, 2013), because in determining the Starbucks effect it was good to have recent and credible data. Finally, I checked the numerical values in the data, ensuring that aggregate data described a group of observations on a given criterion while disaggregated data gave details on individual entities.
Data limitations
One of the major limitations of using secondary data was that the data was not specifically designed for my research. Some variables I wanted were missing, and some variables I did not need were in the data set. One is never 100% sure of the reliability and validity of such a data set; it may not have been collected from the right sub-groups or persons of interest. The data set I used was open and publicly available. This was a limitation because, to maintain confidentiality, the real estate company had to delete identifying variables about the respondents: names, locations, specific ages and perhaps ethnicity. Another major limitation was that I was unaware of any glitches that occurred during data collection. A researcher understands his or her research better when taking part in the data collection, since one can ask questions and learn more about the issue being researched.
Data splitting
Data splitting is the act of partitioning the available data into two portions. In this case, I divided my data into one part for building the predictive model and another for evaluating the model's performance. This was important for the regression models I was going to create: one part was used to implement the regression analysis, and the other to summarize the estimates and check their significance. Splitting the data enables more precise predictions.
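As a sketch of the splitting step described above, one simple way to partition records into a modeling set and an evaluation set is shown below. The condo records and the 80/20 ratio are hypothetical illustrations; the report does not state its actual split.

```python
import random

def split_data(rows, train_fraction=0.8, seed=42):
    """Shuffle and partition rows into a training set and an evaluation set."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

# Hypothetical condo records: (square feet, list price)
condos = [(550, 310000), (720, 405000), (610, 350000), (880, 470000), (495, 290000)]
train, test = split_data(condos, train_fraction=0.8)
```

The model is fitted on `train` only, and its predictions are scored against `test`, so the evaluation is not flattered by data the model has already seen.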
Modeling techniques
I decided to use regression analysis because it is used to estimate relationships. It consists of techniques for modeling and analyzing several variables, and it is used when the focus is on a dependent variable and one or more independent variables (Cheng & Phillips, 2014). Regression analysis is used as a statistical modeling tool to decide which factors or variables matter most and which to ignore. It also helps us see the relationships between the variables and how they interact with each other. In regression analysis there is a regression line that shows whether the data fits a linear model; the line best explains the relationship between the dependent and the independent variable. The model also contains an error term. The error term has to be there because the independent variable is never a perfect predictor of the dependent variable, and it shows how well the regression line fits: if it is too big, the line is not a good fit for the data.
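The fitting step just described can be sketched with ordinary least squares for a single predictor. The distances and prices below are hypothetical and only illustrate the mechanics; in practice the report runs the regressions in Excel.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: distance to the nearest Starbucks (km) vs. condo price ($)
dist = [0.2, 0.5, 1.0, 1.5, 2.0]
price = [520000, 480000, 430000, 400000, 360000]
slope, intercept = fit_line(dist, price)

# Error terms (residuals): the part of each price the line does not explain
residuals = [y - (slope * x + intercept) for x, y in zip(dist, price)]
```

With OLS the residuals always sum to (numerically) zero; what matters for fit is their spread, matching the discussion of the error term above.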
In this study, the dependent variable is the list price, while the independent variables are distance to the nearest Starbucks, floor size, number of bedrooms, number of washrooms, distance to the subway and maintenance cost. My aim is to find the relationship between the dependent variable and each of the independent variables. Even though there are dangers in including too many variables in a regression analysis, the impact of multiple variables can be assessed at once. This method will help me understand whether the distance from Starbucks affects the price of condos in North York, Toronto. The regression analysis will also help in determining the correlation between the variables, though not the causation (Cheng & Phillips, 2014).
Model limitations
One limitation I encountered was the risk of a wrong analysis. This can be an outcome of using secondary data, since regression analysis is quite sensitive: when the data is not correct, neither the results nor the predictions will be. I tried to ensure my results were accurate by requiring the regression to explain 90% of the relationship, which is quite high considering this was secondary data. Lastly, I made the mistake of placing my intuition above the data: at one point I wanted the data to fit my understanding and the facts I already knew.
Model assumptions
Normally in regression there are five major assumptions, which I was keen to apply in this study. First, the variables had to have a linear relationship: the relationship between the dependent variable and each independent variable had to be linear. I was also careful to check for outliers, since regression analysis is sensitive to them.
Secondly, all the variables had to be multivariate normal, which I checked with a goodness-of-fit test on the whole data set. Next, I assumed that there was little or no multicollinearity. Multicollinearity occurs when the independent variables are no longer independent of each other; ideally the correlation matrix of the independent variables is close to the identity matrix. To detect multicollinearity, one checks whether the tolerance level of one independent variable is influenced by the other independent variables and examines the variance inflation factor; a condition index value greater than 30 indicates strong multicollinearity.
Fourthly, I assumed that there was little or no autocorrelation in the data. My last assumption was homoscedasticity, which in simpler terms means that the error terms along the regression line have equal variance.
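The multicollinearity check above can be sketched by computing pairwise Pearson correlations between the independent variables; values near ±1 flag predictors that are nearly redundant. The floor-size and bedroom figures below are hypothetical.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two variables."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5  # sqrt of sum of squared deviations
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical predictors: floor size (sq ft) and number of bedrooms
sq_feet  = [500, 650, 700, 900, 1100]
bedrooms = [1, 1, 2, 2, 3]
r = pearson(sq_feet, bedrooms)  # a value near 1 suggests possible multicollinearity
```

In a fuller diagnosis one would also compute the variance inflation factor for each predictor, but the pairwise correlation matrix is the usual first look.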
Data analysis and results
I chose Excel because it is flexible with any type of data and can handle multiple variables.
[Excel regression output tables omitted: four simple-regression summaries, apparently for the square-feet, bedrooms, washrooms and maintenance models discussed below, each listing Multiple R, R Square, Adjusted R Square, Standard Error, Significance F, coefficient standard errors, t Stat and 95% confidence bounds. The numeric values did not survive extraction; the key statistics are restated in the discussion that follows.]
I decided to concentrate on the numerical values of the data first. My dependent variable was the price, while the other variables were independent.
The linear regression equation for square feet is
Y = 16804.64X + 694.1024
This is read as: the price of a condo equals a variable cost of $16,804.64 times the number of square feet, plus a fixed cost of $694. The coefficient of correlation for this variable is 0.77, a relatively strong positive relationship with the dependent variable. The R square indicates the explanatory power of the model; here R square is 0.60, meaning 60% of the variance in the output variable is explained by the input variable. The adjusted R square is 0.588, about 59%. The significance F of this regression is 0.000, meaning there is essentially a 0% chance that the regression output occurred merely by chance. The variable is therefore suitable for the model.
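The relationship between the reported R square (0.60) and adjusted R square (0.588) can be checked with the standard adjustment formula. The sample size of 35 below is a hypothetical value that happens to reproduce the reported figure for a one-predictor model; the report does not state n.

```python
def adjusted_r_squared(r2, n, p):
    """Adjusted R-squared for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Reported values for the square-footage model: R^2 = 0.60, adjusted R^2 = 0.588.
# With p = 1 predictor, a hypothetical sample of about 35 condos matches.
adj = adjusted_r_squared(0.60, n=35, p=1)
```

Adjusted R square penalizes extra predictors, which is why it sits slightly below R square here and why it is the fairer figure to compare across the four single-variable models.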
The linear regression equation for the number of bedrooms is
Y = 81532.02X + 228293.8
This is read as: the price of a condo equals a variable cost of $81,532.02 times the number of bedrooms, plus a fixed cost of $228,294. The coefficient of correlation is 0.58, a fair positive relationship with the dependent variable. R square is 0.33, meaning 33% of the variance in the output variable is explained by the input variable, and the adjusted R square of 0.32 corresponds to 32% accuracy of the regression equation. The significance F is 0.000, so there is essentially a 0% chance the regression output occurred merely by chance, and the variable is suitable for the model.
The linear regression equation for the number of washrooms is
Y = 160267.2X + 226679.1
This is read as: the price of a condo equals a variable cost of $160,267.20 times the number of washrooms, plus a fixed cost of $226,679. The coefficient of correlation is 0.55, a fair positive relationship with the dependent variable. R square is 0.30, meaning 30% of the variance in the output variable is explained by the input variable, and the adjusted R square of 0.29 corresponds to 29% accuracy of the regression equation. The significance F is 0.000, so the regression output is very unlikely to be a chance occurrence, and the variable is suitable for the model.
The linear regression equation for the maintenance cost is
Y = 177465.2X + 592.9702
This is read as: the price of a condo equals a variable cost of $177,465.20 times the maintenance cost, plus a fixed cost of $592.97. The coefficient of correlation is 0.53, a fair positive relationship with the dependent variable. R square is 0.28, meaning 28% of the variance in the output variable is explained by the input variable, and the adjusted R square of 0.27 corresponds to 27% accuracy of the regression equation. The significance F is 0.000, so the regression output is very unlikely to be a chance occurrence, and the variable is suitable for the model.
From the results, square footage is the variable with the highest correlation with price and the greatest explanatory power in terms of R square and adjusted R square. The other variables do not have strong correlations and hence explain less of the variation in the models created. This means that price has a good correlation with the square footage of the condos, supporting a relationship between the price of the condos and Starbucks outlets: Starbucks does appear to have an effect on housing in its neighboring environment.
References
Cheng, H. G., & Phillips, M. R. (2014). Secondary analysis of existing data: Opportunities and implementation. Shanghai Archives of Psychiatry, 26(6), 371.
Conroy, S., Narwold, A., & Sandy, J. (2013). The value of a floor: valuing floor level in high-rise condominiums in San Diego. International Journal of Housing Markets and Analysis, 6(2), 197-208.
Garthwaite, C., Busse, M., Brown, J., ... & Merkley, G. (2017). Starbucks: A story of growth. Kellogg School of Management Cases, 1-20.
2011-2012 Undergraduate Calendar-Mathematics
Last updated: April 3, 2013 @ 11:43AM
Stephen Anco
Professors Emeriti
Howard E. Bell, Charles F. Laywine, John P. Mayberry, Eric Muller
Stephen Anco, Hichem Ben-El-Mechaiekh, Mei Ling Huang, Ronald A. Kerman, Yuanlin Li, Jan Vrbik, Thomas Wolf
Associate Professors
Chantal Buteau, Henryk Fuks, Omar Kihel, Alexander Odesskii, William J. Ralph, Xiaojian Xu, Wai Kong (John) Yuen
Assistant Professors
Babak Farzad
Dorothy Levay, Dorothy Miners
Mathematics Development Programs Manager
General Information
Administrative Assistants
Margaret Thomson, Josephine (Pina) McDonnell
905-688-5550, extension 3300
Mackenzie Chown J415
The Department of Mathematics offers a unique program, Mathematics Integrated with Computers and Applications (MICA). This innovative program fully integrates computers and applications into a broad
spectrum of courses that range over pure mathematics (the study of mathematics for its own sake), applied mathematics (mathematics for applications) and statistics. With its special focus on
technology, the MICA program is especially suited for students desiring careers in applications of mathematics that involve computing. Within the MICA program, students can also form areas of
concentration in applied and computational mathematics, mathematics education, pure mathematics or statistics, or they can choose to have no area of concentration.
Students in the MICA program get a solid grounding in mathematical theory and learn how to use computer and information technology to apply and present what they have learned. The core of the MICA
program consists of MATH 1P40 and 2F40 in which students will confront problems from pure and applied mathematics that require experimental and heuristic approaches. In dealing with such problems,
students will be expected to develop their own strategies and make their own choices about the best combination of mathematics and computing required in finding solutions.
Mathematics Integrated with Computers and Applications Co-op
The Mathematics Integrated with Computers and Applications Co-op program combines academic and work terms over a period of four and one-half academic years. Students spend at least two years in an
academic setting studying core concepts in Mathematics prior to their first work placement. The study will provide the necessary academic context for the work experience.
In addition to the current fees for courses in academic study terms, Mathematics Co-op students are assessed an administrative fee for each work term (see the Schedule of Fees).
Eligibility to continue in Mathematics Integrated with Computers and Applications Co-op program is based on student's major and non-major averages. A student with a minimum 70 percent major average
and a minimum 60 percent non-major average may continue. A student with a major average lower than 70 percent will not be permitted to continue in the Mathematics Integrated with Computers and
Applications Co-op program. If a student subsequently raises his/her major average to 70 percent, the student may be readmitted only if approved by the Co-op Admissions Committee. For further
information, see the Co-op Programs section of the Calendar.
The Mathematics Integrated with Computers and Applications Co-op program designation will be awarded to those students who have honours standing and who have successfully completed a minimum of
twelve months of Co-op work experience.
Mathematics and Computer Science Co-op Program
The Mathematics and Computer Science Co-op program combines academic and work terms over a period of four and one-half academic years. Students spend one and one-half years in an academic setting
studying the fundamentals of Mathematics and Computer Science prior to their first work placement. Successful completion of courses in the core areas of Computer Science and Mathematics provides the
necessary academic background for the work experience. In addition to the current fees for courses in academic study terms, Mathematics and Computer Science Co-op students are assessed an
administrative fee for each work term (see the Schedule of Fees).
Eligibility to continue in the Mathematics and Computer Science Co-op program is based on the student's major and non-major averages. A student with a minimum 70 percent major average and a minimum
60 percent non-major average may continue. A student with a major average lower than 70 percent will not be permitted to continue in the Mathematics and Computer Science Co-op program. If a student
subsequently raises his/her major average to 70 percent, the student may be readmitted only if approved by the Co-op Admissions Committee. For further information, see the Co-op Programs section of
the Calendar.
The Mathematics and Computer Science Co-op program designation will be awarded to those students who have honours standing and who have successfully completed a minimum of twelve months of Co-op work experience.
The Department has a special interest in Mathematics Education and offers several programs and courses specifically for prospective teachers. These include both Concurrent and Consecutive Education
Programs as well as Minors for future teachers.
Certain courses are required for any degree in Mathematics (see below). Because Mathematics majors need both facility in dealing with mathematical theories and experience in the application of
mathematics to real-world problems, each student should discuss his or her particular interests with faculty before selecting elective courses.
Program Notes
1. All students must take three context credits: one Humanities context credit, one Sciences context credit and one Social Sciences context credit. Two credits must be used to satisfy context credit
requirements in year 1.
2. Students intending to pursue graduate studies in Pure Mathematics will find it essential to have MATH 4P03 and 4P05 or MATH 4P11 and 4P14 and desirable to have all of them.
3. MATH 3P51 and 3P52 are not required for students who fulfill the requirements of the concentration in Mathematics Education, Pure Mathematics or Statistics.
4. MATH 1P20 may not be used to satisfy this requirement.
5. MATH 2P04, 2P71 or 2P75 recommended in year 2. MATH 3P03, 3P12, 3P60 or 3P75 recommended in year 3.
6. In 20 credit degree programs a maximum of eight credits may be numbered 1(alpha)00 to 1(alpha)99; at least three credits must be numbered 2(alpha)90 or above; at least three credits must be
numbered 3(alpha)90 or above; and the remaining credits must be numbered 2(alpha)00 or above.
In 15 credit degree programs a maximum of eight credits may be numbered 1(alpha)00 to 1(alpha)99; at least three credits must be numbered 2(alpha)90 or above; and the remaining credits must be
numbered 2(alpha)00 or above.
In some circumstances, in order to meet university degree and program requirements, more than 15 or 20 credits may be taken.
Mathematics Integrated with Computers and Applications Honours Program (MICA)
Year 1
· MATH 1P01, 1P02, 1P12 and 1P40
· three elective credits (see program note 1)
Year 2
· MATH 2F40, 2P03, 2P08, 2P12, 2P81 and 2P82
· the Humanities context credit, Sciences context credit or Social Sciences context credit (not taken in year 1)
· one-half elective credit
Year 3
· MATH 3P51 and 3P52 (see program note 3)
· two MATH credits numbered 3(alpha)00 or above
· two elective credits (see program note 6)
Year 4
· Four MATH credits (see program notes 2 and 6)
· one elective credit (see program note 6)
Mathematics Integrated with Computers and Applications Co-op (Honours only)
Year 1
· MATH 1P01, 1P02, 1P12 and 1P40
· three elective credits (see program note 1)
Year 2
· MATH 2F40, 2P03, 2P08, 2P12, 2P81 and 2P82
· SCIE 0N90
· the Humanities context credit, Sciences context credit or Social Sciences context credit (not taken in year 1)
· one-half elective credit
Spring/Summer Sessions:
Year 3
Fall Term:
· Three credits from MATH 3P04, 3P08, 3P09, 3P12, 3P13, 3P51, 3P52, 3P60, 3P72, 3P75, 3P81, 3P82
Winter Term:
Year 4
· Three credits from MATH 3P04, 3P08, 3P09, 3P12, 3P13, 3P51, 3P52, 3P60, 3P72, 3P75, 3P81, 3P82 (not taken in year 3)
· two credits from MATH 4P05, 4P07, 4P09, 4P11, 4P13, 4P84, 4P92, 4P93, 4P94
Spring/Summer Sessions:
Year 5
Fall Term:
· Two credits from MATH 4P05, 4P07, 4P09, 4P11, 4P13, 4P84, 4P92, 4P93, 4P94 (not taken in year 4)
Mathematics Integrated with Computers and Applications with a Concentration in Statistics Co-op (Honours only)
Year 1
· MATH 1P01, 1P02, 1P12 and 1P40
· three elective credits (see program note 1)
Year 2
· MATH 2F40, 2P03, 2P08, 2P12, 2P81 and 2P82
· the Humanities context credit, Sciences context credit or Social Sciences context credit (not taken in year 1)
· one-half elective credit (see program note 5)
Year 3
· SCIE 0N90, MATH 3P81, 3P82, 3P85 and 3P86
· one MATH credit numbered 3(alpha)00 or above (see program note 5)
· two elective credits
Spring/Summer Sessions:
Year 4
Fall Term:
Winter Term:
· MATH 4P82, 4P85
· one-half MATH credit
· one elective credit
Spring/Summer Sessions:
Year 5
Fall Term:
· MATH 4P81, 4P84
· one and one-half MATH credits
Mathematics Pass Program
Year 1
· MATH 1P01, 1P02, 1P12 and 1P40
· three elective credits (see program note 1)
Year 2
· MATH 2F40 and 2P03
· one of MATH 2P08 and 2P12, MATH 2P12 and 2P72, MATH 2P81 and 2P82
· the Humanities context credit, Sciences context credit or Social Sciences context credit (not taken in year 1)
· one and one-half elective credits
Year 3
· Three MATH credits numbered 3(alpha)00 or above (see program note 6)
· two elective credits (see program note 6)
Combined Major Program
Combined major programs have been developed by the Department of Mathematics in co-operation with each of these departments: Biological Sciences, Chemistry, Computer Science, Economics and Physics.
Program requirements are listed in the calendar sections of the co-major discipline. Students may take a combined major in Mathematics and a second discipline. For requirements in the other
discipline, the student should consult the relevant department/centre. It should be noted that not all departments/centres provide a combined major option.
Mathematics and Computer Science Co-op (Honours only)
Students admitted to the Mathematics and Computer Science Co-op program must follow an approved program pattern. The most common pattern is listed below. For other approved patterns, consult the
Co-op Office.
Year 1
· MATH 1P01, 1P02, 1P12 and 1P40
· COSC 1P02, 1P03 and 1P50
· one Sciences context credit
· one-half elective credit
Year 2
Fall Term:
· MATH 2P03 and 2P81
· COSC 2P03, 2P12 and 2P90
· SCIE 0N90
Winter Term:
Spring/Summer Sessions:
· MATH 1P66 and 1P67
· COSC 2P32
· one-half COSC credit
Year 3
· COSC 3F00
· MATH 2F40 and 3F65
· one Humanities context credit
· one Social Sciences context credit
Spring/Summer Sessions:
Year 4
Fall Term:
Winter Term:
· MATH 2P82, 3P60 and 4P61
· COSC 2P13
· one COSC credit (see program note 6)
Year 5
Fall Term:
· One COSC credit (see program note 6)
· one MATH credit (see program note 6)
· one-half elective credit (see program note 6)
Programs and Courses for Future Teachers
The Department of Mathematics has identified courses that are particularly appropriate for students preparing to become teachers at either the elementary or secondary levels. Students should consult
the Chair in the selection of courses.
To help students meet Primary/Junior Teacher Education admission requirements at Brock University - MATH 2P52.
Three credits for a teachable subject at the Junior/Intermediate level (see program note 4). May include MATH 1F92, 1P05, 1P06, 1P12, 1P66, 2P90, 2P93 and 3P91.
For Mathematics as the first teachable subject (a minimum of five credits; see program note 4), an Honours degree in Mathematics is recommended.
For Mathematics as the second teachable subject (a minimum of three credits; see program note 4); for example: MATH 1P01, 1P02, 1P12, 2P90, 2P93 and one-half MATH credit.
Concurrent BSc/BEd
The Department of Mathematics and the Faculty of Education co-operate in offering two Concurrent BSc (Honours)/BEd programs. The Mathematics BSc (Honours)/BEd programs combine the BSc Honours
program or BSc Integrated Studies Honours program with the teacher education program for students interested in teaching at the Intermediate/Senior level (grades 7-12) and at the Junior/Intermediate
level (grades 4-10). Refer to the Education - Concurrent BSc (Honours)/BEd (Intermediate/Senior) or Education - Concurrent BSc Integrated Studies (Honours)/BEd (Junior/Intermediate) program listings
for further information.
Certificate in Statistics
The Mathematics Department offers a program leading to a Certificate in Statistics normally for those with a degree in another discipline.
See "Certificate Requirements" under Academic Regulations.
The Certificate in Statistics is awarded upon completion of the following courses with a minimum 60 percent overall average:
· One university Calculus credit
· MATH 2P12, 2P81, 2P82, 3P81, 3P82, 3P85, 4P81 and 4P82
Concentration Program
Concentration in Applied and Computational Mathematics
Students may earn a Concentration in Applied and Computational Mathematics by successfully completing the following courses as part of the academic work leading to a BSc (Honours) in Mathematics
Integrated with Computers and Applications:
· MATH 2F40, 3P51 and 3P52
· two and one-half credits from MATH 3P04, 3P08, 3P09, 3P12, 3P60, 3P72, 3P75
· two credits from MATH 4P05, 4P07, 4P09, 4P84, 4P93, 4P94
Concentration in Mathematics Education
Students may earn a Concentration in Mathematics Education by successfully completing the following courses as part of the academic work leading to a BSc (Honours) in Mathematics Integrated with
Computers and Applications:
· MATH 2F40, 2P03, 2P08, 2P12, 2P71, 2P90, 2P93, 3P12, 3P90 and 3P91
· MATH 3P51 or 3P93
Concentration in Pure Mathematics
Students may earn a Concentration in Pure Mathematics by successfully completing the following courses as part of the academic work leading to a BSc (Honours) in Mathematics Integrated with Computers
and Applications (with the possible exception of MATH 2P72):
· MATH 2P04, 2P12, 2P13, 3P03, 3P04, 3P12 and 3P13
· MATH 2P71 (recommended) or 2P72
· one credit from MATH 3P08, 3P09, 3P51, 3P52, 3P60, 3P72, 3P97, 3P98
· two credits from MATH 4F90, 4P03, 4P11, 4P14, 4P71, 4P92, 4P93
Concentration in Statistics
Students may earn a Concentration in Statistics by successfully completing the following courses as part of the academic work leading to a BSc (Honours) in Mathematics Integrated with Computers and Applications:
· MATH 2F40, 2P81, 2P82, 3P81, 3P82, 3P85, 3P86, 4P81, 4P82, 4P84 and 4P85
Minor Program
Minor in Mathematics
Students in other disciplines may obtain a Minor in Mathematics within their degree program by completing the following courses with a minimum 60 percent average:
· MATH 1P01, 1P02, 1P12, 1P40 and 2F40
· one MATH credit numbered 2(alpha)00 or above
· one MATH credit numbered 3(alpha)00 or above
Minor Programs for Teachers
Students intending to become elementary teachers, who are in another discipline, can obtain a Minor in Elementary Teaching Mathematics within their degree program by completing the following courses
with a minimum 60 percent overall average:
· MATH 1P12, 1P66, 1P97, 1P98, 2P90, 2P93 and 3P91
· one-half MATH credit (see program note 4)
Students intending to become secondary teachers, who are in another discipline, can obtain a Minor in Secondary Teaching Mathematics within their degree program by completing the following courses
with a minimum 60 percent overall average:
· MATH 1P01, 1P02, 1P12, 1P40, 2P90 and 2P93
· two MATH credits numbered 2(alpha)00 or above
Course Descriptions
Note that not all courses are offered in every session. Refer to the applicable term timetable for details.
# Indicates a cross listed course
* Indicates primary offering of a cross listed course
Prerequisites and Restrictions
Students must check to ensure that prerequisites are met. Students may be deregistered, at the request of the instructor, from any course for which prerequisites and/or restrictions have not been met.
Introductory Statistics
Describing and comparing data sets, linear regression analysis, basic probability theory, discrete probability distributions, binomial and normal distributions, Central Limit Theorem, confidence
intervals and hypothesis tests on means and proportions, properties of t-, F- and chi-squared distributions, analysis of variance, inference on regression. Emphasis on interpretation of numerical
results for all topics. Use of Minitab.
Lectures, 3 hours per week.
Prerequisite(s): one grade 11 mathematics credit.
Note: designed for non-science majors. Not open to students with credit in any university mathematics or statistics course.
MATH 1P01
Calculus Concepts I
Differential calculus with an emphasis on concepts and the use of both theory and computers to solve problems. Precalculus topics, limits, continuity and the intermediate value theorem, derivatives
and differentiability, implicit differentiation, linear approximation, mean value theorem with proof and applications, max and min, related rates, curve sketching, l'Hospital's rule, antiderivatives,
Riemann sums, FTC with proof, integration by substitution. Use of Maple.
Lectures, 4 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): two grade 12 mathematics credits including MCV4U or permission of the instructor.
Note: open to all, but primarily intended for mathematics majors and/or future teachers. Students must successfully complete a Mathematics skills test.
Completion of this course will replace previous assigned grade and credit obtained in MATH 1P05.
MATH 1P02
Calculus Concepts II
Integral calculus emphasizing concepts, theory and computers to solve problems. Further integration techniques. Applications to areas between curves, volumes, arc length and probabilities.
Multivariable calculus: partial derivatives, optimization of functions of two variables. Sequences and series: convergence tests, Taylor and Maclaurin series and applications. Differential Equations:
direction fields, separable equations, growth and decay, the logistic equation, linear equations. Use of Maple.
Lectures, 4 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P01, 1P05 or permission of instructor.
Note: open to all, but primarily intended for mathematics majors and/or future teachers.
Completion of this course will replace previous assigned grade and credit obtained in MATH 1P06.
MATH 1P05
Applied Calculus I
Differential calculus emphasizing problem solving, calculation and applications. Precalculus topics, limits, continuity, derivatives and differentiability, implicit differentiation, linear
approximation, max and min, related rates, curve sketching, l'Hospital's rule, antiderivatives, integrals, FTC without proof, integration by substitution. Use of Maple.
Lectures, 4 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): two grade 12 mathematics credits including MCV4U or permission of the instructor.
Note: designed for students in the sciences, computer science, and future teachers. Students must successfully complete a Mathematics skills test.
Completion of this course will replace previous assigned grade and credit obtained in MATH 1P01.
MATH 1P06
Applied Calculus II
Integral calculus emphasizing problem solving, calculations and applications. Further techniques of integration. Applications to areas between curves, volumes, arc length and probabilities.
Multivariable calculus: partial derivatives, optimization of functions of two variables. Sequences and series: convergence tests, Taylor and Maclaurin series and applications. Differential Equations:
direction fields, separable equations, growth and decay, the logistic equation, linear equations. Use of Maple.
Lectures, 4 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P01 or 1P05.
Note: designed for students in the sciences, computer science, and future teachers.
Completion of this course will replace previous assigned grade and credit obtained in MATH 1P02.
MATH 1P12
Linear Algebra I
Introduction to finite dimensional real vector spaces; systems of linear equations: matrix operations and inverses, determinants. Vectors in R^2 and R^3: dot product and norm, cross product, the
geometry of lines and planes in R^3; Euclidean n-space, linear transformations from R^n to R^m, complex numbers, selected applications and use of a computer algebra system.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): two grade 12 mathematics credits or permission of instructor.
MATH 1P20
Introduction to Mathematics
Essential mathematics skills required for university mathematics courses. Sets, real and complex numbers, solutions of inequalities and equations, functions, inverse functions, composition of
functions, polynomial functions, rational functions, trigonometry, trigonometric functions, trigonometric identities, conic sections, exponential functions, logarithmic functions, polar co-ordinates,
mathematical induction, binomial theorem, vectors and matrices.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): one grade 11 mathematics credit.
Note: not open to students with credit in any university calculus course. Cannot be used toward a second teachable subject.
MATH 1P40
Mathematics Integrated with Computers and Applications I
Exploration of ideas and problems in algebra, differential equations and dynamical systems using computers. Topics include number theory, integers mod p, roots of equations, fractals, predator-prey
models and the discrete logistic equation for population growth.
Lectures, 2 hours per week; lab, 2 hours per week.
Restriction: open to MATH (single or combined), MATH (Honours)/BEd (Intermediate/Senior) majors and minors until date specified in Registration guide.
Prerequisite(s): MATH 1P01 or 1P05.
MATH 1P66
Mathematical Reasoning
Introduction to mathematical abstraction, logic and proofs including mathematical induction.
Lectures, 3 hours per week.
Prerequisite(s): one grade 12 mathematics credit.
Note: MCB4U recommended. Students may not concurrently register in MATH 2P04, 2P13 or 2P71.
Students will not receive earned credit for MATH 1P66 if MATH 2P04, 2P13 or 2P71 have been successfully completed.
MATH 1P67
Mathematics for Computer Science
Development and analysis of algorithms; complexity of algorithms; recursion and solving recurrence relations; relations and functions.
Lectures, 3 hours per week.
Prerequisite(s): MATH 1P66.
Note: designed for students in Computer Science.
MATH 1P97
Calculus With Applications
Lines, polynomials, logarithms and exponential functions; two-sided limits; rates of change using derivatives; max and min of functions using derivatives; higher derivatives and concavity; area under
a curve using integrals; optimization of functions of two variables using partial derivatives; growth and decay using differential equations; applications to many different disciplines; use of
computer algebra systems.
Lectures, 4 hours per week.
Prerequisite(s): one grade 12 mathematics credit.
Note: Designed for students in Biological Sciences, Biotechnology, Business, Earth Sciences, Economics, Environmental Geoscience, Geography and Health Sciences. Not open to students with credit in
any university calculus course.
MATH 1P98
Practical Statistics
Descriptive statistics; probability of events; counting rules; discrete and continuous probability distributions: binomial, Poisson and normal distributions; Central Limit Theorem; confidence
intervals and hypothesis testing; analysis of variance; contingency tables; correlation and regression; emphasis on real-world applications throughout; use of statistical computer software.
Lectures, 3 hours per week.
Prerequisite(s): one grade 12 mathematics credit or MATH 1P20.
Note: designed for students in Biological Sciences, Biotechnology, Business, Earth Sciences, Economics, Environmental Geoscience and Health Sciences. Not open to students with credit in any
university statistics course.
MATH 2F05
Applied Advanced Calculus
First and second order differential equations, vector functions, curves, surfaces; tangent lines and tangent planes, linear approximations, local extrema; cylindrical and spherical co-ordinates;
gradient, divergence, curl; double and triple integrals, line and surface integrals; Green's theorem, Stokes' theorem, Gauss' theorem; elementary complex analysis. Emphasis on applications to
physical sciences. Use of Maple.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P02 or 1P06.
Students will not receive earned credit in MATH 2F05 if MATH 2P03 has been successfully completed.
MATH 2F40
Mathematics Integrated with Computers and Applications II
Theory and application of mathematical models; discrete dynamical systems; time series and their application to the prediction of weather and sunspots; Markov chains; empirical models using
interpolation and regression; continuous stochastic models; simulation of normal, exponential and chi-square random variables; queuing models and simulations, use of a computer algebra system.
Lectures, lab, 4 hours per week.
Prerequisite(s): MATH 1P02 and 1P40 or permission of the instructor.
MATH 2P03
Multivariate and Vector Calculus
Multivariable integration, polar, cylindrical and spherical coordinates, vector algebra, parameterized curves and surfaces, vector calculus, arc length, curvature and torsion, projectile and
planetary motion, line integrals, vector fields, Green's theorem, Stokes' theorem, the use of computer algebra systems to manipulate vectors, plot surfaces and curves, determine line integrals and
analyze vector fields.
Lectures, 3 hours per week, lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P02, 1P06 or permission of the instructor.
MATH 2P04
Basic Concepts of Analysis
Sets; mappings; countability; properties of the real number system; inner product, norm, and the Cauchy-Schwarz inequality; compactness and basic compactness theorems (Cantor's theorem, the
Bolzano-Weierstrass theorem, the Heine-Borel theorem); connectedness; convergence of sequences; Cauchy sequences; continuous and uniformly continuous functions.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P03.
MATH 2P08
Ordinary Differential Equations
Linear and nonlinear differential equations. Basic existence and uniqueness theory. Analytical and numerical solution methods; asymptotic behaviour. Qualitative analysis of autonomous systems
including periodic cycles and steady-states. Examples of conservative systems and dissipative systems. Modelling and applications of differential equations. Use of Maple.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P02, 1P06 or permission of the instructor.
MATH 2P12
Linear Algebra II
Finite dimensional real vector spaces and inner product spaces; matrix and linear transformations; eigenvalues and eigenvectors; the characteristic equation and roots of polynomials; diagonalization;
complex vector spaces and inner product spaces; selected applications; use of a computer algebra system.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P12.
MATH 2P13
Abstract Linear Algebra
Vector spaces over fields; linear transformations; diagonalization and the Cayley-Hamilton theorem; Jordan canonical form; linear operators on inner product spaces; the spectral theorem; bilinear and
quadratic forms.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P12.
MATH 2P52
Principles of Mathematics for Primary and Junior Teachers
Mathematical concepts and ideas in number systems; geometry and probability arising in the Primary and Junior school curriculum.
Lectures, seminar, 4 hours per week.
Restriction: students must have a minimum of 5.0 overall credits.
Note: designed to meet the mathematics admission requirement for the Primary/Junior Pre-service program of the Faculty of Education at Brock University. Not open to students holding credit in any
grade 12 or university mathematics course.
MATH 2P71
Introduction to Combinatorics
Counting, inclusion and exclusion, pigeonhole principle, permutations and combinations, derangements, binomial expansions, introduction to discrete probability; introduction to graph theory: Eulerian
graphs, Hamilton cycles, colouring, planarity, trees.
Lectures, 3 hours per week; tutorial, 1 hour per week.
Prerequisite(s): two 4U/M mathematics credits or permission of the instructor.
MATH 2P72
Discrete Optimization
Problems and methods in discrete optimization. Linear programming: problem formulation, the simplex method, software, and applications. Network models: assignment problems, max-flow problem. Directed
graphs: topological sorting, dynamic programming and path problems, and the travelling salesman's problem. General graphs: Eulerian and Hamiltonian paths and circuits, and matchings.
Lectures, 3 hours per week; lab, 1 hour per week.
Prerequisite(s): MATH 1P12.
MATH 2P75
Introductory Financial Mathematics
Applications of mathematics to financial markets. Models for option pricing, rates of interest, price/yield, pricing contracts and futures, arbitrage-free conditions, market risk, hedging and
sensitivities, volatility; stock process as random walks and Brownian motions; Black-Scholes formula; finite difference methods and VaR.
Lectures, lab, 4 hours per week.
Prerequisite(s): MATH 1P97 and 1P98.
MATH 2P81
Probability, events, algebra of sets, independence, conditional probability, Bayes' theorem; random variables and their univariate, multivariate, marginal and conditional distributions. Expected
value of a random variable, the mean, variance and higher moments, moment generating function, Chebyshev's theorem. Some common discrete and continuous distributions: Binomial, Poisson,
hypergeometric, normal, uniform and exponential. Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P03 or permission of the instructor.
Note: may be taken concurrently with MATH 2P03.
MATH 2P82
Mathematical Statistics I
Transforming random variables, central limit theorem, law of large numbers. Random sample; sample mean and variance. Sampling from normal population: chi-square, t and F distributions, sample median
and order statistics. Point and interval estimation of population parameters: method of moments, maximum-likelihood technique, consistent, unbiased and efficient estimators, confidence intervals.
Hypotheses testing: type I and II errors, most powerful tests. Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P81.
MATH 2P90
Euclidean and Non-Euclidean Geometry I
The development of Euclidean and non-Euclidean geometry from Euclid to the 19th century. The deductive nature of plane Euclidean geometry as an axiomatic system, the central role of the parallel
postulate and the general consideration of axiomatic systems for geometry in general and non-Euclidean geometry in particular. Introduction to transformation geometry. Use of Geometer's Sketchpad.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): one MATH credit.
Completion of this course will replace previous assigned grade and credit obtained in MATH 2P50.
MATH 2P93
Great Moments in Mathematics I
Triumphs in mathematical thinking emphasizing many cultures up to 1000 AD. Special attention is given to analytical understanding of mathematical problems from the past, with reference to the stories
and times behind the people who solved them. Students will be encouraged to match wits with great mathematicians by solving problems and developing activities related to their discoveries.
Lectures, 4 hours per week.
Prerequisite(s): one MATH credit.
Completion of this course will replace previous assigned grade and credit obtained in MATH 2P51.
MATH 2P95
Mathematics and Music
Scales and temperaments, history of the connections between mathematics and music, set theory in atonal music, group theory applied to composition and analysis, enumeration of rhythmic canons,
measurement of melodic similarity using metrics, topics in mathematical music theory, applications of statistics to composition and analysis.
Lectures, 3 hours per week; lab/tutorial 1 hour per week.
Prerequisite(s): one of MATH 1P01, 1P02, 1P05, 1P06, 1P97; MATH 1P12 or permission of the instructor.
Completion of this course will replace previous assigned grade and credit obtained in MATH 2P31.
MATH 2P98
Applied Statistics
Single-factor and factorial experimental design methods; nested-factorial experiments. Simple and multiple linear regression methods, correlation analysis, indicator regression; regression model
building and transformations. Contingency tables, binomial tests, nonparametric rank tests. Simple random and stratified sampling techniques, estimation of sample size and related topics. Use of SAS,
Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1F92 or 1P98.
MATH 3F65
Mathematical Methods for Computer Science
Applied probability, Markov chains, Poisson and exponential processes, renewal theory, queuing theory, applied differential equations. Networks, graph theory, reliability theory, NP-completeness.
Lectures, 3 hours per week.
Prerequisite(s): MATH 1P01 or 1P97; MATH 1P12, 1P66 and 1P67.
MATH 3P03
Real Analysis
Approximation of functions by algebraic and trigonometric polynomials (Taylor and Fourier series); Weierstrass approximation theorem; Riemann integral of functions on R^n, the Riemann-Stieltjes
integral on R; improper integrals; Fourier transforms.
Lectures, 3 hours per week; tutorial, 1 hour per week.
Prerequisite(s): MATH 2P04.
MATH 3P04
Complex Analysis
Algebra and geometry of complex numbers, complex functions and their derivatives; analytic functions; harmonic functions; complex exponential and trigonometric functions and their inverses; contour
integration; Cauchy's theorem and its consequences; Taylor and Laurent series; residues.
Lectures, 3 hours per week; tutorial, 1 hour per week.
Prerequisite(s): MATH 2F05 or 2P03.
MATH 3P08
Advanced Differential Equations
Linear second-order differential equations. Integral transform methods. Series solutions and special functions (Gamma, Bessel, Legendre). Boundary value problems, and introduction to Sturm-Liouville
theory and series expansions by orthogonal functions. Emphasis on applications to physical sciences. Use of Maple.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2F05 or 2P08.
MATH 3P09
Linear Partial Differential Equations and Boundary Value Problems
Second-order linear partial differential equations; initial and boundary value problems for the heat equation, wave equation, and Laplace equation. Fourier series, cylindrical and spherical harmonic
series. Eigenfunction problems and normal modes. Emphasis on applications to physical sciences. Use of Maple.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2F05 or 2P08.
MATH 3P12
Applied Algebra
Group theory with applications. Topics include modular arithmetic, symmetry groups and the dihedral groups, subgroups, cyclic groups, permutation groups, group isomorphism, frieze and
crystallographic groups, Burnside's theorem, cosets and Lagrange's theorem, direct products and cryptography.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P12 or permission of the instructor.
MATH 3P13
Abstract Algebra
Further topics in group theory: normal subgroups and factor groups, homomorphisms and isomorphism theorems, structure of finite abelian groups. Rings and ideals; polynomial rings; quotient rings.
Division rings and fields; field extensions; finite fields; constructability.
Lectures, 3 hours per week; lab/tutorial 1 hour per week.
Prerequisite(s): MATH 3P12.
MATH 3P51
Applied Mathematics with Maple
Blending mathematical concepts with computations and visualization in Maple. Modelling of physical flows, waves and vibrations. Animation of the heat equation and wave equation; applications
including vibrations of rectangular and circular drums, heat flow and diffusion, sound waves. Eigenfunctions and convergence theorems for Fourier eigenfunction series. Approximations, Gibbs
phenomena, and asymptotic error analysis using Maple.
Lectures, lab, 4 hours per week.
Prerequisite(s): MATH 2F40 and 2P03.
Completion of this course will replace previous assigned grade and credit obtained in MATH 3F40.
MATH 3P52
Partial Differential Equations in C++
Analytic solution of first order PDEs (characteristic ODE systems and their analytic solution) and the numerical solution of first and second order PDEs (discretization, derivation and comparison of
different finite difference equations, stability analysis, boundary conditions), the syntax of the C++ programming language, projects in C++ solving PDEs numerically.
Lectures, lab, 4 hours per week.
Prerequisite(s): MATH 2F40 and 2P03.
Completion of this course will replace previous assigned grade and credit obtained in MATH 3F40.
MATH 3P60
Numerical Methods
Survey of computational methods and algorithms; basic concepts (algorithm, computational cost, convergence, stability); roots of functions; linear systems; numerical integration and differentiation;
Runge-Kutta method for ordinary differential equations; finite-difference method for partial differential equations; fast Fourier transform; Monte Carlo methods. Implementation of numerical
algorithms in a scientific programming language.
Lectures, 3 hours per week; lab, 1 hour per week.
Prerequisite(s): MATH 1P02 and 1P12 or permission of the instructor.
MATH 3P72
Continuous Optimization
Problems and methods in non-linear optimization. Classical optimization in R^n: inequality constraints, Lagrangian, duality, convexity. Non-linear programming. Search methods for unconstrained
optimization. Gradient methods for unconstrained optimization. Constrained optimization. Dynamic programming.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2F05 or 2P03; MATH 2P72 (2P60).
MATH 3P73
Game Theory
(also offered as ECON 3P73)
Representation of Games. Strategies and payoff functions. Static and dynamic games of complete or incomplete information. Equilibria concepts: Nash, Bayesian Nash and Perfect Bayesian Nash
equilibria. Convexity concepts, fixed points for correspondences and minimax. Core and Shapley value of a game. Refinements and applications.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P72 or ECON 3P91.
MATH 3P75
Theory of Financial Mathematics
Probability, Brownian motion, martingales, Markov processes, differential equations, finite difference and tree models used in financial mathematics of options; stocks; one-dimensional Ito
processes, Black-Scholes for both constant and non-constant inputs, continuous time hedging, valuing American and exotic options.
Lectures, lab, 4 hours per week.
Prerequisite(s): MATH 1P12 and 2P82; MATH 2F05 or MATH 2P03 and 2P08.
MATH 3P81
Experimental Design
Analysis of variance; single-factor experiments; randomized block designs; Latin squares designs; factorial designs; 2^f and 3^f factorial experiments; fixed, random and mixed models; nested and
nested-factorial experiments; Taguchi experiments; split-plot and confounded in blocks factorial designs; factorial replication; regression models; computational techniques and use of SAS, Maple or
other statistical packages; related topics.
Lectures, 3 hours per week; lab, 1 hour per week.
Prerequisite(s): MATH 2P82.
MATH 3P82
Regression Analysis
Simple and multiple linear regression and correlation, measures of model adequacy, residual analysis, weighted least squares, polynomial regression, indicator variables, variable selection and model
building, multicollinearity, time series, selected topics. Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P12 and 2P82 or permission of the instructor.
MATH 3P85
Mathematical Statistics II
Review of distributional theory. Convergence types. Some special and limiting distributions. Review of point and interval estimations. Efficiency, sufficiency, robustness and completeness. Bayesian
estimations, credible intervals, prediction intervals. Basic theory of hypotheses testing: Neyman-Pearson lemma, likelihood ratio test, chi-square test, test of stochastic independence. Normal
models: quadratic forms, noncentral chi-square and noncentral F distributions. Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P82.
MATH 3P86
Applied Multivariate Statistics
Matrix algebra and random vector, sample geometry and random sampling, multivariate normal distribution, inference about mean, comparison of several multivariate means, multivariate linear regression
model, principle components, factor analysis, covariance analysis, canonical correlation analysis, discrimination and classification, cluster analysis, computational techniques and use of SAS, Maple
or other statistical packages and related topics.
Lectures, 3 hours per week; lab 1 hour per week.
Prerequisite(s): MATH 2P12 and 2P82 or permission of the instructor.
MATH 3P90
Euclidean and Non-Euclidean Geometry II
Topics in Euclidean and non-Euclidean geometry chosen from the classification of isometries in selected geometries, projective geometry, finite geometries and axiomatic systems for plane Euclidean
geometry.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 1P12 and 2P90 (2P50).
Completion of this course will replace previous assigned grade and credit obtained in MATH 3P50.
MATH 3P91
Mathematics at the Junior/Intermediate/Senior Level
A treatment of mathematics and its teaching and learning at the junior, intermediate and senior levels. A major portion of the course will involve directed projects.
Lectures, seminar, 4 hours per week.
Restriction: open to MATH (Honours) BSc/BEd(Intermediate/Senior), BA (Honours)/BEd (Junior/Intermediate), BSc (Honours)/BEd (Junior/Intermediate) and students in minor programs for teachers with a
minimum of 9.0 overall credits.
Prerequisite(s): three MATH credits.
MATH 3P93
Great Moments in Mathematics II
The development of modern mathematics from medieval times to the present. The course includes Fibonacci's calculation revolution, the disputes over cubic equations, Pascal and probability, Fermat's
last theorem, Newton and Calculus, Euler and infinite series, set theory and the possibilities of inconsistencies in mathematics.
Lectures, 4 hours per week.
Prerequisite(s): MATH 1P02, 1P12 and 2P93.
Completion of this course will replace previous assigned grade and credit obtained in MATH 3P51.
MATH 3P97
Introductory Topology
Introduction to metric and topological spaces; connectedness, completeness, countability axioms, separation properties, covering properties, metrization of topological spaces.
Lectures, 4 hours per week.
Prerequisite(s): MATH 2P04; MATH 2P12 and 2P13 or MATH 3P12 and 3P13.
MATH 3P98
Functional Analysis
Introduction to the theory of normed linear spaces, fixed-point theorems, Stone-Weierstrass approximation on metric spaces and preliminary applications on sequence spaces.
Lectures, 4 hours per week.
Prerequisite(s): MATH 3P97.
MATH 4F90
Honours Project
Independent project in an area of pure or applied mathematics, or mathematics education.
Restriction: open to MATH (single or combined) majors with either a minimum of 14.0 credits, a minimum 70 percent major average and a minimum 60 percent non-major average or approval to year 4
(honours) and permission of the instructor.
Note: carried out under the supervision of a faculty member. The supervisor must approve the topic in advance. Presentation of the project is required.
MATH 4P03
Advanced Real Analysis
Lebesgue integration on R^n; differentiation and absolute continuity; Fubini's theorem; L^p spaces, elementary theory of Banach and Hilbert spaces.
Lectures, 3 hours per week.
Prerequisite(s): MATH 3P03.
MATH 4P05
Introduction to Wavelets
Wavelets as an orthonormal basis for R^n, localized in space and frequency; wavelets on the real line; image compression (fingerprint files); wavelet-Galerkin numerical solution of differential
equations with variable coefficients.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2P08, 2P12 and 3P03.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4P04.
MATH 4P07
Topics in Differential Equations
Topics may include ordinary differential equations: existence and uniqueness theory, strange attractors, chaos, singularities. Partial differential equations: Cauchy-Kovalevski theorem,
well-posedness of classical linear heat equation and wave equation, weak solutions, global existence, uniqueness and asymptotic behaviour.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P08.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4F08.
MATH 4P09
Solitons and Nonlinear Wave Equations
(also offered as PHYS 4P09)
Introduction to solitons. Travelling waves, nonlinear wave equations and evolution equations (Korteweg-de Vries, Boussinesq, nonlinear Schrödinger, sine-Gordon). Soliton solutions and their
interaction properties, Lax pairs and construction of single and multisoliton solutions.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): one of MATH 3P09, 3P51, 3P52.
MATH 4P11
Topics in Groups
Advanced topics from group theory. Topics may include the Sylow theorems, free groups, nilpotent and solvable groups and some simple Lie groups.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P13.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4F10.
MATH 4P13
Topics in Rings and Modules
Advanced topics from ring theory. Topics may include radicals, Wedderburn-Artin theorems, modules over rings and some special rings.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P13.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4F10.
MATH 4P14
Advanced Mathematical Structures
Topics may include modules, homological algebra, group algebra, algebraic geometry, lattice theory, graph theory and logic.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P13 or permission of the Department.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4F10 or 4P12.
MATH 4P61
Theory of Computation
(also offered as COSC 4P61)
Regular languages and finite state machines: deterministic and non-deterministic machines, Kleene's theorem, the pumping lemma, Myhill-Nerode Theorem and decidable questions. Context-free languages:
generation by context-free grammars and acceptance by pushdown automata, pumping lemma, closure properties, decidability. Turing machines: recursively enumerable languages, universal Turing machines,
halting problem and other undecidable questions.
Lectures, 3 hours per week.
Restriction: open to COSC (single or combined) majors.
Prerequisite(s): MATH 1P67.
Note: MATH students may take this course with permission of Department.
MATH 4P64
Introduction to Mathematical Physics
(also offered as PHYS 4P64)
Calculus of variations, least action principle in physics, symmetries and conservation laws, differential-geometric structures (differential form, vector field, Riemannian metric). Applications to
physics: electro-magnetic field as a one-form, gravity as a pseudo-Riemannian metric. Introduction to mathematical ideas of quantum mechanics.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 2F05, or MATH 2P03 and 2P08; MATH 2P12.
MATH 4P71
Review of basic enumeration including distribution problems, inclusion-exclusion and generating functions. Polya theory. Finite fields. Orthogonal Latin squares, affine and projective planes. Coding
theory and cryptography.
Lectures, 3 hours per week; tutorial, 1 hour per week.
Prerequisite(s): MATH 2P71 or permission of the instructor.
MATH 4P81
Sampling Theory
Theory of finite population sampling; simple random sampling; sampling proportion; estimation of sample size; stratified random sampling; optimal allocation of sample sizes; ratio estimators;
regression estimators; systematic and cluster sampling; multi-stage sampling; errors in surveys; computational techniques and use of SAS, Maple or other statistical packages and related topics.
Lectures, 3 hours per week; lab, 1 hour per week.
Prerequisite(s): MATH 3P85 or permission of the instructor.
MATH 4P82
Nonparametric Statistics
Order statistics, rank statistics, methods based on the binomial distribution, contingency tables, Kolmogorov-Smirnov statistics, nonparametric analysis of variance, nonparametric regression,
comparisons with parametric methods. Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P85 or permission of the instructor.
MATH 4P84
Topics in Stochastic Processes and Models
Topics may include general stochastic processes, Markov chains and processes, renewal process, branching theory, stationary processes, stochastic models, Monte Carlo simulations and related topics.
Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P85 or permission of the instructor.
MATH 4P85
Topics in Advanced Statistics
Topics may include advanced topics in stochastic processes and models, queueing theory, time series analysis, multivariate analysis, Bayesian statistics, advanced methods and theory in statistical
inference, and related topics. Use of SAS, Maple or other statistical packages.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P85 or permission of the instructor.
MATH 4P92
Topics in Number Theory and Cryptography
Topics may include algebraic number theory, analytic number theory and cryptography.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Restriction: permission of the Department.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4F91.
MATH 4P93
Topics in Topology and Dynamical Systems
Topics may include point set topology, differential geometry, algebraic topology and dynamical systems.
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): MATH 3P97 or permission of the Department.
Completion of this course will replace previous assigned grade and credit obtained in MATH 4F91.
MATH 4P94
Relativity Theory and Black Holes
(also offered as PHYS 4P94)
Review of Special Relativity and Minkowski space-time. Introduction to General Relativity theory; the space-time metric, geodesics, light cones, horizons, asymptotic flatness; energy-momentum of
particles and light rays. Curvature and field equations. Static black holes (Schwarzschild metric), properties of light rays and particle orbits. Rotating black holes (Kerr metric).
Lectures, 3 hours per week; lab/tutorial, 1 hour per week.
Prerequisite(s): one of MATH 2F05, MATH 2P03 and 2P08, PHYS 2P20 and 2P50 or permission of the instructor.
MATH 4P96
Technology and Mathematics Education
Topics may include contemporary research concerning digital technologies, such as computer algebra systems and Web 2.0, in learning and teaching mathematics, design of educational tools using VB.NET,
HTML, Geometer's Sketchpad, Maple, Flash, etc., critical appraisal of interactive learning objects in mathematics education.
Lectures, 2 hours per week; lab/tutorial, 2 hours per week.
Prerequisite(s): MATH 2F40 or permission of the instructor.
MATH 0N01
Co-op Work Placement I
First co-op work placement (4 months) with an approved employer.
Restriction: open to MATH and MICA Co-op students.
MATH 0N02
Co-op Work Placement II
Second co-op work placement (4 months) with an approved employer.
Restriction: open to MATH and MICA Co-op students.
MATH 0N03
Co-op Work Placement III
Third co-op work placement (4 months) with an approved employer.
Restriction: open to MATH and MICA Co-op students.
MATH 0N04
Co-op Work Placement IV
Optional co-op work placement (4 months) with an approved employer.
Restriction: open to MATH and MICA Co-op students.
MATH 0N05
Co-op Work Placement V
Optional co-op work placement (4 months) with an approved employer.
Restriction: open to MATH and MICA Co-op students.
MATH 2C01
Co-op Reflective Learning and Integration I
Provide students with the opportunity to apply what they've learned in their academic studies through career-oriented work experiences at employer sites.
Restriction: open to MATH and MICA Co-op students.
Prerequisite(s): SCIE 0N90.
Corequisite(s): MATH 0N01.
Note: students will be required to prepare learning objectives, participate in a site visit, write a work term report and receive a successful work term performance evaluation.
MATH 2C02
Co-op Reflective Learning and Integration II
Provide students with the opportunity to apply what they've learned in their academic studies through career-oriented work experiences at employer sites.
Restriction: open to MATH and MICA Co-op students.
Prerequisite(s): SCIE 0N90.
Corequisite(s): MATH 0N02.
Note: students will be required to prepare learning objectives, participate in a site visit, write a work term report and receive a successful work term performance evaluation.
MATH 2C03
Co-op Reflective Learning and Integration III
Provide students with the opportunity to apply what they've learned in their academic studies through career-oriented work experiences at employer sites.
Restriction: open to MATH and MICA Co-op students.
Prerequisite(s): SCIE 0N90.
Corequisite(s): MATH 0N03.
Note: students will be required to prepare learning objectives, participate in a site visit, write a work term report and receive a successful work term performance evaluation.
MATH 2C04
Co-op Reflective Learning and Integration IV
Provide students with the opportunity to apply what they've learned in their academic studies through career-oriented work experiences at employer sites.
Restriction: open to MATH and MICA Co-op students.
Prerequisite(s): SCIE 0N90.
Corequisite(s): MATH 0N04.
Note: students will be required to prepare learning objectives, participate in a site visit, write a work term report and receive a successful work term performance evaluation.
MATH 2C05
Co-op Reflective Learning and Integration V
Provide students with the opportunity to apply what they've learned in their academic studies through career-oriented work experiences at employer sites.
Restriction: open to MATH and MICA Co-op students.
Prerequisite(s): SCIE 0N90.
Corequisite(s): MATH 0N05.
Note: students will be required to prepare learning objectives, participate in a site visit, write a work term report and receive a successful work term performance evaluation.
Chapter 6 - AES
In the AddRoundKey transformation the 128 bits of State are bitwise XORed with the _________ of the round key.
128 bits
In the general structure of the AES encryption process the input to the encryption and decryption algorithms is a single _________ block.
A more efficient implementation can be achieved for a 32-bit processor if operations are defined on _________ words.
The AES key expansion algorithm takes as input a four-word (16-byte) key and produces a linear array of __________ words (176 bytes).
In Advanced Encryption Standard all operations are performed on __________ bytes.
In the AES structure both encryption and decryption ciphers begin with a(n) __________ stage, followed by nine rounds that each include all four stages, followed by a tenth round of three stages.
The AES cipher begins and ends with a(n) _________ stage because any other stage, applied at the beginning or end, is reversible without knowledge of the key and would add no security.
The standard decryption round has the structure InvShiftRows, InvSubBytes, __________, InvMixColumns.
__________ is a block cipher intended to replace DES for commercial applications. It uses a 128-bit block size and a key size of 128, 192, or 256 bits.
Advanced Encryption Standard
An example of a finite field is the set Zp consisting of all the integers {0, 1, . . . , p - 1}, where p is a __________ and in which arithmetic is carried out modulo p.
Finite Field Arithmetic
In AES, the arithmetic operations of addition, multiplication and division are performed over the finite field _________ .
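As an illustration of the byte arithmetic behind AES (a sketch of our own, not part of the quiz material, assuming the standard AES reduction polynomial x^8 + x^4 + x^3 + x + 1, i.e. 0x1B after the shift), multiplication of two bytes in this field can be written as:

```python
def gf_mul(a, b):
    """Multiply two bytes in the AES finite field GF(2^8)."""
    product = 0
    for _ in range(8):
        if b & 1:              # if the low bit of b is set, add (XOR) a
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF    # multiply a by x
        if carry:
            a ^= 0x1B          # reduce modulo x^8 + x^4 + x^3 + x + 1
        b >>= 1
    return product
```

For example, gf_mul(0x57, 0x83) yields 0xC1, the worked example given in the AES specification.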
___________ affects the sequence of bytes in State but does not alter byte contents and does not depend on byte contents to perform its transformation.
__________ affects the contents of bytes in State but does not alter byte sequence and does not depend on byte sequence to perform its transformation.
The Advanced Encryption Standard was published by the __________ in 2001.
The National Institute of Standards and Technology chose the __________ design as the winning candidate for AES.
The first row of State is not altered; for the second row a 1-byte circular left shift is performed; for the third row a 2-byte circular left shift is performed; and for the fourth row a 3-byte
circular left shift is performed. This transformation is called __________ .
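The row rotations described above can be sketched in a few lines (an illustrative sketch only; State is assumed here to be a 4x4 list of rows):

```python
def shift_rows(state):
    """Rotate row r of the 4x4 State left by r bytes (row 0 is unchanged)."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]
```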
The encryption round has the structure:
SubBytes, ShiftRows, MixColumns, AddRoundKey
The __________ is when a small change in plaintext or key produces a large change in the ciphertext.
avalanche effect
The four separate functions of the Advanced Encryption Standard are: permutation, arithmetic operations over a finite field, XOR with a key, and __________ .
byte substitution
AES uses a Feistel structure.
As with any block cipher, AES can be used to construct a message authentication code, and for this, only decryption is used.
DES is a block cipher intended to replace AES for commercial applications.
In the Advanced Encryption Standard the decryption algorithm is identical to the encryption algorithm.
InvSubBytes is the inverse of ShiftRows.
The transformations AddRoundKey and InvMixColumn alter the sequence of bytes in State.
A __________ is a set in which you can do addition, subtraction, multiplication and division without leaving the set.
A polynomial m(x) is called __________ if and only if m(x) cannot be expressed as a product of two polynomials, both of degree lower than that of m(x).
The cipher consists of N rounds, where the number of rounds depends on the __________ .
key length
The first N - 1 rounds consist of four distinct transformation functions: SubBytes, ShiftRows, AddRoundKey, and __________ .
mix columns
Division requires that each nonzero element have a(n) __________ inverse.
The _________ transformation operates on each column individually. Each byte of a column is mapped into a new value that is a function of all four bytes in that column.
The mix column transformation combined with the __________ transformation ensures that after a few rounds all output bits depend on all input bits.
The forward substitute byte transformation, called _________ , is a simple table lookup.
sub bytes
AES processes the entire data block as a single matrix during each round using __________ and permutation.
The final round of both encryption and decryption of the AES structure consists of __________ stages.
AES can be implemented very efficiently on an 8-bit processor.
At each horizontal point, State is the same for both encryption and decryption.
Compared to public-key ciphers such as RSA, the structure of AES and most symmetric ciphers is quite complex and cannot be explained as easily as many other cryptographic algorithms.
The Rijndael developers designed the expansion key algorithm to be resistant to known cryptanalytic attacks.
The S-box is designed to be resistant to known cryptanalytic attacks.
The inverse add round key transformation is identical to the forward add round key transformation because the XOR operation is its own inverse.
The nonlinearity of the S-box is due to the use of the multiplicative inverse.
The ordering of bytes within a matrix is by column.
Virtually all encryption algorithms, both conventional and public-key, involve arithmetic operations on integers.
Roulette Odds & Probability
Roulette is one of the oldest casino games, with its origins dating back more than a millennium. Today, gamblers can use dozens of betting systems, strategies, and even cheating devices that promise
long-term success if used correctly. Still, there are only two simple concepts they need to grasp in order to improve their play and increase their chances of winning: odds and casino advantage.
One of the differences between games of chance and games of skill is that players’ decisions can significantly affect the outcome in games of skill, but not in games of chance. The game of roulette,
for example, is a game of pure chance where the outcome is completely random.
Once you understand how roulette works, you can learn to play the game well. You just need to know the odds and figure out how likely any given bet is to win or lose. Casinos make money because winning bets are paid out at slightly less than their true odds.
So, what is probability and does it differ from the odds offered to players at the roulette table? These are important questions that any gambler should ask before sitting down to play.
Probability and True Odds
Even though the terms “probability” and “odds” are often used interchangeably by some casino players, there is actually a clear difference between them.
Players should know that roulette wheels produce random results with every spin and that each number on the wheel has an equal chance of coming up. Any individual bet has only two possible outcomes – win or lose.
The probability of an event is the likelihood that it will occur compared to all possible outcomes. The probability always falls between 0 and 1, with 0 meaning impossible and 1 meaning certain. For example, a fair coin flip has a probability of 0.50 for each side; on a single-zero wheel with 37 numbers, a bet on one number has a 1/37 chance of winning.
The probability of the ball falling on 17 in the next spin is 2.70%. Most people find it easier to understand this as a percentage, so we multiply 0.027 by 100 and get 2.70.
True Odds in Roulette
Now that we have defined probability, let's see how it compares to the notion of odds. Odds also describe the chance of an event occurring, but they compare the number of ways it can occur to the number of ways it cannot occur. For a straight-up bet on a single-zero wheel there is 1 way to win and 36 ways to lose, so the odds of winning are 1 to 36.
Some bets in roulette have odds that are expressed in reverse. For example, the odds against winning a bet on number 17 are 36:1–or 36 to one. These odds are known as “true odds” to distinguish them
from what some players refer to as “casino odds,” which do not include any advantage for the casino.
If we bet on red in roulette, the probability of winning is 18 out of 37 (48.65%), while the odds against us are 19:18 because there are 19 ways to lose and only 18 to win.
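These figures are easy to verify. The sketch below (our own helper, assuming a standard single-zero wheel of 37 slots) computes the probability of winning and the odds against for any bet:

```python
from fractions import Fraction

def bet_stats(ways_to_win, total_slots=37):
    """Return (probability of winning, odds against) for a roulette bet."""
    p = Fraction(ways_to_win, total_slots)
    odds_against = (total_slots - ways_to_win, ways_to_win)  # ways to lose : ways to win
    return p, odds_against

p_red, odds_red = bet_stats(18)   # red covers 18 of the 37 slots
p_17, odds_17 = bet_stats(1)      # straight-up bet on a single number
```

Here p_red is 18/37 (about 48.65%) with odds against of 19:18, and p_17 is 1/37 (about 2.70%) with odds against of 36:1, matching the numbers above.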
Casino Odds
When determining whether a roulette bet has good or bad odds, experienced players take into account not only their probability of winning and the true odds but also the potential reward they could
bring. Each bet pays out differently, depending on its likelihood of winning – the less likely a given outcome is to occur, the more its potential payout would be.
To match the risk of this bet, the casino would be expected to return the original stake and pay out winnings worth 36 times the amount of the bet.
Casinos give slightly lower payouts than they should on winning straight-up bets. In this case, it’s a 1-unit difference. But over time, the tiny difference adds up to make a large profit for the
house and gives casinos a guaranteed income in any possible scenario.
Casino odds are normally expressed in terms of a ratio of two numbers. The advantage that casinos have over players is a little more complicated than this, however; it depends on the bet being made.
The better the casino odds are at representing the true chance of winning (or losing) on a given bet, the less of an advantage they have over players.
Roulette House Edge Explained
Casinos pay out less than the odds would suggest. Casinos are in the business of making money and they gain that advantage by paying out less than what is due to players. This difference is called
the house advantage and can be demonstrated with the following example: We bet $1 on the number 17, win the bet, and are paid out $35 instead of receiving our original stake back plus $36.
In roulette, the player bets on which slot the ball will land in, and the casino gains its edge by paying winning bets at less than true odds. A single-zero wheel has 37 slots (the numbers 0 through 36), one of which is the zero; a double-zero wheel has 38 slots (0 through 36 plus 00), two of which are zeros. The house edge for single-zero roulette is 2.70%, and for double-zero roulette it is 5.26%. There are several formulas for calculating the house edge, but probably the simplest one is the following:
The house edge is the difference between the true odds and the casino's payout odds, multiplied by the probability of winning.
If we substitute our earlier numbers for the straight bet, we get the following: (36/1 − 35/1) × 1/37 = 1 × 1/37 ≈ 0.027, or 2.70%. This means that the house edge is 2.70%. Several other formulas exist for calculating the house edge, but they all lead to the same conclusion about this percentage.
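The formula can be checked directly. In the sketch below (our own helper, assuming the usual 35-to-1 payout on a straight-up bet), the edge is the gap between true odds and payout odds, weighted by the probability of winning:

```python
from fractions import Fraction

def house_edge(true_odds, payout_odds, p_win):
    """House edge = (true odds - payout odds) * probability of winning."""
    return (true_odds - payout_odds) * p_win

single_zero = house_edge(36, 35, Fraction(1, 37))  # European wheel: 2.70%
double_zero = house_edge(37, 35, Fraction(1, 38))  # American wheel: 5.26%
```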
There is a house edge of 2.70% on single-zero roulette and players can expect to lose $2.70 for every $100 wagered. Of course, this is a theoretical ratio between the stake and the expected loss but
things could be very different in real life. If we place a $100 chip on red, we can either lose or win its entire value.
The house edge, or the advantage that a casino has over its players, can also be expressed through a game's expected return. Under the French "la partage" rule, where half the stake on even-money bets is returned when zero hits, the house edge on those bets drops to 1.35%. In practice, this means that if you play such a game long enough and bet exclusively on even-money bets (red or black, odd or even), you can expect to lose about 1.35% of your total wagers. You might win or lose hundreds of dollars during one gaming session – but if you stick to these kinds of bets, you are less likely to lose your entire bankroll over time.
Beating the Odds in Roulette
The belief that there is a guaranteed method of winning at roulette is common, and equally misleading. There are countless guides, books, and websites dedicated to convincing people that there is a guaranteed way to beat the odds. In fact, various betting systems have been developed over the years, some of which are inaccurate and claim to help players exploit imperfections in physical roulette wheels.
Over the years, gamblers have used a variety of methods to try to beat the house in the long run. However, these techniques have proven inefficient at best and misleading at worst; they are not as effective as many gamblers believe them to be. The reason for this is simple: roulette is a game of chance with fixed odds that cannot be changed by even the most advanced strategies. As explained above, every spin of the wheel is random, so the casino's edge remains the same on every spin – which is why the house always wins in the end.
Roulette Strategies
Roulette strategies are betting systems that revolve around the idea of gradually increasing or decreasing the amount wagered after a certain outcome. One famous example is the Martingale, which
suggests that you increase your bet after every loss, hoping that one winning bet will compensate for all of your previous losses. There are also roulette strategies where the amount wagered remains
constant throughout the entire game session.
While betting progressions do not guarantee you will win, certain strategies claim to improve your chances of winning. One such method involves covering much of the table. However, this strategy will
be too costly for most players, especially after a few losing spins.
If you are not prepared to lose your entire bankroll, then you should avoid placing neighbor bets or any other type of announced bet. In addition, even the best betting systems and roulette
strategies cannot aid you in overcoming the built-in casino advantage. In conclusion, betting systems and roulette strategies will not help you beat the odds.
Advantage Play
Some roulette players rely on different methods for securing winnings. These methods, known as advantage play strategies, give players either an edge over the casino or a mathematical advantage. If
used successfully, they can beat the house odds and even if it is just by a little, it should be enough to provide players with long-term winnings. Unlike betting strategies and systems described
above, advantage play does not revolve around the betting layout but rather, around the wheel.
Online casinos offer roulette games based on random number generators (RNGs). Advantage players have used special software to record the results of hundreds of spins in order to find patterns, such as repeating sequences of winning numbers, in the hope of predicting future outcomes. However, a properly implemented RNG produces independent spins, so there are no such patterns to be found.
However, advantage play in a physical roulette game is much different than that of an online roulette game. Physical roulette players typically stand beside the roulette table for at least 40-50
spins and write down all the winning numbers in the hope that they would be able to spot numbers that come out more frequently than others. In fact, sometimes they observe the wheel for hundreds of
spins before they can notice repeating numbers, patterns or some irregularities.
Advantage play, a method of exploiting the imperfections in roulette wheels and other gaming equipment, was mostly used in the past when casinos did not have such strict maintenance rules and
protocols. Today, players who wish to turn the odds in their favor need to be extremely discreet if they plan to observe the wheels before security become aware of them. Exploiting roulette wheels’
bias and imperfections for one’s profit is not usually met with understanding from casinos.
How to Enhance a Player’s Chances of Winning
There are no sure bets when playing roulette, but players can increase their chances of winning by following a few simple guidelines.
It is important to choose a good roulette table and obviously single-zero games are much better than double-zero ones. The house edge in American-style roulette is twice as high due to the additional
sector on the wheel, which is green 00. But picking French or European-style roulette variations is just the first step in learning how to maximize players’ expected value.
While the game of roulette offers several betting options, players should concentrate on wagers with the lowest possible house edge. For example, outside bets cover large portions of the wheel and
require just one chip to be wagered. These include black/red, even/odd, and low/high where the casino’s advantage is 2.70% but the player’s odds of winning are highest. While payout is not
particularly attractive, these are less risky options in the game.
When choosing a bet, players must compare the casino’s payout odds to true odds and find a bet where these two numbers are as close to each other as possible. Generally speaking, the most attractive
payouts are offered for bets with bad odds, but because these bets carry more risk than others, players should remember that they can’t expect to win much. Good payouts come with exceptionally high
risks, so gamblers need a great tolerance for risk in order to win big.
An Application of boot() to IV regression
Bootstrapping standard errors can be a useful technique when obtaining a closed form for the standard error is difficult or intractable. In this post, I give an example of how to use R to create a bootstrap sampling distribution in the context of IV regression. Specifically, I use boot() to automatically augment a function of mine to resample the indices of my data set with replacement (see the code in the function below).
In my application, I present a function that uses the boot library to report bootstrap standard errors for an instrumental variables regression. My ivboot() function builds on the iv() command I wrote previously.
Now, onto the ivboot() example. Start by loading the boot library, some data, and the iv() function. We can then run an R script to define the ivboot() function.
When applied to data and an IV regression model, the ivboot() function creates an IV regression object that – when the memisc library is loaded – is compatible with mtable() output. The only difference between an ivboot()-created object and an iv() object is that the ivboot() object has standard errors based on the bootstrap distribution of the coefficient estimates (statistics other than the second-stage coefficient estimates are not bootstrapped).
Here is some code to compare the bootstrap output to the analytical standard error output:
On this small sample (N=100) of simulated data, the ivboot() command took less than a minute to run on my computer (timing may vary depending on your computer). For much larger data sets, this will be slower. If you have a larger problem or lower standards (or higher standards, your choice), you can use the boots option to ivboot() to specify the number of bootstrap samples. Currently, I have set the default to 500, but you could specify boots = 200 if you want the command to run faster (boots = 10 will make it run even faster, but I don't recommend that!).
Here is the mtable() output, which can easily be ported into LaTeX:
*This standard output from an mtable() extension to my iv() command provides quite a bit of information in a convenient format. Another nice feature of iv() is that iv()-created objects have
first-stage summary information readily stored in the object for extraction and analysis. | {"url":"https://novicemetrics.blogspot.com/2011/05/application-of-boot-to-iv-regression.html","timestamp":"2024-11-10T04:21:52Z","content_type":"application/xhtml+xml","content_length":"62732","record_id":"<urn:uuid:67fff4bc-173f-444b-9be6-ef885d97ba73>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00696.warc.gz"} |
Distributed Alignment Processes with Samples of Group Average
This article studies a stochastic alignment problem assuming that agents can sense the general tendency of the system. More specifically, we consider n agents, each being associated with a real
number. In each round, each agent receives a noisy measurement of the system's average value and then updates its value. This value is then perturbed by random drift. We assume that both noise and
drift are Gaussian. We prove that a distributed weighted-average algorithm optimally minimizes the deviation of each agent from the average value, and for every round. Interestingly, this optimality
holds even in the centralized setting, where a master agent can gather all the agents' measurements and instruct a move to each one. We find this result surprising since it can be shown that the set
of measurements obtained by all agents contains strictly more information about the deviation of Agent i from the average value, than the information contained in the measurements obtained by Agent i
alone. Although this information is relevant for Agent i, it is not processed by it when running a weighted-average algorithm. Finally, we also analyze the drift of the center of mass and show that
no distributed algorithm can achieve drift that is as small as the one that can be achieved by the best centralized algorithm.
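As a rough illustration only (the update rule, weight, and parameters below are our own sketch, not the authors' algorithm), a weighted-average process of the kind described can be simulated as:

```python
import random

def simulate(n=100, rounds=50, w=0.5, noise_sd=1.0, drift_sd=0.1, seed=0):
    """Each round, every agent moves toward its noisy sample of the group
    average, then is perturbed by Gaussian drift; returns the mean squared
    deviation of the agents from the final average."""
    rng = random.Random(seed)
    xs = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(rounds):
        avg = sum(xs) / n
        samples = [avg + rng.gauss(0, noise_sd) for _ in range(n)]  # noisy sensing
        xs = [(1 - w) * x + w * y + rng.gauss(0, drift_sd)
              for x, y in zip(xs, samples)]
    avg = sum(xs) / n
    return sum((x - avg) ** 2 for x in xs) / n
```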
Bibliographical note
Publisher Copyright:
© 2014 IEEE.
• Biological distributed algorithms
• Kalman filter
• clock synchronization
• consensus
• distributed signal processing
• flocking
• noisy communication
• weighted-average algorithms
ASJC Scopus subject areas
• Control and Systems Engineering
• Signal Processing
• Computer Networks and Communications
• Control and Optimization
VTTBR_EL2, Virtualization Translation Table Base Register
The VTTBR_EL2 characteristics are:
Holds the base address of the translation table for the initial lookup for stage 2 of an address translation in the EL1&0 translation regime, and other information for this translation regime.
AArch64 System register VTTBR_EL2 bits [63:0] are architecturally mapped to AArch32 System register VTTBR[63:0].
If EL2 is not implemented, this register is RES0 from EL3.
This register has no effect if EL2 is not enabled in the current Security state.
VTTBR_EL2 is a 128-bit register that can also be accessed as a 64-bit value. If it is accessed as a 64-bit register, accesses read and write bits [63:0] and do not modify bits [127:64].
VTTBR_EL2 is a:
• 128-bit register when FEAT_D128 is implemented and VTCR_EL2.D128 == 1
• 64-bit register when FEAT_D128 is not implemented or VTCR_EL2.D128 == 0
Field descriptions
When FEAT_D128 is implemented and VTCR_EL2.D128 == 1:
Bit layout (128-bit format): [127:88] RES0, [87:80] BADDR[50:43], [79:64] RES0, [63:48] VMID, [47:5] BADDR[42:0], [4:3] RES0, [2:1] SKL, [0] CnP.
Bits [127:88]
BADDR, bits [87:80, 47:5]
Translation table base address:
• Bits A[55:x] of the stage 2 translation table base address bits are in register bits[87:80, 47:x].
• Bits A[(x-1):0] of the stage 2 translation table base address are zero.
Address bit x is the minimum address bit required to align the translation table to the size of the table. x is calculated based on LOG2(StartTableSize), as described in VMSAv9-128. The smallest
permitted value of x is 5.
The BADDR field is split as follows:
• BADDR[50:43] is VTTBR_EL2[87:80].
• BADDR[42:0] is VTTBR_EL2[47:5].
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
Bits [79:64]
VMID, bits [63:48]
VMID encoding when FEAT_VMID16 is implemented and VTCR_EL2.VS == 1
VMID, bits [15:0]
The VMID for the translation table.
If the implementation has an 8-bit VMID, bits [15:8] of this field are RES0.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
VMID encoding when FEAT_VMID16 is not implemented or VTCR_EL2.VS == 0
Bit layout: [15:8] RES0, [7:0] VMID.
Bits [15:8]
VMID, bits [7:0]
The VMID for the translation table.
The VMID is 8 bits when any of the following are true:
• EL2 is using AArch32.
• VTCR_EL2.VS is 0.
• FEAT_VMID16 is not implemented.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
Bits [4:3]
SKL, bits [2:1]
Skip Level. Skip Level determines the number of levels to be skipped from the regular start level of the Non-Secure stage 2 translation table walk.
SKL Meaning
0b00 Skip 0 level from the regular start level.
0b01 Skip 1 level from the regular start level.
0b10 Skip 2 levels from the regular start level.
0b11 Skip 3 levels from the regular start level.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
CnP, bit [0]
When FEAT_TTCNP is implemented:
Common not Private. This bit indicates whether each entry that is pointed to by VTTBR_EL2 is a member of a common set that can be used by every PE in the Inner Shareable domain for which the value of
VTTBR_EL2.CnP is 1.
CnP Meaning
0b0 The translation table entries pointed to by VTTBR_EL2 are permitted to differ from the entries for VTTBR_EL2 for other PEs in the Inner Shareable domain. This is not affected by the value of the current VMID.
0b1 The translation table entries pointed to by VTTBR_EL2 are the same as the translation table entries for every other PE in the Inner Shareable domain for which the value of VTTBR_EL2.CnP is 1 and the VMID is the same as the current VMID.
This bit is permitted to be cached in a TLB.
If the value of the VTTBR_EL2.CnP bit is 1 on multiple PEs in the same Inner Shareable domain and those VTTBR_EL2s do not point to the same translation table entries when using the current VMID, then the
results of translations using VTTBR_EL2 are CONSTRAINED UNPREDICTABLE, see 'CONSTRAINED UNPREDICTABLE behaviors due to caching of control or data values'.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
When FEAT_D128 is not implemented or VTCR_EL2.D128 == 0:
Register layout (64-bit): VMID in bits [63:48]; BADDR in bits [47:1]; CnP in bit [0].
VMID, bits [63:48]
VMID encoding when FEAT_VMID16 is implemented and VTCR_EL2.VS == 1
VMID, bits [15:0]
The VMID for the translation table.
If the implementation has an 8-bit VMID, bits [15:8] of this field are RES0.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
VMID encoding when FEAT_VMID16 is not implemented or VTCR_EL2.VS == 0
Bits [15:8]
Reserved, RES0.
VMID, bits [7:0]
The VMID for the translation table.
The VMID is 8 bits when any of the following are true:
• EL2 is using AArch32.
• VTCR_EL2.VS is 0.
• FEAT_VMID16 is not implemented.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
BADDR, bits [47:1]
Translation table base address, A[47:x] or A[51:x], bits[47:1].
The BADDR field represents a 52-bit address if one of the following applies:
• FEAT_LPA is implemented, the 64KB granule size is in use, and the value of VTCR_EL2.PS is 0b110.
• FEAT_LPA2 is implemented, the 4KB or 16KB granule size is in use, and the Effective value of VTCR_EL2.DS is 1.
• FEAT_D128 is implemented, 56-bit PAs are supported, the 64KB granule size is in use, and the value of VTCR_EL2.D128 is 0.
When VTTBR_EL2.BADDR represents a 52-bit address, all of the following apply:
• Register bits[47:x] hold bits[47:x] of the stage 2 translation table base address, where x is determined by the size of the translation table at the start level.
• The smallest permitted value of x is 6.
• Register bits[5:2] hold bits[51:48] of the stage 2 translation table base address.
• Bits[(x-1):0] of the translation table base address are zero.
• When x>6 register bits[(x-1):6] are RES0.
• Register bit[1] is RES0.
If BADDR represents a 52-bit address, and the translation table has fewer than eight entries, the table must be aligned to 64 bytes. Otherwise, the translation table must be aligned to the size of the table.
For the 64KB granule, if FEAT_LPA is not implemented, and the value of VTCR_EL2.PS is 0b110, one of the following IMPLEMENTATION DEFINED behaviors occurs:
• BADDR uses the extended format to represent a 52-bit base address.
• BADDR does not use the extended format.
When the value of ID_AA64MMFR0_EL1.PARange indicates that the implementation supports a 56-bit PA size, bits [55:52] of the stage 2 translation table base address are zero.
If the Effective value of VTCR_EL2.PS is not 0b110 then:
• Register bits[47:x] hold bits[47:x] of the stage 2 translation table base address.
• Register bits[(x-1):1] are RES0.
• If the implementation supports 52-bit PAs and IPAs then bits[51:48] of the translation table base addresses used in this stage of translation are 0b0000.
If any VTTBR_EL2[47:0] bit that is defined as RES0 has the value 1 when a translation table walk is performed using VTTBR_EL2, then the translation table base address might be misaligned, with
effects that are CONSTRAINED UNPREDICTABLE, and must be one of the following:
• Bits[x-1:0] of the translation table base address are treated as if all the bits are zero. The value read back from the corresponding register bits is either the value written to the register or zero.
• The result of the calculation of an address for a translation table walk using this register can be corrupted in those bits that are nonzero.
The AArch64 Virtual Memory System Architecture chapter describes how x is calculated based on the value of VTCR_EL2.T0SZ, the stage of translation, and the translation granule size.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
CnP, bit [0]
When FEAT_TTCNP is implemented:
Common not Private. This bit indicates whether each entry that is pointed to by VTTBR_EL2 is a member of a common set that can be used by every PE in the Inner Shareable domain for which the value of
VTTBR_EL2.CnP is 1.
CnP Meaning
0b0 The translation table entries pointed to by VTTBR_EL2 are permitted to differ from the entries for VTTBR_EL2 for other PEs in the Inner Shareable domain. This is not affected by the value of the current VMID.
0b1 The translation table entries pointed to by VTTBR_EL2 are the same as the translation table entries for every other PE in the Inner Shareable domain for which the value of VTTBR_EL2.CnP is 1 and the VMID is the same as the current VMID.
This bit is permitted to be cached in a TLB.
If the value of the VTTBR_EL2.CnP bit is 1 on multiple PEs in the same Inner Shareable domain and those VTTBR_EL2s do not point to the same translation table entries when using the current VMID, then the
results of translations using VTTBR_EL2 are CONSTRAINED UNPREDICTABLE, see 'CONSTRAINED UNPREDICTABLE behaviors due to caching of control or data values'.
The reset behavior of this field is:
• On a Warm reset, this field resets to an architecturally UNKNOWN value.
Accessing VTTBR_EL2
Accesses to this register use the following encodings in the System register encoding space:
MRS <Xt>, VTTBR_EL2
op0 op1 CRn CRm op2
0b11 0b100 0b0010 0b0001 0b000
if PSTATE.EL == EL0 then
    UNDEFINED;
elsif PSTATE.EL == EL1 then
    if EffectiveHCR_EL2_NVx() IN {'1x1'} then
        X[t, 64] = NVMem[0x020];
    elsif EffectiveHCR_EL2_NVx() IN {'xx1'} then
        AArch64.SystemAccessTrap(EL2, 0x18);
    else
        UNDEFINED;
elsif PSTATE.EL == EL2 then
    X[t, 64] = VTTBR_EL2<63:0>;
elsif PSTATE.EL == EL3 then
    X[t, 64] = VTTBR_EL2<63:0>;
MSR VTTBR_EL2, <Xt>
op0 op1 CRn CRm op2
0b11 0b100 0b0010 0b0001 0b000
if PSTATE.EL == EL0 then
    UNDEFINED;
elsif PSTATE.EL == EL1 then
    if EffectiveHCR_EL2_NVx() IN {'1x1'} then
        NVMem[0x020] = X[t, 64];
    elsif EffectiveHCR_EL2_NVx() IN {'xx1'} then
        AArch64.SystemAccessTrap(EL2, 0x18);
    else
        UNDEFINED;
elsif PSTATE.EL == EL2 then
    VTTBR_EL2<63:0> = X[t, 64];
elsif PSTATE.EL == EL3 then
    VTTBR_EL2<63:0> = X[t, 64];
When FEAT_D128 is implemented
MRRS <Xt>, <Xt+1>, VTTBR_EL2
op0 op1 CRn CRm op2
0b11 0b100 0b0010 0b0001 0b000
if PSTATE.EL == EL0 then
    UNDEFINED;
elsif PSTATE.EL == EL1 then
    if EffectiveHCR_EL2_NVx() IN {'1x1'} then
        (X[t2, 64], X[t, 64]) = Split(NVMem[0x020, 128], 64);
    elsif EffectiveHCR_EL2_NVx() IN {'xx1'} then
        AArch64.SystemAccessTrap(EL2, 0x14);
    else
        UNDEFINED;
elsif PSTATE.EL == EL2 then
    if HaveEL(EL3) && EL3SDDUndefPriority() && SCR_EL3.D128En == '0' then
        UNDEFINED;
    elsif HaveEL(EL3) && SCR_EL3.D128En == '0' then
        if EL3SDDUndef() then
            UNDEFINED;
        else
            AArch64.SystemAccessTrap(EL3, 0x14);
    else
        (X[t2, 64], X[t, 64]) = Split(VTTBR_EL2, 64);
elsif PSTATE.EL == EL3 then
    (X[t2, 64], X[t, 64]) = Split(VTTBR_EL2, 64);
When FEAT_D128 is implemented
MSRR VTTBR_EL2, <Xt>, <Xt+1>
op0 op1 CRn CRm op2
0b11 0b100 0b0010 0b0001 0b000
if PSTATE.EL == EL0 then
    UNDEFINED;
elsif PSTATE.EL == EL1 then
    if EffectiveHCR_EL2_NVx() IN {'1x1'} then
        NVMem[0x020, 128] = X[t2, 64]:X[t, 64];
    elsif EffectiveHCR_EL2_NVx() IN {'xx1'} then
        AArch64.SystemAccessTrap(EL2, 0x14);
    else
        UNDEFINED;
elsif PSTATE.EL == EL2 then
    if HaveEL(EL3) && EL3SDDUndefPriority() && SCR_EL3.D128En == '0' then
        UNDEFINED;
    elsif HaveEL(EL3) && SCR_EL3.D128En == '0' then
        if EL3SDDUndef() then
            UNDEFINED;
        else
            AArch64.SystemAccessTrap(EL3, 0x14);
    else
        VTTBR_EL2<127:0> = X[t2, 64]:X[t, 64];
elsif PSTATE.EL == EL3 then
    VTTBR_EL2<127:0> = X[t2, 64]:X[t, 64];
26/03/2024 09:49; 67c0ae5282a7629ba0ea0ba7267b43cd4f7939f6
Copyright © 2010-2024 Arm Limited or its affiliates. All rights reserved. This document is Non-Confidential. | {"url":"https://dflund.se/~getz/ARM/SysReg/AArch64-vttbr_el2.html","timestamp":"2024-11-13T11:17:24Z","content_type":"application/xhtml+xml","content_length":"26295","record_id":"<urn:uuid:6c98c4a3-81d4-4143-bc2a-7bd4f48adf60>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00051.warc.gz"} |
WAG flooding of an oil reservoir can give rise to large regions of three-phase flow, where the flow parameters, i.e. capillary pressure and relative permeability, are history dependent. This means
that three-phase capillary pressure and relative permeability data have to be updated during the flow to account accurately for hysteresis. The idea of this work is to connect a pore-scale model that
calculates capillary pressure and relative permeability for given saturations to a three-phase reservoir simulator. This will allow us to calculate the actual saturation paths based on pore-scale
physics. The pore-scale model comprises a bundle of cylindrical capillary tubes of different radii and wettability, which are randomly distributed according to the given density functions. Within the
bundle the capillary pressure controls the displacement sequence, and for given capillary pressures it is therefore possible to find the corresponding phase saturations in the bundle. However, for
using the pore-scale model in the reservoir simulator it is required to obtain capillary pressure and relative permeability from saturation data, rather than the other way around. We hence invert the
capillary bundle model iteratively to find the capillary pressures for given saturations. Depending on the required accuracy, these calculations can be time consuming, especially when the behaviour
changes between two-phase and three-phase. A capillary bundle is completely accessible, so there will not be any trapped or residual saturations. In principle a more complex network model including
residual saturations could be used. Incorporation of the bundle model into the simulator demonstrates the effects of consistent pore-scale based three-phase capillary pressure and relative
permeability for different wettability on the continuum, i.e. reservoir, scale. This also shows under which conditions pore-scale displacement paths can be reproduced by the macro-scale model. | {"url":"https://repository.tudelft.nl/person/Person_33fdf46c-0806-4177-90dc-11a6be6d436a","timestamp":"2024-11-10T06:16:38Z","content_type":"text/html","content_length":"32518","record_id":"<urn:uuid:1be32605-ad3e-41ea-8ad1-7d9fd2039abf>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00461.warc.gz"} |
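The displacement rule described above (capillary pressure controls which tubes are invaded) and the iterative inversion can be sketched in a few lines. A hedged Python illustration of a two-phase bundle (tube count, radii distribution, and fluid properties are made-up illustrative values; entry pressures follow the Young-Laplace relation 2·γ·cos(θ)/r):

```python
import math
import random

def bundle_saturation(pc, radii, gamma=0.03, theta=0.0):
    """Wetting-phase saturation of a bundle of equal-length cylindrical tubes.
    A tube of radius r is invaded by the non-wetting phase once pc exceeds
    its Young-Laplace entry pressure 2*gamma*cos(theta)/r."""
    vols = [r * r for r in radii]  # volume proportional to r^2 at equal length
    wet = sum(v for r, v in zip(radii, vols)
              if pc < 2.0 * gamma * math.cos(theta) / r)
    return wet / sum(vols)

def invert_pc(target_sw, radii, lo=0.0, hi=1e6, iters=60):
    """Invert the bundle model by bisection: saturation is a decreasing step
    function of pc, so bisect until the bracket closes on the target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bundle_saturation(mid, radii) > target_sw:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(1)
radii = [random.uniform(1e-6, 5e-5) for _ in range(500)]
pc = invert_pc(0.5, radii)
sw = bundle_saturation(pc, radii)
assert abs(sw - 0.5) < 0.05
```

With many tubes the saturation steps are small, so the bisection recovers a capillary pressure whose saturation is close to the requested value, mirroring the inversion step the abstract describes.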
Swap Math | Impossible Finance Docs
If you have not checked out our intro article on medium, please start here
Our swap upgrade introduces two novel features:
New Invariant: xybk
Our novel invariant, the xybk model, allows stablecoin swaps to be performed more efficiently by artificially inflating the TVL in pools by a factor of b. The invariant is as below:
Invariant Behavior
In our medium article, we mentioned the simple example of a pool with x=token0balance=100, y=token1balance=100, boost=10. This pool has underlying assets of (100, 100) but exhibits the same swap
slippage as a (1000, 1000) v2 uniswap pool. This means swaps in both pools have the same slippage and will require the exact same amount of input to get the same output.
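The invariant itself is rendered as an image on the original page and did not survive extraction. One form consistent with every property stated in this section (it reduces to the constant-product invariant at b = 1, and a balanced (K, K) pool behaves like a (bK, bK) constant-product pool) is (x + (b-1)K)(y + (b-1)K) = (bK)^2. Treating that form as an assumption, a Python sketch reproducing the (100, 100, boost 10) example:

```python
def uniswap_out(x, y, dx):
    """Constant-product swap: output of token1 for dx of token0, x*y = k fixed."""
    return y - (x * y) / (x + dx)

def xybk_out(x, y, dx, b, k):
    """Swap under the assumed xybk invariant (x+(b-1)k)(y+(b-1)k) = (b*k)^2,
    with k held fixed for the duration of the swap."""
    c = (b - 1) * k
    return (y + c) - (b * k) ** 2 / (x + dx + c)

# A (100, 100) pool boosted 10x (k = 100) quotes the same output as a plain
# (1000, 1000) constant-product pool, as the example in the text claims.
out_boosted = xybk_out(100, 100, 10, b=10, k=100)
out_plain = uniswap_out(1000, 1000, 10)
assert abs(out_boosted - out_plain) < 1e-9
```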
For more complicated cases, when the balances in the pool are not equal, we use a formula to calculate the K value for the pool. The value of K ranges between the geometric mean sqrt(x*y) when b = 1
(the Uniswap invariant) and the arithmetic mean (x+y)/2 when b = infinity. Since (x+y)/2 > sqrt(x*y) when x != y, the value of K in an xybk pool will in practice be greater than in a Uniswap pool with
equivalent balances. We use the following formula, derived by rearranging the xybk equation, to compute the K value from the token balances (x, y) in the pool:
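The K formula on the original page is an image that did not survive extraction. Under the assumed invariant form (x + (b-1)K)(y + (b-1)K) = (bK)^2, K is the positive root of the quadratic (2b-1)K^2 - (b-1)(x+y)K - xy = 0, which does sit between sqrt(x*y) and (x+y)/2 as the text states. A hedged Python check:

```python
import math

def xybk_k(x, y, b):
    """K from pool balances under the assumed invariant
    (x+(b-1)K)(y+(b-1)K) = (b*K)^2, i.e. the positive root of
    (2b-1)*K^2 - (b-1)*(x+y)*K - x*y = 0."""
    p = (b - 1) * (x + y)
    q = 2 * b - 1
    return (p + math.sqrt(p * p + 4 * q * x * y)) / (2 * q)

x, y = 100.0, 200.0
for b in (1, 2, 10, 1000):
    k = xybk_k(x, y, b)
    # Bounded by the geometric mean (b = 1) and the arithmetic mean (b -> inf).
    assert math.sqrt(x * y) - 1e-9 <= k <= (x + y) / 2
```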
Price Behavior
Artificial token balances in the pool range from:
Since token prices in pools are calculated as a ratio of token balances, this means prices range from:
New concept: Asymmetrical tuning
Asymmetrical tuning is a novel concept in the swap space. | {"url":"https://impossible.gitbook.io/impossible-finance-faq/impossible-swap/swap-math","timestamp":"2024-11-10T05:00:08Z","content_type":"text/html","content_length":"247950","record_id":"<urn:uuid:f4ab3cc3-80f1-4614-993e-ea48c0f81da4>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00599.warc.gz"} |
Namespace UnitsNet | .NET nanoFramework Documentation
In mathematics, a ratio is a relationship between two numbers of the same kind (e.g., objects, persons, students, spoonfuls, units of whatever identical dimension), usually expressed as "a to b"
or a:b, sometimes expressed arithmetically as a dimensionless quotient of the two that explicitly indicates how many times the first number contains the second (not necessarily an integer). | {"url":"https://docs.nanoframework.net/devices/UnitsNet.html","timestamp":"2024-11-12T02:57:21Z","content_type":"text/html","content_length":"12571","record_id":"<urn:uuid:f1e5adc5-9212-44f3-8210-0b1ff24d3ff9>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00476.warc.gz"} |
PPT - Nuclear Magnetic Resonance Spectrometry Chap 19 PowerPoint Presentation - ID:5877961
2. Absorption in CW Experiments • Energy of precessing particle • E = -μz Bo = -μBo cos θ • When an RF photon is absorbed by a nucleus, • θ must change direction • ∴ magnetic moment μz “flips” • For
μz to flip, a B field must be applied ⊥ Bo in a • circular path in phase with precessing dipole • B is applied ⊥ Bo using circularly-polarized RF field
3. Fig 19-3 Model for the Absorption of Radiation by a Precessing Particle μ’z
4. Fig 19-3 Model for the Absorption of Radiation by a Precessing Particle When νRF = vo absorption and spin flip can occur
5. Fig 19-4 Equivalency of a Plane-polarized Beam to Two (d, l) Circularly-polarized Beams • Result is vector sum that vibrates in a single plane • In instrument, RF oscillator coil is 90° to fixed
Bo field • Only B rotating in precessional direction is absorbed
6. Classical Description of NMR • Absorption Process • Relaxation Processes (to thermal equil.) • Spin-Lattice • Spin-Spin
7. Relaxation Processes (to thermal equilibrium) • When absorption causes N1/2 = N-1/2 the system is “saturated” • Fast decay is desirable • Probability of radiative decay (fluorescence) ∝ ν³ •
Therefore in the RF region, non-radiative decay predominates
8. Bo field off: α = β at random angles; magnetization is zero. Bo field on: spins precess around their cones at νLarmor; α spins > β spins. Net magnetization, M
9. Behavior of Magnetic Moments of Nuclei Circularly-polarized radio frequency mag. field B1 is applied: When applied rf frequency coincides with νLarmor magnetic vector begins to rotate around B1
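The Larmor frequency referred to in these slides is ν0 = (γ/2π)·B0. A quick Python check using the standard tabulated value γ/2π ≈ 42.577 MHz/T for ¹H:

```python
GAMMA_BAR_1H = 42.577e6  # gyromagnetic ratio over 2*pi for 1H, in Hz per tesla

def larmor_frequency_hz(b0_tesla):
    """Larmor precession frequency nu0 = (gamma / 2*pi) * B0 for a proton."""
    return GAMMA_BAR_1H * b0_tesla

# An 11.74 T magnet gives the familiar "500 MHz" proton NMR instrument.
assert 495e6 < larmor_frequency_hz(11.74) < 505e6
```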
10. Spin-Lattice (Longitudinal) Relaxation • Precessional cones representing • spin ½ angular momenta: • number βspins > number α spins • After time T1 : • Populations return to • Boltzmann
distribution • Momenta become random • T1≡ spin-lattice relaxation time • Tends to broaden NMR lines
11. Spin-Spin (Transverse) Relaxation • Occurs between 2 nuclei having • same precessional frequency • Loss of “phase coherence” • Orderly spins to disorderly spins • T2≡ spin-spin relaxation time •
No net change in populations • Result is broadening
12. Fourier Transform NMR • Nuclei placed in strong magnetic field, Bo • Nuclei precess around z-axis with momenta, M • Intense brief rf pulse (with B1) applied at 90° to M • Magnetic vector, M,
rotates 90° into xy-plane • M relaxes back to z-axis: called free-induction decay • FID emits signal in time domain
13. Simple FID of a sample of spins with a single frequency Fourier Transform NMR Spectrum
15. Vector Model of Angular Momentum Fig. 19-2 55° | {"url":"https://fr.slideserve.com/rinah-foreman/nuclear-magnetic-resonance-spectrometry-chap-19","timestamp":"2024-11-07T13:17:34Z","content_type":"text/html","content_length":"90946","record_id":"<urn:uuid:6b6d6716-1dd8-4303-873b-b1d42dfdc58e>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00354.warc.gz"} |
Bias-Variance Tradeoff in Machine Learning | LearnOpenCV
In this post, we will develop an intuitive sense for an important concept in Machine Learning called the Bias-Variance Tradeoff.
Before we dive into the subject, allow me to go off on a tangent about human learning for a little bit.
Practice alone does not make you better at a skill. We all know people who practice very hard but never seem to accomplish much. The reason is that they do not direct their effort appropriately. For
a newbie who is learning the Piano, it is tempting to play the tune she has mastered, over and over again because it feels comfortable and it provides a lot of joy and sense of accomplishment. This
behavior, though, does not help her improve her skill. The right way to practice is to identify your weakest areas and direct a massive effort on improving those areas without worrying about areas in
which you are already good. Psychologists call this Deliberate Practice. This form of practice is not very enjoyable. It is slow, frustrating and arduous. But Deliberate Practice is extremely
effective in improving performance.
The same principle applies in Machine Learning. Enormous performance gains are made when you direct all your effort toward understanding your errors and minimizing them using known workflows.
Otherwise, you will spend a lot of time trying different things without systematically reducing your errors.
Meet N00b — A Machine Learning Newbie
Let’s first meet N00b. He is a smart programmer but a Machine Learning newbie. N00b is looking at his first real world machine learning problem. He has tried to find a solution on the internet, but
on this rare occasion, the internet has disappointed him. His solution would need to be something better than git clone .
N00b thinks this is an opportunity disguised as a problem! He will find a good solution and post it online. He is afraid that if his solution is not excellent, users on Google+ will ignore him, users
on StackOverflow will not upvote him and users on Reddit will hang him by his balls!
So N00b is determined to find the most awesome solution of all.
Dataset Preparation
N00b’s data consists of 2D points ( x and y coordinates ) as shown below.
His goal is to build a model that can predict y for new, unseen values of x. He knows a few things about Machine Learning and uses the following steps to prepare his data.
1. Shuffle data : N00b randomly shuffles the order of his data first. It is an excellent step because many times the data we receive is ordered in some way. For example, it might be sorted by date.
N00b knows these kinds of orders will invariably lead to funny biases in our models.
2. Split data into training and test sets: N00b splits his shuffled data into two parts: the training set consisting of 70% of the data and a test set consisting of 30% of the data. He will use
the training set to fit a model and the test set to see how well he is doing. There are a surprisingly large number of engineers with the title “machine learning engineer” who make the rookie
mistake of not separating their data into training and test sets. N00b knows this fact and feels superior. Unfortunately, N00b is doing it wrong. He should actually split the data into three sets
— training (60%), validation (a.k.a development) (20%) and test (20%). He should be using the training set to train different models, the validation set to select a model and finally
report performance on the test set.
3. Model selection: N00b plots his 2D data and spends some time just looking at this data. This again is an excellent step to follow. Staring at your data can provide surprising insights. Looking at
the data, N00b thinks he can fit the data using a polynomial shown below.
Training a model simply means finding good values for the model's parameters.
Machine Learning versus Curve Fitting
A friend of mine, who has interviewed many applicants for machine learning jobs, starts with the following simple question that trips off a surprising number of candidates.
What is the difference between Machine Learning and Curve Fitting?
In both Machine Learning and Curve Fitting, you want to come up with a model that explains (fits) the data. However, the difference in the end goal is both subtle and profound.
In Curve Fitting, we have all the data available to us at the time of fitting the curve. We want to fit the curve as best as we can.
In Machine Learning, only a small set (the training set) of data is available at the time of training. We obviously want a model that fits the data well, but more importantly, we want the model to
generalize to unseen data points. In Machine Learning, this presents a trade-off called the Bias-Variance Tradeoff.
Understanding the Bias-Variance Tradeoff
N00b needs to decide what degree of polynomial to fit.
In the Figure below the red dots are 2D data points in the training set.
On the left, N00b fit a line to the data points. A line is just a polynomial of degree 1. Naturally, the line cannot pass through all the points, and there is an error between data ( red dots ) and
the predicted value ( blue line ). In this example, the error is approximately 3.6. If this error is too large, we say the model is underfitting the data.
Can N00b do better? He can see that a straight line will never fit the data. He needs squiggly lines. He uses a polynomial of degree 8 and gets a squiggly line that fits the data much better. The
error goes down to approximately 1.69.
The Linear model does not fit the data very well and is therefore said to have a higher bias than the polynomial model. N00b is excited by his new polynomial model and is tempted to use an even
higher degree polynomial to obtain a squigglier curve to drive down the error to zero.
Before doing that N00b checks the performance of the two models on his test set. As you may recall, the test set was not used to train the model. The model has to perform better on this unseen
dataset because that is what separates Machine Learning from Curve Fitting.
N00b plots the results and is shocked! His moment of ecstasy after seeing the error down on the training set was short lived.
For the linear model, the error on this test set is very close to the error he had seen on the training set. In such cases, we say the model has generalized well to unseen data.
For the polynomial model, the error is astronomical (929.12)! The model he thought was excellent is pretty bad. This problem, where the model does very well on the training data but does poorly on
the test data, is called overfitting.
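N00b's experiment is easy to reproduce. A NumPy sketch (the data here is synthetic, standing in for N00b's points, which we don't have) fits degree-1 and degree-8 polynomials and compares training and test error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for N00b's 2D points: a smooth curve plus noise.
x = rng.uniform(-3, 3, size=40)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

# The draws are already random, so taking the first 28 points as the
# training set and the rest as the test set is a valid 70/30 split.
train, test = np.arange(28), np.arange(28, 40)

def fit_and_score(degree):
    """Least-squares polynomial fit on the training set; returns (train, test) MSE."""
    coeffs = np.polyfit(x[train], y[train], degree)
    mse = lambda idx: float(np.mean((np.polyval(coeffs, x[idx]) - y[idx]) ** 2))
    return mse(train), mse(test)

tr1, te1 = fit_and_score(1)   # the line underfits: training error stays high
tr8, te8 = fit_and_score(8)   # the squiggly curve hugs the training points
assert tr8 < tr1              # extra flexibility never hurts the training error
```

The training error always drops as the degree grows; whether the test error drops with it is exactly the question the blog post is asking.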
I had mentioned earlier, the Linear model had a higher bias. The polynomial model, on the other hand, suffers from a different problem. The model depends a lot on the choice of training data. If you
change the data slightly, the shape of the curve will look very different, and the error will swing widely. Therefore, the model is said to have high variance.
N00b just got a taste of Bias-Variance Tradeoff. To keep the bias low, he needs a complex model (e.g. a higher degree polynomial), but a complex model has a tendency to overfit and increase the
variance. He just learned an important lesson in Machine Learning —
Machine Learning is not a pursuit of perfection (i.e. zero error), but it is about seeking the best tradeoff.
N00b is at the end of his rope here. So, let’s help him out with some education.
Machine Learning Errors : Bias, Variance and the Optimum Error Rate
First, when you receive your data, divide it into three parts.
1. Training set: The training set is typically 60% of the data. As the name suggests, this is used for training a machine learning model.
2. Validation set: The validation set is also called the development set. This is typically 20% of the data. This set is not used during training. It is used to test the quality of the trained
model. Errors on the validation set are used to guide the choice of model (e.g. what degree of polynomial to use).
3. Test set: This set is typically 20% of the data. Its only purpose is to report the accuracy of the final model.
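The three-way split can be sketched in a few lines of plain Python (the 60/20/20 proportions follow the text; the seed and function name are illustrative):

```python
import random

def train_val_test_split(data, seed=42):
    """Shuffle, then split 60/20/20 into training, validation, and test sets."""
    data = list(data)
    random.Random(seed).shuffle(data)  # shuffle first, as the article advises
    n_train = int(0.6 * len(data))
    n_val = int(0.2 * len(data))
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train, val, test = train_val_test_split(range(100))
assert len(train) == 60 and len(val) == 20 and len(test) == 20
assert not set(train) & set(val) and not set(val) & set(test)
```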
In Machine Learning, the error made by your model is the sum of three kinds of errors: error due to bias in your model, error due to model variance, and finally error that is
following equation summarizes the sources of errors.
Total Error = Bias + Variance + Irreducible Error
Even if you had a perfect model, you might not be able to remove the errors made by a learning algorithm completely. This is because the training data itself may contain noise. This error is called
Irreducible error or Bayes’ error rate or the Optimum Error rate. While you cannot do anything about the Optimum Error Rate, you can reduce the errors due to bias and variance.
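The decomposition can be estimated numerically: refit the model on many freshly-drawn noisy training sets and look at the spread of its predictions at a single point. A NumPy sketch (the true function, noise level, and input grid are made up for illustration; keeping the inputs fixed and resampling only the noise isolates bias and variance cleanly):

```python
import numpy as np

rng = np.random.default_rng(1)
true_f = np.sin                   # the "truth" behind the noisy data (made up)
noise_sd = 0.3
x_grid = np.linspace(-3, 3, 30)   # fixed training inputs; only noise resampled
x0 = 1.0                          # the point at which the error is decomposed

def predictions_at_x0(degree, trials=500):
    """Refit on `trials` freshly-noised training sets; collect predictions at x0."""
    preds = np.empty(trials)
    for i in range(trials):
        y = true_f(x_grid) + rng.normal(scale=noise_sd, size=x_grid.size)
        preds[i] = np.polyval(np.polyfit(x_grid, y, degree), x0)
    return preds

results = {}
for degree in (1, 8):
    p = predictions_at_x0(degree)
    results[degree] = ((p.mean() - true_f(x0)) ** 2, p.var())

# Expected squared error at x0 ~ bias^2 + variance + noise_sd^2 (irreducible).
bias2_lin, var_lin = results[1]
bias2_poly, var_poly = results[8]
assert bias2_lin > bias2_poly   # the straight line underfits: high bias
assert var_poly > var_lin       # the flexible fit wobbles more: high variance
```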
If your machine learning model is not performing well, it is usually a high bias or a high variance problem. The figure below graphically shows the effect of model complexity on error due to bias and variance.
The region on the left, where both training and validation errors are high, is the region of high bias. On the other hand, the region on the right where validation error is high, but training error
is low is the region of high variance. We want to be in the sweet spot in the middle.
How to detect a high bias problem?
A high bias problem has the following characteristics
1. High training error.
2. Validation error is similar in magnitude to the training error.
How to detect a high variance problem?
A high variance problem on the other hand has the following characteristics
1. Low training error
2. Very high Validation error
How to fix a high bias or a high variance problem?
When you are new to machine learning, you may feel lost when the model you have trained does not perform well. Often people waste a lot of time trying out different things based on what they feel is
right. For example, N00b may be tempted to collect more data. Unfortunately, if he is dealing with a high bias problem, more data will not help at all. N00b may also mindlessly use all the features
available to him. Using all the features may hurt him if he has a high variance problem.
Fortunately, N00b can stand on the shoulders of giants and learn from a simple flowchart shown below. The inspiration for this diagram comes from a few videos in which Dr. Andrew Ng shares how to
attack the Bias-Variance problem in a systematic way.
How to fix a high bias problem?
The following tricks are employed to fix a high bias problem.
1. Train longer: Many machine learning algorithms are set up as iterative optimization problems where the training error ( or a function of the training error ) is minimized. Just letting the
algorithm run for more hours or days can help reduce the bias. In Neural Networks, you can change a parameter called the “learning rate” which will help the training error go down faster.
2. Train a more complex model: A more complex model will fit the training data better. In the problem N00b was looking at, he could increase the degree of the polynomial to get a more complex model.
In the case of Neural Networks, one can add more layers. Finally, in the case of an SVM, you can use a non-linear SVMs instead of a linear one.
3. Obtain more features: Sometimes you just do not have enough information to train a model. For example, if you are trying to train a model that can predict the gender of a person based on the
color of their hair, the problem might be impossible to solve. But if you add a new feature — the length of the hair — the problem becomes more tractable.
4. Decrease regularization: Recall that N00b was trying to fit a polynomial to his data. In his naive implementation the parameters were not regularized at all.
5. New model architecture: This is just another way of saying that if nothing works, start over.
How to fix a high variance problem?
After you have addressed the high bias problem, you need to check if you have a high variance issue.
In this situation, your model is complex enough that it overfits your data. The following tricks should be employed to deal with overfitting.
1. Obtain more data: Because the validation error is large, it means that the training set and the validation set that were randomly chosen from the same dataset, somehow have different
characteristics. This usually means that you do not have enough data and you need to collect more.
2. Decrease number of features: Sometimes collecting more data is not an option. In that case, you can reduce the number of features. You may have to remove features manually. For example, in our
previous example of identifying the gender of a person based on hair color and hair length, you may decide to drop hair color and keep hair length.
3. Increase regularization: When we have a high variance problem the model is fitting the training data. In fact, the model is probably fitting even the noise in training set and therefore not
performing as well on the validation set. We can reduce the flexibility of the model by using regularization that puts constraints on the magnitude of the parameters. This is done by adding a
regularization term to the cost function. When we are fitting a polynomial model, the regularization term is typically of the form λ times the sum of the squares of the polynomial coefficients (an L2 penalty).
4. New model architecture: Try something else. Better luck next time! | {"url":"https://learnopencv.com/bias-variance-tradeoff-in-machine-learning/","timestamp":"2024-11-07T13:14:43Z","content_type":"text/html","content_length":"502241","record_id":"<urn:uuid:86061531-f615-463b-84b3-550a487da134>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00314.warc.gz"} |
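To make the regularization trick concrete, here is a hedged NumPy sketch of L2-regularized (ridge) polynomial fitting, implemented directly from the normal equations rather than via any particular library; for simplicity it penalizes every coefficient, including the constant term:

```python
import numpy as np

def ridge_polyfit(x, y, degree, lam):
    """Polynomial least squares with an L2 penalty lam * sum(coeffs**2):
    solves the normal equations (X^T X + lam*I) w = X^T y."""
    X = np.vander(x, degree + 1)
    A = X.T @ X + lam * np.eye(degree + 1)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 25)
y = np.sin(2 * x) + rng.normal(scale=0.2, size=x.size)

w_free = ridge_polyfit(x, y, degree=8, lam=1e-8)  # essentially unregularized
w_reg = ridge_polyfit(x, y, degree=8, lam=1.0)    # strongly regularized
# The penalty shrinks the coefficients, flattening the squiggles.
assert np.sum(w_reg ** 2) < np.sum(w_free ** 2)
```

Increasing `lam` trades a little training error for a smoother, lower-variance curve, which is exactly the knob item 3 above describes.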
Worksheets for 7th Class
Recommended Topics for you
Number Patterns and Sequences 20201130
COORDINATES and Number Patterns
Coordinates and Number Patterns
Regelmaat 2 - Number Patterns
Number Patterns 2 - Patterns by Observation
Number Patterns 1 - Common Difference and Square Numbers
Number Patterns Logical Test
Math 2 - Number Patterns, Fractions and Percents
Recognizing Patterns & Sequences
Patterns in Whole Numbers
LGB - Patterns and Algebra
Explore Number Patterns Worksheets by Grades
Explore Number Patterns Worksheets for class 7 by Topic
Explore Other Subject Worksheets for class 7
Explore printable Number Patterns worksheets for 7th Class
Number Patterns worksheets for Class 7 are an excellent resource for teachers looking to enhance their students' math skills and number sense. These worksheets provide a variety of engaging
activities and exercises that help students understand the underlying patterns in numbers, sequences, and operations. By incorporating these worksheets into their lesson plans, teachers can
effectively teach important concepts such as arithmetic progressions, geometric sequences, and algebraic expressions. Additionally, these worksheets are designed to cater to different learning styles
and abilities, ensuring that all students can benefit from the material. With Number Patterns worksheets for Class 7, teachers can create a solid foundation for their students' mathematical growth
and success.
Quizizz is a fantastic platform that offers a wide range of resources, including Number Patterns worksheets for Class 7, to help teachers create engaging and interactive learning experiences for
their students. In addition to worksheets, Quizizz provides teachers with access to thousands of quizzes, games, and other activities that cover various math topics and number sense concepts.
Teachers can easily customize these resources to align with their curriculum and learning objectives, ensuring that their students receive targeted instruction and practice. Moreover, Quizizz's
real-time feedback and analytics tools enable teachers to monitor student progress and identify areas for improvement, making it easier to provide targeted support and intervention. By integrating
Quizizz into their teaching strategies, educators can effectively enhance their students' understanding of math and number patterns, setting them up for success in Class 7 and beyond.
Pump Calculator
Enter your flow rate, head, and efficiency into the calculator to determine your pump’s power requirements.
Pump Power Calculation Formula
The following formula is used to calculate the power requirements for your pump.
Power (kW) = (Flow Rate (L/min) * Head (m) * Density (kg/m³) * 9.81) / (Efficiency (%) * 1000)
• Power is the power required by the pump (kW)
• Flow Rate is the volume of fluid being pumped (L/min)
• Head is the height to which the fluid is being pumped (m)
• Density is the density of the fluid (kg/m³)
• Efficiency is the efficiency of the pump (%)
To calculate the power, multiply the flow rate by the head, density, and gravitational constant (9.81), then divide by the product of efficiency and 1000.
What is Pump Power Calculation?
Pump power calculation refers to the process of determining the amount of power required to operate a pump at a certain efficiency, flow rate, and head. Proper pump power calculation ensures the
efficient operation of the pump, reduces energy costs, and helps in selecting the right pump for specific applications.
How to Calculate Pump Power?
The following steps outline how to calculate the pump power using the given formula.
1. First, determine your flow rate, head, and the density of the fluid.
2. Next, determine the efficiency of the pump.
3. Use the formula from above: Power (kW) = (Flow Rate * Head * Density * 9.81) / (Efficiency * 1000).
4. Finally, calculate the power by plugging in the values.
5. After inserting the variables and calculating the result, check your answer with the calculator above.
Example Problem:
Use the following variables as an example problem to test your knowledge.
Flow Rate = 200 L/min
Head = 30 m
Efficiency = 75%
Density = 1000 kg/m^3
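As a sanity check on the example problem, here is a small sketch that computes the power. Note that it assumes the flow rate must first be converted from L/min to m³/s and the efficiency used as a fraction; the formula as printed above leaves those unit conversions implicit:

```python
def pump_power_kw(flow_l_min, head_m, efficiency_pct, density_kg_m3=1000.0):
    g = 9.81                          # gravitational acceleration, m/s^2
    q_m3_s = flow_l_min / 60_000.0    # L/min -> m^3/s (assumed conversion)
    eff = efficiency_pct / 100.0      # percent -> fraction (assumed conversion)
    power_w = q_m3_s * head_m * density_kg_m3 * g / eff
    return power_w / 1000.0           # W -> kW

# Example problem: 200 L/min, 30 m head, 75 % efficiency, water
print(round(pump_power_kw(200, 30, 75), 3))  # ≈ 1.308 kW
```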
1. What is flow rate?
Flow rate is the volume of fluid that moves through a pump in a given period of time, typically measured in liters per minute (L/min).
2. How is pump efficiency calculated?
Pump efficiency is the ratio of the hydraulic power delivered by the pump to the mechanical power supplied to the pump. It is expressed as a percentage.
3. Why is it important to calculate pump power?
Calculating pump power is important to ensure that the pump operates efficiently, to reduce energy consumption, and to select the right pump for the intended application.
4. Can this calculator be used for different fluids?
Yes, you can adjust the density field to match the density of any fluid to calculate the power accordingly.
5. Is the calculator accurate?
The calculator provides an estimate of the pump power based on the inputs provided. For exact figures, it’s best to consult technical documentation or a specialist.
Gravity of a Sphere
Computing the Gravity Field of a Spherically Symmetric Mass
This work shows that outside of any spherically symmetric mass distribution (such as a planet) the gravity field is identical to that of a point particle with equal mass. It also shows that inside
any spherically symmetric mass shell (such as a planet with a concentric spherical cavity), gravity is zero throughout the interior.
Fig 1. Problem geometry.
Newton’s law of universal gravitation tells us that the gravity field at point \(P\) due to a point mass \(m\) can be written
\[\vec{g}(P) = - \frac{ Gm }{r^2} \hat{r} \]
where vector \(\vec{r}\) (magnitude \(r\) in the direction of unit vector \(\hat{r}\)) is directed from the point mass \(m\) to point \(P\), and \(G\) is the universal gravity constant.
In the work below we calculate the gravity field \(\vec{g}\) at point \(P\) a distance \(z\) away from the center of a spherically symmetric mass shell. The result is that inside the shell gravity
cancels to zero, and outside it is equal to the gravity of a point mass.
We place the coordinate axes at the center of the sphere, and use the following symbols (see Fig. 1):
• \(\sigma\) is the density (mass per area) of the shell and is a constant
• \(dA\) is a small bit of area of the shell
• \(dm\) is the mass of area element \(dA\): \(dm = \sigma dA\)
• \(\theta\) and \(\phi\) are the spherical coordinates of mass element \(dm\)
• \(\vec{R}\) points from the origin to \(dm\), and has constant magnitude \(R\)
• \(\vec{r}\) points from \(dm\) to point \(P\) (it is the vector in the first equation above)
• \(\hat{\rho}\) is the cylindrical radial unit vector (radially away from the \(z\) axis, parallel to the \(xy\) plane); \(\hat{\rho} = \hat{\imath} \cos \phi + \hat{\jmath} \sin \phi\)
• \(M\) is the total mass of the shell: \(4 \pi R^2 \sigma\)
1. Determine the integral we need to solve
The gravity field at \(P\) will be the vector sum of the contributions (\(d\vec{g}\)) from all the point sources (\(dm\)) on the spherical shell. Writing \(dm = \sigma \; dA\) and \(\hat{r}= \vec{r}/r\), the sum is
Fig 2. Side view.
\[ \begin{aligned} \vec{g}(P) &= \int d\vec{g} \\ &= \int -\frac{G \;dm}{r^2} \hat{r} \\ &= - G \sigma \int \frac{ \vec{r} }{r^3} \;dA \\ \end{aligned} \]
The vector \(\vec{r}\) can be found from the fact that \(\vec{R} + \vec{r} = z \hat{k}\) (see Fig. 2):
\[\vec{r} = z \hat{k} - \vec{R} = z \hat{k} - R( \hat{k} \cos \theta + \hat{\rho} \sin \theta ) \]
Find the magnitude of \(\vec{r}\) from law of cosines.
\[r^2 = R^2 + z^2 - 2Rz\cos \theta \]
The area element in spherical coordinates is
\[dA = R^2 \sin \theta \; d\theta d\phi\]
Note that \(\vec{r}\) changes in magnitude and direction as we integrate over different parts of the shell, but \(R\) (magnitude of \(\vec{R}\)) is a constant. Plugging the above expressions into the integral for \(\vec{g}(P)\) gives
2. Evaluate the integral
Integrating over \(\phi\), which goes from \(0\) to \(2 \pi\), is easy since the only \(\phi\) dependence in the integrand is in \(\hat{\rho}\). So the first term in the numerator (with \(\hat{k}\))
just gets a factor of \(2 \pi\). The second term (with \(\hat{\rho}\)) evaluates to zero due to axial symmetry. We are then left with
\[\vec{g}(P) = -2\pi G \sigma R^2 \int_\theta \frac{ (z - R\cos \theta) \hat{k} }{\left (R^2+z^2 - 2Rz\cos\theta \right )^{3/2}} \sin \theta \; d\theta \]
The result has only a \(\hat{k}\) component. Also, note that the total mass of the sphere is \(M = 4 \pi R^2 \sigma\), so the result simplifies to
\[\vec{g}(P) = - \frac{GM}{2} \hat{k} \int_0^{\pi} \frac{ z - R\cos \theta }{\left (R^2+z^2 - 2Rz\cos\theta \right )^{3/2}} \sin \theta \; d\theta \]
We will solve the integral by first getting the variable \(\theta\) in terms of \(r\).
Using \(r^2 = R^2 + z^2 - 2Rz\cos \theta\), we find
\[-R \cos \theta = \frac{r^2 -z^2-R^2}{2z}\]
The numerator of the integrand can now be written as
\[z - R\cos \theta = z+ \frac{r^2 - z^2 - R^2 }{2z} = \frac{r^2 + z^2 -R^2}{2z} \]
The differential quantity \(d \theta\) can be converted to \(dr\) using \(r = \sqrt{ R^2 + z^2 - 2Rz\cos \theta}\), as follows:
\[ \frac{dr}{d\theta} = \frac{1}{2r}2Rz\sin \theta \]
\[ \sin \theta \; d\theta = \frac{r}{Rz} dr \]
We can now evaluate the integral using the above relations. The integration limits are from \(\theta=0\) to \(\pi\), which will correspond to \(r\) going from \(r_1\) to \(r_2\). We will consider the
integration limits in a moment.
\[ \begin{aligned} \vec{g}(P) &= - \frac{GM}{2} \hat{k} \int_{r_1}^{r_2} \left (\frac{1}{2z} \right ) \left (\frac{r^2 +z^2 -R^2}{r^3} \right ) \frac{r}{Rz} \; dr \\ &= - \frac{GM}{4Rz^2} \hat{k} \int_{r_1}^{r_2} \left ( 1 + \frac{z^2 -R^2}{r^2} \right ) \; dr \\ &= - \frac{GM}{4Rz^2} \hat{k} \left [ r + \left ( z^2 -R^2 \right ) \left (-\frac{1}{r} \right ) \right ]_{r_1}^{r_2} \\ &= - \frac{GM}{4Rz^2} \hat{k} \left [ r + \left ( R^2 -z^2 \right ) \left (\frac{1}{r} \right ) \right ]_{r_1}^{r_2} \\ \end{aligned} \]
The integration limits \(r_1\) and \(r_2\) (corresponding to \(\theta=0\) and \(\pi\)) require some special attention. Note that if \(P\) is outside the spherical shell then \(z>R\), and if \(P\) is inside the spherical shell then \(R>z\) (see the figures above). This means the integration limits will be different depending on whether \(P\) is inside or outside. For each condition we use \(r = \sqrt{R^2 + z^2 - 2Rz\cos \theta}\) to find the limits.
For \(P\) outside the spherical shell:
\[ \begin{matrix} \theta =0 & (\rm{top\; of \; sphere}) & r_1 = z-R \\ \theta =\pi & (\rm{bottom\; of \; sphere}) & r_2 = z+R \\ \end{matrix} \]
For \(P\) inside the spherical shell:
\[ \begin{matrix} \theta =0 & (\rm{top\; of \; sphere}) & r_1 = R-z \\ \theta =\pi & (\rm{bottom\; of \; sphere}) & r_2 = z+R \\ \end{matrix} \]
Evaluating at these limits:
\[ \vec{g}(P) = - \frac{GM}{4Rz^2} \hat{k} \left\{ \begin{matrix} z+R -z+R+(R^2-z^2)\left ( \frac{1}{z+R} - \frac{1}{z-R} \right ) & P \; \rm{outside\; shell} \\ & \\ z+R -R+z+(R^2-z^2)\left ( \frac{1}{z+R} - \frac{1}{R-z} \right ) & P \; \rm{inside\; shell} \end{matrix} \right\} \]
The expressions in curly braces can simplify dramatically. For \(P\) outside the shell:
\[ 2R+(R^2-z^2)\left ( \frac{z-R-z-R}{z^2-R^2} \right ) = 2R - (-2R) = 4R \]
And for \(P\) inside the shell:
\[ 2z+(R^2-z^2)\left ( \frac{R-z-R-z}{z^2-R^2} \right ) = 2z +(-2z) =0 \]
\[ \vec{g}(P) = \left\{ \begin{matrix} - \frac{GM}{z^2} \hat{k} & P \; \rm{outside\; shell} \\ & \\ \vec{0} & P \; \rm{inside\; shell} \end{matrix} \right\} \]
3. Conclusion
The last line above says that if point \(P\) is outside the spherical shell,
\[\vec{g}(P) = - \frac{GM}{z^2} \hat{k}\]
which is the gravity of a point mass \(M\) a distance \(z\) away.
And if point \(P\) is inside the spherical shell (no matter how far from the center), there is zero net gravity.
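The closed-form result can also be checked numerically. The sketch below applies a simple midpoint rule to the single-\(\theta\) integral above, in units chosen so that \(G = M = R = 1\):

```python
import math

def shell_gravity_z(z, R=1.0, GM=1.0, n=20000):
    """Midpoint-rule evaluation of
    g_z = -(GM/2) * integral_0^pi of
          (z - R cos(t)) sin(t) / (R^2 + z^2 - 2 R z cos(t))^(3/2) dt."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        num = (z - R * math.cos(t)) * math.sin(t)
        den = (R * R + z * z - 2.0 * R * z * math.cos(t)) ** 1.5
        total += num / den
    return -(GM / 2.0) * total * h

# Outside the shell (z = 2R): matches a point mass, -GM/z^2 = -0.25
# Inside the shell (z = 0.5R): the contributions cancel to zero
```

Both checks agree with the derivation: `shell_gravity_z(2.0)` comes out close to \(-GM/z^2 = -0.25\), and `shell_gravity_z(0.5)` is close to zero.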
As a corollary, since any spherically symmetric mass distribution (such as the Earth, approximately) is simply a composite body formed of many spherical shells:
• the gravity field outside such a sphere is that of point mass, and
• at the center (or anywhere within a concentric spherical cavity of any size) the gravity is zero, and
• anywhere in the interior, the strength of gravity depends only on the mass at smaller radii (the contribution from the overlying mass is zero)
Last modified November 2023 by Archie Paulson (Madison College, WI)
How are formulas created?
I saw this question on Reddit: "How are formulas created?" (Turns out Reddit has a lot of people asking math questions!) The answer provided by commenters is "mathematical modeling" -- done in
science courses! This answer makes me sad! Mathematicians make plenty of models, too! When you think about it, though, how many high schoolers get to see that? In fact, how many sophomores in college
have ever seen math at work making models?
Well, here are a few examples if you're interested!
• Stefan's equation for sea ice thickness: these two posts talk about modeling sea ice thickness with a differential equation but don't ask you to use data to create a model
• Modeling tides on the California coast, with more here: these two posts give worksheets on creating your own model of the tidal patterns at Point Reyes Seashore using actual NOAA data
• Lynx! for cute fuzzy animals with sharp teeth! These two posts have students develop their own trig model for lynx populations, see how bad that model is, and then use a logarithm composed with
the trig function to get a model that better fits the sharp population peaks.
Sometimes I feel like teachers who make room for this material are swimming upstream since so many of our high school math curricula don't provide the time for experimental, living mathematics... but
every now and then I meet someone who really makes it work. And maybe with modeling as one of the high school Common Core standards there will be some official space for this in high school classes!
It is so sad that students can go through 12 years of school and never really see mathematical model-making at work.
Equivalent Decimals – Definition With Examples
Updated on January 11, 2024
Welcome to another enlightening lesson from Brighterly, where we turn mathematics into a magical journey for children! Today, we’re delving into the intriguing world of equivalent decimals. Our
mission? To simplify this fundamental concept and make it as easy to grasp as child’s play!
In our everyday lives, we encounter decimals more often than we realize. From checking the time to calculating change at the supermarket, decimals are all around us. At Brighterly, we believe that
understanding these crucial elements of mathematics should be engaging, interactive, and fun! That’s why we’ve designed this blog post to be a practical, easy-to-follow guide to equivalent decimals.
So, fasten your seatbelts and get ready for a thrilling ride into the world of equivalent decimals with Brighterly. Let’s embark on this numerical adventure together and make mathematics a brighter
journey for all!
In the fascinating world of numbers, equivalent decimals are decimals that represent the same value or quantity. The word “equivalent” means “equal in value, function, or meaning”. Thus, equivalent
decimals are different decimals that correspond to the same fraction or whole number. For instance, 0.50 and 0.500 are equivalent decimals because they both represent the fraction 1/2 or the decimal 0.5.
Decimals are fundamental in mathematics, and understanding them can open up a wealth of possibilities for problem-solving and analytical thinking. They are used daily, from checking the time to
understanding currency, thus it’s essential for children to grasp the concept of equivalent decimals early on. By doing so, they not only improve their numerical abilities but also develop a solid
foundation in mathematics that will benefit them in their academic journey and beyond.
Rules to Identify Equivalent Decimals
Identifying equivalent decimals isn’t as daunting as it might seem. Here are some straightforward steps that can help:
1. Check the Digits: Compare the numbers to the right of the decimal point. If they are the same, then the decimals are equivalent.
2. Count the Zeros: If the only difference between two decimals is trailing zeros (zeros at the end), they are equivalent. For example, 0.8, 0.80, and 0.800 are all equivalent.
3. Convert to Fractions: Converting decimals to fractions can also help you determine if they are equivalent. If the fractions are the same, so are the decimals.
Remember, practice makes perfect. The more you work with decimals, the easier it will become to spot their equivalencies.
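For readers who like to verify things with a computer, the rules above can be checked with Python's decimal module, which compares values rather than digit strings (the function name here is invented for illustration):

```python
from decimal import Decimal

def are_equivalent(a: str, b: str) -> bool:
    # Equivalent decimals denote the same value, trailing zeros and all.
    return Decimal(a) == Decimal(b)

assert are_equivalent("0.45", "0.450")      # trailing zero: same value
assert are_equivalent("0.6", "0.60")
assert not are_equivalent("0.50", "0.75")   # like decimals, but not equivalent
```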
Equivalent Decimals vs Like Decimals
While the terms might sound similar, equivalent decimals and like decimals are distinct concepts in mathematics. Equivalent decimals represent the same value, even though they might look different.
On the other hand, like decimals have the same number of decimal places.
For example, 0.5 and 0.500 are equivalent decimals because they represent the same value. However, they are not like decimals because they don’t have the same number of decimal places. Comparatively,
0.50 and 0.75 are like decimals because they both have two decimal places but are not equivalent as they represent different values.
Understanding these subtle differences can help in solving complex mathematical problems and can improve overall numerical literacy.
Solved Examples on Equivalent Decimals
To help you better understand the concept of equivalent decimals, let’s look at some examples:
1. Example 1: Are 0.45 and 0.450 equivalent decimals? Yes, they are. Although they appear different, they represent the same value. The extra 0 in 0.450 doesn’t change the value.
2. Example 2: Are 0.60 and 0.6 equivalent decimals? Again, yes. Removing a trailing zero doesn’t alter the decimal’s value.
Remember, understanding the concept of equivalent decimals can greatly assist in mathematics and real-life applications, such as money-related calculations.
Practice Problems on Equivalent Decimals
Now, it’s your turn to try some problems. Identifying equivalent decimals can be fun and rewarding. Here are some problems for you to solve:
1. Are 0.75 and 0.750 equivalent decimals?
2. Are 0.90 and 0.9 equivalent decimals?
3. Are 0.100 and 0.1 equivalent decimals?
Remember, practice is crucial in mastering this concept!
In the grand scheme of mathematics, the concept of equivalent decimals is a cornerstone. It’s not just about numbers; it’s about understanding the inherent flexibility and multiplicity in the way we
represent value. Mastering this concept is a significant milestone in a child’s mathematical journey. It deepens their understanding of decimals, enhances their numerical fluency, and sets a sturdy
foundation for grasping more complex mathematical concepts in the future.
At Brighterly, we are committed to making math a delightful adventure rather than a daunting task. We believe in breaking down complex concepts into bite-sized, easily understandable chunks that
spark curiosity and foster a love for learning. From interactive games to engaging exercises, we provide a multitude of resources designed to make learning fun and effective.
Whether your child is taking their first steps into the world of decimals or preparing to conquer more challenging mathematical peaks, Brighterly is here to guide and support them. We stand firm in
our commitment to make math a joyful journey, one decimal at a time!
Frequently Asked Questions on Equivalent Decimals
What are equivalent decimals?
Equivalent decimals are decimals that may appear different but represent the same value or number. This happens due to additional zeros or the placement of the decimal point. For instance, 0.50,
0.500, and 0.5 are all equivalent decimals, as they represent the same value.
How can I identify equivalent decimals?
Identifying equivalent decimals involves checking the digits after the decimal point, counting trailing zeros, or converting the decimals into fractions. If the numbers or fractions are the same, the
decimals are equivalent. It’s like having different paths leading to the same destination!
What is the difference between equivalent decimals and like decimals?
Equivalent decimals are decimals that represent the same value, whereas like decimals have the same number of decimal places. For example, 0.5 and 0.50 are equivalent as they represent the same
value, but 0.50 and 0.75 are like decimals because they both have two decimal places.
Are 0.5 and 0.500 equivalent decimals?
Yes, indeed! 0.5 and 0.500 are equivalent decimals. They represent the same value. The extra zero in 0.500 is just a matter of presentation and doesn’t change the value of the decimal.
[EM] ASCII maps showing methods' "distances"
Kristofer Munsterhjelm km-elmet at broadpark.no
Mon Feb 21 15:38:20 PST 2011
Kevin Venzke wrote:
> Hi,
> I threw together a program that takes the DNA used by the method generator,
> and computes distances between methods based on the number of scenarios in
> which they give the same outcome. Then it tries to come up with a nice map
> that minimizes inaccuracy.
You could try using synthetic coordinate algorithms for mapping the
distances to 2D. I did that once for competing entries in a programming
game, using the centralized Vivaldi algorithm as described in the
original Vivaldi paper.
Another option would be to use principal components analysis, but I know
less about that.
> Roughly left-to-right there seems to be a "all preferences" to "first
> preferences" emphasis spectrum. Top-to-bottom I am not sure. It is
> amazing to me that Woodall's two (related) methods DSC and DAC are so far
> from each other, yet there is little else between them. I wasn't going
> to include "BV" (which required me to define it) except for that it falls
> in this area. It's actually more similar to DAC than Bucklin.
To test the idea that left-to-right is "all preferences" to "first
preferences", try including Borda... or antiplurality. They should be to
the left if that's correct, because they don't privilege first
preference very much. Perhaps Coombs would be down by IRV but
significantly to the left.
As for finding something between DSC and DAC, you could try DHSC. This
meets neither LNHarm nor LNHelp but might be "balanced" between the two.
DHSC simply consists of creating both the DAC and DSC structures, then
adding them up and running the DAC/DSC algorithm (intersect sorted sets
until there's only one left, skipping intersections that would turn the
set empty) on the result.
Adding the two structures together is simple. If {ABC} has a count of 20
in one structure and a count of 15 in another, then the result gives
{ABC} 20+15=35. Properly speaking, it should be the mean, not the sum,
but since the DAC/DSC algorithm only involves relative magnitudes (that
change the sorted order), it doesn't make a difference whether you use
mean or sum.
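A quick sketch of that structure-combining step (names invented; each "structure" maps a candidate set to its count):

```python
def combine_structures(dac, dsc):
    # DHSC sketch: sum the DAC and DSC counts for each candidate set.
    combined = dict(dac)
    for coalition, count in dsc.items():
        combined[coalition] = combined.get(coalition, 0) + count
    return combined

dac = {frozenset("ABC"): 20}
dsc = {frozenset("ABC"): 15}
print(combine_structures(dac, dsc)[frozenset("ABC")])  # 35
```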
PPT - Risk Assessment PowerPoint Presentation
1. Risk Assessment Vicki M. Bier (University of Wisconsin-Madison)
2. Introduction • Risk assessment is a means to characterize and reduce uncertainty to support our ability to deal with catastrophe • Modern risk assessment for engineered systems began with the
Reactor Safety Study (1975): • Applications to engineered systems and infrastructure are common
3. What is Risk Assessment? • “A systematic approach to organizing and analyzing scientific knowledge and information for potentially hazardous activities or for substances that might pose risks
under specified circumstances” • National Research Council (NRC), 1994
4. Definitions of Risk • “Both uncertainty and some kind of loss or damage” (Kaplan and Garrick 1981) • “The potential for realization of unwanted, negative consequences of an event” (Rowe 1976) •
“The probability per unit time of the occurrence of a unit cost burden” (Sage and White 1980) • “The likelihood that a vulnerability will be exploited” (NRC 2002)
5. Paradigm for Risk Assessment • A form of systems analysis • Answers three questions (Kaplan and Garrick 1981): • “What can go wrong?” • “How likely is it that that will happen?” • “If it does
happen, what are the consequences?”
6. What is Probabilistic Risk Assessment? • An integrated model of the response of an engineered system to disturbances during operations • A rigorous and systematic identification of the levels of
damage that could conceivably result from those responses • A probabilistic (that is, quantitative) assessment of the frequency of such occurrences and our uncertainty in that assessment • A tool
to help owners/operators make good decisions about system operations
7. ESSENCE OF PRA • A PRA is an assessment of how well a system responds to a variety of situations • It answers three basic questions: 1. What can go wrong during operation? 2. How likely is it to go wrong? 3. What are the consequences when it goes wrong? • We answer the first question in terms of scenarios • We answer the second by quantifying our knowledge of the likelihood of each scenario • We answer the third by quantifying our knowledge of the response of the system and its operators in terms of: damage states, release states and source terms, and scenario consequences
8. GRAPHICAL PRESENTATION OF RISK: [risk curve plotting exceedance frequency p(>x) against damage level x]
10. QUANTIFYING SCENARIOS: [event tree diagram: initiating event branching through top events A, B, C, and D, with branch nodes A, B1, and C3]
11. EVENT SEQUENCE QUANTIFICATION • φ(S) = φ(I) · f_A · f_B̄ · f_C · f_D̄, where: • φ(S) = the frequency of scenario S • φ(I) = the frequency of initiating event I • f_A = the fraction of times system A succeeds given that I has happened • f_B̄ = the fraction of times system B fails given that I has happened and A has succeeded • f_C = the fraction of times C succeeds given that I has happened, that A has succeeded, and B has failed • f_D̄ = the fraction of times D fails given that I has happened, A has succeeded, B has failed, and C has succeeded
12. SIMPLIFIED EVENT TREE DIAGRAM: [initiating event branching through top events A, B, C, and D; node B1]
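As a purely hypothetical numerical illustration of the product rule for a scenario frequency (all values invented):

```python
phi_I  = 0.1   # frequency of initiating event I (per year), hypothetical
f_A    = 0.99  # fraction of times A succeeds given I
f_notB = 0.01  # fraction of times B fails given I and A succeeded
f_C    = 0.95  # fraction of times C succeeds given I, A succeeded, B failed
f_notD = 0.02  # fraction of times D fails given I, A, not-B, and C

phi_S = phi_I * f_A * f_notB * f_C * f_notD  # frequency of scenario S
print(phi_S)  # ≈ 1.88e-05 per year
```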
13. RELATIONSHIP OF FAULT TREES TO EVENT TREES: [fault tree linked to an event tree; initial conditions, stage A top events, and damage states OK, PLS, LOC/V, AFW; legend: "OR" gate, "AND" gate; basic events include APU module, tank, isolation valves 1 and 2, and GGVM cooling 1 and 2]
14. FAULT TREES AND EVENT TREES • Both useful • Event trees used to display order of events and dependent events • Fault trees used to display combinations of events: • Order and dependencies are
obscured • Logically equivalent
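The logical equivalence is easy to see in code. Here is a minimal sketch of fault-tree gate evaluation, assuming independent basic events (a common simplification; the example structure is invented):

```python
def p_or(*ps):
    # "OR" gate: the gate event occurs if at least one input occurs.
    q = 1.0
    for p in ps:
        q *= 1.0 - p
    return 1.0 - q

def p_and(*ps):
    # "AND" gate: the gate event occurs only if every input occurs.
    q = 1.0
    for p in ps:
        q *= p
    return q

# Top event = (valve 1 fails AND valve 2 fails) OR pump fails
p_top = p_or(p_and(1e-2, 1e-2), 1e-3)
print(p_top)  # ≈ 1.10e-03
```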
15. RISK MANAGEMENT • Develop an integrated plant-specific risk model • Rank order contributors to risk by damage index • Decompose contributors into specific elements • Identify options, such as
design and procedure changes, for reducing the impact of the contributor on risk • Make the appropriate changes in the risk model: • And re-compute the risk for each option • Compute the cost
impacts of each system configuration, relative to the base case: • Including both initial and annual costs • Present the costs, risks, and benefits for each option
16. RISK DECOMPOSITION (ANATOMY OF RISK): [diagram tracing level of damage → type of release → type of plant damage → initiating event → event sequence → system unavailability → failure causes (System B cause table); input data: initiating events, components, maintenance, human error, common cause, environmental, other; causes, frequencies, and effects roll up to major systems, dominant sequences, and dominant failure modes]
17. REACTOR TRIP SYSTEM CAUSE TABLE: contributors to system failure frequency. This analysis was performed in November 1982.
19. Data Analysis • Input parameters are quantified from available data: • Typically using expert judgment and Bayesian statistics • Due to sparseness of directly relevant data • Hierarchical
(“two-stage”) Bayesian methods common: • Partially relevant data used to help construct prior distributions • Numerous areas in which improvements can be made: • Treatment of probabilistic
dependence • Reliance on subjective prior distributions • Treatment of model uncertainty
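A minimal sketch of the kind of Bayesian updating used for input parameters. The prior and data here are invented; a conjugate Beta-Binomial model is a common choice for failure-on-demand probabilities:

```python
# Prior: Beta(a, b) over the failure probability; prior mean a/(a+b) = 0.01
a, b = 1.0, 99.0
# Sparse plant data: k failures observed in n demands (invented numbers)
k, n = 2, 50
# Conjugate update: posterior is Beta(a + k, b + n - k)
a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 0.02
```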
20. Dependencies • The failure rates (or probabilities) of components can be uncertain and dependent on each other: • For example, learning that one component had a higher failure rate than expected
may cause one to increase one’s estimates of the failure rates of other similar components • Failure to take such dependence into account can result in substantial underestimation of the
uncertainty about the overall system failure rate: • And also the mean failure probability of the system • Historically, dependencies among random variables have often been either ignored: • Or
else modeled as perfect correlation
21. Dependencies • The use of copulas or other multivariate distributions has become more common: • But tractable models still are not sufficiently general to account for all realistic assumptions,
such as E(X|D) > E(Y|D) for all D • High-dimensional joint distributions are also challenging: • Correlation matrices must be positive definite • There can be numerous higher-order correlations
to assess • Cooke et al. developed a practical method for specifying a joint distribution over n continuous random variables: • Using only n(n−1)/2 assessments of conditional correlations •
(Bedford and Cooke 2001; Kurowicka and Cooke 2004)
22. Subjectivity • PRA practitioners sometimes treat the subjectivity of prior distributions cavalierly: • Best practice for eliciting subjective priors is difficult and costly to apply • Especially
for dozens of uncertain quantities • The use of “robust” or “reference” priors may minimize the reliance on judgment: • Although this may not work with sparse data
23. Probability Bounds Analysis • Specify bounds on the cumulative distribution functions of the inputs: • Rather than specific cumulative distributions • (Ferson and Donald 1998) • These bounds can
then be propagated through a model: • The uncertainty propagation process can be quite efficient • Yielding valid bounds on the cumulative distribution function for the final result of the model
(e.g., risk) • Can take into account not only uncertainty about the probability distributions of the model inputs: • But also uncertainty about their correlations and dependence structure • This
is especially valuable: • Correlations are more difficult to assess than marginal distributions • Correlations of 1 or -1 may not yield the most extreme distributions for the output variable of
interest (Ferson and Hajagos 2006)
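The effect of unknown dependence can be made concrete with the Fréchet bounds, which bracket a joint probability using only the marginal probabilities. The sketch below (illustrative only, not drawn from the cited references) shows the bounds for the conjunction and disjunction of two component failure events:

```python
def frechet_and(p_a, p_b):
    """Bounds on P(A and B) when the dependence between A and B is unknown."""
    return (max(0.0, p_a + p_b - 1.0), min(p_a, p_b))

def frechet_or(p_a, p_b):
    """Bounds on P(A or B) when the dependence between A and B is unknown."""
    return (max(p_a, p_b), min(1.0, p_a + p_b))

# Two components with failure probabilities 0.3 and 0.4: a parallel system
# (which fails only if both fail) could fail with probability anywhere in
# [0.0, 0.3]; assuming independence (0.3 * 0.4 = 0.12) picks just one point
# in that interval, which is the underestimation risk described above.
lo, hi = frechet_and(0.3, 0.4)
```

This is the simplest instance of the idea behind probability bounds analysis: propagate intervals rather than committing to a single dependence assumption.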
24. Exposure to Contamination • Regan et al. (2002) compare a two-dimensional Monte Carlo analysis of this problem to the results obtained using probability bounds • The qualitative conclusions of
the analysis (e.g., that a predator species was “potentially at risk” from exposure to contamination) remained unchanged: • Even using bounds of zero and one for some variables • Bounding
analysis can help support a particular decision: • If results and recommendations are not sensitive to the specific choices of probability distributions used in a simulation
25. Model Uncertainty • Uncertainty about model form can be important • Assessing a probability distribution over multiple plausible models is frequently not reasonable: • “All models are wrong, some
models are useful” (Box) • Models are not a collectively exhaustive set • Some models are intentionally simple or conservative • Bayesian model averaging avoids giving too much weight to complex
models (Raftery and Zheng 2003): • But still relies on assigning probabilities to particular models • Using Bayes theorem to update those probabilities given data
26. Joint Updating • In general, one will be uncertain about both model inputs and outputs • One would like to update priors for both inputs and outputs consistently: • With the wider distribution
being more sensitive to model results • Raftery et al. (1995) attempted this (Bayesian synthesis): • But that approach is subject to Borel’s paradox • Since it can involve conditioning on a set
of measure zero • Joint updating of model inputs and outputs is largely an unsolved problem
Machine Learning 101: Reverse Standardization
We’ve all been there; you’ve worked night and day to finally get an accurate model for your dataset. You’ve finally got an output from your model – but it’s scaled. What do you do? How do you reverse it?
In this 2-minute guide, we’ll go over how you can find out the real target value for your model prediction and how you can reverse it.
If you’re here for a quick code chunk for a model prediction, here is a Python function to get you on your way.
Reverse Standardization In Python For Model Prediction
import pandas as pd
import numpy as np
# example data
df = pd.read_csv('ds_salaries.csv')
# lets say your model gave you an output
# for a salary
# here's how you can reverse engineer it from
# the target column
def reverse_standardization_pred(col, prediction):
# calculate the mean
mean = sum(col) / len(col)
# calculate the variance
var = sum((val - mean)**2 for val in col) / len(col)
# calculate standard deviation
std = var ** 0.5
# apply it to the prediction
real_val = prediction * std + mean
return real_val
# your model predicted a salary, how to reverse it
# where the .25 is your **models** prediction
real_salary = reverse_standardization_pred(df['salary_in_usd'], .25)
print(f'Unstandardized salary data was: ${round(real_salary,2)}')
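If you'd rather not hand-roll the mean and standard deviation, Python's standard library does the same calculation; `statistics.pstdev` is the population standard deviation, matching the `len(col)` divisor used above (the function name here is my own):

```python
import statistics

def reverse_standardization_stdlib(col, prediction):
    # prediction is in standard-deviation units; map it back to the
    # original scale of the target column
    return prediction * statistics.pstdev(col) + statistics.mean(col)

# Example: a column with mean 2.5 and population std ~1.118
real_val = reverse_standardization_stdlib([1, 2, 3, 4], 0.25)  # ~2.78
```

With a pandas Series you would get the same result from `prediction * col.std(ddof=0) + col.mean()`.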
Why You Should Standardize Variables
Standardizing variables is important in order to ensure that the results of statistical tests are accurate and meaningful.
When variables are not standardized, it can be difficult to determine whether the results of a test are statistically significant.
Some machine learning models, like lasso and ridge regression, depend on scaled data.
Models that utilize gradient descent have shown improvements when data is standardized.
Why we can’t rebuild a dataset from standardized data (Without the Old Data)
We can’t rebuild a dataset from standardized data without the old data because of how standardization is done in the first place.
Let’s take a look.
The formula for standardization is the following (for each data point): z = (x − μ) / σ, where μ is the column mean and σ is its standard deviation.
Once we’ve applied this transformation to our data, we now have a standardized column with a mean at zero and a standard deviation of one.
If we wanted to reverse engineer this column (without the old data), the formula would be x = z × σ_z + μ_z, where σ_z and μ_z are the standard deviation and mean of the standardized column itself – which are 1 and 0.
Using this formula, every point maps to itself since we multiply it by 1 and add 0.
This is why (without the old standard deviation and mean) we cannot reverse standardize the data.
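A quick numerical check of this argument: standardize a small column, then try to “reverse” it using only its own post-standardization statistics. Every point maps back to itself:

```python
import statistics

data = [10.0, 12.0, 9.0, 14.0, 15.0]
mu, sigma = statistics.mean(data), statistics.pstdev(data)
z = [(x - mu) / sigma for x in data]          # standardized column

# The standardized column's own stats are mean ~0 and std ~1 ...
z_mu, z_sigma = statistics.mean(z), statistics.pstdev(z)

# ... so "reversing" with them just multiplies by 1 and adds 0
recovered = [v * z_sigma + z_mu for v in z]
```

The original mean and standard deviation are genuinely lost unless you saved them (or the raw column) somewhere.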
Other Articles in our Machine Learning 101 Series
We have many quick guides that go over some of the fundamental parts of machine learning. Some of those guides include:
The Gaussian Graphical Model-based (GGM) framework, focusing on the precision matrix and conditional dependence, is a more popular paradigm of heterogeneity analysis, which is more informative than
that limited to simple distributional properties. In GGM-based analyses, determining the number of subgroups is a challenging and important task. This package implements a recently developed method based on penalized fusion that can determine the number of subgroups and recover the subgrouping structure fully data-dependently. Moreover, the package also includes some Gaussian graphical
mixture model methods requiring a given number of subgroups. The main functions contained in the package are as follows.
• GGMPF: This function implements the GGM-based heterogeneity analysis via penalized fusion (Ren et al., 2021).
• PGGMBC: This method implements the penalized GGM-based clustering with unconstrained covariance matrices (Zhou et al., 2009).
• summary-network: This function provides a summary of the characteristics of the resulting network structures, including the overlap of edges across different subgroups, the connectivity of nodes, and so on.
• plot-network: This function implements the visualization of network structures.
We note that the penalties \(p(\cdot, \lambda)\) used in Ren et al. (2021) and Zhou et al. (2009) are MCP and lasso, respectively. Our package provides a variety of penalty types for both methods, including convex and concave penalties. The workflow of the package is as follows.
A relatively large number \(K\), an upper bound of the true number of subgroups \(K_0\), needs to be set by the users, which is easy to specify based on some biological knowledge. A new fusion
penalty is developed to shrink differences of parameters among the \(K\) subgroups and encourage equality, and then a smaller number of subgroups can be yielded. Three tuning parameters \(\lambda_1\)
, \(\lambda_2\), and \(\lambda_3\) are involved, where \(\lambda_1\) and \(\lambda_2\) are routine to determine the sparsity of parameters in means and precision matrices and regularize estimation.
And the conditional dependence relationships for each subgroup can be obtained by examining the nonzero estimates of the resulting precision matrices. \(\lambda_3\) is a pivotal parameter to control
the degree of shrinking differences, which implements an effective "search" between 1 and \(K\) based on the penalized fusion technique.
Data setting
Denote \(n\) as the number of independent subjects. For sample \(i(=1,\ldots, n)\), a \(p\)-dimensional measurement \(\boldsymbol{x}_i\) is available. Further assume that the \(n\) subjects belong to \(K_0\) subgroups, where the value of \(K_0\) is unknown. For the \(l\)th subgroup, assume the Gaussian distribution:
\[
f_{l}\left(\boldsymbol{x} ; \boldsymbol{\mu}_{l}^{*}, \boldsymbol{\Sigma}_{l}^{*}\right)=(2 \pi)^{-p / 2}\left|\boldsymbol{\Sigma}_{l}^{*}\right|^{-1 / 2} \exp \left\{-\frac{1}{2}\left(\boldsymbol{x}-\boldsymbol{\mu}_{l}^{*}\right)^{\top} (\boldsymbol{\Sigma}_{l}^{*})^{-1}\left(\boldsymbol{x}-\boldsymbol{\mu}_{l}^{*}\right)\right\},
\]
where the mean and covariance matrix are unknown. Overall, the \(\boldsymbol{x}_i\)s follow the mixture distribution:
\[
f(\boldsymbol{x}) =\sum_{l=1}^{K_0} \pi_{l}^{*} f_{l}\left(\boldsymbol{x} ; \boldsymbol{\mu}_{l}^{*}, \boldsymbol{\Sigma}_{l}^{*}\right),
\]
where the mixture probabilities \(\pi_{l}^{*}\)s are also unknown. Our goal is to determine the number of subgroups \(K_0\) and estimate the subgrouping structure fully data-dependently.
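For intuition, the mixture density above can be sketched in one dimension (a Python illustration only; the package itself works with \(p\)-dimensional Gaussians in R):

```python
import math

def gauss_pdf(x, mu, var):
    """Univariate Gaussian density (the p = 1 case of f_l above)."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_pdf(x, pis, mus, variances):
    """f(x) = sum_l pi_l * f_l(x); the mixture probabilities must sum to one."""
    return sum(p * gauss_pdf(x, m, v) for p, m, v in zip(pis, mus, variances))

# Two subgroups (K0 = 2) with equal mixture probabilities
density = mixture_pdf(0.0, [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0])
```

The estimation problem below is the inverse task: recover the number of components and their parameters from draws of such a mixture.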
GGM-based heterogeneity analysis via penalized fusion is based on the penalized objective function:
\[
\mathcal{L}(\boldsymbol{\Omega}, \boldsymbol{\pi} \mid \boldsymbol{X} ):= \frac{1}{n} \sum_{i=1}^{n} \log \left(\sum_{k=1}^{K} \pi_{k} f_{k}\left( \boldsymbol{x}_{i} ; \boldsymbol{\mu}_{k},\boldsymbol{\Theta}_{k}^{-1}\right)\right) - \mathcal{P}(\boldsymbol{\Omega}),
\]
where \(\boldsymbol{X}\) denotes the collection of observed data, \(\boldsymbol{\Omega} = (\boldsymbol{\Omega}_1^{\top}, \cdots, \boldsymbol{\Omega}_K^{\top} )^{\top}\), \(\boldsymbol{\Omega}_k=\operatorname{vec}\left(\boldsymbol{\mu}_{k}, \boldsymbol{\Theta}_{k}\right)=\left(\mu_{k 1}, \ldots, \mu_{k p}, \theta_{k 11}, \ldots, \theta_{k p 1}, \ldots, \theta_{k 1 p}, \ldots, \theta_{k p p}\right) \in \mathbb{R}^{p^{2}+p}\), \(\boldsymbol{\Theta}_{k}=\boldsymbol{\Sigma}_{k}^{-1}\) is the \(k\)th precision matrix with \(ij\)th entry \(\theta_{kij}\), \(\boldsymbol{\pi} = (\pi_{1}, \cdots, \pi_{K})^{\top}\),
\[
\mathcal{P}(\boldsymbol{\Omega}) = \sum_{k=1}^{K} \sum_{j=1}^{p} p(|\mu_{kj}|, \lambda_{1}) + \sum_{k=1}^{K} \sum_{i \neq j} p(\left|\theta_{k i j}\right|, \lambda_{2}) + \sum_{k < k^{\prime}} p \left( \left( \|\boldsymbol{\mu}_{k} - \boldsymbol{\mu}_{k^{\prime}}\|_2^2 + \| \boldsymbol{\Theta}_{k} - \boldsymbol{\Theta}_{k^{\prime}}\|_F^2 \right)^{1/2}, \lambda_{3} \right),
\]
\(\|\cdot\|_F\) is the Frobenius norm, and \(p(\cdot, \lambda)\) is a penalty function with tuning parameter \(\lambda > 0\), which can be selected as lasso, SCAD, MCP, and others. \(K\) is a known constant that satisfies \(K>K_0\). Consider:
\[
(\widehat{\boldsymbol{\Omega}}, \widehat{\boldsymbol{\pi}} )=\underset{ \boldsymbol{\Omega}, \boldsymbol{\pi} }{ \operatorname{argmax}}\ \mathcal{L}(\boldsymbol{\Omega}, \boldsymbol{\pi} \mid \boldsymbol{X} ).
\]
Denote \(\{\widehat{\boldsymbol{\Upsilon}}_1 , \cdots, \widehat{\boldsymbol{\Upsilon}}_{\widehat{K}_0} \}\) as the distinct values of \(\widehat{\boldsymbol{\Omega}}\); that is, \(\{k: \widehat{\boldsymbol{\Omega}}_k \equiv \widehat{\boldsymbol{\Upsilon}}_l, k=1, \cdots, K \}_{ l=1, \cdots, \widehat{K}_0 }\) constitutes a partition of \(\{1, \cdots, K\}\). Then there are \(\widehat{K}_0\) subgroups with estimated mean and precision parameters in \(\widehat{\boldsymbol{\Omega}}\). The mixture probabilities can be extracted from \(\widehat{\boldsymbol{\pi}}\).
First, we call the built-in simulation data set (\(K_0 = 3\)), and set the upper bound of \(K_0\) and the sequences of the tuning parameters (\(\lambda_1\), \(\lambda_2\), and \(\lambda_3\)).
K <- 6
lambda <- genelambda.obo(nlambda1=5,lambda1_max=0.5,lambda1_min=0.1,
Apply GGMPF to the data.
res <- GGMPF(lambda, example.data$data, K, penalty = "MCP")
Theta_hat.list <- res$Theta_hat.list
Mu_hat.list <- res$Mu_hat.list
opt_num <- res$Opt_num
opt_Mu_hat <- Mu_hat.list[[opt_num]]
opt_Theta_hat <- Theta_hat.list[[opt_num]]
K_hat <- dim(opt_Theta_hat)[3]
K_hat # Output the estimated K0.
Summarize the characteristics of the resulting network structures, and implement visualization of network structures.
This method combines the Gaussian graphical mixture model with regularization of the means and precision matrices, given the number of subgroups in advance. The two tuning parameters \(\lambda_1\) and \(\lambda_2\) are the same as those in GGMPF. Moreover, users can easily implement BIC-based subgroup-number selection using the function that outputs BIC values.
Data setting
It is the same as for GGMPF.
Given the number of subgroups \(K_0\), penalized GGM-based clustering with unconstrained covariance matrices is based on the model:
\[
(\widehat{\boldsymbol{\Omega}}^{\prime}, \widehat{\boldsymbol{\pi}}^{\prime} ) = \underset{ \boldsymbol{\Omega}^{\prime}, \boldsymbol{\pi}^{\prime} }{ \operatorname{argmax}}\ \frac{1}{n} \sum_{i=1}^{n} \log \left(\sum_{k=1}^{K_0} \pi_{k} f_{k}\left( \boldsymbol{x}_{i} ; \boldsymbol{\mu}_{k},\boldsymbol{\Theta}_{k}^{-1}\right)\right) - \sum_{k=1}^{K_0} \sum_{j=1}^{p} p(|\mu_{kj}|, \lambda_{1}) - \sum_{k=1}^{K_0} \sum_{i \neq j} p(\left|\theta_{k i j}\right|, \lambda_{2}),
\]
where \(\boldsymbol{\Omega}^{\prime} = (\boldsymbol{\Omega}_1^{\top}, \cdots, \boldsymbol{\Omega}_{K_0}^{\top} )^{\top}\), \(\boldsymbol{\pi}^{\prime} = (\pi_{1}, \cdots, \pi_{K_0})^{\top}\), and the other notations are the same as for GGMPF above.
First, we call the built-in simulation data set, and give the true \(K_0\) and the sequences of the tuning parameters (\(\lambda_1\) and \(\lambda_2\)).
K <- 3
lambda <- genelambda.obo(nlambda1=5,lambda1_max=0.5,lambda1_min=0.1,
Apply PGGMBC to the data.
res <- PGGMBC(lambda, example.data$data, K, initial.selection="K-means")
Theta_hat.list <- res$Theta_hat.list
opt_num <- res$Opt_num
opt_Theta_hat <- Theta_hat.list[[opt_num]]
The usages of summarizing the characteristics of the resulting network structures and implementing visualization of network structures are the same as for GGMPF.
19.4 Electric Power
Learning Objectives
By the end of this section, you will be able to do the following:
• Define electric power and describe the electric power equation
• Calculate electric power in circuits of resistors in series, parallel, and complex arrangements
Section Key Terms
electric power
Power is associated by many people with electricity. Every day, we use electric power to run our modern appliances. Electric power transmission lines are visible examples of electricity providing
power. We also use electric power to start our cars, to run our computers, or to light our homes. Power is the rate at which energy of any type is transferred; electric power is the rate at which
electric energy is transferred in a circuit. In this section, we’ll learn not only what this means, but also what factors determine electric power.
To get started, let’s think of light bulbs, which are often characterized in terms of their power ratings in watts. Let us compare a 25-W bulb with a 60-W bulb (see Figure 19.23). Although both
operate at the same voltage, the 60-W bulb emits more light intensity than the 25-W bulb. This tells us that something other than voltage determines the power output of an electric circuit.
Incandescent light bulbs, such as the two shown in Figure 19.23, are essentially resistors that heat up when current flows through them and they get so hot that they emit visible and invisible light.
Thus the two light bulbs in the photo can be considered as two different resistors. In a simple circuit such as a light bulb with a voltage applied to it, the resistance determines the current by
Ohm’s law, so we can see that current as well as voltage must determine the power.
The formula for power may be found by dimensional analysis. Consider the units of power. In the SI system, power is given in watts (W), which is energy per unit time, or J/s
19.47  $W=\frac{\text{J}}{\text{s}}$
Recall now that a voltage is the potential energy per unit charge, which means that voltage has units of J/C
19.48  $V=\frac{\text{J}}{\text{C}}$
We can rewrite this equation as $\text{J}=\text{V}\times\text{C}$ and substitute this into the equation for watts to get
$W=\frac{\text{J}}{\text{s}}=\frac{\text{V}\times\text{C}}{\text{s}}=\text{V}\times\frac{\text{C}}{\text{s}}.$
But a coulomb per second (C/s) is an electric current, which we can see from the definition of electric current, $I=\frac{\Delta Q}{\Delta t}$, where $\Delta Q$ is the charge in coulombs and $\Delta t$ is time in seconds. Thus, the equation above tells us that electric power is voltage times current, or
$P=IV.$
This equation gives the electric power consumed by a circuit with a voltage drop of V and a current of I.
For example, consider the circuit in Figure 19.24. From Ohm’s law, the current running through the circuit is
19.49  $I=\frac{V}{R}=\frac{12\text{ V}}{100\text{ Ω}}=0.12\text{ A}.$
Thus, the power consumed by the circuit is
19.50  $P=VI=(12\text{ V})(0.12\text{ A})=1.4\text{ W}.$
Where does this power go? In this circuit, the power goes primarily into heating the resistor in this circuit.
In calculating the power in the circuit of Figure 19.24, we used the resistance and Ohm’s law to find the current. Ohm’s law gives the current $I=V/R$, which we can insert into the equation for electric power to obtain
$P=IV=\left(\frac{V}{R}\right)V=\frac{V^2}{R}.$
This gives the power in terms of only the voltage and the resistance.
We can also use Ohm’s law to eliminate the voltage in the equation for electric power and obtain an expression for power in terms of just the current and the resistance. If we write Ohm’s law as $V=IR$ and use this to eliminate $V$ in the equation $P=IV$, we obtain
$P=IV=I(IR)=I^2R.$
This gives the power in terms of only the current and the resistance.
Thus, by combining Ohm’s law with the equation $P=IV$ for electric power, we obtain two more expressions for power: one in terms of voltage and resistance and one in terms of current and
resistance. Note that only resistance (not capacitance or anything else), current, and voltage enter into the expressions for electric power. This means that the physical characteristic of a circuit
that determines how much power it dissipates is its resistance. Any capacitors in the circuit do not dissipate electric power—on the contrary, capacitors either store electric energy or release
electric energy back to the circuit.
To clarify how voltage, resistance, current, and power are all related, consider Figure 19.25, which shows the formula wheel. The quantities in the center quarter circle are equal to the quantities
in the corresponding outer quarter circle. For example, to express a potential $V$ in terms of power and current, we see from the formula wheel that $V=P/I$.
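The formula wheel's relationships for power can be captured in a small helper (a sketch, not part of the text):

```python
def electric_power(V=None, I=None, R=None):
    """Power in watts from any two of voltage (V), current (A), resistance (ohm)."""
    if V is not None and I is not None:
        return V * I          # P = IV
    if V is not None and R is not None:
        return V ** 2 / R     # P = V^2 / R
    if I is not None and R is not None:
        return I ** 2 * R     # P = I^2 R
    raise ValueError("need any two of V, I, R")

# The circuit of Figure 19.24: 12 V across 100 ohm
p = electric_power(V=12, R=100)
```

All three branches give the same answer for a given circuit, since they are linked by Ohm's law.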
Worked Example
Find the Resistance of a Lightbulb
A typical older incandescent lightbulb was 60 W. Assuming that 120 V is applied across the lightbulb, what is the current through the lightbulb?
We are given the voltage and the power output of a simple circuit containing a lightbulb, so we can use the equation $P=IV$ to find the current $I$ that flows through the lightbulb.
Solving $P=IV$ for the current and inserting the given values for voltage and power gives
19.51  $I=\frac{P}{V}=\frac{60\text{ W}}{120\text{ V}}=0.50\text{ A}.$
Thus, a half ampere flows through the lightbulb when 120 V is applied across it.
This is a significant current. Recall that household power is AC and not DC, so the 120 V supplied by household sockets is an alternating voltage, not a constant voltage. The 120 V is actually the time-averaged voltage provided by such sockets. Thus, the average current going through the light bulb over a period of time longer than a few seconds is 0.50 A.
Worked Example
Boot Warmers
To warm your boots on cold days, you decide to sew a circuit with some resistors into the insole of your boots. You want 10 W of heat output from the resistors in each insole, and you want to run
them from two 9-V batteries (connected in series). What total resistance should you put in each insole?
We know the desired power and the voltage (18 V, because we have two 9-V batteries connected in series), so we can use the equation $P=V^2/R$ to find the requisite resistance.
Solving $P=V^2/R$ for the resistance and inserting the given voltage and power, we obtain
19.52  $R=\frac{V^2}{P}=\frac{(18\text{ V})^2}{10\text{ W}}\approx 32\text{ Ω}.$
Thus, the total resistance in each insole should be about 32 $Ω.$
Let’s see how much current would run through this circuit. We have 18 V applied across a resistance of 32 $Ω$, so Ohm’s law gives
19.53  $I=\frac{V}{R}=\frac{18\text{ V}}{32\text{ Ω}}=0.56\text{ A}.$
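Carrying the insole numbers through at full precision (a quick sketch; the text rounds to two significant figures):

```python
V = 18.0              # two 9-V batteries in series, volts
P = 10.0              # desired heat output per insole, watts

R = V ** 2 / P        # required total resistance: 32.4 ohm (~32 at 2 sig figs)
I = V / R             # resulting current: ~0.56 A
```

Keeping the unrounded resistance avoids compounding rounding error in the current.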
All batteries have labels that say how much charge they can deliver (in terms of a current multiplied by a time). A typical 9-V alkaline battery can deliver a charge of 565 $mA⋅h$. Because the two batteries are in series, the same current flows through both, so the available charge is still 565 $mA⋅h$ (delivered at twice the voltage), and this heating system would function for a time of
19.54  $t=\frac{565\times 10^{-3}\text{ A⋅h}}{0.56\text{ A}}\approx 1.0\text{ h}.$
Worked Example
Power through a Branch of a Circuit
Each resistor in the circuit below is 30 $Ω$. What power is dissipated by the middle branch of the circuit?
The middle branch of the circuit contains resistors $R_3$ and $R_5$ in series. The voltage across this branch is 12 V. We will first find the equivalent resistance in this branch, and then use $P=V^2/R$ to find the power dissipated in the branch.
The equivalent resistance is $R_{\text{middle}}=R_3+R_5=30\text{ Ω}+30\text{ Ω}=60\text{ Ω}$. The power dissipated by the middle branch of the circuit is
19.55  $P_{\text{middle}}=\frac{V^2}{R_{\text{middle}}}=\frac{(12\text{ V})^2}{60\text{ Ω}}=2.4\text{ W}.$
Let’s see if energy is conserved in this circuit by comparing the power dissipated in the circuit to the power supplied by the battery. First, the equivalent resistance of the left branch is
19.56  $R_{\text{left}}=\frac{1}{1/R_1+1/R_2}+R_4=\frac{1}{1/30\text{ Ω}+1/30\text{ Ω}}+30\text{ Ω}=45\text{ Ω}.$
The power through the left branch is
19.57  $P_{\text{left}}=\frac{V^2}{R_{\text{left}}}=\frac{(12\text{ V})^2}{45\text{ Ω}}=3.2\text{ W}.$
The right branch contains only $R_6$, so the equivalent resistance is $R_{\text{right}}=R_6=30\text{ Ω}$. The power through the right branch is
19.58  $P_{\text{right}}=\frac{V^2}{R_{\text{right}}}=\frac{(12\text{ V})^2}{30\text{ Ω}}=4.8\text{ W}.$
The total power dissipated by the circuit is the sum of the powers dissipated in each branch.
19.59  $P=P_{\text{left}}+P_{\text{middle}}+P_{\text{right}}=3.2\text{ W}+2.4\text{ W}+4.8\text{ W}=10.4\text{ W}$
The power provided by the battery is
19.60  $P=IV,$
where $I$ is the total current flowing through the battery. We must therefore add up the currents going through each branch to obtain $I$. The branches contribute currents of
19.61  $I_{\text{left}}=\frac{V}{R_{\text{left}}}=\frac{12\text{ V}}{45\text{ Ω}}=0.2667\text{ A},\quad I_{\text{middle}}=\frac{V}{R_{\text{middle}}}=\frac{12\text{ V}}{60\text{ Ω}}=0.20\text{ A},\quad I_{\text{right}}=\frac{V}{R_{\text{right}}}=\frac{12\text{ V}}{30\text{ Ω}}=0.40\text{ A}.$
The total current is
19.62  $I=I_{\text{left}}+I_{\text{middle}}+I_{\text{right}}=0.2667\text{ A}+0.20\text{ A}+0.40\text{ A}=0.87\text{ A},$
and the power provided by the battery is
19.63  $P=IV=(0.87\text{ A})(12\text{ V})=10.4\text{ W}.$
This is the same power as is dissipated in the resistors of the circuit, which shows that energy is conserved in this circuit.
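The bookkeeping in this example is easy to verify in a few lines (a sketch using the branch resistances derived above):

```python
R = 30.0
R_left = 1 / (1 / R + 1 / R) + R   # R1 || R2 in series with R4 -> 45 ohm
R_middle = R + R                   # R3 + R5 -> 60 ohm
R_right = R                        # R6 alone -> 30 ohm
V = 12.0

branches = (R_left, R_middle, R_right)
P_dissipated = sum(V ** 2 / r for r in branches)   # 3.2 + 2.4 + 4.8 = 10.4 W
I_total = sum(V / r for r in branches)             # total battery current
P_battery = I_total * V                            # matches P_dissipated
```

The equality of `P_dissipated` and `P_battery` is exactly the energy-conservation check done by hand above.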
Practice Problems
What is the formula for the power dissipated in a resistor?
a. The formula for the power dissipated in a resistor is $P=\frac{I}{V}.$
b. The formula for the power dissipated in a resistor is $P=\frac{V}{I}.$
c. The formula for the power dissipated in a resistor is $P=IV.$
d. The formula for the power dissipated in a resistor is $P=I^2V.$
What is the formula for power dissipated by a resistor given its resistance and the voltage across it?
a. The formula for the power dissipated in a resistor is $P=\frac{R}{V^2}$
b. The formula for the power dissipated in a resistor is $P=V^2R$
c. The formula for the power dissipated in a resistor is $P=\frac{V^2}{R}$
d. The formula for the power dissipated in a resistor is $P=I^2R$
Check your Understanding
Exercise 8
Which circuit elements dissipate power?
a. capacitors
b. inductors
c. ideal switches
d. resistors
Exercise 9
Explain in words the equation for power dissipated by a given resistance.
a. Electric power is proportional to current through the resistor multiplied by the square of the voltage across the resistor.
b. Electric power is proportional to square of current through the resistor multiplied by the voltage across the resistor.
c. Electric power is proportional to current through the resistor divided by the voltage across the resistor.
d. Electric power is proportional to current through the resistor multiplied by the voltage across the resistor.
Psych in Real Life: Growth Mindsets
Learning Objectives
• Explain how different praise and mindsets can lead to different levels of performance
How Mindset Influences Performance
Imagine that you are a parent and your child has just brought home a report card from 4th grade that is really good. You look it over and feel proud of your son or daughter. With a wide grin on your
face, you turn to your child and say:
“I’m so proud of you! This report card is great! You __________”
• are so smart!
• must have worked so hard!
• have some jelly on your nose!
We hope you didn’t choose the jelly statement. Between the other two options, which one would you be more likely to blurt out?
It turns out that your choice could matter.
Carol Dweck, who is now a Professor of Psychology at Stanford University, has been studying factors that promote or interfere with achievement since the 1970s. Over this time, and especially since
the mid-1990s, she came to realize that our ways of dealing with the world and particularly our behaviors in trying to achieve our own goals are influenced by what she calls “self-theories”: beliefs
we have about our own abilities, strengths and weaknesses, and potential. These self-theories affect decisions we make about what is possible or sensible or reasonable to do in order to achieve our
Before we discuss Carol Dweck’s work, please answer a few questions about your own beliefs. Try to answer based on your real ways of thinking. The questions are a bit repetitive, but answer each one
without regard to your previous answers.
Take the 8-question Mindset Quiz here.
Dr. Dweck and her colleagues have used questions like the ones you just answered to sort people into groups based on their beliefs about intelligence (and other abilities and skills). She has found
that people tend to adopt one of two general sets of beliefs about intelligence. People with a fixed mindset tend to think of intelligence as an “entity”—something that is part of a person’s
essential self. According to people with this belief, intelligence does not change much regardless of what we do or what we experience. Other people have a growth mindset, and they tend to think of
intelligence as being “incremental”—a quality that can change for better or worse depending on what we do and on the experiences we have. Some people are strongly committed to one or the other end of
the fixed vs. growth mindset scale, while others fall in-between to varying degrees.
Study 1: Mueller & Dweck (1998)
If Dweck is right, our mindset has a big impact on how well we achieve our potential—in school and in many other areas of our lives (for example, in sports, music, and business). But where do these
different mindsets come from?
There can be many reasons that a person comes to believe that intelligence is fixed or changeable, but one obvious influence on our way of thinking about ourselves is the messages we hear from adults
as we grow up. Dweck and her then-graduate student Claudia Mueller wanted to see if they could influence the mindset of children, if only for a brief period of time, by giving different kinds of
praise to the children. Their starting point was the unsurprising and well-established idea that praise is motivating. When we do something and receive praise, we are more likely to want to do that
same thing again. But Mueller and Dweck wondered if all praise is equal. In particular, is it possible that certain types of praise that well-meaning parents and teachers often use could actually
reduce a child’s motivation to learn and that child’s resiliency when they encounter challenges?
The researchers recruited 128 fifth graders (70 girls and 58 boys ranging in age from 10 to 12) to participate in their study. Before we go into the details of the first experiment, please get a feel
for the task that the children had to perform.
You will have one minute to solve as many of the problems below as you can. For each problem, you will see a set of patterns arranged in a 3×3 matrix. Each matrix has one item missing, and your
task is to figure out what the missing item is based on the changing patterns in the rows, columns, and diagonals.
Try It
Before we start, here is one practice item. The 3×3 matrix is at the top and the pattern on the lower right is missing. Figure out which one of the eight patterns on the bottom, labeled 1 to 8, is
the missing pattern.
Show Answer
The correct answer is pattern #7. The pattern on the right in each row combines the dots from the other two patterns in that row.
Try It
Now you will have ONE MINUTE to solve as many of the problems below as possible.
Now that you’ve taken the test, how much would you like to try some more of these questions?
• Not at all
• 1
• 2
• 3
• 4
• 5
• Very much
How much did you enjoy working on these problems?
• Not at all
• 1
• 2
• 3
• 4
• 5
• Very much
How well do you think you did on these problems overall?
• Not very well
• 1
• 2
• 3
• 4
• 5
• Very well
If we gave you some more problems, would you prefer some more like the easier practice problem or some more like the hardest test problem you tried?
• Like the easier practice problem
• Like the hardest test problem
The problem-solving task you just tried out is based on a widely used psychological test called the Raven’s Progressive Matrices. Most people find the test to be challenging, requiring close
attention to detail and careful logical thinking. Mueller and Dweck chose this task because it could be adapted to be relatively easy or extremely difficult by changing the complexity of the patterns
required for the solution.
The experiment had three stages, each based around a different set of matrix problems like the ones you worked on. Each child was tested one-on-one in an otherwise empty classroom by a research assistant.
Stage 1: Pretest, Treatment, and Assessment of Motivation
The children were given instructions and 10 problems that were fairly easy to solve. At the end of 4 minutes, they were stopped and the research assistant scored their answers. On average, the
children attempted to answer 7.9 out of the 10 problems, and the mean number correct was 5.2.
When you do something to manipulate an independent variable, that something you do (administer a pill, tell the participant something that might affect performance, etc.) is called a “treatment.” In
this case, the treatment was the feedback the child received about their performance on the progressive matrices task. This treatment involved a bit of deception because children received randomly
assigned feedback. In other words, regardless of real performance, the children heard one of three statements depending on random assignment to a treatment condition.
• First, every child was told: “Wow, you did very well on these problems. You got _____ right. That’s a really high score.” The minimum number right that a child heard was 80%, which is obviously
well above the actual average of 51%. If a child got more than 80% correct, the actual number correct was used.
• The next step was based on the treatment condition the child had been assigned to:
□ Some of the children were praised for their ABILITY: “You must be smart at these problems.”
□ Other children were praised for their EFFORT: “You must have worked hard at these problems.”
□ The remaining children were in the CONTROL condition. They did not receive any additional feedback, aside from the general praise shown above.
After receiving feedback and, for children in two of the conditions, additional praise, the children were asked a series of questions. The experimenters wanted to know if the success the children
experienced in the first set of problems, along with the type of praise, influenced their choice of additional problems. They were told that they might get some more problems to solve and they were
asked to choose the difficulty of those problems. There were several options, but the choice came down to this:
□ Give me easy problems: “Problems that I’m pretty good at, so I can show that I’m smart.”
□ Give me challenging problems: “Problems that I’ll learn a lot from, even if I won’t look so smart.”
The children were then told that there might be some time at the end of the session to work on these problems they had chosen, but that the next problems they would work on had been determined before
the experiment started. They were told this so they would not interpret the next problem set as being “easy” or “challenging” based on their selection.
The results showed that the children were genuinely influenced by the praise they had received. The figure below shows the percentage of children choosing EASY problems, broken down by treatment
condition. The children who were praised for how smart they were (ability) were far more likely to choose easy problems than were the children praised for working hard (effort). The children in the control condition, who were told they did well but received no additional praise, were in the middle.
Stage 2: Failure, Negative Feedback, and Consequences
Next, the children tried to solve a new set of 10 matrix problems and again they had 4 minutes. On the surface, these problems looked about the same as the first set, but they were considerably more
difficult. After the 4-minute test period, the researchers scored the answers and, regardless of actual performance, they told the children that they had done poorly (“a lot worse”). No one was told
that they had solved more than 50% correctly. In fact, this feedback was accurate. The results showed that the children found the problems difficult. On average, they attempted 5.8 of the 10 problems
and correctly solved only 1.8 of them. There was no significant difference in the number of problems solved for the three groups (ability feedback, effort feedback, and no-feedback control).
Now the experimenters wanted to know about the effect of “failure” on the children’s motivation (though the term “failure” was never used with the children).
Immediately after receiving feedback, the children were asked a series of questions:
• “How much would you like to take these problems home to work on?” [This was a measure of “task persistence”]
• “How much did you like working on the first set of problems? How much did you enjoy working on the second set? How much fun were the problems?” [These measured “task enjoyment”]
• Using a somewhat complicated measure, the children were also asked to explain their difficulties with the second problem set by attributing failure to lack of ability or lack of effort. This was
done in a way that they could explain their problems on the second set as partially due to low ability and partially to low effort.
• “How much would you like to take these problems home?” The children answered on a 1-to-6 scale, where higher numbers mean more interest in taking the problems home to practice.
• “How much fun were the problems?” The children answered on a 1-to-6 scale, where higher numbers mean more enjoyment of the problems.
• Why did you perform poorly on this second set of problems? The children expressed their own explanation for their poor performance using a somewhat complicated procedure. It was not a simple
ability vs. effort choice and they could apportion their failure partially to either cause (reference the original study for more details).
Stage 3: Posttest
For the last stage of the experiment, the children were given a new set of problems that was similar in difficulty to the first set. The problems were moderately difficult, and the children had 4
minutes to solve as many as possible. The figure below shows the change in the average number of problems between the pretest (Stage 1) and the posttest (Stage 3).
Try It
Instructions: Click and drag the circles on the right (Posttest) to where you think they should be to reflect the results of the experiment. When you’re done, click the link below to see the actual results.
Click here to see the results.
The Mueller and Dweck experiment shows how a single comment to a child can have at least a temporary effect. It is unlikely that these children were still influenced by that one comment (“You’re
smart!” or “You worked hard!”) a day later or even an hour later. But at least for a short time in a controlled setting, the children were apparently affected by what the adult researcher said to
them. Why would this matter? If a child repeatedly and consistently hears one sort of encouragement or the other, the child can internalize that way of thinking. Later, as an adolescent and then an
adult, the individual’s “mindset” can determine how that person approaches new opportunities to learn and to grow intellectually.
Before you go on, we’d like you to create a psychological theory. This may sound like a strange thing to do because theories are often presented to you in textbooks as being the final summary of some
research. Sometimes that is true, but the primary use of theories in real scientific research is as a temporary and changeable summary of a researcher’s ideas.
Try It
Using the figure below, which shows a sequence of influences beginning with either praise for effort or praise for ability, build a psychological theory.
This is the psychological theory based on Dr. Dweck’s ideas, showing how the two different mindsets lead to different outcomes.^[2]
What this theory says is that different kinds of praise encourage the child to focus on different goals. Praise for effort tells the child that the process of learning is important and reward comes
from trying hard. Praise for ability tells the child that performance comes from something mysterious inside of you (“intelligence” or “talent”) rather than from what you do.
According to the theory (and supported by the results), children who had been praised for effort could focus on the process of learning, so failure at hard problems could be seen as a challenge—even
something fun—and failure could motivate them. The children who were praised for their intelligence, which effort cannot change, felt smart when they had easy problems, but the hard problems led to a
disturbing realization: maybe I don’t have that magical ability.
At stage 3 in the experiment, children who were energized by the difficult problems tackled the final set of problems, which were fairly easy, with enthusiasm that led to success. The children who
were discouraged by failure handicapped themselves on the last set of problems, doing worse than they had at the beginning of the study.
Next, let’s read about a second study by Dweck’s research team, though this one is described more briefly and with less detail. Study 2 is not an experiment because there are no manipulated
variables. It is a longitudinal study, which means that the same participants (in this case, children) are tested repeatedly across a long period of time.
Study 2: Blackwell, Trzesniewski, and Dweck (2007)
In this study^[3], Dweck and her colleagues administered a questionnaire about beliefs and attitudes to some 7th graders in public schools, and then they tracked 373 of the students from the
beginning of the 7th grade to the end of 8th grade. This period, which marked the transition from elementary school to junior high school, was considered a particularly interesting time because it
was a challenging, even stressful, time for the students and the children’s learning styles and attitudes could now have a substantial impact on their academic achievement.
At the beginning of their 7th grade school year, the children were tested on their mindset (various levels of commitment to fixed or growth mindset), learning goals (preference for easy or
challenging work), beliefs about effort (whether it tends to lead to improvement or not), and attitudes about failure (whether it is motivating or discouraging).
The researchers focused on the students’ mathematics grades across the two years of the study. They chose mathematics because students tend to have strong beliefs about their skills (“I’m good at
math” or “I’m not a math person”), which is influenced by their mindset and because math proficiency can be tested and graded fairly objectively. Although the study focused on math, the researchers
were interested in any area of study or skill, not just math.
The figure below shows the average grades^[4] of the students with strong fixed and strong growth mindsets based on the initial test. Students with mixed mindsets are not included in this graph. At
the end of the first semester, there was a very modest difference of fewer than two points in math grades. The trends for the two lines are obviously different. The students with a fixed mindset (red
line) showed a slight decline in average grades across the two years of the study. Students with a growth mindset (green line) showed steady improvement across the two years, with their average grade increasing by nearly 3 points.
At the beginning of the study, the students—then just starting the first term of the 7th grade—filled out a questionnaire about their attitudes and beliefs about learning. The table below summarizes
these differences.^[5] The reason for these questions is an important part of the psychology of learning. Mindset itself (fixed vs. growth) doesn’t cause better or worse performance. Mindset leads to
behaviors (types of studying, reactions to setbacks) that in turn affect the quality of learning.
The researchers found that children with growth mindsets (related to EFFORT praise in the first study) had different attitudes than children with fixed mindsets (related to ABILITY praise in the
first study). The table below summarizes their findings.
Growth Mindsets and Fixed Mindsets
Attitude                      | Fixed Mindset                | Growth Mindset
Preferred difficulty of work  | Easy success                 | Challenging
Belief about value of effort  | Doesn’t lead to improvement  | Leads to improvement
Attitude about failure        | Discouraging                 | Motivating
The table indicates that children with different mindsets sought out different kinds of experience, with growth mindset children preferring challenging experiences, while those with a fixed mindset
preferred easier learning experiences that led to easy success. The growth mindset students believed that working hard—effort—leads to improvement, while those with fixed mindsets tended to
undervalue effort, believing that hard work is frustrating because we can’t do better than our “talents” or “innate abilities” allow us to do. Finally, the growth mindset children found difficult
work and even failure to be a source of inspiration. They wanted to prove to themselves and others that they could do what was needed to succeed. The fixed mindset children tended to respond to
difficulty and failure with discouragement, believing that it simply reaffirmed their own limitations.
The two studies we have discussed are just two of dozens of research projects by Dweck and others that show how mindset is related to differences in achievement. In another study, Grant and Dweck
(2003) followed several hundred college students taking a pre-med organic chemistry course, as this is one of the most important and challenging courses for pre-med students at most universities.
Students with a growth mindset outperformed students with a fixed mindset, and the two groups reported differences in attitudes and beliefs similar to those shown in the table above.
Mindset is just one factor that influences how we learn and how we respond to challenges. Whether you have a growth mindset or a fixed mindset, you can study hard and do well in school and in other
areas. Here is a summary point from Carol Dweck: “It should be noted that in these studies…students who have a fixed mindset but who are well prepared and do not encounter difficulties can do just
fine. However, when they encounter challenges or obstacles they may then be at a disadvantage.”
One last thing to remember is this: you can change your mindset. If you regularly handicap yourself by your beliefs (I just don’t have the talent for this) and attitudes about learning (I can’t learn
this), you can change those beliefs and attitudes. That change in mindset can be the difference between an effective response to challenges or avoidance of those challenges. Keep in mind that your
beliefs and attitudes are the result of many years of experience, so you won’t change your mindset overnight by simply deciding to be different. You may have to work at it. In particular, when you
encounter difficulty—a poor grade on a test, a paper that has some negative comments from your professor, or a reading assignment that leaves you confused—that is the time that your mindset can have
a huge impact on what you do next. Don’t let your mindset prevent you from realizing your abilities or reaching your potential!
Comparing Fractions Worksheets for 6th Grade
Comparing Fractions and Decimals
Comparing Fractions and Decimals
Equivalent Fractions and Comparing Fractions
Comparing Fractions- mixed
Ordering and Comparing Fractions-Decimals-Percents
Comparing Fractions and Decimals
Comparing fractions and decimals
Ordering and Comparing Fractions, Decimals, and Percents
Explore printable Comparing Fractions worksheets for 6th Grade
Comparing Fractions worksheets for Grade 6 are an essential tool for teachers looking to help their students master the concept of fractions in Math. These worksheets provide a variety of exercises
and problems that challenge students to compare and order fractions, find equivalent fractions, and simplify complex fractions. With a range of difficulty levels and question types, teachers can
easily customize these worksheets to suit the needs of their Grade 6 students. By incorporating these worksheets into their lesson plans, teachers can ensure that their students develop a strong
foundation in fractions, setting them up for success in more advanced Math topics.
In addition to Comparing Fractions worksheets for Grade 6, teachers can also utilize Quizizz, an online platform that offers a variety of engaging and interactive quizzes and games to reinforce Math
concepts. Quizizz allows teachers to create their own custom quizzes or choose from a vast library of pre-made quizzes, covering topics such as fractions, decimals, and percentages. With Quizizz,
students can practice their skills in a fun and competitive environment, while teachers can easily track their progress and identify areas for improvement. By incorporating both worksheets and
Quizizz into their teaching strategies, educators can provide a comprehensive and well-rounded approach to teaching fractions in Grade 6 Math.
Static scheduling infrastructure
Scheduling is a common concern in hardware design, for example in high-level synthesis flows targeting an FSM+Datapath execution model (“static HLS”). This document gives an overview of, and provides
rationale for, the infrastructure in the circt::scheduling namespace. At its core, it defines an extensible problem model that acts as an interface between clients (i.e. passes that have a need to
schedule a graph-like IR) and reusable algorithm implementations.
This infrastructure aims to provide:
• a library of ready-to-use problem definitions and schedulers for clients to hook into.
• an API to make algorithm implementations comparable and reusable.
• a mechanism to extend problem definitions to model additional concerns and constraints.
Getting started ¶
Let’s walk through a simple example. Assume we want to schedule the computation in the entry block of a function such as @foo(...) in the listing below. This means we want to assign integer start
times to each of the operations in this untimed IR.
func @foo(%a1 : i32, %a2 : i32, %a3 : i32, %a4 : i32) -> i32 {
  %0 = arith.addi %a1, %a2 : i32
  %1 = arith.addi %0, %a3 : i32
  %2:3 = "more.results"(%0, %1) : (i32, i32) -> (i32, i32, i32)
  %3 = arith.addi %a4, %2#1 : i32
  %4 = arith.addi %2#0, %2#2 : i32
  %5 = arith.addi %3, %3 : i32
  %6 = "more.operands"(%3, %4, %5) : (i32, i32, i32) -> i32
  return %6 : i32
}
Our only constraint is that an operation can start after its operands have been computed. The operations in our source IR are unaware of time, so we need to associate them with a suitable operator
type. Operator types are an abstraction of the target architecture onto which we want to schedule the source IR. Here, the only property we need to model is their latency. Let’s assume that additions
take 1 time step, and that the operations in the dummy more. dialect take 3 time steps. As the return operation just passes control back to the caller, we assume a latency of 0 time steps for it.
Boilerplate ¶
The scheduling infrastructure currently has three toplevel header files.
#include "circt/Scheduling/Problems.h"
#include "circt/Scheduling/Algorithms.h"
#include "circt/Scheduling/Utilities.h"
using namespace circt::scheduling;
Constructing a problem instance ¶
Our stated goal requires solving an acyclic scheduling problem without resource constraints, represented by the Problem class in the scheduling infrastructure. We need to construct an instance of the
problem, which serves as a container for the problem components as well as their properties. The MLIR operation passed as an argument to the get(...) method is used to emit diagnostics.
auto prob = Problem::get(func);
Then, we set up the operator types with the latencies as discussed in the introduction. Operator types are identified by string handles.
auto retOpr = prob.getOrInsertOperatorType("return");
prob.setLatency(retOpr, 0);
auto addOpr = prob.getOrInsertOperatorType("add");
prob.setLatency(addOpr, 1);
auto mcOpr = prob.getOrInsertOperatorType("multicycle");
prob.setLatency(mcOpr, 3);
Next, we register all operations that we want to consider in the problem instance, and link them to one of the operator types.
auto &block = func.getBlocks().front();
for (auto &op : block) {
  if (isa<func::ReturnOp>(op))
    prob.setLinkedOperatorType(&op, retOpr);
  else if (isa<arith::AddIOp>(op))
    prob.setLinkedOperatorType(&op, addOpr);
  else
    prob.setLinkedOperatorType(&op, mcOpr);
}
Note that we do not have to tell the instance about the dependences between the operations in this simple example because the problem model automatically includes the SSA def-use-edges maintained by
MLIR. However, we often have to consider additional dependences that are not represented by value flow, such as memory dependences. For these situations, so-called auxiliary dependences between
operations are inserted explicitly into the problem: prob.insertDependence(srcOp, destOp).
Scheduling ¶
Before we attempt to schedule, we invoke the check() method, which ensures that the constructed instance is complete and valid. For example, the check would catch the case where we had forgotten to set an operator type’s latency. We dump the instance to visualize the dependence graph.
auto checkRes = prob.check();
dumpAsDOT(prob, "sched-problem.dot");
We use a simple list scheduler, available via the Algorithms.h header, to compute a solution for the instance.
auto schedRes = scheduleASAP(prob);
Working with the solution ¶
The solution is now stored in the instance, and we invoke the problem’s verify() method to ensure that the computed start times adhere to the precedence constraint we stated earlier, i.e. operations
start after their operands have computed their results. We can also convince ourselves of that by dumping the instance and inspecting the solution.
auto verifRes = prob.verify();
dumpAsDOT(prob, "sched-solution.dot");
To inspect the solution programmatically, we can query the instance in the following way. Note that by convention, all getters in the problem classes return Optional<T> values, but as we have already
verified that the start times for registered operations are set, we can directly dereference the values.
for (auto &op : prob.getOperations())
llvm::dbgs() << *prob.getStartTime(&op) << "\n";
And that’s it! For a more practical example, have a look at the AffineToPipeline pass.
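Under the hood, the ASAP schedule for an acyclic problem is a longest-path computation over the dependence graph: every operation starts once all of its predecessors have finished. For readers without a CIRCT build at hand, here is a minimal, self-contained Python sketch of that idea, modeling the @foo example from above (the graph encoding and all names are illustrative; this is not the CIRCT API):

```python
# Minimal ASAP-scheduling sketch: an op starts as soon as all of its
# predecessors have finished (start + latency). Illustrative only.

def asap_schedule(latency, deps):
    """latency: op -> int latency; deps: list of (src, dst) edges.
    Returns op -> earliest integer start time."""
    preds = {op: [] for op in latency}
    succs = {op: [] for op in latency}
    for src, dst in deps:
        preds[dst].append(src)
        succs[src].append(dst)

    start = {}
    # Kahn-style worklist: schedule an op once all its predecessors are done.
    indeg = {op: len(preds[op]) for op in latency}
    ready = [op for op, d in indeg.items() if d == 0]
    while ready:
        op = ready.pop()
        start[op] = max((start[p] + latency[p] for p in preds[op]), default=0)
        for nxt in succs[op]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return start

# Dependence graph mirroring @foo: adds take 1 step, "more.*" ops take 3,
# the return takes 0.
lat = {"op0": 1, "op1": 1, "op2": 3, "op3": 1, "op4": 1,
       "op5": 1, "op6": 3, "ret": 0}
deps = [("op0", "op1"), ("op0", "op2"), ("op1", "op2"), ("op2", "op3"),
        ("op2", "op4"), ("op3", "op5"), ("op3", "op6"), ("op4", "op6"),
        ("op5", "op6"), ("op6", "ret")]
times = asap_schedule(lat, deps)
```

The resulting start times satisfy the precedence constraint start(src) + latency(src) <= start(dst) for every dependence, which is exactly the kind of condition Problem::verify() checks.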
Extensible problem model ¶
Theory and terminology ¶
Scheduling problems come in many flavors and variants in the context of hardware design. In order to make the scheduling infrastructure as modular and flexible as CIRCT itself, it is built on the following idea of an extensible problem model:
An instance is comprised of components called operations, dependences and operator types. Operations and dependences form a graph structure and correspond to the source IR to be scheduled. Operator
types encode the characteristics of the target IR. The components as well as the instance can be annotated with properties. Properties are either input or solution properties, based on whether they
are supplied by the client, or computed by the algorithm. The values of these properties are subject to the input constraints and solution constraints, which are a first-class concern in the model
and are intended to be strictly enforced before respectively after scheduling.
Concrete problem definitions derived from this model share the same representation of the components, but differ in their sets of properties (and potentially distinction of input and solution
properties) and input and solution constraints. Hence, we tie together properties and constraints to model a specific scheduling problem. Extending one (or more!) parent problems means inheriting or
adding properties, and redefining the constraints (as these don’t always compose automatically).
A key benefit of this approach is that these problem definitions provide a reliable contract between the clients and algorithms, making it clear which information needs to be provided, and what kind
of solution is to be expected. Clients can therefore choose a problem definition that fits their needs, and algorithms can opt-in to accepting a specific subset of problems, which they can solve
efficiently. Extensibility is ensured because new problem definitions can be added to the infrastructure (or inside a specific lowering flow, or even out-of-tree) without adapting any existing users.
Implementation ¶
See Problems.h / Problems.cpp.
Problem definitions ¶
The Problem class is currently the base of the problem hierarchy. Several extended problems are currently defined via virtual multiple inheritance. Upon construction, a containingOp is passed to
instances. This MLIR operation is currently only used to emit diagnostics, and has no semantic meaning beyond that.
Components ¶
The infrastructure uses the following representation of the problem components.
Operations are just mlir::Operation *s.
We distinguish two kinds of dependences, def-use and auxiliary. Def-use dependences are part of the SSA graph maintained by MLIR, and can distinguish specific result and operand numbers. As we expect
any relevant graph-like input IR to use this MLIR facility, instances automatically consider these edges between registered operations. Auxiliary dependences, in contrast, only specify a source and
destination operation, and have to be explicitly added to the instance by the client, e.g. for control or memory dependences. The detail::Dependence class abstracts the differences between both
kinds, in order to offer a uniform API to iterate over dependences and query their properties.
Lastly, operator types are identified by mlir::StringAttrs, in order to give clients maximum flexibility in modeling their operator library. This may change in the future, when a CIRCT-wide concept
to model physical properties of hardware emerges.
Properties ¶
Properties can involve arbitrary data types, as long as these can be stored in maps. Problem classes offer public getter and setter methods to access a given component’s properties. Getters return optional values, in order to indicate if a property is unset. For example, the signature of the method that queries the computed start time is Optional<unsigned> getStartTime(Operation *op).
Constraints ¶
Clients call the virtual Problem::check() method to test any input constraints, and Problem::verify() to test the solution constraints. Problem classes are expected to override them as needed. There
are no further restrictions on how these methods are implemented, but it is recommended to introduce helper methods that test a specific aspect and can be reused in extended problems. In addition, it
makes sense to check/verify the properties in an order that avoids redundant tests for the presence of a particular property as well as redundant iteration over the problem components.
Available problem definitions ¶
See the linked Doxygen docs for more details.
• Problem: A basic, acyclic problem at the root of the problem hierarchy. Operations are linked to operator types, which have integer latencies. The solution comprises integer start times adhering
to the precedence constraints implied by the dependences.
• CyclicProblem: Cyclic extension of Problem. Its solution can be used to construct a pipelined datapath with a fixed, integer initiation interval, in which the execution of multiple
iterations/samples/etc. may overlap. Operator types are assumed to be fully pipelined.
• SharedOperatorsProblem: A resource-constrained scheduling problem that corresponds to multiplexing multiple operations onto a pre-allocated number of fully pipelined operator instances.
• ModuloProblem: Models an HLS classic: Pipeline scheduling with limited resources.
• ChainingProblem: Extends Problem to consider the accumulation of physical propagation delays on combinational paths along SSA dependences.
• ChainingCyclicProblem: Extends ChainingProblem and CyclicProblem to consider the accumulation of physical propagation delays on combinational paths along SSA dependences on a cyclic scheduling
problem. Note that the problem does not model propagation delays along inter-iteration dependences. These are commonly represented as auxiliary dependences, which are already excluded in the
parent ChainingProblem. In addition, the ChainingCyclicProblem explicitly prohibits the use of def-use dependences with a non-zero distance.
NB: The classes listed above each model a trait-like aspect of scheduling. These can be used as-is, but are also intended for mixing and matching, even though we currently do not provide definitions
for all possible combinations in order not to pollute the infrastructure. For example, the ChainingProblem may be of limited use standalone, but can serve as a parent class for a future
chaining-enabled modulo scheduling problem.
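To make the CyclicProblem concrete: its solution constraint generalizes the acyclic precedence check by the initiation interval II and the per-dependence iteration distance d, requiring start(src) + latency(src) <= start(dst) + II * d (d = 0 recovers the acyclic constraint). Below is a minimal sketch of such a verifier in Python (illustrative only; this is not CIRCT's implementation):

```python
# Sketch of the CyclicProblem solution constraint: with initiation
# interval ii, a dependence (src, dst) carrying iteration distance d
# must satisfy  start[src] + latency[src] <= start[dst] + ii * d.

def verify_cyclic(start, latency, deps, ii):
    """deps: list of (src, dst, distance) triples."""
    return all(start[src] + latency[src] <= start[dst] + ii * dist
               for src, dst, dist in deps)

# A two-op recurrence: op1 feeds op0 of the *next* iteration (distance 1).
start = {"op0": 0, "op1": 2}
latency = {"op0": 2, "op1": 1}
cyc_deps = [("op0", "op1", 0), ("op1", "op0", 1)]
```

With II = 3 the recurrence above is feasible; with II = 2 the inter-iteration edge is violated, so the smallest legal initiation interval for this instance is 3.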
Available schedulers ¶
• ASAP list scheduler ( ASAPScheduler.cpp): Solves the basic Problem with a worklist algorithm. This is mostly a problem-API demo from the viewpoint of an algorithm implementation.
• Linear programming-based schedulers ( SimplexSchedulers.cpp): Solves Problem, CyclicProblem and ChainingProblem optimally, and SharedOperatorsProblem / ModuloProblem with simple (not
state-of-the-art!) heuristics. This family of schedulers shares a tailored implementation of the simplex algorithm, as proposed by de Dinechin. See the sources for more details and literature references.
• Integer linear programming-based scheduler ( LPSchedulers.cpp): Demo implementation for using an ILP solver via the OR-Tools integration.
Utilities ¶
See Utilities.h:
• Topological graph traversal
• DFA to compute combinational path delays
• DOT dump
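The delay DFA is worth a quick illustration: for chaining, the arrival time at an operation's output is the maximum arrival time over its combinational inputs plus the operation's own delay, accumulated in topological order. A self-contained sketch (hypothetical delay values; this is not the CIRCT utility itself):

```python
# Sketch of combinational path-delay accumulation (cf. ChainingProblem):
# walk the ops in a topological order and propagate the worst-case
# arrival time through each one. Illustrative only.

def path_delays(delay, deps, order):
    """delay: op -> physical delay; deps: (src, dst) edges;
    order: a topological order of all ops.
    Returns op -> arrival time at the op's output."""
    preds = {op: [] for op in delay}
    for src, dst in deps:
        preds[dst].append(src)
    arrival = {}
    for op in order:
        arrival[op] = max((arrival[p] for p in preds[op]), default=0.0) + delay[op]
    return arrival

# Hypothetical delays (e.g., in ns) for a tiny three-op chain.
delay = {"a": 1.5, "b": 2.0, "c": 0.5}
chain_deps = [("a", "b"), ("a", "c"), ("b", "c")]
arr = path_delays(delay, chain_deps, ["a", "b", "c"])
```

A chaining-aware scheduler would then insert a register (i.e., start the op in a later cycle) whenever an arrival time exceeds the target clock period.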
Adding a new problem ¶
See e.g. #2233, which added the ChainingProblem.
• Decide where to add it. Guideline: If it is trait-like and similar to the existing problems mentioned above, add it to Problems.h. If the model is specific to your use-case, it is best to start
out locally in your dialect/pass.
• Declare the new problem class and inherit virtually from the relevant superclasses (at least Problem).
• Define additional properties (private), and the corresponding public getters/setters. Getters return Optional<T> values, to indicate an unset state.
□ Note that dependence properties are somewhat expensive to store, making it desirable that clients and algorithms expect and handle the unset state. This should be clearly documented. Example:
distance property in CyclicProblem.
• Redefine the getProperties(*) methods to get dumping support. These should consider any properties the new class adds, plus properties defined in the superclass(es).
• Redefine check() (input constraints) and verify() (solution constraints). If possible, follow the design used in the existing problem classes.
Testing ¶
Please extend the SSP dialect to enable testing of the new problem definition.
Adding a new scheduler ¶
See e.g. #2650, which added a scheduler for the CyclicProblem.
• Schedulers should opt-in to specific problems by providing entry points for the problem subclasses they support. Example:
LogicalResult awesomeScheduler(Problem &prob);
LogicalResult awesomeScheduler(CyclicProblem &prob);
• Schedulers can expect that the input invariants were enforced by a check()-call in the client, and must compute a solution that complies with the solution constraints when the client calls the
problem’s verify() method.
• Schedulers can live anywhere. If a new algorithm is not entirely dialect/pass-specific and supports problems defined in Problems.h, it should offer entry points in Algorithms.h.
• Objectives are not part of the problem signature. Therefore, if an algorithm supports optimizing for different objectives, clients should be able to select one via the entry point(s).
Testing ¶
• To enable testing, add the new scheduler to the -ssp-schedule pass, and invoke it from the test cases for the supported problems (see the existing test cases for an example).
• If the algorithm may fail in certain situations (e.g., “linear program is infeasible”), add suitable error tests as well.
Python Program to Solve Matrix-Chain Multiplication using Dynamic Programming with Memoization
This is a Python program to solve matrix-chain multiplication using dynamic programming with top-down approach or memoization.
Problem Description
In the matrix-chain multiplication problem, we are given a sequence of matrices A(1), A(2), …, A(n). The aim is to compute the product A(1)…A(n) with the minimum number of scalar multiplications.
Thus, we have to find an optimal parenthesization of the matrix product A(1)…A(n) such that the cost of computing the product is minimized.
Problem Solution
1. Three functions are defined, matrix_product, matrix_product_helper and print_parenthesization.
2. matrix_product_helper takes as arguments a list p, two 2D tables m and s, and two indexes start and end.
3. The function stores the minimum number of scalar multiplications needed to compute the product A(i) x A(i + 1) x … x A(j) in m[i][j].
4. The index of the matrix after which the above product is split in an optimal parenthesization is stored in s[i][j].
5. p[0… n] is a list such that matrix A(i) has dimensions p[i – 1] x p[i].
6. The function returns m[start][end].
7. That is, it returns the minimum computations needed to evaluate A(start) x … x A(end).
8. This is done by finding a k such that m[start][k] + m[k + 1][end] + p[start – 1]*p[k]*p[end] is minimized. The last term is the cost of multiplying the two products formed by splitting the
matrix-chain after matrix k.
9. The function is implemented recursively and as a minimum cost is calculated it is stored in m and the index of the split is stored in s.
10. If a minimum cost has been already calculated and stored in m, then it is immediately returned and not calculated again.
11. The function matrix_product takes the list p as argument, which contains the dimensions of the matrices in the matrix-chain.
12. It simply initializes two 2D tables m and s as a list of lists and calls matrix_product_helper.
13. It then returns m and s.
14. The function print_parenthesization takes as argument a 2D table s as generated above.
15. It also takes two indexes start and end as arguments.
16. It prints the optimal parenthesization of the matrix-chain product A(start) x … x A(end).
Program/Source Code
Here is the source code of a Python program to solve the matrix-chain multiplication problem using dynamic programming with memoization. The program output is shown below.
def matrix_product(p):
    """Return m and s.

    m[i][j] is the minimum number of scalar multiplications needed to compute
    the product of matrices A(i), A(i + 1), ..., A(j).

    s[i][j] is the index of the matrix after which the product is split in an
    optimal parenthesization of the matrix product.

    p[0... n] is a list such that matrix A(i) has dimensions p[i - 1] x p[i].
    """
    length = len(p)  # len(p) = number of matrices + 1

    # m[i][j] is the minimum number of multiplications needed to compute the
    # product of matrices A(i), A(i+1), ..., A(j)
    # s[i][j] is the matrix after which the product is split in the minimum
    # number of multiplications needed
    m = [[-1]*length for _ in range(length)]
    s = [[-1]*length for _ in range(length)]

    matrix_product_helper(p, 1, length - 1, m, s)

    return m, s


def matrix_product_helper(p, start, end, m, s):
    """Return minimum number of scalar multiplications needed to compute the
    product of matrices A(start), A(start + 1), ..., A(end).

    The minimum number of scalar multiplications needed to compute the
    product of matrices A(i), A(i + 1), ..., A(j) is stored in m[i][j].

    The index of the matrix after which the above product is split in an
    optimal parenthesization is stored in s[i][j].

    p[0... n] is a list such that matrix A(i) has dimensions p[i - 1] x p[i].
    """
    if m[start][end] >= 0:
        # Already computed: return the memoized value instead of recursing.
        return m[start][end]

    if start == end:
        q = 0  # a single matrix needs no multiplications
    else:
        q = float('inf')
        for k in range(start, end):
            temp = matrix_product_helper(p, start, k, m, s) \
                   + matrix_product_helper(p, k + 1, end, m, s) \
                   + p[start - 1]*p[k]*p[end]
            if q > temp:
                q = temp
                s[start][end] = k

    m[start][end] = q
    return q


def print_parenthesization(s, start, end):
    """Print the optimal parenthesization of the matrix product A(start) x
    A(start + 1) x ... x A(end).

    s[i][j] is the index of the matrix after which the product is split in an
    optimal parenthesization of the matrix product.
    """
    if start == end:
        print('A[{}]'.format(start), end='')
        return

    k = s[start][end]

    print('(', end='')
    print_parenthesization(s, start, k)
    print_parenthesization(s, k + 1, end)
    print(')', end='')


n = int(input('Enter number of matrices: '))
p = []
for i in range(n):
    temp = int(input('Enter number of rows in matrix {}: '.format(i + 1)))
    p.append(temp)
temp = int(input('Enter number of columns in matrix {}: '.format(n)))
p.append(temp)

m, s = matrix_product(p)
print('The number of scalar multiplications needed:', m[1][n])
print('Optimal parenthesization: ', end='')
print_parenthesization(s, 1, n)
Program Explanation
1. The user is prompted to enter the number of matrices, n.
2. The user is then asked to enter the dimensions of the matrices.
3. matrix_product is called to get the tables m and s.
4. m[1][n] is the minimum cost of computing the matrix product.
5. print_parenthesization is then called to display the optimal way to parenthesize the matrix product.
Runtime Test Cases
Case 1:
Enter number of matrices: 3
Enter number of rows in matrix 1: 10
Enter number of rows in matrix 2: 100
Enter number of rows in matrix 3: 5
Enter number of columns in matrix 3: 50
The number of scalar multiplications needed: 7500
Optimal parenthesization: ((A[1]A[2])A[3])
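As a sanity check on Case 1 (matrices of dimensions 10x100, 100x5 and 5x50), the two possible parenthesizations cost:

((A[1]A[2])A[3]): 10*100*5 + 10*5*50 = 5000 + 2500 = 7500
(A[1](A[2]A[3])): 100*5*50 + 10*100*50 = 25000 + 50000 = 75000

so the reported parenthesization, at 7500 scalar multiplications, is indeed the cheaper one.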
Case 2:
Enter number of matrices: 5
Enter number of rows in matrix 1: 5
Enter number of rows in matrix 2: 10
Enter number of rows in matrix 3: 8
Enter number of rows in matrix 4: 15
Enter number of rows in matrix 5: 20
Enter number of columns in matrix 5: 4
The number of scalar multiplications needed: 2200
Optimal parenthesization: (A[1](A[2](A[3](A[4]A[5]))))
Case 3:
Enter number of matrices: 1
Enter number of rows in matrix 1: 5
Enter number of columns in matrix 1: 7
The number of scalar multiplications needed: 0
Optimal parenthesization: A[1]
NP in ZPP implies PH in ZPP
If NP is in ZPP, is the entire polynomial-time hierarchy in ZPP? I saw this result used in an old TCS Stackexchange post but I couldn't find a proof (comment if you know a reference). The proof that NP in BPP implies PH in BPP is harder than it looks, and NP in BQP implies PH is in BQP is still open as far as I know.
I found a simple proof that NP in ZPP implies PH in ZPP and then an even simpler one.
Assume NP in ZPP. This implies NP in BPP so PH is also in BPP. So we need only show BPP in ZPP.
BPP is in ZPP^NP follows directly by Lautemann's proof that BPP is in Σ_2^p, or by the fact that BPP is in MA is in S_2^p is in ZPP^NP. By assumption, BPP in ZPP^NP implies BPP in ZPP^ZPP = ZPP.
And this is even simpler.
ZPP = RP∩co-RP in NP∩co-NP. Σ_2^p = NP^NP in NP^ZPP (by assumption) in NP^(NP∩co-NP) = NP in ZPP. You can get the higher levels of the hierarchy by an easy induction.
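Spelled out, one induction step for the higher levels of the hierarchy (assuming NP in ZPP throughout) is:

```latex
\Sigma_{k+1}^{p} \;=\; \mathrm{NP}^{\Sigma_k^{p}}
  \;\subseteq\; \mathrm{NP}^{\mathrm{ZPP}}
      \quad\text{(induction hypothesis: } \Sigma_k^{p} \subseteq \mathrm{ZPP}\text{)}
  \;\subseteq\; \mathrm{NP}^{\mathrm{NP}\cap\mathrm{co\text{-}NP}}
      \quad\text{(since } \mathrm{ZPP} \subseteq \mathrm{NP}\cap\mathrm{co\text{-}NP}\text{)}
  \;=\; \mathrm{NP}
  \;\subseteq\; \mathrm{ZPP}.
```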
10 comments:
1. As can be seen from your second argument, you practically use that ZPP is in co-NP and thus NP is in co-NP, which implies (as you have sketched) that NP=PH.
2. You can also say that NP is in ZPP implies that NP is in BPP (since ZPP is in BPP), so by Karp-Lipton theorem PH=Σ2, and so
PH=Σ2=NP^NP which, by assumption is in ZPP^ZPP=ZPP.
1. NP in ZPP implies NP^NP in NP^ZPP but not directly that NP^ZPP in ZPP^ZPP.
3. In proposition 1.3 in https://www.semanticscholar.org/paper/The-Satanic-Notations%3A-Counting-Classes-beyond-%23p-1-Hemaspaandra/c01bebe8da499eb6a585c36cde9a71a406db41e6 it is shown
#.NP=#.P^NP iff NP=UP^NP iff NP=coNP.
NP in ZPP implies NP=coNP and so ZPP=NP=UP^NP=UP^ZPP holds and so is ZPP low for UP providing ZPP=UP?
Thank you.
4. In proposition 1.3 in https://dl.acm.org/doi/abs/10.1145/203610.203611 it is stated #.NP=#.P^NP iff NP=UP^NP iff NP=coNP.
NP in ZPP implies NP=coNP and so ZPP=NP=UP^NP=UP^ZPP holds and so is ZPP low for UP providing ZPP=UP?
5. Full question.
Explicitly I want to identify relation between UP and ZPP since we do not even know ZPP is in SPP although we believe it to be true.
In proposition 1.3 in https://dl.acm.org/doi/abs/10.1145/203610.203611 it is stated #.NP=#.P^NP iff NP=UP^NP iff NP=coNP.
Theorem 7.2 in On counting and approximation (springer.com) states span-P is approximable in BPP^NP.
NP in ZPP implies span-P is approximable in ZPP.
1. span-P is in #(P^NP) and so I wonder if NP=ZPP would we get span-P in #P and so by theorem 4.9 UP=NP=ZPP?
Ordinarily is ZPP low for NP? So we get:
2. NP in ZPP implies NP=coNP and so ZPP=NP=UP^NP=UP^ZPP holds and so is ZPP low for UP providing NP=ZPP=UP?
So could NP=ZPP have any implications to UP and ZPP.
Assume further UP=coUP and so UPH collapses to UP. We already assumed NP=ZPP holds. So UP is in ZPP and in NP.
3. At least would it say something about the lowness UP making ZPP^UP in UP?
4. coUP=UP in EP in NP=ZPP=coNP holds under assumptions. EP is known to be in SPP. Is EP known to be in UPH under the assumptions?
6. Can someone explain why ZPP^ZPP = ZPP?
1. ZPP means always correct in expected poly time. So a ZPP^ZPP machine just simulates the oracle with its queries. Since the expectation of a sum is the sum of the expectations, the poly # of
queries can be simulated in an expected polynomial number of steps.
7. Can you elaborate why \Sigma_2^{P} \subseteq ZPP^{NP}?
1. Probably not because that would imply Sigma_2 in Pi_2 and the polynomial-time hierarchy collapses.
Random Fields in One Dimension
Finn Lindgren
Generated on 2024-11-06
Setting things up
Make a shortcut to a nicer colour scale:
Get the data
Put the count data in cd (just because ‘cd’ is less to type than ‘countdata2’.)
Take a look at the count data.
#> x count exposure
#> 1 2.319888 9 4.639776
#> 2 6.959664 13 4.639776
#> 3 11.599439 11 4.639776
#> 4 16.239215 22 4.639776
#> 5 20.878991 20 4.639776
#> 6 25.518766 19 4.639776
#> 7 30.158542 16 4.639776
#> 8 34.798318 8 4.639776
#> 9 39.438093 4 4.639776
#> 10 44.077869 4 4.639776
#> 11 48.717645 4 4.639776
ggplot(cd) +
geom_point(aes(x, y = count)) +
ylim(0, max(cd$count))
Tip: RStudio > Help > Cheatsheets > Data visualisation with ggplot2 is a useful reference for ggplot2 syntax.
Fitting a Generalised Additive Model (GAM)
If you’re not familiar with GAMs and the syntax of gam don’t worry, the point of this is just to provide something to which we can compare the inlabru model fit.
The term s(x,k=10) just specifies that as nonparametric smooth function is to be fitted to the data, with no more than 10 degrees of freedom (df). (The larger the df, the more wiggly the fitted curve
(recall from the lecture that this is an effect of how some spline methods are defined, without discretisation dependent penalty); gam selects the ‘best’ df.) Notice the use of offset=. (Refer to
slides for an explanation of offset.) The variable exposure in data frame cd is the size of the bin in which each count was made.
You can look at the fitted model using summary( ) as below if you want to, but you do not need to understand this output, or the code that makes the predictions immediately below it if you are not
familiar with GAMs.
Make a prediction data frame, get predictions and add them to the data frame First make vectors of x-values and associated (equal) exposures:
xs <- seq(0, 55, length = 100)
exposures <- rep(cd$exposure[1], 100)
and put them in a data frame:
dat4pred <- data.frame(x = xs, exposure = exposures)
Then predict
pred2.gam <- predict(fit2.gam, newdata = dat4pred, type = "response")
# add column for prediction in data frame:
dat4pred2 <- cbind(dat4pred, gam = pred2.gam)
Plotting the fit and the data using the ggplot2 commands below should give you the plot shown below
Fitting an SPDE model with inlabru
Make mesh. To avoid boundary effects in the region of interest, let the mesh extend outside the data range.
x <- seq(-10, 65, by = 1) # this sets mesh points - try others if you like
mesh1D <- fm_mesh_1d(x, boundary = "free")
… and see where the mesh points are:
Using function bru( ) to fit to count data
We need to specify model components and a model formula in order to fit it. This can be done inside the call to bru( ) but that is a bit messy, so we’ll store it in comp first and then pass that to
bru( ).
Our response variable in the data frame cd is called count so the model specification needs to have that on the left of the ~. We add an intercept component with + Intercept(1) on the right hand side
(all the models we use have intercepts), and because we want to fit a Gaussian random field (GRF), it must have a GRF specification. In inlabru the GRF specification is a function, which allows the
GRF to be calculated at any point in space while inlabru is doing its calculations.
The user gets to name the GRF function. The syntax is myname(input, model= ...), where:
• ‘myname’ is whatever you want to call the GRF (we called it field below);
• input specifies the coordinates in which the GRF or SPDE ‘lives’. Here we are working in one dimension, and we called that dimension x when we set up the data set.
• model= designates the type of effect, here an SPDE model object from the INLA function inla.spde2.pcmatern( ), which requires a mesh to be passed to it, so we pass it the 1D mesh that we created
above, mesh1D.
For models that only adds the model components, we don’t need to specify the full predictor formula. Instead, we can provide the name of the output to the left of the ~ in the component
specification, and “.” on the right hand side, which will cause it to add all components (unless a subset is selected via the include/exclude arguments to like()).
the_spde <- inla.spde2.pcmatern(mesh1D,
prior.range = c(1, 0.01),
prior.sigma = c(1, 0.01)
comp <- ~ field(x, model = the_spde) + Intercept(1, prec.linear = 1 / 2^2)
fit2.bru <- bru(
like(count ~ .,
data = cd,
family = "poisson",
E = exposure
#> inlabru version: 2.11.1.9020
#> INLA version: 24.10.29
#> Components:
#> field: main = spde(x), group = exchangeable(1L), replicate = iid(1L), NULL
#> Intercept: main = linear(1), group = exchangeable(1L), replicate = iid(1L), NULL
#> Likelihoods:
#> Family: 'poisson'
#> Tag: ''
#> Data class: 'data.frame'
#> Response class: 'integer'
#> Predictor: count ~ .
#> Used components: effects[field, Intercept], latent[]
#> Time used:
#> Pre = 0.691, Running = 0.217, Post = 0.117, Total = 1.03
#> Fixed effects:
#> mean sd 0.025quant 0.5quant 0.975quant mode kld
#> Intercept 0.978 0.702 -0.008 0.857 3.166 0.812 0.004
#> Random effects:
#> Name Model
#> field SPDE2 model
#> Model hyperparameters:
#> mean sd 0.025quant 0.5quant 0.975quant mode
#> Range for field 35.193 25.648 8.854 28.298 103.362 19.028
#> Stdev for field 0.519 0.163 0.271 0.496 0.905 0.452
#> Deviance Information Criterion (DIC) ...............: 60.06
#> Deviance Information Criterion (DIC, saturated) ....: 14.41
#> Effective number of parameters .....................: 5.49
#> Watanabe-Akaike information criterion (WAIC) ...: 58.21
#> Effective number of parameters .................: 2.84
#> Marginal log-Likelihood: -36.79
#> is computed
#> Posterior summaries for the linear predictor and the fitted values are computed
#> (Posterior marginals needs also 'control.compute=list(return.marginals.predictor=TRUE)')
Predict the values at the x points used for mesh (the data argument must be a data frame, see ?predict.bru):
x4pred <- data.frame(x = xs)
pred2.bru <- predict(fit2.bru,
x ~ exp(field + Intercept),
n.samples = 1000
Let’s do a plot to compare the fitted model to the true model. The expected counts of the true model are stored in the variable E_nc2 which comes with the dataset Poisson2_1D. For ease of use in
plotting with ggplot2 (which needs a data frame), we create a data frame which we call true.lambda, containing x- and y variables as shown below.
Given that inlabru predictions are always on the intensity function scale, do you understand why we divide the count by cd$exposure? (We will in due course allow predictions on the count scale as well.)
true.lambda <- data.frame(x = cd$x, y = E_nc2 / cd$exposure)
These ggplot2 commands should generate the plot shown below. It shows the true intensities as short horizontal blue lines, the observed intensities as black dots, and the fitted intensity function as
a red curve, with 95% credible intervals shown as a light red band about the curve.
ggplot() +
gg(pred2.bru) +
geom_point(data = cd, aes(x = x, y = count / exposure), cex = 2) +
geom_point(data = true.lambda, aes(x, y), pch = "_", cex = 9, col = "blue") +
coord_cartesian(xlim = c(0, 55), ylim = c(0, 6)) +
xlab("x") +
Compare the inlabru fit to the gam fit:
ggplot() +
gg(pred2.bru) +
geom_point(data = cd, aes(x = x, y = count / exposure), cex = 2) +
geom_line(data = dat4pred2, aes(x, gam / exposure), lty = 2) +
coord_cartesian(xlim = c(0, 55), ylim = c(0, 6)) +
xlab("x") +
Looking at the posterior distributions
We can look at the Intercept posterior using the function plot( ), as below.
plot(fit2.bru, "Intercept")
You have to know that there is a variable called Intercept in order to use this function. To see what fixed effect parameters’ posterior distributions are available to be plotted, you can type
This does not tell you about the SPDE parameters, and if you type
#> [1] "field"
this just tells you that there is an SPDE in fit2.bru called ‘field’, it does not tell you what the associated parameter names are. The parameters that are used in estimation are cryptic – what we
are interested in is the range and variance of the Matern covariance funcion, that are functions of the internal parameters. We can look at the posterior distributions of the range parameter and the
log of the variance parameters as follows. (We look at the posterior of the log of the variance because the variance posterior is very skewed and so it is easier to view the log of the variance)
spde.range <- spde.posterior(fit2.bru, "field", what = "range")
spde.logvar <- spde.posterior(fit2.bru, "field", what = "log.variance")
range.plot <- plot(spde.range)
var.plot <- plot(spde.logvar)
multiplot(range.plot, var.plot)
We can look at the posterior distributions of the Matern correlation and covariance functions as follows:
How to Convert Pandas Dataframe to Tensorflow Dataset?
To convert a pandas dataframe to a TensorFlow dataset, you can use the tf.data.Dataset.from_tensor_slices() method. First, you need to convert the pandas dataframe to a numpy array using the values
attribute. Then, you can create a TensorFlow dataset by passing the numpy array to the from_tensor_slices() method. This will allow you to easily work with the data in a TensorFlow format and utilize
all the functionality that TensorFlow datasets offer.
How to handle class imbalances in a pandas dataframe before converting to a tensorflow dataset?
There are several techniques you can use to handle class imbalances in a pandas dataframe before converting it to a TensorFlow dataset. Some common methods include:
1. Upsampling: Increase the number of samples in the minority class by randomly duplicating them until the class distribution is more balanced.
2. Downsampling: Decrease the number of samples in the majority class by randomly removing samples until the class distribution is more balanced.
3. Synthetic data generation: Use techniques like SMOTE (Synthetic Minority Over-sampling Technique) to generate synthetic samples for the minority class to balance the class distribution.
4. Class weights: Assign different weights to the classes during training so that the model pays more attention to the minority class.
5. Stratified sampling: Split the dataset into train and test sets in a way that ensures the class distribution is the same in both sets.
You can implement these techniques in pandas before converting the dataframe to a TensorFlow dataset. For example, you can use sampling with replacement (e.g. DataFrame.sample or sklearn.utils.resample) to upsample or downsample the data, or pass the class_weight argument when training the model to assign weights to different classes.
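For instance, upsampling the minority class with plain pandas (a sketch on toy data; note that pandas' own `resample` is a time-series method, so sampling with replacement is done with `DataFrame.sample`):

```python
import pandas as pd

# Imbalanced toy data: six majority-class rows, two minority-class rows.
df = pd.DataFrame({
    "x": range(8),
    "label": [0, 0, 0, 0, 0, 0, 1, 1],
})

majority = df[df["label"] == 0]
minority = df[df["label"] == 1]

# Randomly duplicate minority rows until both classes have the same count.
minority_up = minority.sample(n=len(majority), replace=True, random_state=0)
balanced = pd.concat([majority, minority_up]).reset_index(drop=True)
```

The balanced frame can then be converted to a TensorFlow dataset exactly as before.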
What is the importance of model evaluation metrics when training a tensorflow model on a converted pandas dataframe?
Model evaluation metrics are important when training a TensorFlow model on a converted pandas dataframe because they help to gauge the performance of the model on the dataset. These metrics provide
valuable insights into how well the model is able to generalize to new, unseen data and can help identify potential issues such as overfitting or underfitting.
By using evaluation metrics such as accuracy, precision, recall, F1 score, or AUC-ROC curve, you can quantitatively measure the performance of the model and make informed decisions about
hyperparameter tuning, feature selection, or model architecture changes. This ensures that the model is optimized for the specific problem at hand and can make accurate predictions on new data.
Additionally, model evaluation metrics can also help to compare different models or versions of the same model, allowing you to identify the best performing model for the task. This can ultimately
lead to higher model performance, better generalization, and more reliable predictions.
How to handle missing values in a pandas dataframe when converting to a tensorflow dataset?
When converting a Pandas DataFrame to a TensorFlow dataset, you can handle missing values in a few different ways:
1. Drop rows with missing values: If the missing values are not critical and you can afford to lose a few rows of data, you can simply drop the rows that contain missing values using the dropna()
method in Pandas before converting to a TensorFlow dataset.
df.dropna(inplace=True)
1. Fill missing values with a specific value: If dropping rows is not an option, you can fill the missing values with a specific value using the fillna() method in Pandas.
df.fillna(value=0, inplace=True)
1. Impute missing values: Another option is to impute missing values using statistical methods such as mean, median, or mode imputation. This can be done using the SimpleImputer class from scikit-learn:

from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy='mean')
df['column_name'] = imputer.fit_transform(df[['column_name']])
Once you have handled the missing values in your Pandas DataFrame, you can then convert it to a TensorFlow dataset using the tf.data.Dataset.from_tensor_slices() method.
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices(df.values)
By handling missing values before converting the DataFrame to a TensorFlow dataset, you ensure that the data is clean and ready for training machine learning models.
How to Calculate the Time Value of Money in Excel - 5 Examples - ExcelDemy
What Is the Time Value of Money?
Money you have today is worth more than the same amount of money received in the future, because it can earn a return in the meantime.
Parameters to Calculate Time Value of Money
• pv → the Present Value, or the amount of money you currently have.
• fv → the Future Value of the money that you currently have.
• nper → the Number of Periods: annually, semi-annually, quarterly, monthly, weekly, daily, etc.
• rate → the Interest Rate per year.
• pmt → the Periodic Payment.
Note: In the Excel formula, the signs of PV and FV are opposite. PV is negative and FV is positive.
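To make that sign convention concrete, here is a small Python sketch that mirrors Excel's FV(rate, nper, pmt, [pv], [type]) (a re-implementation for illustration, not Excel itself):

```python
def fv(rate, nper, pmt, pv, when=0):
    """Future value, mirroring Excel's FV(rate, nper, pmt, [pv], [type]).

    Excel solves  pv*(1+rate)**nper
                  + pmt*(1 + rate*when)*((1+rate)**nper - 1)/rate
                  + fv = 0,
    which is why a negative pv (money paid out today) yields a positive fv.
    """
    if rate == 0:
        return -(pv + pmt * nper)
    factor = (1 + rate) ** nper
    return -(pv * factor + pmt * (1 + rate * when) * (factor - 1) / rate)

# Investing 1,000 today at 5% per year for 10 years, no periodic payments:
value = fv(0.05, 10, 0, -1000)  # about 1628.89
```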
Example 1 – Using the FV Function to Calculate the Future Value of Money in Excel
In the following dataset, initial investments (Present Value), the Annual Rate, and the Number of Years are displayed.
1.1 Future Value Without a Periodic Payment
• Enter the following formula in F5:
=FV(E5,D5,0,-C5,0)
E5 → rate
D5 → nper
0 → pmt
-C5 → pv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
Read More: How to Calculate Periodic Interest Rate in Excel
1.2 Future Value with Periodic Payments
• Enter the following formula in G5:
=FV(F5,D5,-E5,-C5,0)
F5 → rate
D5 → nper
-E5 → pmt
-C5 → pv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
Read More: How to Apply Future Value of an Annuity Formula in Excel
Example 2 – Computing the Present Value of Money with the PV Function
In the following dataset, Future Value, Annual Rate, and Number of Years are displayed.
2.1 Present Value Without Periodic Payments
• Enter the formula below in F5:
=PV(E5,D5,0,-C5,0)
E5 → rate
D5 → nper
0 → pmt
-C5 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
2.2 Present Value with Periodic Payments
• Use the following formula in G5:
=PV(F5,D5,E5,-C5,0)
F5 → rate
D5 → nper
E5 → pmt
-C5 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
Read More: How to Apply Present Value of Annuity Formula in Excel
Example 3 – Calculating the Interest Rate with the RATE Function in Excel
In the dataset given below, Present Value, Future Value, and Number of Years are displayed.
3.1 Interest Rate Without Periodic Payments
• Enter the following formula in F5:
=RATE(D5,0,E5,-C5,0)
D5 → nper
0 → pmt
E5 → pv
-C5 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
3.2 Interest Rate with Periodic Payments
• Use the formula below in G5:
=RATE(D5,-E5,-F5,C5,0)
D5 → nper
-E5 → pmt
-F5 → pv
C5 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
Read More: How to Calculate Present Value of Future Cash Flows in Excel
Example 4 – Computing the Number of Periods with the NPER Function
The following dataset showcases Present Value, Future Value, and Annual Rate.
4.1 Number of Periods Without Periodic Payments
• Enter the following formula in F5:
=NPER(D5,0,-E5,C5,0)
D5 → rate
0 → pmt
-E5 → pv
C5 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
4.2 Number of Periods with Periodic Payments
• Enter the following formula in G5:
=NPER(D5,-E5,-F5,C5,0)
D5 → rate
-E5 → pmt
-F5 → pv
C5 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
Read More: How to Calculate Present Value in Excel with Different Payments
Example 5 – Using the PMT Function to Determine a Payment Per Period
In the dataset below, Present Value, Annual Rate, Number of Years, and Future Value are displayed.
5.1 Payment Per Period for a Zero Future Value
• Enter the formula below in G5:
=PMT(D5,F5,-C5,0,0)
D5 → rate
F5 → nper
-C5 → pv
0 → fv
0 → 0 means payment is timed at the end of the period.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
5.2 Payment Per Period for a Non-Zero Future Value
• Enter the following formula in G5:
=-PMT(D5,F5,-C5,E5,0)
D5 → rate
F5 → nper
-C5 → pv
E5 → fv
0 → 0 means payment is timed at the end of the period.
Note: The negative sign is used before the function, not to be displayed in the output.
This is the output.
• Drag down the Fill Handle to see the result in the rest of the cells.
Read More: How to Calculate Future Value in Excel with Different Payments
How to Create a Time Value Money Table in Excel
1. Create a PVIF Table
• Enter your data in the PVIF table.
• Go to B15 and enter the following formula.
• Enter the Initial Rate in C15.
• Select D5 and enter the following formula to create the third column in the table. Drag the Fill Handle to column 16.
• Enter the Initial Period in B16.
• Add a new row by using the following formula in B17.
• Drag down the Fill Handle to see the result in the rest of the cells.
• Select the whole table (B15:L45).
• Go to Data >> What-If Analysis >> Data Table.
• In the Data Table, enter $C$11 as the Row input cell and $C$12 as the Column input cell.
• Click OK and the PVIF table will be created.
2. Make an FVIF Table to Calculate the Time Value of Money in Excel
The FVIF table contains future value interest factors.
• Copy the PVIF worksheet to a new worksheet.
• Select B15 and enter the formula below.
• Press Enter and you will have your FVIF table.
3. Calculating the Time Value of Money with a PVIFA Table
A PVIFA table gives the present worth of a stream of equal annuity payments.
• Add a new row labeled Type to the PVIFA table.
• Select C13 and go to Data >> Data Validation >> Data Validation.
• Select List in Allow.
• Then enter “Regular, Due” in Source.
Two options will be added in C13.
• Enter the following formula in B16 and the PVIFA table will be created.
4. Create an FVIFA Table to Calculate the Time Value of Money in Excel
An FVIFA table gives the future worth of a stream of equal annuity payments.
• Copy the PVIFA table into a new sheet and change the formula of B16 to:
Multiplication By 4 Worksheets Free
Mathematics, specifically multiplication, forms the foundation of numerous academic disciplines and real-world applications. Yet, for many learners, multiplication can pose a challenge. To address this difficulty, educators and parents have embraced a powerful tool: Multiplication By 4 Worksheets Free.
Introduction to Multiplication By 4 Worksheets Free
This is another super fun worksheet to work on the 4 x table facts. Have your student start at the beginning and work through the maze by coloring the answers for each multiplication table fact, starting with 4 x 1 (coloring in 4) and ending at 4 x 12 (coloring in 48) at the finish sign. The student should color in 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44 and 48.
Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns. We emphasize mental multiplication exercises to improve numeracy skills. Choose your grade topic: Grade 2 multiplication worksheets, Grade 3 multiplication worksheets, Grade 4 mental multiplication worksheets.
Importance of Multiplication Practice Comprehending multiplication is crucial, laying a strong structure for advanced mathematical ideas. Multiplication By 4 Worksheets Free offer structured and
targeted technique, fostering a deeper understanding of this essential math operation.
Evolution of Multiplication By 4 Worksheets Free
Multiplication Worksheets Number 4 Printable Multiplication Flash Cards
Multiplication By 4 Worksheets help students develop their math skills The ability to solve simple multiplication questions and use multiplicative thinking is helpful for children in everyday life
Benefits Of Multiplication By 4 Worksheets
Multiplying by 4 activities are an accurate assessment tool that tests children on the knowledge of equal grouping and, above all, number doubling. Master how to add a number to itself twice, or double a number, as you download free 4 times table worksheets: given 4 x 3, you add 3 + 3 = 6, then 6 + 6 = 12. Wow! Superb!
From standard pen-and-paper exercises to digitized interactive layouts, Multiplication By 4 Worksheets Free have developed, accommodating diverse learning designs and preferences.
Types of Multiplication By 4 Worksheets Free
Basic Multiplication Sheets Straightforward exercises focusing on multiplication tables, aiding students construct a strong arithmetic base.
Word Issue Worksheets
Real-life scenarios integrated into troubles, improving critical thinking and application abilities.
Timed Multiplication Drills Tests made to enhance rate and accuracy, helping in rapid mental mathematics.
Advantages of Using Multiplication By 4 Worksheets Free
Multiplication Worksheets Year 4 PrintableMultiplication
These 4 multiplication table worksheets for printing or downloading in PDF format are specially aimed at primary school students You can also make a multiplication worksheet yourself using the
worksheet generator
Download and printout our FREE worksheets HOLIDAY WORKSHEETS Free Secret Word Puzzle Worksheets New YearsWorksheets Martin Luther King Jr Worksheets
Improved Mathematical Abilities
Consistent method hones multiplication proficiency, boosting general math capacities.
Enhanced Problem-Solving Abilities
Word troubles in worksheets create analytical thinking and technique application.
Self-Paced Learning Advantages
Worksheets fit specific learning speeds, fostering a comfortable and adaptable learning setting.
Just How to Create Engaging Multiplication By 4 Worksheets Free
Incorporating Visuals and Shades Lively visuals and colors record focus, making worksheets aesthetically appealing and involving.
Consisting Of Real-Life Situations
Connecting multiplication to everyday scenarios adds relevance and practicality to workouts.
Tailoring Worksheets to Various Skill Degrees
Customizing worksheets based on varying efficiency levels makes certain inclusive understanding.
Interactive and Online Multiplication Resources
Digital Multiplication Devices and Games
Technology-based resources provide interactive learning experiences, making multiplication appealing and enjoyable.
Interactive Internet Sites and Apps
Online systems offer diverse and easily accessible multiplication practice, supplementing conventional worksheets.
Tailoring Worksheets for Various Learning Styles
Visual Learners
Aesthetic aids and diagrams help comprehension for students inclined toward aesthetic understanding.
Auditory Learners
Verbal multiplication troubles or mnemonics cater to students who realize principles through acoustic ways.
Kinesthetic Learners
Hands-on activities and manipulatives sustain kinesthetic learners in understanding multiplication.
Tips for Effective Application in Understanding
Uniformity in Practice
Regular practice enhances multiplication skills, advertising retention and fluency.
Stabilizing Rep and Selection
A mix of repeated exercises and diverse problem styles maintains interest and comprehension.
Offering Useful Comments
Feedback help in determining areas of renovation, encouraging continued progress.
Obstacles in Multiplication Technique and Solutions
Motivation and Interaction Difficulties
Dull drills can lead to disinterest; innovative approaches can reignite motivation.
Getting Rid Of Anxiety of Mathematics
Adverse understandings around mathematics can prevent progression; creating a positive learning environment is crucial.
Effect of Multiplication By 4 Worksheets Free on Academic Performance
Researches and Research Searchings For
Research shows a favorable relationship between constant worksheet use and enhanced mathematics efficiency.
Final thought
Multiplication By 4 Worksheets Free emerge as versatile devices, promoting mathematical effectiveness in students while suiting diverse understanding styles. From fundamental drills to interactive
on-line sources, these worksheets not only boost multiplication abilities yet likewise advertise critical reasoning and analytic abilities.
Multiplication Sheet 4th Grade
Multiplication Worksheets Grade 4 Pdf Free Printable
Check more of Multiplication By 4 Worksheets Free below
Download Printable 4Th Grade Multiplication Worksheets Collection Rugby Rumilly
Free Multiplication Worksheet 2 Digit By 2 Digit Free4Classrooms
Printable 4 Times Table Worksheets Activity Shelter
Multiplication Worksheets 4 Times Tables PrintableMultiplication
Times Tables Worksheets 2 3 4 5 6 7 8 9 10 11 And 12 Eleven Worksheets FREE
2 By 1 multiplication
Multiplication Worksheets K5 Learning
Our multiplication worksheets start with the basic multiplication facts and progress to multiplying large numbers in columns We emphasize mental multiplication exercises to improve numeracy skills
Choose your grade topic Grade 2 multiplication worksheets Grade 3 multiplication worksheets Grade 4 mental multiplication worksheets
Multiplying 1 to 12 by 4 100 Questions A Math Drills
Welcome to The Multiplying 1 to 12 by 4 100 Questions A Math Worksheet from the Multiplication Worksheets Page at Math Drills This math worksheet was created or last revised on 2021 02 19 and has
been viewed 1 445 times this week and 2 100 times this month
Grade 4 multiplication worksheets
Multiplication Worksheets Learning Printable
Grade 4 multiplication By Multiples Of Ten worksheet
Frequently Asked Questions (FAQs)
Are Multiplication By 4 Worksheets Free suitable for any age groups?
Yes, worksheets can be tailored to various age and ability levels, making them versatile for various students.
Just how frequently should students exercise making use of Multiplication By 4 Worksheets Free?
Consistent practice is key. Regular sessions, ideally a couple of times a week, can produce substantial improvement.
Can worksheets alone boost mathematics abilities?
Worksheets are a valuable device yet ought to be supplemented with diverse learning techniques for extensive skill growth.
Exist on the internet platforms using free Multiplication By 4 Worksheets Free?
Yes, several instructional web sites use open door to a large range of Multiplication By 4 Worksheets Free.
How can moms and dads support their youngsters's multiplication technique in the house?
Motivating constant method, providing help, and developing a positive discovering setting are useful steps. | {"url":"https://crown-darts.com/en/multiplication-by-4-worksheets-free.html","timestamp":"2024-11-04T08:35:02Z","content_type":"text/html","content_length":"31074","record_id":"<urn:uuid:f0c9f646-bca4-47e8-9f7e-a198078c26fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00709.warc.gz"} |
Deck Height Calculator - OYE Calculator
In order to use Deck Height Calculator, Enter the deck clearance, stroke, connecting rod length, and compression height of the piston into the calculator to determine the deck height.
You may also like to use average cost calculator.
The following equation is used to calculate the Deck Height.
• Where DH is the deck height (in)
• DC is the deck clearance (in)
• S is the stroke length (in)
• CRL is the connecting rod length (in)
• CH is the compression height (in)
To calculate the deck height, add together the connecting rod length, the compression height, and half of the stroke length, then add the result to the deck clearance to get the deck height.
The deck height is a measure of the distance from the main bearing bore to the flat surface of the block.
How to Calculate Deck Height?
Example Problem:
The following example outlines the steps and information needed to calculate Deck Height.
First, determine the deck clearance. In this example, the deck clearance is measured to be .25 inches.
Next, determine the stroke length. The stroke length is measured to be 3 inches.
Next, determine the connecting rod length. This is measured to be 4 inches.
Next, determine the compression height. The compression height is .5 inches.
Finally, calculate the deck height using the formula above:
DH = DC + ((S/2) + CRL + CH)
DH = .25 + ((3/2) + 4 + .5)
DH = 6.25 inches
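The same calculation is easy to script. A minimal sketch in Python (the function name is mine, not part of the calculator):

```python
# Deck height: DH = DC + (S/2 + CRL + CH), all values in inches.
def deck_height(deck_clearance, stroke, rod_length, compression_height):
    return deck_clearance + stroke / 2 + rod_length + compression_height

# The worked example above: DC = 0.25", S = 3", CRL = 4", CH = 0.5"
print(deck_height(0.25, 3.0, 4.0, 0.5))  # 6.25
```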
Frequently Asked Questions
Here are some of the mostly asked questions…
What is the Deck Height Calculator?
The Deck Height Calculator is an online tool used to determine the height or thickness of a deck or platform, often used in construction and architectural planning.
How does the Deck Height Calculator work?
The calculator takes various input parameters, such as the deck clearance, stroke, connecting rod length, and compression height of the piston, and uses them to determine the deck height.
Is the Deck Height Calculator suitable for both residential and commercial projects?
Yes, the calculator can be used for both residential and commercial projects, as long as the measurements and requirements are provided accurately.
Is the Deck Height Calculator free to use?
Yes, our Calculator is completely free to use, and there are no hidden charges or subscriptions.
Where can I get support if I have questions or issues with the calculator?
If you have any questions or encounter issues while using this Calculator, our support team is available to assist you. | {"url":"https://oyecalculator.com/deck-height-calculator/","timestamp":"2024-11-09T00:02:52Z","content_type":"text/html","content_length":"401016","record_id":"<urn:uuid:15751505-4a8d-469b-bc37-52d775b4152a>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00444.warc.gz"} |
Quart vs Pint | Jacks Of Science
Quart vs Pint
What’s the difference?
Is one better to use than the other?
What’s the right answer?
When discussing measurements of liquid or dry ingredients, a quart, and a pint are two common units of measurement.
Explain it to a child
A quart is a unit of measurement that is equal to four cups or two pints. A pint is a unit of measurement that is equal to two cups.
A quart is twice as large as a pint. It’s important to keep in mind that while this size difference is consistent across different measurement systems, the actual volumes can vary depending on
which system you are using.
Quart vs Pint – what’s the difference between them?
A quart contains twice as much liquid as a pint.
Generally speaking, a quart is equal to four cups or 2 pints while a pint contains 2 cups.
Therefore, when selecting between these two measurements it should be kept in mind that selecting a quart would be twice as much liquid than selecting a pint.
A quart and pint are two common units of volume used, especially when measuring liquids.
Here’s a comparison of quart vs pint:
• A quart is exactly equal to 0.946352946 liters. It is approximately 32 fluid ounces.
• A pint is exactly equal to 0.473176473 liters. It is approximately 16 fluid ounces.
• So a quart is exactly two pints. A quart is double the volume of a pint.
• Some visual examples:
□ A quart of milk is 32 ounces, while a pint of milk is 16 ounces.
□ A quart of beer is two 16oz beers or pint glasses of beer.
□ A quart of ice cream is two pints worth of ice cream.
• Historically, a quart referred to a quarter of a gallon. A pint was half a quart.
• Quarts and pints are useful when cooking, mixing drinks, or measuring out machine fluids. Quarts are commonly used for bigger volumes.
• Metric equivalents are roughly:
□ 1 quart = 0.95 liters
□ 1 pint = 0.47 liters
So in summary, a quart is two pints, or four cups. Knowing quart and pint conversions is handy for cooking, baking, and other measuring tasks.
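These relationships are simple enough to encode directly. A quick Python sketch of the US liquid conversions above (the helper names are illustrative):

```python
# US liquid measure: 1 quart = 2 pints = 4 cups; 1 quart = 0.946352946 L.
PINTS_PER_QUART = 2
CUPS_PER_PINT = 2
ML_PER_QUART = 946.352946

def quarts_to_pints(q):
    return q * PINTS_PER_QUART

def quarts_to_cups(q):
    return q * PINTS_PER_QUART * CUPS_PER_PINT

def quarts_to_ml(q):
    return q * ML_PER_QUART

# One quart is 2 pints and 4 cups; half a quart (one pint) is about 473.176 ml.
print(quarts_to_pints(1), quarts_to_cups(1), round(quarts_to_ml(1) / 2, 3))
```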
• Both are beneficial in different scenarios and understanding this difference can help make the most accurate measure for whatever task is at hand.
So, if you need 2 pints, you will use one quart.
In addition, a US liquid quart makes up exactly ¼ of a gallon and each US cup measures ½ of a US pint.
Understanding the differences between quarts and pints is easy as long as you remember that one quart is twice as much liquid as one pint!
What Is a Quart?
A quart is a unit of measurement for capacity, equal to a quarter of a gallon.
• It’s commonly used in the United States and the United Kingdom when measuring liquid volumes or dry bulk goods.
• One quart is equivalent to two pints, four cups, or 32 fluid ounces.
• Quarts are frequently used to measure large volumes of items such as cereal, and peanut butter, and larger amounts of liquids such as ketchup or paint.
Kitchen recipes are also written in quarts.
What Is a Pint?
A pint is a unit of measurement for liquid volume, equivalent to 16 ounces or 473 milliliters.
• It’s one of the most common measurements for beer and many other beverages, and can often be found printed on the side of glasses at pubs so that customers are aware of how much they are
• Outside of casual uses, pints have their place in more scientific contexts too many laboratory beakers and measuring cylinders have pt markings to indicate liquid amounts.
For those looking to enjoy a good craft beer, understanding pints is essential!
Which is bigger a quart or a pint?
Although these two units of measurement can often sound similar, they in fact measure different units of liquid volume.
A quart is actually larger than a pint; it measures four cups and equates to 32 ounces, whereas a pint contains only two cups and 16 ounces of liquid.
Therefore when trying to determine the number of ingredients for a recipe, it’s important to know exactly which unit of measurement needs to be used.
Are a quart and a pint the same?
The short answer is no.
A quart is a unit of measure equal to two pints, or two times the volume of a pint.
In terms of liquid measurement, a quart equals four cups, whereas a pint is equivalent to two cups.
This means that there are twice as many pints as quarts.
Therefore, a quart contains twice the amount of liquid as a pint and can be used in cooking recipes and other measuring tasks that require larger amounts of liquid ingredients.
How many ml is a quart vs a pint?
A quart is equivalent to approximately 946 milliliters, while a pint is equal to 473 milliliters.
• That makes a quart twice the size of a pint, so if an ingredient calls for a quart, it will require double the amount as a pint.
• Knowing how many ml is in these measurements can help you get the amounts right and perfect that recipe every single time!
Examples of Quart Used in Sentences
There are numerous examples of the word “quart” being used in sentences.
A quart is equivalent to two pints, so it can be used to refer to an amount of liquid when describing measurements.
• For example, one could say, “Stir together 1 quart of milk and 2 cups of cinnamon.”
It can also be used when talking about items contained within a certain container size.
• For example, “I’m making muffins, so I need two quarts of blueberries.” Similarly, it can even refer to amounts of food items like vegetables.
• For example, “Add 2 quarts of carrots to the soup.” As you can see, the word “quart” is quite versatile and can be used in many different circumstances.
Examples of Pint Used in Sentences
Examples of pint used in sentences range from measuring dry ingredients to describing the size of a beer mug.
• You might use a sentence like “I need one pint of flour for this recipe” when measuring out ingredients for baking.
• In another context, you might say “My dad has a really big pint he drinks out of every night.”
• Examples that use the word “pint” are numerous: it can refer to units of liquid volume, but also symbolize an amount that would fill a container or vessel.
Whatever is being referred to specifically depends on the context, so it’s important to pay attention when it comes up.
Article Sources
Jacks of Science sources the most authoritative, trustworthy, and highly recognized institutions for our article research. Learn more about our Editorial Teams process and diligence in verifying the
accuracy of every article we publish. | {"url":"https://jacksofscience.com/quart-vs-pint/","timestamp":"2024-11-11T03:19:47Z","content_type":"text/html","content_length":"57820","record_id":"<urn:uuid:6541e80b-03a2-4846-adf9-ccd5c809e34e>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00869.warc.gz"} |
A man is walking towards a vertical pillar in a straight path, at a uniform speed. At a certain point A on the path, he observes that the angle of elevation of the top of the pillar is 30°. After walking for 10 minutes from A in the same direction, at a point B, he observes that the angle of elevation of the top of the pillar is 60°. Then the time taken (in minutes) by him, from B to reach
the pillar, is | {"url":"https://www.doubtnut.com/qna/649488422","timestamp":"2024-11-05T17:11:00Z","content_type":"text/html","content_length":"298568","record_id":"<urn:uuid:f87b30d4-4d0d-466f-a60b-7d7e19a2f56a>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00568.warc.gz"} |
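The standard answer to this problem is 5 minutes, and it is easy to check numerically: the pillar height h cancels, so any value works. A quick sketch in Python:

```python
import math

# Let the pillar height be h; it cancels, so take h = 1.
# Distance from the pillar where elevation angle is θ: d = h / tan(θ).
h = 1.0
dA = h / math.tan(math.radians(30))  # distance from the pillar at A
dB = h / math.tan(math.radians(60))  # distance from the pillar at B
speed = (dA - dB) / 10               # A -> B takes 10 minutes at uniform speed
print(dB / speed)                    # minutes from B to the pillar, which is 5
```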
Algebraic Numbers in Isabelle/HOL
Based on existing libraries for matrices, factorization of rational polynomials, and Sturm's theorem, we formalized algebraic numbers in Isabelle/HOL. Our development serves as an implementation for
real and complex numbers, and it admits to compute roots and completely factorize real and complex polynomials, provided that all coefficients are rational numbers. Moreover, we provide two
implementations to display algebraic numbers, an injective and expensive one, or a faster but approximative version.
To this end, we mechanized several results on resultants, which also required us to prove that polynomials over a unique factorization domain form again a unique factorization domain.
April 16, 2017
Use certified Berlekamp-Zassenhaus factorization, use subresultant algorithm for computing resultants, improved bisection algorithm
January 29, 2016
Split off Polynomial Interpolation and Polynomial Factorization
Session Algebraic_Numbers | {"url":"https://www.isa-afp.org/entries/Algebraic_Numbers.html","timestamp":"2024-11-12T04:22:41Z","content_type":"text/html","content_length":"14861","record_id":"<urn:uuid:6ad208ea-02f6-4710-8011-19f32eef318a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00302.warc.gz"} |
Solved Example Problems for Projectile Motion
Example 2.37
Suppose an object is thrown with initial speed 10 m s-1 at an angle π/4 with the horizontal, what is the range covered? Suppose the same object is thrown similarly in the Moon, will there be any
change in the range? If yes, what is the change? (The acceleration due to gravity in the Moon gmoon = 1/6 g)
In projectile motion, the range of the projectile is given by

R = u² sin 2θ / g

With θ = π/4, sin 2θ = 1, so on Earth the range is R = u²/g = (10)²/9.8 ≈ 10.2 m.

If the same object is thrown on the Moon, the range will increase, because the acceleration due to gravity on the Moon is smaller than on Earth:

Rmoon = u²/gmoon = u²/(g/6) = 6 u²/g = 6 R

The range attained on the Moon is approximately six times that on Earth.
Example 2.38
In the cricket game, a batsman strikes the ball such that it moves with the speed 30 m s-1 at an angle 30° with the horizontal as shown in the figure. The boundary line of the cricket ground is located at a distance of 75 m from the batsman. Will the ball go for a six? (Neglect the air resistance and take acceleration due to gravity g = 10 m s-2).
The motion of the cricket ball in air is essentially a projectile motion. As we have already seen, the range (horizontal distance) of the projectile motion is given by
The initial speed u = 30 m s-1
The projection angle θ = 30°
The horizontal distance travelled by the cricket ball is

R = u² sin 2θ / g = (30)² × sin 60° / 10 = 900 × 0.866 / 10 ≈ 77.94 m

This distance is greater than the 75 m distance of the boundary line. Hence the ball will cross this line and go for a six.
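The range formula used in both examples, R = u² sin 2θ / g, can be checked numerically. A small Python sketch (the function name is mine):

```python
import math

def projectile_range(u, theta_deg, g=10.0):
    # R = u^2 * sin(2θ) / g, level ground, no air resistance
    return u ** 2 * math.sin(math.radians(2 * theta_deg)) / g

# Example 2.38: u = 30 m/s, θ = 30°, g = 10 m/s²: about 77.94 m, clears 75 m.
R = projectile_range(30, 30)
print(round(R, 2), R > 75)

# Example 2.37: dividing g by 6 (the Moon) multiplies the range by 6.
print(round(projectile_range(10, 45, g=9.8 / 6) / projectile_range(10, 45, g=9.8), 6))
```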
Solved Example Problems for Degrees and Radians
Example 2.39
Calculate the angle θ subtended by the two adjacent wooden spokes of a bullock cart wheel is shown in the figure. Express the angle in both radian and degree.
The full wheel subtends 2π radians at the center of the wheel. The wheel is divided into 12 parts (arcs), so

θ = 2π/12 = π/6 rad ≈ 0.52 rad = 30°

The angle subtended by two adjacent wooden spokes is 30 degrees at the center.
Solved Example Problems for Circular Motion
Example 2.40
A particle moves in a circle of radius 10 m. Its linear speed is given by v = 3t where t is in second and v is in m s-1.
a) Find the centripetal and tangential acceleration at t = 2 s.
b) Calculate the angle between the resultant acceleration and the radius vector.
The linear speed at t = 2 s is v = 3t = 3 × 2 = 6 m s-1

The centripetal acceleration at t = 2 s is ac = v²/r = (6)²/10 = 3.6 m s-2

The tangential acceleration is at = dv/dt = 3 m s-2

The angle between the radius vector and the resultant acceleration is given by

tan θ = at/ac = 3/3.6 = 0.833, so θ = tan⁻¹(0.833) ≈ 39.8°
Example 2.41
A particle is in circular motion with an angular acceleration α = 0.2 rad s−2.
a) What is the angular displacement made by the particle after 5 s?
b) What is the angular velocity at t = 5 s?. Assume the initial angular velocity is zero.
Since the initial angular velocity is zero (ω0 = 0), the angular displacement made by the particle is

θ = ω0t + (1/2)αt² = (1/2) × 0.2 × 5² = 2.5 rad

The angular velocity at t = 5 s is

ω = ω0 + αt = 0.2 × 5 = 1 rad s-1
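Both circular-motion examples can be verified in a few lines of Python:

```python
import math

# Example 2.40: v = 3t, r = 10 m, evaluate at t = 2 s
r, t = 10.0, 2.0
v = 3 * t               # linear speed, 6 m/s
a_t = 3.0               # tangential acceleration, dv/dt
a_c = v ** 2 / r        # centripetal acceleration, 3.6 m/s²
theta = math.degrees(math.atan(a_t / a_c))  # angle from the radius vector
print(v, a_c, round(theta, 1))

# Example 2.41: α = 0.2 rad/s², starting from rest, at t = 5 s
alpha, t = 0.2, 5.0
print(0.5 * alpha * t ** 2, alpha * t)  # 2.5 rad and 1.0 rad/s
```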
11th Physics : UNIT 2 : Kinematics : Solved Example Problems for Projectile Motion | | {"url":"https://www.brainkart.com/article/Solved-Example-Problems-for-Projectile-Motion_34495/","timestamp":"2024-11-12T19:25:22Z","content_type":"text/html","content_length":"55873","record_id":"<urn:uuid:80d67ab2-ff33-45f8-9537-2601038e5b3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00573.warc.gz"} |
Resolution Proving I
Not an april fools post.
One of my favorite projects is PyRes. It’s a pedagogical first order prover in python. This blog post is mostly a compression and repetition of some of what is found there.
Resolution theorem proving is an old and popular style of theorem prover.
It basically takes in a pile of syntactic facts and smashes them together producing new facts. That sentence also describes the entirety of “logic”.
First we need a basic term datatype. I kind of like using python dataclasses. This is the analog of an algebraic datatype type term = Var of string | Fn of string * term list. Variables can be used to represent an implicit “forall” character of a proven or asserted fact/clause. They sometimes play a dual role as the things searched for in a query (when inside a prolog query for example). These are quite different uses/modalities and it can be good to be aware of this.
from dataclasses import dataclass
from typing import Any
@dataclass(frozen=True)
class Term():
    pass

@dataclass(frozen=True)
class Fn(Term):
    name: str
    args: tuple[Any, ...] = ()
def __repr__(self):
if len(self.args) == 0:
return self.name
return f"{self.name}({', '.join(map(repr, self.args))})"
@dataclass(frozen=True)
class Var(Term):
    name: str
def __repr__(self):
return "?" + self.name
We may want to substitute in terms for variables. A substitution dictionary is a mapping from variable names to terms.
def subst(t : Term, s : dict[str,Term]):
match t:
case Var(name):
return s.get(name, t)
case Fn(name, args):
            return Fn(name, tuple(subst(arg, s) for arg in args))
case _:
raise ValueError("Invalid term")
Unification is like two way pattern matching. It can also be thought of as a most basic form of equation solving.
Unification is tricky, as are many things having to do with variables so I try to follow some reference pretty closely.
The basic idea is pretty simple. You take two terms. If they are concrete constants, they better match. If so, recurse on the arguments. If one is a variable, you have sort of solved that equation.
Substitute that expression for that variable everywhere. The occurs check is an interesting subtlety. It is sort of making sure you don’t allow equations like X = cons(1, X) to be solvable. Unless
you realize you’re up to something weird, it is probably what you want.
def occurs_check(x : Var, t : Term):
if isinstance(t, Var):
return x.name == t.name
elif isinstance(t, Fn):
return any(occurs_check(x, arg) for arg in t.args)
raise ValueError("Invalid term")
# https://github.com/eprover/PyRes/blob/master/unification.py
def mgu(t1:Term, t2:Term):
l1 = [t1]
l2 = [t2]
s = {}
while len(l1) != 0:
t1 = l1.pop()
t2 = l2.pop()
if isinstance(t1, Var):
            if t1 == t2:
                continue
            if occurs_check(t1, t2):
return None
l1 = [subst(t, {t1.name:t2}) for t in l1]
l2 = [subst(t, {t1.name:t2}) for t in l2]
s[t1.name] = t2
elif isinstance(t2, Var):
if occurs_check(t2, t1):
return None
l1 = [subst(t, {t2.name:t1}) for t in l1]
l2 = [subst(t, {t2.name:t1}) for t in l2]
s[t2.name] = t1
elif isinstance(t1, Fn) and isinstance(t2, Fn):
if t1.name != t2.name or len(t1.args) != len(t2.args):
return None
raise ValueError("Invalid term")
return s
def test():
x,y = Var("x"), Var("y")
def f(x):
return Fn("f", (x,))
def g(x):
return Fn("g", (x,))
print(f"{mgu(f(x), g(x))=}")
print(f"{mgu(f(x), f(y))=}")
print(f"{mgu(f(x), f(x))=}")
print(f"{mgu(f(x), f(f(x)))=}")
print(f"{mgu(f(x), f(f(y)))=}")
mgu(f(x), g(x))=None
mgu(f(x), f(y))={'x': ?y}
mgu(f(x), f(x))={}
mgu(f(x), f(f(x)))=None
mgu(f(x), f(f(y)))={'x': f(?y)}
A clause is a set of negative and positive literals. Negative literals are hypotheses and positive literals are the possible conclusions. A clause is the statement that not neg[0] or not neg[1] or
... or pos[0] or pos[1] or pos[2] or ... is true. It can also be thought of as (neg[0] and neg[1] and neg[2] ...) => (pos[0] or pos[1] or ...) or {neg_i} |- {pos_i}
@dataclass(frozen=True)
class Clause(): # Sequent
    neg: tuple[Term, ...] # frozenset? # hyps
    pos: tuple[Term, ...] # concs
def __repr__(self):
return f"{self.neg} ⊢ {self.pos}"
def edge(x,y):
return Fn("edge", (x,y))
def path(x,y):
return Fn("path", (x,y))
a,b,c,d = Fn("a"), Fn("b"), Fn("c"), Fn("d")
facts = [Clause((), (edge(a,b),)), Clause((), (edge(b,c),)), Clause((), (edge(c,d),))]
X,Y,Z = Var("X"), Var("Y"), Var("Z")
path_base = Clause([edge(X,Y)], [path(X,Y)])
path_trans = Clause([path(X,Y), edge(Y,Z)], [path(X,Z)])
clauses = [path_base,path_trans]
Resolution is the analog of modus ponens or the cut rule. We take two clauses and see if we can make a positive literal from one to match (unify) a negative from the second.
def computeResolvents(clause1: Clause, clause2: Clause):
res = []
# freshen vars?
#fv = freevars(clause2)
#clause1 = freshen_clause(clause1)
for lit1 in clause1.pos:
for lit2 in clause2.neg:
s = mgu(lit1,lit2)
            if s is None:
                continue
            new_clause = Clause(tuple(subst(lit,s) for lit in clause1.neg) + tuple(subst(lit,s) for lit in clause2.neg if lit != lit2), tuple(subst(lit,s) for lit in clause1.pos if lit != lit1) + tuple(subst(lit,s) for lit in clause2.pos))
            res.append(new_clause)
return res
def test():
# this is a datalog-esque loop
for fact in facts:
for clause in clauses:
resolvents = computeResolvents(fact, clause)
print(f"{fact=}, {clause=}, {resolvents=}")
fact=() ⊢ (edge(a, b),), clause=[edge(?X, ?Y)] ⊢ [path(?X, ?Y)], resolvents=[() ⊢ (path(a, b),)]
fact=() ⊢ (edge(a, b),), clause=[path(?X, ?Y), edge(?Y, ?Z)] ⊢ [path(?X, ?Z)], resolvents=[(path(?X, a),) ⊢ (path(?X, b),)]
fact=() ⊢ (edge(b, c),), clause=[edge(?X, ?Y)] ⊢ [path(?X, ?Y)], resolvents=[() ⊢ (path(b, c),)]
fact=() ⊢ (edge(b, c),), clause=[path(?X, ?Y), edge(?Y, ?Z)] ⊢ [path(?X, ?Z)], resolvents=[(path(?X, b),) ⊢ (path(?X, c),)]
fact=() ⊢ (edge(c, d),), clause=[edge(?X, ?Y)] ⊢ [path(?X, ?Y)], resolvents=[() ⊢ (path(c, d),)]
fact=() ⊢ (edge(c, d),), clause=[path(?X, ?Y), edge(?Y, ?Z)] ⊢ [path(?X, ?Z)], resolvents=[(path(?X, c),) ⊢ (path(?X, d),)]
Fully naive inference is taking all your clauses and just smashing them together to infer new clauses. For the path clauses, we get new multi edge step theorems. I freshen the variables in one part
of the pair in kind of a hacky way. It isn’t wrong to insufficiently freshen, you just won’t get the most general possible resolution. You have accidental equality constraints between the variables
of the two clauses.
def freshen(t):
# this is both ugly and wrong. Whatever
if isinstance(t, Var):
return Var(t.name + "_")
elif isinstance(t, Fn):
return Fn(t.name, tuple(freshen(arg) for arg in t.args))
raise ValueError("Invalid term")
def freshen_clause(c):
return Clause(tuple(map(freshen,c.neg)), tuple(map(freshen, c.pos)))
def naive_infer(clauses):
res = []
for c1 in clauses:
for c2 in clauses:
c2 = freshen_clause(c2)
# if c1 != c2: # an optimization
            resolvents = computeResolvents(c1, c2)
            res.extend(resolvents)
return res
(edge(?X_, ?Y_),) ⊢ (path(?X_, ?Y_),)
(path(?X_, ?Y_), edge(?Y_, ?Z_)) ⊢ (path(?X_, ?Z_),)
(edge(?X_, ?Y_),) ⊢ (path(?X_, ?Y_),)
(path(?X_, ?Y_), edge(?Y_, ?Z_)) ⊢ (path(?X_, ?Z_),)
[(edge(?X_, ?Y_), edge(?Y_, ?Z_)) ⊢ (path(?X_, ?Z_),),
(path(?X_, ?Y), edge(?Y, ?Y_), edge(?Y_, ?Z_)) ⊢ (path(?X_, ?Z_),)]
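To see why this loop is the germ of a refutation prover, here is a self-contained toy version for the ground (variable-free) case, where unification degenerates to literal equality. Saturating a clause set that contains p, p implies q, and the negated goal not-q derives the empty clause, i.e. a contradiction, so q follows. The example and helper names are mine, not from PyRes:

```python
# Ground resolution: clauses are (frozenset_of_neg, frozenset_of_pos) pairs.
def resolvents(c1, c2):
    n1, p1 = c1
    n2, p2 = c2
    # Resolve on any literal that is positive in c1 and negative in c2.
    for lit in p1 & n2:
        yield (n1 | (n2 - {lit}), (p1 - {lit}) | p2)

def saturate(clauses):
    clauses = set(clauses)
    while True:
        new = {r for c1 in clauses for c2 in clauses for r in resolvents(c1, c2)}
        if new <= clauses:
            return clauses  # fixpoint: nothing new to infer
        clauses |= new

f = frozenset
# p,  p => q,  and the negated goal ¬q
kb = [(f(), f({"p"})), (f({"p"}), f({"q"})), (f({"q"}), f())]
print((f(), f()) in saturate(kb))  # True: empty clause derived, so q follows
```

The fancy strategies later in this post (given clause, subsumption, indexing) are all about performing this saturation without drowning in redundant clauses.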
Bits and Bobbles
I ran out of steam before getting to anything too juicy today. But some comments
• The given clause algorithm is a semi naive strategy for expanding your inferences.
• Particular strategies of using resolution can give you datalog or prolog. Hypothetical/contextual datalog is implementable as a strategy. Other named strategies: hyper resolution . UR / unit
resolution. set of support
• The factoring rule is necessary to get a complete resolution prover. It compresses the literals inside a single clause.
• I don’t really think of resolution as a first order logic method.
• Alpha renaming / variants. Variable names don’t really matter and everything should be invariant to changing them.
• term indexing. Discrimination tries, path indexes, fingerprinting
• Subsumption. If you have foo(X) as a fact (implicitly forall x, foo(x)), it is stronger than any foo(a). The foo(a) fact is redundant. When you derive a new fact, you should check if it is
redundant with respect to already processed facts.
• Paramodulation. Treat equality specially. Use positive equality facts to rewrite inside other clauses. Superposition is taking term orderings into account. A contextual generalization of Knuth-Bendix completion.
• queries. We can make a special term that the prover loop is looking for stuff that unifies with. I am interested in non-refutation theorem proving applications.
• the occurs check. On the other hand, note that X = sin(X) is not intuitively a problem (X = pi n).
• nominal unification / lambda / miller
• How do you implement unification efficiently? Interesting stuff on wiki page.
Good reading: the Handbook of Automated Reasoning. Harrison's Handbook of Practical Logic and Automated Reasoning. The PyRes paper.
I’ll start chucking snippets of functionality from blog posts into knuckledragger
Natural notion of <= is subsumption. == is “alpha equiv to”. The hash of every var should be the same. Term orderings also have a natural <=.
alpha equiv is basically mgu with var permutations instead of substitutions (cf. nominal)
If everything was ground it’s a bit simpler. This is the case for propositional logic
{A} |- {B}    {B} |- {C}
------------------------
       {A} |- {C}
When we have variables, we figure out if two literals are “equal” by seeing if they unify. Then we need to apply that unification everywhere.
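That unification step can be sketched as a standard most-general-unifier computation. This is a self-contained sketch (not necessarily the `mgu` used elsewhere in this post), assuming the Var/Fn term representation as frozen dataclasses; `mgu` returns a substitution dict or None on failure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    name: str
    args: tuple = ()

def subst(t, s):
    # apply substitution s (name -> term) to term t
    if isinstance(t, Var):
        return s.get(t.name, t)
    return Fn(t.name, tuple(subst(a, s) for a in t.args))

def occurs(v, t):
    # does variable v occur anywhere inside term t?
    if isinstance(t, Var):
        return v.name == t.name
    return any(occurs(v, a) for a in t.args)

def mgu(t1, t2, s=None):
    # most general unifier of t1 and t2 under substitution s, or None
    s = {} if s is None else s
    t1, t2 = subst(t1, s), subst(t2, s)
    if isinstance(t1, Var):
        if t1 == t2:
            return s
        if occurs(t1, t2):
            return None  # occurs check
        s = {k: subst(v, {t1.name: t2}) for k, v in s.items()}
        s[t1.name] = t2
        return s
    if isinstance(t2, Var):
        return mgu(t2, t1, s)
    if t1.name != t2.name or len(t1.args) != len(t2.args):
        return None
    for a, b in zip(t1.args, t2.args):
        s = mgu(a, b, s)
        if s is None:
            return None
    return s

# f(X, a) unifies with f(b, Y) via {X: b, Y: a}
mgu(Fn("f", (Var("X"), Fn("a"))), Fn("f", (Fn("b"), Var("Y"))))
```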
def freevars(t):
    if isinstance(t, Var):
        return {t.name}
    elif isinstance(t, Fn):
        return set().union(*map(freevars, t.args))
    raise ValueError("Invalid term")
def freshen(t):
    # this is both ugly and wrong.
    if isinstance(t, Var):
        return Var(t.name + "_")
    elif isinstance(t, Fn):
        return Fn(t.name, tuple(freshen(arg) for arg in t.args))
    raise ValueError("Invalid term")
def freshen_clause(c):
    return Clause(tuple(map(freshen, c.neg)), tuple(map(freshen, c.pos)))
Factoring feels kind of like a structural rule like contraction.
def computePosFactors(clause):
    res = []
    for i, lit1 in enumerate(clause.pos):
        for lit2 in clause.pos[i+1:]:  # skip redundant pairs
            s = mgu(lit1, lit2)
            if s is not None:
                new_clause = Clause(tuple(subst(lit, s) for lit in clause.neg),
                                    tuple(subst(lit, s) for lit in clause.pos if lit != lit1))
                res.append(new_clause)
    return res
def computeNegFactors(clause):
    res = []
    for i, lit1 in enumerate(clause.neg):
        for lit2 in clause.neg[i+1:]:  # skip redundant pairs
            s = mgu(lit1, lit2)
            if s is not None:
                new_clause = Clause(tuple(subst(lit, s) for lit in clause.neg if lit != lit1),
                                    tuple(subst(lit, s) for lit in clause.pos))
                res.append(new_clause)
    return res
The given clause algorithm is similar to semi-naive evaluation. It starts with a set of unprocessed clauses and processes them one by one by finding all possible resolutions against the processed clauses. One tries to prune away redundancies.
def prove(clauses):
    unprocessed = set(clauses)
    processed = set()
    while len(unprocessed) > 0:
        clause = unprocessed.pop()
        processed.add(clause)
        new = []
        for clause2 in processed:
            new.extend(computeResolvents(clause, clause2))
        delta = set(new).difference(processed)
        unprocessed.update(delta)
    return processed
def alpha_eq(self, other, perm, perm_inv):
    match (self, other):
        case (Var(x), Var(y)):
            if x in perm:
                return perm[x] == y
            elif y in perm_inv:
                return perm_inv[y] == x
            perm[x] = y
            perm_inv[y] = x
            return True

def __eq__(self, other):
    return alpha_eq(self, other, {}, {})
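A self-contained version of the same idea, written out in full (a sketch; it assumes the Var/Fn dataclass representation from earlier and only my own helper names):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    name: str
    args: tuple = ()

def alpha_eq(t1, t2, perm=None, perm_inv=None):
    # t1 and t2 are alpha-equivalent iff some bijection on variable names
    # maps t1 onto t2; perm/perm_inv accumulate that bijection as we recurse
    perm = {} if perm is None else perm
    perm_inv = {} if perm_inv is None else perm_inv
    if isinstance(t1, Var) and isinstance(t2, Var):
        if t1.name in perm:
            return perm[t1.name] == t2.name
        if t2.name in perm_inv:
            return False  # t2's name is already the image of another variable
        perm[t1.name] = t2.name
        perm_inv[t2.name] = t1.name
        return True
    if isinstance(t1, Fn) and isinstance(t2, Fn):
        return (t1.name == t2.name
                and len(t1.args) == len(t2.args)
                and all(alpha_eq(a, b, perm, perm_inv)
                        for a, b in zip(t1.args, t2.args)))
    return False
```

Note the `perm_inv` check is what makes the mapping a permutation rather than an arbitrary substitution: f(X, Y) is not alpha-equivalent to f(Z, Z).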
First order logic
Resolution is presented as a classical first order logic prover, but I don’t think it really is. I think it can be thought of as fairly generic principles of what it means to have inference rules. A clause is more or less identical to a sequent. The negative literals are the things before the turnstile |- and the positive literals are the things after. Resolution is then seen as an instance of the famed cut rule, which is in turn something like modus ponens. Logic programming has a similar confusion. :- is best thought of as the horizontal inference line rather than $\rightarrow$. See The Logic of Logic Programming https://arxiv.org/abs/2304.13430 and Nadathur and Miller https://www.amazon.com/Programming-Higher-Order-Logic-Dale-Miller/dp/052187940X
First order logic has and or implies not but also predicates/relationships like parent(adam, abel) or border(us, canada) and forall $\forall$ exists $\exists$ quantifiers. It’s a pretty rich system
capable of expressing lots of logical ideas about graphs, geometry, groups, sets, and is fairly amenable to automation (more so seemingly than sophisticated systems). It is conversely seen as fairly
inexpressive. You may need to bolt on some pretty alarming axioms (like the axiom of specification) to get FOL to actually work for you as a foundation of mathematics.
So the story goes, you can convert a pile of first order logic statements to conjunctive normal form (a giant AND of ORs). A -> B is turned into ~A \/ B and so on. Probably most interestingly, quantifiers are removed via skolemization. A statement \forall x \exists y, parent(y,x) kind of is saying a similar thing to forall x, parent(father(x), x). Note that similarly I could have done forall x, parent(mother(x), x) or forall x, parent(someparent(x), x) (really I should make a fresh function symbol and then prove that the fresh function symbol is the same as some other defined notion). Operationally, you can push existentials through universals if you turn them into function symbols depending on the things you pushed them through. Often these functions have some reasonable interpretation in terms of the things you’re trying to model.
Skolemization produces new formulas that are equisatisfiable to the old ones. They are not logically equivalent: you can prove the old formula from the skolemized one, but not vice versa, because the old one doesn’t say that specifically your new invented symbols are the ones with the right properties. This all may be intertwined with the axiom of choice. Because we’ve pushed all the existentials to the front, the formula now only has top level quantifiers instead of nested complicated ones. We can strip off the explicit quantifier symbol and replace it with a notion of variables vs constants/function symbols.
This was all quite hand wavy. See the Handbook of Automated Reasoning or Harrison’s handbook for more.
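The "push existentials through universals" step can be sketched concretely for a tiny prenex-ish fragment. This is my own illustrative sketch (the dataclass formula representation and the `sk` naming are assumptions, not from the post):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fn:
    name: str
    args: tuple = ()

@dataclass(frozen=True)
class ForAll:
    var: str
    body: object

@dataclass(frozen=True)
class Exists:
    var: str
    body: object

counter = 0

def subst_var(f, name, t):
    # replace Var(name) by term t everywhere in formula/term f
    if isinstance(f, Var):
        return t if f.name == name else f
    if isinstance(f, Fn):
        return Fn(f.name, tuple(subst_var(a, name, t) for a in f.args))
    if isinstance(f, ForAll):
        return ForAll(f.var, subst_var(f.body, name, t))
    return Exists(f.var, subst_var(f.body, name, t))

def skolemize(f, univs=()):
    # replace each existential variable by a fresh function symbol applied
    # to the universally quantified variables currently in scope
    global counter
    if isinstance(f, ForAll):
        return ForAll(f.var, skolemize(f.body, univs + (f.var,)))
    if isinstance(f, Exists):
        counter += 1
        sk = Fn(f"sk{counter}", tuple(Var(v) for v in univs))
        return skolemize(subst_var(f.body, f.var, sk), univs)
    return f  # atoms (Fn/Var) are left alone in this tiny fragment

# forall x. exists y. parent(y, x)   ~>   forall x. parent(sk1(x), x)
skolemize(ForAll("x", Exists("y", Fn("parent", (Var("y"), Var("x"))))))
```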
Anyway, eventually you get to CNF. Now the basic inference steps are resolution and factoring (which is sort of resolving a clause on itself). These are complete.
It’s kind of curious the distinction we make between FOL and other things. We think of FOL as a kind of logical framework in which we can talk about different mathematical ideas in a common infrastructure. But also FOL is kind of a very abstracted set theory on its own. In metamath for example, first order logic is not that special compared to other things. Predicates are sort of like the sets of things they are provable on. Set theory comes about when we enable ourselves to reflect (comprehension) predicates into objects that we can manipulate and talk about.
To start a mini pyres
idea: what about an Isabelle style [] => []
# fingerprinting https://link.springer.com/chapter/10.1007/978-3-642-31365-3_37
def symcount(t: Term):
    ...

def varcount():
    ...
# The proof recording system
proof_db = []

def axiom(s):
    proof_db.append((s, "axiom"))
    return len(proof_db) - 1  # index of the recorded formula

def formula(s):
    return proof_db[s][0]

def factor(c):
    f = compute_factor(formula(c))
    proof_db.append((f, ("factor", c)))
    return len(proof_db) - 1

def resolve(c1, c2):
    r = compute_resolvent(formula(c1), formula(c2))
    proof_db.append((r, ("resolve", c1, c2)))
    return len(proof_db) - 1
# prolog using these pieces
class Goal():
    def __init__(self, goal_clause):
        self.goal = goal_clause
        self.goal = infer([goal_clause], goal_clause)  # we can start with refl.

    def rule(self, clause):
        # hmm. I do need to record?
        compute_resolvent(self.goal, clause)
    def erule(self, clause):
        for i, lit1 in enumerate(clause.literals):
            for lit2 in clause.literals[i+1:]:
                if lit1.neg == lit2.neg:
                    s = mgu(lit1, lit2)
                    if s is not None:
                        new_clause = [Literal(lit.neg, subst(lit.term, s)) for lit in clause.literals if lit != lit1 and lit != lit2]
NameError Traceback (most recent call last)
Cell In[15], line 1
----> 1 for i, lit1 in enumerate(clause.literals):
2 for lit2 in clause.literals[i:]:
3 if lit1.neg == lit2.neg:
NameError: name 'clause' is not defined
metitarski. What if I included arb
class Fact():
    ctx: tuple[Term, ...]
    fact: Term

class Rule():
    ...

def hypo_datalog():
    rules = []
    facts = []
    while True:
        for rule in rules:
            for fact in facts:
                ...
class Rule():
    head: Term
    body: tuple[Term, ...]

def prolog():
    rules = []
    goal = []  # stack of open subgoals (sketch)
    for rule in rules:
        s = compute_resolvent(rule.head, goal[-1])
        goal.extend(subst(rule.body, s))
# but how do I make the "strategy" nature self manifest
import re

toks = [
    (":-", "IMPLIES"),
    ("\\.", "DOT"),
    ("\\(", "LPAREN"),
    ("\\)", "RPAREN"),
    ("[A-Z][a-zA-Z]*", "VAR"),  # VAR must precede FN or the FN pattern swallows it
    ("[a-zA-Z]+", "FN"),
    (",", "COMMA"),
    ("\\s+", "SPACE"),
]
tokpat = re.compile("|".join(f"(?P<{name}>{pat})" for pat, name in toks))
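As a quick exercise of the scanner, `finditer` with named groups gives a token stream (a self-contained sketch; the `tokenize` helper is mine, and note VAR is listed before FN since the first matching alternative wins in Python's `re` alternation):

```python
import re

toks = [
    (":-", "IMPLIES"),
    ("\\.", "DOT"),
    ("\\(", "LPAREN"),
    ("\\)", "RPAREN"),
    ("[A-Z][a-zA-Z]*", "VAR"),  # VAR before FN: first matching alternative wins
    ("[a-zA-Z]+", "FN"),
    (",", "COMMA"),
    ("\\s+", "SPACE"),
]
tokpat = re.compile("|".join(f"(?P<{name}>{pat})" for pat, name in toks))

def tokenize(s):
    # run the scanner over s, dropping whitespace tokens
    return [(m.lastgroup, m.group()) for m in tokpat.finditer(s)
            if m.lastgroup != "SPACE"]

tokenize("path(X, Z) :- edge(X, Y), path(Y, Z).")
```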
def parse_rule():
    ...

def parse_term(s):
    match s.lastgroup:
        case "COMMA":
            ...
        case "RPAREN":
            return Fn(name, args)
lark_grammar = """
prog : rule*
rule : term ":-" term ("," term )* "." | term "."
term : var | fn
fn : IDENT "(" term ("," term)* ")"
var : VAR
hyprule : "{" term* "}" "|-" term
"""
Cell In[1], line 15
case "COMMA"
SyntaxError: expected ':'
def datalog():
    rules = []
    facts = []
Literal selection Ordered resolution
@dataclass
class Literal():
    neg: bool
    term: Term

@dataclass
class Clause():
    literals: list[Literal]
import os
from openai import OpenAI

client = OpenAI(
    # This is the default and can be omitted
    api_key=os.environ.get("OPENAI_API_KEY"),
)
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
Musing on the importance of Standard Errors
Standard errors are really just as important as coefficients when estimating a relationship because they provide a critical component of inference. Without standard errors, coefficients are largely meaningless.
What does it matter that the coefficient from the estimation is, say, 7.3 if we do not know the standard error? If the standard error is, say, 1, then 7.3 is quite different from 0, if that is what is important. However, if the standard error is instead 100, then a coefficient of 7.3 is likely meaningless random noise.
Standard errors are interesting because unlike coefficients they alone are interesting without even having coefficient estimates. They provide a point value which gives us insight into how much we
can expect our estimates to vary.
Likewise, standard errors are often much more difficult to calculate than coefficients and even more sensitive to correct specification. Biased standard errors can have the effect of making an
outcome look less likely than it actually is (over-reject the null) or more likely than it actually is (under-reject the null).
A good example is failing to cluster standard errors when appropriate. Not clustering standard errors might be a problem if you had data from twenty different summer camps. Evaluating student outcomes without clustering by summer camp implicitly argues that the outcomes of camp goers are independent of every other camp goer. Clustering at the camp level, however, allows each camp to have a common shared shock (error) that season.
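To make the clustering point concrete, the following small simulation sketch (in Python; all names here are mine, not from the original post) generates twenty camps with a shared camp-level shock and compares classical (iid) standard errors with cluster-robust "sandwich" standard errors for an OLS regression with an intercept:

```python
import random, math

random.seed(0)
n_camps, per_camp = 20, 30
camp = [g for g in range(n_camps) for _ in range(per_camp)]
shock = [random.gauss(0, 2.0) for _ in range(n_camps)]  # shared camp-level error
x = [random.gauss(0, 1) for _ in camp]
y = [1.0 + 0.5 * xi + shock[g] + random.gauss(0, 1) for xi, g in zip(x, camp)]

n = len(y)
# OLS with intercept: design matrix X = [1, x]
sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))
det = n * sxx - sx * sx
b1 = (n * sxy - sx * sy) / det   # slope
b0 = (sy - b1 * sx) / n          # intercept
resid = [yi - b0 - b1 * xi for xi, yi in zip(x, y)]

# (X'X)^-1 as an explicit 2x2 matrix
xtx_inv = [[sxx / det, -sx / det], [-sx / det, n / det]]

# classical iid standard errors
s2 = sum(e * e for e in resid) / (n - 2)
se_iid = [math.sqrt(s2 * xtx_inv[0][0]), math.sqrt(s2 * xtx_inv[1][1])]

# cluster-robust sandwich: meat = sum over camps of (X_g' u_g)(X_g' u_g)'
meat = [[0.0, 0.0], [0.0, 0.0]]
for g in range(n_camps):
    s0 = sum(e for e, c in zip(resid, camp) if c == g)            # intercept column
    s1 = sum(e * xi for e, xi, c in zip(resid, x, camp) if c == g)  # x column
    meat[0][0] += s0 * s0; meat[0][1] += s0 * s1
    meat[1][0] += s1 * s0; meat[1][1] += s1 * s1

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

cov = matmul2(matmul2(xtx_inv, meat), xtx_inv)
se_cluster = [math.sqrt(cov[0][0]), math.sqrt(cov[1][1])]
```

With a camp-level shock this large, the cluster-robust intercept standard error comes out several times larger than the naive iid one, which is exactly the over-rejection problem described above.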
New Fuzzy-Heuristic Methodology for Analyzing Compression Load Capacity of Composite Columns
Department of Civil Engineering, Qeshm Branch, Islamic Azad University, Qeshm 79515/1393, Iran
Department of Maritime Engineering, Amirkabir University of Technology, Tehran 1591634311, Iran
Department of Marine Structures, Science and Research Branch, Islamic Azad University, Tehran 14515/775, Iran
Faculty of Engineering, University of Hormozgan, Bandar Abbas 7916193145, Iran
Department of Civil Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada
Author to whom correspondence should be addressed.
Submission received: 12 November 2022 / Revised: 14 December 2022 / Accepted: 19 December 2022 / Published: 3 January 2023
Predicting the mechanical strength of structural elements is a crucial task for the efficient design of buildings. Considering the shortcomings of experimental and empirical approaches, there is
growing interest in using artificial intelligence techniques to develop data-driven tools for this purpose. In this research, empowered machine learning was employed to analyze the axial compression
capacity (CC) of circular concrete-filled steel tube (CCFST) composite columns. Accordingly, the adaptive neuro-fuzzy inference system (ANFIS) was trained using four metaheuristic techniques, namely
earthworm algorithm (EWA), particle swarm optimization (PSO), salp swarm algorithm (SSA), and teaching learning-based optimization (TLBO). The models were first applied to capture the relationship
between the CC and column characteristics. Subsequently, they were requested to predict the CC for new column conditions. According to the results of both phases, all four models could achieve
dependable accuracy. However, the PSO-ANFIS was tangibly more efficient than the other models in terms of computational time and accuracy and could attain more accurate predictions for extreme
conditions. This model could predict the CC with a relative error below 2% and a correlation exceeding 99%. The PSO-ANFIS is therefore recommended as an effective tool for practical applications in
analyzing the behavior of the CCFST columns.
1. Introduction
Concrete and steel are the most fundamental materials needed for today’s construction [
]. Since they can withstand significant amounts of stress (e.g., compression, tension, shear, etc.), engineers design structural components such as columns and beams using concrete and steel [
]. Owing to this popularity, researchers have always studied their various mechanical characteristics [
]. A proper combination of these two materials results in composite materials, e.g., concrete-filled steel tubular elements, which are suitable choices for a wide variety of civil engineering
projects [
]. These elements can play a role in structures such as bridges [
], wharves [
], and roadways [
]. As a broadly used structural element, circular concrete-filled steel tube (CCFST) columns are attracting increasing attention in the building sector. These columns synthesize the best
characteristics of both concrete and steel materials which makes them more desirable than separate ones. This combination not only improves the concrete in terms of toughness and plasticity but can
help avoid (or delay) the local buckling of the steel [
]. High ductility, high strength, and high stiffness are mentioned as the most notable merits of the CCFST column [
So far, many researchers have presented valuable solutions using numerical and analytical techniques for analyzing the behavior of the CCFST columns [
]. Among the different characteristics of these columns, axial compression capacity (CC) has received special attention [
]. Yu et al. [
] proposed an analytical-based unified formula for estimating the CC of the CCFST columns. Their validations proved that this formula can properly work for calculating the bearing capacity of both
hollow and solid columns. Wu et al. [
] conducted an experimental study to investigate the compressive behavior of CCFST column with a focus on the effect of (a) the replacement ratio of demolished concrete lumps, (b) the strength of the
fresh concrete, (c) the thickness of the steel tubes, and (d) the distribution of steel stirrups. Abdalla et al. [
] studied the compressive response of these columns under quasi-static loads and reported the effect of several parameters on the capacity of the columns. It is true that all such efforts provide
valuable findings for developing more efficient CCFST columns, but they suffer from important demerits such as being costly and time consuming. Further, obtaining results in many cases entails
performing destructive laboratory tests. This is while more recent studies have put their focus on much more efficient approaches that are capable of coping with non-linear calculations [
]. Machine learning models have nicely served for estimating many parameters in civil engineering [
]. Not surprisingly, concrete-related parameters, too, are being promisingly modeled using machine learning [
Having a finer focus on the prediction of CC for CCFST columns, intelligent models have properly dealt with this problem. Many engineers have employed predictive tools such as artificial neural
networks (ANN) and adaptive neuro-fuzzy inference systems (ANFIS) for this purpose [
]. Basarir et al. [
] applied an ANFIS model to predict the pure bending capacity of concrete-filled steel tubes. Their findings revealed the superiority of this model over regression-based tools. Ho and Le [
] investigated and proved the competency of regression machine learning techniques for analyzing the ultimate load of CCFST columns whose experimental data are subjected to variability. They
extracted an Excel-based equation from the most accurate model. Tran et al. [
] designed a convenient ANN-based GUI for predicting the axial CC of the CCFST columns.
It is a well-known fact that machine learning models can deal with almost any prediction task. Going beyond this, recent studies have shown that intelligent models can even experience improvements.
One approach is to use a qualified metaheuristic algorithm to handle the training stage. Accordingly, programmers have developed hybrid models for engineering tasks. Zhao et al. [
] could optimize the ANN using a so-called metaheuristic technique, “equilibrium optimizer,” to predict the splitting tensile strength of concrete. The prediction error of the ANN decreased by around
11.5% after incorporating it with the equilibrium optimization (EO). Lyu et al. [
] achieved an optimal configuration of the support vector regression model by hybridizing it with a sine–cosine algorithm. This model was tested for the problem of CC prediction, and their findings
indicated the competency of this hybrid model as a design assistant, while empirical formulas could not comply with the sufficient condition. Luat et al. [
] suggested and tested the combination of a so-called machine learning model “Bayesian additive regression tree (BART)” with genetic algorithm (GA), artificial bee colony (ABC), and particle swarm
optimization (PSO) metaheuristic techniques for modeling the axial load capacity. The prediction results corresponding to these hybrids achieved coefficients of determination equal to 0.9891, 0.9923,
and 0.9931, respectively.
Complicated problems such as analyzing CCFST columns call for assessing new approaches to keep the solutions updated with the latest computational developments. Following the wide application of
metaheuristic algorithms in creating powerful hybrid models in solving complicated problems, some new members of this family are examined in this study. On the other hand, previous efforts have
mostly focused on ANNs as the to-be-optimized predictive model [
], and accordingly, it can be argued that the potential of leading models such as ANFIS needs further investigation. Hence, to bridge the mentioned gaps of knowledge, the main contributions of this
study can be drawn on the following aspects: (a) introducing and evaluating the performance of novel artificial intelligence-based hybrids for predicting the CC of CCFST columns, (b) utilizing ANFIS
as the basic skeleton of the suggested models, and (c) comparing the efficiency of three metaheuristic optimizers, namely earthworm algorithm (EWA), salp swarm algorithm (SSA), and teaching
learning-based optimization (TLBO) versus the PSO algorithm for adjusting the internal parameters of ANFIS. The findings of this study reflect efficient and applicable methodologies for enhancing the
design of CCFST columns. As is known, a reliable indirect approximator can be of great interest to structural engineers due to the cost- and time-efficiency, as well as the easiness of implementation.
2. Data Provision
It was earlier explained that the axial CC of the CCFST columns is a complex non-linear parameter. In studies performed so far, this parameter has been considered as a function of several geometrical
and material-related factors. These factors here comprise the length of the column ($L$), the diameter ($D$), thickness ($t$), yield stress ($f_y$), and ultimate stress ($f_u$) of the steel tube, as well as the compressive strength of UHSC ($f_c'$). The used data are obtained from a study by Tran, Thai, and Nguyen [
]. They created a numerical dataset as the result of an extensive finite element simulation verified with experimental efforts in the literature. The readers are guided to refer to the reference
paper [
] for further details of data provision (e.g., material characteristics, assumptions of simulation, parameters, etc.).
Figure 1
depicts how the CC values on the y-axis are in a relationship with these influential parameters. The values of CC range from 8016.3 to 75,051.6 kN, while the values of $L$, $D$, $t$, $f_y$, $f_u$, and $f_c'$ fall within (900.0, 4800.0) mm, (300.0, 600.0) mm, (6.0, 30.0) mm, (235.0, 460.0) MPa, (360.0, 540.0) MPa, and (100.0, 200.0) MPa, respectively. An R-value is also calculated for each chart that
shows the correlation between the CC and corresponding input. As is seen, the sole meaningful relationship here is between the CC and D with R = 0.90703. Other R values are below 0.5. Moreover,
Table 1
gives some statistical details of the used dataset.
According to the sensitivity analysis carried out by Zheng et al. [
] on this dataset, the
are the most important parameters for predicting the CC, while the lowest importance is obtained for the
In the prediction of CC using artificial intelligence models, the CC is the output of the model, while the named influential factors ($L$, $D$, $t$, $f_y$, $f_u$, and $f_c'$) are referred to as the inputs of
the network. The used data consist of 768 records that are divided into two sets after permuting their order. This is performed to achieve a random division. The first set of data, which contains 80%
of the whole, is devoted to the training process. In other words, the network goes through these data to learn the mathematical relationship between the CC and inputs. The second set, which contains
the remaining 20%, will be later used as non-processed data to evaluate the prediction ability of the models. This process is called the testing phase. In both the training and testing phases, the
real values of CC are compared with the products of the models for accuracy evaluation.
3. Methodology
3.1. ANFIS
Jang [
] designed ANFIS as a hybrid type of artificial model. It is a universal approximator that takes advantage of both ANN and fuzzy inference systems. The mutual association of these two models results
in the betterment of their performance. Fuzzy rules are considered in ANFIS to map the dependency of output on input parameters. Although ANFIS is capable of solving problems by taking human
reasoning in linguistic terms, the ANN can complete its ability to derive rules from data [
The prediction using the ANFIS model draws on five layers. In the first one, the model computes the membership values for input factors. In so doing, the coefficients of membership functions (MFs)
are adjusted as premise parameters. In the next layer, a power value is calculated for each derived rule in the nonadaptive nodes. Each power value is then normalized in the third layer based on the
values calculated in the former layer. Following this, a first-order polynomial and normalized powers are multiplied in the fourth layer. Note that the coefficients for this polynomial are obtained
by a recursive least-square technique. In the last layer, there is only one nonadaptive node, such as those in layers 2 and 3, which calculates a weighted average of its inputs to release the global
response of the ANFIS. More details regarding the mathematical strategy of the ANFIS can be found in earlier literature [
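The five-layer computation described above can be illustrated with a minimal forward pass for a two-input, first-order Sugeno system. This is an illustrative sketch only, not the implementation used in the paper; the Gaussian MFs, two-MFs-per-input layout, and all names are assumptions of the example:

```python
import math

def gauss(x, c, sigma):
    # Gaussian membership function with center c and width sigma
    return math.exp(-0.5 * ((x - c) / sigma) ** 2)

def anfis_forward(x1, x2, mfs1, mfs2, conseq):
    # Layer 1: membership degrees of each input under its MFs
    m1 = [gauss(x1, c, s) for c, s in mfs1]
    m2 = [gauss(x2, c, s) for c, s in mfs2]
    # Layer 2: rule firing strengths (one product node per MF pair)
    w = [a * b for a in m1 for b in m2]
    # Layer 3: normalized firing strengths
    total = sum(w)
    wbar = [wi / total for wi in w]
    # Layer 4: first-order polynomial output of each rule, weighted
    f = [p * x1 + q * x2 + r for p, q, r in conseq]
    # Layer 5: single output node, the weighted sum
    return sum(wi * fi for wi, fi in zip(wbar, f))

# two Gaussian MFs per input -> four rules; constant consequents for illustration
mfs = [(0.0, 1.0), (2.0, 1.0)]
anfis_forward(1.0, 1.5, mfs, mfs, [(0.0, 0.0, 5.0)] * 4)
```

Because the layer-3 weights sum to one, identical constant consequents reproduce that constant exactly, which is a handy sanity check on the normalization step.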
3.2. Metaheuristic Algorithms
Metaheuristic algorithms are wise optimizers [
] that have recently drawn huge attention in various fields, especially engineering simulations. When it comes to an optimization problem, one (or more) product is expected to be maximized against
the cost. This task can be efficiently handled by these algorithms. Most metaheuristic optimizers are nature-inspired. In other words, they simulate a natural behavior in order to reach the optimal
solution to a given problem. In this work, four capable algorithms of EWA, PSO, SSA, and TLBO are used in combination with ANFIS to optimally analyze the compression-bearing capacity of the CCFST columns.
As explained, a metaheuristic algorithm needs to be applied to a problem. When coupled with ANFIS, the optimization problem becomes tuning the parameters of MFs so that the error of prediction is
minimized. Simply speaking, the algorithm plays the role of the trainer for the ANFIS network [
The EWA mimics the reproduction behavior of earthworms. Wang et al. [
] designed this algorithm to present a new optimization method. It is known that one earthworm can perform the reproduction process alone. There are three fundamental assumptions regarding the
reproduction process: (a) offsprings are produced by each earthworm by two and only two reproduction types, (b) both parent and corresponding child carry the same length genes, and (c) a number of
best-fitted earthworms move forward directly to the subsequent without experiencing any change. Conceptually, the steps of the EWA algorithm comprise Reproduction 1, Reproduction 2, Weighted
Summation, and a Cauchy mutation process for escaping from the local optimum. Further details regarding this algorithm can be found in [
The name PSO signifies a powerful optimization strategy that was introduced by Eberhart and Kennedy [
]. The PSO is a simulation of the real-world swarm movement of animals (e.g., birds flocking). The solutions of this algorithm are particles that, having a leader–follower relationship, fly within
the space to improve their fitness. Accordingly, two parameters of local and global fitness are calculated and updated successively. Although the PSO belongs to the first generations of metaheuristic
algorithms, it enjoys computationally significant advantages such as faster convergence and also demands less memory [
]. So far, this algorithm has been promisingly combined with conventional predictive models such as ANFIS. More explanations about the optimization mechanism of the PSO can be found in earlier
literature [
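The velocity-update mechanism just described can be sketched in a few lines. This is a generic textbook PSO, not the paper's implementation; the inertia and acceleration coefficients (0.7, 1.5, 1.5) and all names are choices of the example:

```python
import random

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    # standard PSO: each velocity is pulled toward the particle's own best
    # position (pbest) and the swarm's best position (gbest)
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# sanity check: minimize the 3-D sphere function
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the hybrid models of this paper, the objective `f` would instead be the training RMSE of an ANFIS network as a function of its MF parameters.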
The next algorithm used in this study is the SSA. It was presented by Mirjalili et al. [
] based on the foraging social behavior of the so-called tunicate “salp”. Salp is a member of the Salpidae family that has a barrel shape. They live in chains, in which a follower population follows
a leader toward a potential food source to improve the individuals’ position and, consequently, the solution to the problem. This algorithm employs several stochastic operators that enable it to stay
away from local minima in multi-modal spaces. Hence, it can be mentioned among the most capable nature-inspired optimization techniques. For mathematical details of the SSA algorithm, recent studies
such as [
] are suggested.
The TLBO algorithm was developed by Rao et al. [
]. This algorithm simulates the tutoring interaction between the teacher and students in a virtual class. The goal of the algorithm is to organize the best harmony within the class. It happens over
two phases dedicated to teaching and learning. In the first phase, the most outstanding student is selected as the teacher. The teacher then tries to enhance the class by sharing knowledge. In the
next phase, the students perform active interactions to share knowledge. This process results in improving the solution over an iterative process. As an advantage, implementing the TLBO algorithm
does not require adjusting different hyperparameters [
]. This algorithm is better detailed in studies such as [
4. Results and Discussion
Accuracy assessment is the most significant step for evaluating the performance of the predictive models. However, other criteria, such as time efficiency and the simplicity of model configuration,
are also of high importance. As used in many previous works, evaluation of the accuracy needs more than one criterion. It is more highlighted for comparative works wherein several models need to be
relatively assessed.
In this work, the error of prediction is originally measured via the root mean square error (RMSE) and mean absolute error (MAE). Moreover, a relative form of MAE called the mean absolute percentage error (MAPE) is also calculated to give a relative representation of the error [
$RMSE = \sqrt{\frac{1}{J} \sum_{j=1}^{J} [Err_j]^2}$
$MAE = \frac{1}{J} \sum_{j=1}^{J} |Err_j|$
$MAPE = \frac{1}{J} \sum_{j=1}^{J} \left| \frac{Err_j}{CC_j^{observed}} \right| \times 100$
where the simple difference between the target CC ($CC_j^{observed}$) and simulated CC ($CC_j^{simulated}$) is referred to as $Err_j$ and is calculated as follows:
$Err_j = CC_j^{observed} - CC_j^{simulated}$
Needless to say, all error criteria give the difference between the target and output values. A correlation criterion quantifies the compatibility of these values. This compatibility is here measured
using the Pearson correlation coefficient ($R$):
$R = \frac{\sum_{j=1}^{J} (CC_j^{simulated} - \overline{CC}^{simulated})(CC_j^{observed} - \overline{CC}^{observed})}{\sqrt{\sum_{j=1}^{J} (CC_j^{simulated} - \overline{CC}^{simulated})^2} \sqrt{\sum_{j=1}^{J} (CC_j^{observed} - \overline{CC}^{observed})^2}}$
where $\overline{CC}$ stands for the average of the corresponding CC values.
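For reference, the four accuracy measures above can be computed directly from paired observed/simulated values. This is an illustrative Python sketch, not part of the paper's materials; the function name is mine:

```python
import math

def regression_metrics(observed, simulated):
    # RMSE, MAE, MAPE (%) and Pearson R between observed and simulated values
    J = len(observed)
    err = [o - s for o, s in zip(observed, simulated)]
    rmse = math.sqrt(sum(e * e for e in err) / J)
    mae = sum(abs(e) for e in err) / J
    mape = sum(abs(e / o) for e, o in zip(err, observed)) / J * 100
    mo = sum(observed) / J
    ms = sum(simulated) / J
    cov = sum((s - ms) * (o - mo) for s, o in zip(simulated, observed))
    r = cov / math.sqrt(sum((s - ms) ** 2 for s in simulated)
                        * sum((o - mo) ** 2 for o in observed))
    return rmse, mae, mape, r
```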
4.1. Metaheuristic Optimization
This step stands for the training process of the ANFIS model. The difference with normal training is the used algorithm and, consequently, the strategy for adjusting the MF parameters. In so doing,
the raw network becomes the optimization case of the metaheuristic algorithms (i.e., EWA, PSO, SSA, and TLBO), and the algorithm tries to train the model based on the training data. During an
iterative process, numerous configurations are tried for the MF parameters so that the learning quality increases over and over. In this work, each of the hybrid models, i.e., EWA-ANFIS, PSO-ANFIS,
SSA-ANFIS, and TLBO-ANFIS, are implemented with 1000 iterations. The quality of the result is monitored by calculating the RMSE in each iteration.
On the other hand, when it comes to metaheuristic algorithms, a so-called parameter “population size” emerges as an effective factor. It is because these techniques pursue the optimal solution by
means of search agents. Hence, the number of agents can influence the solution in terms of quality and convergence speed. This study uses a trial-and-error sequence to investigate the effect of
population size. The tested values include 10, 25, 50, 100, 200, 300, 400, and 500. The results of this process (i.e., the final RMSE values) are shown in the column chart of
Figure 2
As is seen, a stochastic behavior is exhibited by the networks. However, at a glance, the right-hand side columns indicate lower RMSE, meaning that the models have better performance for larger
population sizes. Further, a significant difference can be observed between the columns corresponding to the PSO-ANFIS and the other three models. According to
Figure 2
, the best quality of training resulted in population sizes of 200, 400, 400, and 100 for the EWA-ANFIS, PSO-ANFIS, SSA-ANFIS, and TLBO-ANFIS, respectively. Hereupon, these networks demonstrate the
results for further evaluation and comparisons.
Figure 3
shows the optimization curves of the mentioned model. As is seen, each model goes through a different path to minimize the error. What is clearly derived from this figure is the sufficiency of 1000
iterations for optimizing the ANFIS using the EWA, PSO, SSA, and TLBO. This claim is based on the steady optimization behavior of all algorithms after the 800th iteration. In the end, the EWA-ANFIS,
PSO-ANFIS, SSA-ANFIS, and TLBO-ANFIS achieved the RMSEs of 3984.7939, 618.2641, 2950.3632, and 2923.1084, respectively. Considering the optimization time, implementation of the EWA-ANFIS with 1538.1
s was the longest one, followed by the SSA-ANFIS with 1122.8 s, the TLBO-ANFIS with 542.7 s, and the PSO-ANFIS with 388.8 s. An HP computer with an Intel Core i7 processor, a 64-bit operating system, and 16 GB of RAM was used for this analysis.
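The iterative scheme described in this section — a population of candidate parameter vectors whose quality is scored by RMSE in every iteration until the swarm converges — can be sketched generically. The snippet below is a minimal, hypothetical particle swarm optimization loop minimizing the RMSE of a toy two-parameter linear model; it is not the authors' implementation, and the inertia and acceleration constants are illustrative assumptions only.

```python
import math
import random

def rmse(params, data):
    """Root mean square error of a toy linear model y = a*x + b."""
    a, b = params
    return math.sqrt(sum((a * x + b - y) ** 2 for x, y in data) / len(data))

def pso_minimize(objective, data, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm optimization over two parameters."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration weights (illustrative)
    pos = [[rng.uniform(-5.0, 5.0), rng.uniform(-5.0, 5.0)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p, data) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i], data)
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy data from y = 2x + 1; the swarm should drive the RMSE close to zero.
data = [(float(x), 2.0 * x + 1.0) for x in range(10)]
best, best_rmse = pso_minimize(rmse, data)
```

In the paper's setting, the two toy parameters would be replaced by the full vector of MF parameters and the objective by the RMSE of the ANFIS predictions on the training data.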
4.2. Prediction Results
Focusing on the training results, all models attained a suitable understanding of the relationship between the CC and the input parameters (L, D, t, f[y], f[u], and f[c]'). Apart from the reported RMSEs, the MAEs of 3085.0588, 466.4530, 2296.0564, and 2282.5048 indicate a fine level of learning error.
Figure 4 depicts the histogram of the training error values (see Equation (4)).
Based on this figure, as well as the MAPE values equal to 12.4133, 1.8477, 10.3899, and 10.2947%, the error of all four models is at an acceptable level. It reflects the high ability of the EWA, PSO,
SSA, and TLBO in tuning the MF parameters of the ANFIS. This goodness of the performance can be certified with high correlation values, i.e., R index equal to 0.96436, 0.99915, 0.98055, and 0.98092.
From the training results, it was concluded that the models could predict the CC of experienced conditions with a high level of accuracy. However, for the overall judgment, they were exposed to new
conditions that had not been analyzed before, i.e., the testing data. Comprising 20% of the 768 records, the conditions of 154 CCFST columns were given to the trained models for predicting the corresponding CCs. This
prediction was associated with RMSEs of 4033.8367, 744.1464, 2874.7402, and 2854.8094, as well as the MAEs of 12.9215, 1.9996, 10.6235, and 10.4921. Relative to the range of the observed CCs (i.e.,
8016.3 to 75,051.6 N), the prediction errors are tolerable and indicate the reliability of all four models for the CC modeling. In detail, the relative errors were 12.9215, 1.9996, 10.6235, and
10.4921 in terms of the MAPE.
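The accuracy indices used throughout this section — RMSE, MAE, MAPE, and the Pearson correlation index R — can be computed for any set of observed/predicted pairs as follows. This is a generic sketch on made-up numbers, not the study's data.

```python
import math

def metrics(observed, predicted):
    """RMSE, MAE, MAPE (%), and Pearson correlation R for paired samples."""
    n = len(observed)
    errors = [p - o for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e) / abs(o) for o, e in zip(observed, errors)) / n
    mean_o = sum(observed) / n
    mean_p = sum(predicted) / n
    cov = sum((o - mean_o) * (p - mean_p) for o, p in zip(observed, predicted))
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in observed))
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in predicted))
    r = cov / (sd_o * sd_p)
    return rmse, mae, mape, r

# Made-up observed/predicted pairs, purely for illustration.
obs = [10.0, 20.0, 30.0, 40.0]
pred = [11.0, 19.0, 33.0, 38.0]
rmse_v, mae_v, mape_v, r_v = metrics(obs, pred)
```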
Furthermore, the correlation assessment of the testing results is depicted in Figure 5, wherein the observed values (on the x-axis) are compared to the simulated values (on the y-axis). Hence, the ideal situation occurs for data located on the line x = y. According to these illustrations, the products of the fuzzy networks are in suitable agreement with reality.
The R values of 0.96106, 0.99868, 0.98004, and 0.98037 can certify a desirable correlation for all four models.
4.3. Comparison
Previous assessments showed that all four models of EWA-ANFIS, PSO-ANFIS, SSA-ANFIS, and TLBO-ANFIS are suitable fuzzy approaches for predicting the axial CC of the CCFST columns. However, there were
significant distinctions between the performance of these four models. More clearly, the EWA-ANFIS was characterized by the largest error and lowest correlation, while the PSO-ANFIS presented the
best accuracy of prediction. Relative to these two models, the SSA-ANFIS and TLBO-ANFIS emerged to have very close accuracy (with less than 1% of difference); however, the TLBO-based model was
superior. Therefore, from the accuracy point of view, the PSO-ANFIS was the most reliable model, followed by the TLBO-ANFIS, SSA-ANFIS, and EWA-ANFIS.
It is worth mentioning that, in the case of EWA-ANFIS and PSO-ANFIS, the training results were of higher accuracy, whereas for the two other models (i.e., SSA-ANFIS and TLBO-ANFIS), there were
superiorities in terms of used accuracy indices. For instance, while the testing RMSE of the TLBO-ANFIS is below the training one (2923.1084 vs. 2854.8094), the training MAE shows a smaller error
than the testing phase (2282.5048 vs. 2312.5851).
As visual evidence, the correlation charts given in Figure 5 can be considered for comparing the prediction potential of the used models. It is true that, in comparison with the EWA-ANFIS, the data of the three other models are better aggregated around the ideal
line, but there is a significant difference between the PSO-ANFIS and two other fuzzy tools. It is immediately clear that the SSA-ANFIS and TLBO-ANFIS, despite their outstanding potential, still have
weaknesses in dealing with extremum CC values. In other words, the maximum and minimum CC data are a little deviated from the general trend. This shortcoming is nicely covered by the PSO-ANFIS.
Time efficiency is another appreciable factor for comparing the capacity of the models. It is shown in Figure 3 that the PSO-ANFIS required the shortest time among the used models. Moreover, the convergence curves show that the PSO algorithm can reach a stable, yet optimum, solution faster. Generally
speaking, it enjoys a higher convergence speed.
All in all, the PSO-ANFIS can be selected as the best hybrid among those evaluated in this research. Utilizing the PSO algorithm provided a faster and tangibly more accurate solution to the
problem of CC prediction. This algorithm benefitted from a natural swarm behavior to tune the MF parameters of the fuzzy model with respect to a complex relationship between the CC and L, D, t, f[y],
f[u], and f[c]’. The corresponding hybrid tool, i.e., PSO-ANFIS, can be introduced as a reliable and efficient predictive model for this purpose.
5. Conclusions
The application of four optimal hybrid tools was tested for predicting the compression capacity of circular concrete-filled steel tube columns made with ultra-high-strength concrete. The models were
composed of two parts: (a) a conventional ANFIS framework, and (b) one of the following optimizers: earthworm algorithm, particle swarm optimization, salp swarm algorithm, and teaching learning-based
optimization. The models were trained using the finite element-based data of earlier work. The main results are as follows:
• Metaheuristic algorithms are suitable options for training neuro-fuzzy systems for the mentioned purpose.
• Referring to the correlation values >0.96, all employed fuzzy-metaheuristic models are capable of both comprehending and generalizing the relationship between the CC and input parameters.
• The PSO algorithm emerged as the most suitable optimizer for the ANFIS. This deduction came up due to the highest accuracy, as well as the most time-efficient optimization behavior observed
compared to the three other algorithms.
• The PSO-ANFIS could present a finer prediction of extremum CC values.
• In short, the use of the PSO-ANFIS is recommended for practical applications which pursue efficient cost-competitive design of CCFST columns.
Author Contributions
Conceptualization: M.J.K., F.A., M.A. and M.L.N.; methodology: B.K.S., M.J.K., F.A. and M.A.; software and validation: B.K.S., M.J.K., F.A., M.A. and M.L.N.; formal analysis and investigation:
B.K.S., F.A. and M.A.; data curation: B.K.S.; writing—original draft preparation: B.K.S., M.J.K., F.A. and M.A.; writing—review and editing, M.J.K. and M.L.N.; visualization: B.K.S., M.J.K., F.A. and
M.A.; supervision and project administration: M.J.K. and M.L.N. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The database used in the development of this work could be obtained from the corresponding author after an embargo period.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
CCFST: Circular concrete-filled steel tube
CC: Compression capacity
ANN: Artificial neural network
ANFIS: Adaptive neuro-fuzzy inference system
BART: Bayesian additive regression tree
GA: Genetic algorithm
ABC: Artificial bee colony
PSO: Particle swarm optimization
EWA: Earthworm algorithm
SSA: Salp swarm algorithm
TLBO: Teaching learning-based optimization
f[c]': Compressive strength of UHSC
L: Length of column
D: Diameter
t: Thickness
f[y]: Yield stress
f[u]: Ultimate stress of the steel tube
MF: Membership function
RMSE: Root mean square error
R: Pearson correlation index
MAPE: Mean absolute percentage error
MAE: Mean absolute error
Figure 1. The distribution of the CC versus influential factors (a) L, (b) D, (c) t, (d) f[y], (e) f[u], and (f) f[c]’ and (g) a schematic view of the column and section.
Figure 4. Histogram of the errors obtained for (a) EWA-ANFIS, (b) PSO-ANFIS, (c) SSA-ANFIS, and (d) TLBO-ANFIS.
Figure 5. Correlation assessment of the testing data for (a) EWA-ANFIS, (b) PSO-ANFIS, (c) SSA-ANFIS, and (d) TLBO-ANFIS.
Indicator Factor
L [mm] D [mm] t [mm] f[y] [MPa] f[u] [MPa] f[c]’ [MPa] CC [kN]
Mean 2475.0 450.0 15.2 331.3 460.0 150.0 30,185.3
Std. Error 47.4 4 0.2 3.1 2.5 1.2 538.3
Std. Deviation 1313.1 111.9 6.1 86 70.4 34.2 14,918.5
Sample Variance 1,724,120 12,516.3 37.3 7401.8 4956.5 1168.2 222,561,708.9
Minimum 900.0 300.0 6.0 235.0 360.0 100.0 8016.3
Maximum 4800.0 600.0 30.0 460.0 540.0 200.0 75,051.6
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Karimi Sharafshadeh, B.; Ketabdari, M.J.; Azarsina, F.; Amiri, M.; Nehdi, M.L. New Fuzzy-Heuristic Methodology for Analyzing Compression Load Capacity of Composite Columns. Buildings 2023, 13, 125.
https://doi.org/10.3390/buildings13010125
Download Algebra and Trigonometry, 2nd Edition by John W. Coburn PDF
By John W. Coburn
Three elements contribute to a theme sustained throughout the Coburn series: laying a firm foundation, building a solid framework, and providing strong connections. Not only does Coburn present a sound problem-solving process, teaching students to recognize a problem, organize a procedure, and formulate a solution, the text also encourages students to see beyond procedures in order to gain a greater understanding of the big ideas behind mathematical concepts. Written in a readable, yet mathematically mature manner appropriate for college algebra level students, Coburn's Algebra & Trigonometry uses narrative, extensive examples, and a range of exercises to connect seemingly disparate mathematical topics into a cohesive whole. Coburn's hallmark applications are born out of the author's extensive experience in and outside the classroom, and appeal to the wide diversity of students and teaching methods in this course area. Benefiting from the feedback of hundreds of instructors and students across the country, Algebra & Trigonometry, Second Edition, continues to emphasize connections in order to improve the level of student engagement in mathematics and increase their chances of success in college algebra.
Read or Download Algebra and Trigonometry, 2nd Edition PDF
Best popular & elementary books
Petascale computing: algorithms and applications
Although the highly anticipated petascale computers of the near future will perform an order of magnitude faster than today's fastest supercomputers, the scaling up of algorithms and applications for this class of computers remains a difficult challenge. From scalable algorithm design for massive concurrency to performance analyses and scientific visualization, Petascale Computing: Algorithms and Applications captures the state of the art in high-performance computing algorithms and applications.
With the same layout and feature set as the market-leading Precalculus, 8/e, this concise text provides both students and instructors with sound, consistently structured explanations of the mathematical concepts. PRECALCULUS: A CONCISE COURSE is designed to offer a cost-effective, one-semester alternative to the traditional two-semester precalculus text.
Atomic correlations have been studied in physics for over 50 years and were known as collective effects until recently, when they came to be recognized as a source of entanglement. This is the first book that contains a detailed and comprehensive analysis of two currently widely studied subjects of atomic and quantum physics (atomic correlations and their relation to entanglement between atoms or atomic systems), along with the latest developments in these fields.
Extra info for Algebra and Trigonometry, 2nd Edition
Example text
a. twice a number, increased by five
b. six less than three times the width
c. ten less than triple the payment
d. two hundred fifty feet more than double the length
Solution:
a. Let n represent the number. Then 2n represents twice the number, and 2n + 5 represents twice a number, increased by five.
b. Let W represent the width. Then 3W represents three times the width, and 3W − 6 represents six less than three times the width.
The Importance of the Frailty Effect In Survival Models: For Multidrug-resistant Tuberculosis Data
Frailty models have been proposed to analyse survival data, considering unobserved covariates (frailty effects). In a shared frailty model, frailties are common (or shared) amongst groups of
individuals and are randomly distributed across groups.
In this paper, the authors compared the semi-parametric model to shared frailty models by studying the time-to-death of patients with multidrug-resistant tuberculosis (MDR-TB).
Secondary data from 1 542 multidrug-resistant tuberculosis patients were used in this study. STATA software was used to analyse frailty models via the streg command.
Of 1 542 patients diagnosed with MDR-TB, 245 (15.9%) died during the study period; 77 (5.0%) had treatment failure; 334 (21.7%) defaulted; 213 (13.8%) completed treatment; 651 (42.2%) were cured of
MDR-TB; and 22 (1.4%) were transferred out. The results showed that 797 (51.7%) were females, and the majority were aged 18 – 30 and 31 – 40 years (35.5% and 35.7%, respectively). Most of the patients
(71.3%) were HIV-positive. The results also showed that most patients (95.7%) had no previous MDR-TB episodes, and 792 (51.4%) had no co-morbidities. The estimate of the variance for the frailty term
in the Weibull gamma shared frailty model was 2.83, which is relatively large and therefore suggests the existence of heterogeneity.
The Laplace transform of the frailty distribution plays a central role in relating the hazards, conditional on the frailty, to the hazards and survival functions observed in a population.
Keywords: Frailty, Hazards, MDR-TB, Risk factors, Survival data.
Time-to-event data measure the time elapsed from a given origin to the occurrence of an event of interest. Most commonly, survival data are handled using the proportional hazards (PH) regression
model popularized by [1]. Correct inference based on those PH models requires independent and identically distributed samples. The PH assumption states that the ratio of the hazards between any two
individuals is constant over time, and a non-parametric baseline hazard gives the shape of the hazard. Subjects may be exposed to different risk levels, even after controlling for known risk
factors, because some relevant covariates are often unavailable to the researcher or are even unknown (univariate case). The study population may also be divided into clusters so that subjects from
the same cluster behave more cohesively than subjects from different clusters (multivariate case).
A frailty model introduced by [2] that quickly gained popularity in econometrics [3], demographics [4] and biostatistics [5] is a heterogeneity model where the frailties are assumed to be individual,
and the frailty has a multiplicative effect on the baseline hazard function. The random effect is assumed to have unit mean and finite variance. Individuals with a frailty greater
than one are said to be frailer and will have an increased risk of failure. When the variance of the frailty term is equal to zero, this indicates that observations from the same group are
independent. Therefore, when the standard model fails to account for all the variability in the observed failure times, frailty models provide a useful alternative to a standard survival model. The
main assumption of a frailty model is that information about hidden internal or external factors is contained in the shape and structure of the hazard function and the form of the frailty
distribution [6].
Frailty models have two broad classes, namely models describing the univariate survival times and multivariate models. Frailty models for univariate data have long been used to account for
heterogeneous times-to-failure. The term ‘frailty’ was first suggested by [2] in the context of mortality studies, and [7] incorporated the frailty concept into a study of the duration of
unemployment. The shared frailty model may be considered a random-effects model for survival data because the frailty effect is shared amongst clusters of individuals. Early considerations of these
models can be found in [8-12].
Although several research papers have been published on frailty models, even in the modelling of infectious diseases [13, 14], to the best of the authors’ knowledge, no study has been conducted in
KwaZulu-Natal, South Africa, applying shared frailty models to survival data. This paper is intended to demonstrate the analysis of frailty models using secondary data from 1 542 MDR-TB patients who
were treated in KwaZulu-Natal, South Africa. This study used STATA version 19 to analyse the data.
2. METHODS
2.1. Data Source
The data used in this study are described in [15]. The study protocol was approved by the University of KwaZulu-Natal’s Biomedical Research Ethics Committee (Ref: BF052/09) and by the KwaZulu-Natal
Department of Health. According to [15], they used data collected by health workers for clinical care. No risks were posed to the patients and informed consent was waived by the ethics committee. The
authors report that to protect patient confidentiality and anonymity, the databases were de-identified and access was strictly limited.
According to the researchers [15], their study was a prospective study of 4 rural areas in KwaZulu-Natal (South Africa) between 1 July 2008 and 30 June 2012. In this study, the authors used data from
1 542 MDR-TB patients from five TB centres (four decentralized sites and one centralized hospital). Time-to-death of an MDR-TB patient is the response variable of interest.
2.2. Model Development
In general, frailty models are the equivalent of random-effects or mixed models in survival analysis. Suppose that X represents a covariate vector, let T be a non-negative random variable
representing an individual survival time, with t being a realisation of that random variable, then the Cox proportional hazards (PH) model is:

h(t|X) = h[0](t)exp(β'X)     (1)

where X = [X[1], X[2],...,X[n]] is the covariate vector, β = [β[1], β[2],...,β[n]] is a regression parameter vector, and h[0](t) is the baseline hazard function.
A frailty model introduces the unobserved components represented by a vector denoted as U, and (1) is modified as follows:

h(t|X, U) = h[0](t)exp(β'X + U)

Let Z = e^U represent the frailty term; the frailty model is then:

h(t|X, Z) = Z h[0](t)exp(β'X)
The baseline hazard function h[0 ](t) can be chosen non-parametrically or parametrically (Weibull, exponential, Gompertz, etc.). There are two distinguishable broad classes of frailty models, namely
the univariate frailty model and the multivariate frailty model.
2.3. The Univariate Frailty Model
Suppose that an individual affected by MDR-TB has a survival time denoted as t[i], covariate vector X[i], and a frailty term denoted as Z[i]. Then [2] stated that the hazard function of individual i is given as:

h[i](t|Z[i]) = Z[i] h[0](t)exp(β'X[i])

Here Z[i] is an unobservable random variable varying over the sample. Those individuals with Z[i] > 1 are said to be more frail and will have an increased risk of failure. Conversely, those
individuals with Z[i] < 1 are less frail and will tend to survive longer.
The model of an individual i can also be represented by its conditional survivor function:

S(t|Z[i]) = exp(−Z[i] H[0](t)exp(β'X[i]))

where H[0](t) is the cumulative baseline hazard. The model described above is at the individual level, but the individual frailty is not observable [16]. For this reason, the model is considered at the population level.
Survival of the total population is the mean of the individual survival functions.
The unconditional survival function of an individual i at the population level is:

S(t) = E[S(t|Z)] = ∫[0,∞) S(t|z) f(z) dz

where f(z) is the density of the frailty distribution.
Knowing the frailty distribution can help one determine an individual's unconditional survival function, and this is the same for the unconditional hazard (average hazard).
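As a numerical illustration of this averaging, the sketch below assumes a constant baseline hazard and a simple two-point frailty distribution (both hypothetical choices made only for the example) and compares the exact mixture survival with a Monte Carlo estimate obtained by simulating individuals.

```python
import math
import random

def conditional_survival(t, z, rate):
    """S(t | Z = z) = exp(-z * H0(t)) with a constant baseline hazard `rate`."""
    return math.exp(-z * rate * t)

def population_survival_exact(t, rate):
    """Two-point frailty: Z = 0.5 or 1.5 with equal probability."""
    return 0.5 * conditional_survival(t, 0.5, rate) + 0.5 * conditional_survival(t, 1.5, rate)

def population_survival_mc(t, rate, n=200000, seed=1):
    """Monte Carlo: draw a frailty, then an exponential event time with hazard z*rate."""
    rng = random.Random(seed)
    alive = 0
    for _ in range(n):
        z = 0.5 if rng.random() < 0.5 else 1.5
        event_time = -math.log(1.0 - rng.random()) / (z * rate)
        alive += event_time > t
    return alive / n

rate, t0 = 0.3, 3.0
exact = population_survival_exact(t0, rate)
approx = population_survival_mc(t0, rate)
```

The two estimates agree closely, confirming that the population survival curve is the frailty-weighted average of the conditional curves.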
2.4. The Distribution of Frailty
The average hazard of an individual i, given the conditional hazard, is the conditional hazard multiplied by the mean frailty among the survivors at time t:

h[pop](t) = h[0](t)exp(β'X[i]) E[Z | T ≥ t]

Assume that h[0](t[i]) is a constant h[0]; then:

h[pop](t) = h[0] exp(β'X[i]) E[Z | T ≥ t]

Suppose X[i] is one covariate representing the variable “study site”, coded 0 = centralized hospital and 1 = decentralised sites; then:

h[pop](t) = h[0] exp(βX[i]) E[Z | T ≥ t]

Because E[Z | T ≥ t] is determined by the distribution of Z among the survivors at time t, this shows that the average hazard depends on the frailty distribution.
Many distributions can be chosen for frailty, including gamma frailty; the log-normal distribution; the positive stable frailty model; the inverse Gaussian frailty model; and the compound Poisson
frailty model, amongst others [17, 18]. The gamma distribution has been used in most applications and widely applied as a mixture distribution due to the simplicity of the Laplace transformation [2,
8, 19-21].
The Laplace transform of the frailty distribution is defined as:

L(s) = E[e^(−sZ)] = ∫[0,∞) e^(−sz) f(z) dz

where the parameter s is a complex number:

s = a + ib

with a and b real numbers.
Many calculations can be performed based on the Laplace transform. The importance of the Laplace transform for these calculations has previously been demonstrated [22]. The mean and variance of the
Gamma distribution can be obtained by using the first and second derivatives of the Laplace transform, respectively.
On evaluating the derivatives at s = 0:

E[Z] = −L'(0) and E[Z²] = L''(0), so that Var(Z) = L''(0) − [L'(0)]².
Another reason that this distribution has been used in most applications published to date is that it is a flexible distribution that takes a variety of shapes as α varies. That is:

f(z) = z^(α−1) e^(−z/β) / (Γ(α) β^α), z > 0

where α and β are the shape and scale parameters, respectively.

The mean is E[Z] = αβ and the variance is Var(Z) = αβ²; choosing α = 1/θ and β = θ gives a frailty with unit mean and variance θ.
If α = 1, it is identical to the well-known exponential distribution. When α is large, it takes a bell-shaped form. The gamma distribution fits very well with failure data because it is easy to
derive the closed-form expressions of unconditional survival, cumulative density and hazard function.
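For a unit-mean gamma frailty with variance θ, the Laplace transform takes the closed form L(s) = (1 + θs)^(−1/θ), and the mean and variance can be recovered numerically from its first and second derivatives at s = 0, exactly as described above. The value θ = 0.8 below is an arbitrary illustration.

```python
def laplace_gamma(s, theta):
    """Laplace transform of a unit-mean gamma frailty: (1 + theta*s)^(-1/theta)."""
    return (1.0 + theta * s) ** (-1.0 / theta)

def first_derivative(f, x, h=1e-5):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

theta = 0.8                                  # arbitrary frailty variance
L = lambda s: laplace_gamma(s, theta)
mean = -first_derivative(L, 0.0)             # E[Z] = -L'(0)
second_moment = second_derivative(L, 0.0)    # E[Z^2] = L''(0)
variance = second_moment - mean ** 2
```

The numerical derivatives return a mean of approximately 1 and a variance of approximately θ, matching the moment formulas obtained analytically from the Laplace transform.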
2.5. Multivariate Frailty Model
Survival data analysis always assumes that the time-to-event of the individuals considered in the study is independent. However, this may not always be the case because there is a possibility that
the survival times of individuals in the same group, for example, a family or community, are correlated. Correlation between survival times violates the independence assumption, and such data
cannot be analysed using the univariate semi-parametric model. According to [23], the data with correlated survival times are known as multivariate survival data, and models developed to analyse such
data include the shared frailty model, which was previously introduced [2, 8].
The model is called the shared frailty model because individuals in the same cluster are assumed to share the same frailty [6, 24]. The survival times of individuals within the same group are assumed
to be conditionally dependent, while the frailty across the groups is assumed to be independent. However, when a frailty term represents the individual's unmeasured or unobserved covariates after
considering the measured covariates, it is called the univariate frailty model.
Let N denote the number of individuals in a given cohort, with each individual in the cohort assigned to a cluster. Let the total number of clusters be denoted by G such that, given the i^th cluster
consists of n[i] individuals, then N = n[1] + n[2] + ... + n[G]. For individual j in cluster i, δ[ij] is the censoring indicator. The response variable is the combination of the time-to-event and δ[ij], which takes the value 1 if the time-to-event is observed and 0 if censored or the event did not occur.
The hazard function of the j^th individual of the i^th cluster is given as:

h[ij](t) = h[0](t)exp(β'X[ij] + u[i])

where X[ij] is a vector of covariates for individual j in the i^th group, u[i] represents the unobserved covariates, and h[0](t) is the baseline hazard function.

Since z[i] = e^(u[i]), the hazard function can be written as:

h[ij](t) = z[i] h[0](t)exp(β'X[ij])
Here, the frailties z[i] are assumed to be independent and identically distributed across clusters with density f(z).
The full likelihood of the shared frailty model is obtained by integrating the frailty out of each cluster's conditional likelihood:

L(β, θ) = ∏[i=1..G] ∫[0,∞) { ∏[j=1..n[i]] [z h[0](t[ij])exp(β'X[ij])]^δ[ij] exp(−z H[0](t[ij])exp(β'X[ij])) } f(z) dz
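For a gamma frailty, the cluster-level integral that arises when the frailty is integrated out has a closed form. The sketch below compares that closed form with direct numerical integration for one hypothetical cluster with D observed events and accumulated hazard A; all numeric values are illustrative assumptions, not estimates from the study data.

```python
import math

def gamma_pdf(z, shape, rate):
    """Density of a gamma(shape, rate) frailty."""
    return rate ** shape * z ** (shape - 1.0) * math.exp(-rate * z) / math.gamma(shape)

def cluster_integral_closed(D, A, theta):
    """Closed form of the integral of z^D * exp(-z*A) * f(z) dz
    for a gamma frailty with shape = rate = 1/theta (unit mean, variance theta)."""
    k = 1.0 / theta
    return (k ** k) * math.gamma(k + D) / (math.gamma(k) * (k + A) ** (k + D))

def cluster_integral_numeric(D, A, theta, n=20000, z_max=40.0):
    """Riemann-sum check of the same integral."""
    k = 1.0 / theta
    h = z_max / n
    total = 0.0
    for i in range(1, n):
        z = i * h
        total += (z ** D) * math.exp(-z * A) * gamma_pdf(z, k, k)
    return total * h

# One hypothetical cluster: 2 events, accumulated hazard 1.7, frailty variance 0.5.
D, A, theta = 2, 1.7, 0.5
closed = cluster_integral_closed(D, A, theta)
numeric = cluster_integral_numeric(D, A, theta)
```

The agreement between the two values is why the gamma distribution is computationally convenient: the marginal likelihood can be written without numerical integration.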
2.6. Parametric Frailty and Shared Frailty
Now consider a parametric survival model which is characterized by its hazard function, h(t). The effect of any covariate is always found in the definitions of all these functions, whether one
parameterizes the model as having PH with respect to changes in covariate values or accelerated failure time (AFT) due to the covariates. For instance, in a Weibull PH regression, the hazard function
at time t for individual i with covariate vector x[i] is:

h(t|x[i]) = p t^(p−1) exp(β[0] + x[i]β)
The shape parameter p and the regression coefficients β are estimated from the data. The streg documentation contains a list of the forms of h(t) currently available in STATA. In the univariate case, a frailty model
introduces an unobservable multiplicative effect z on the hazard, so that, conditional on the frailty,

h(t|z) = z h(t)     (11)

where z is some random positive quantity assumed to have mean equal to one and variance equal to θ.
A multivariate survival model is an extension of the univariate frailty model, where individuals are allowed to share the same frailty value. Sharing a frailty value also generates dependence between
those individuals who share frailties, whereas conditional on the frailty, those individuals are independent. For data consisting of n clusters with the i^th cluster comprised of n[i] individuals (i
= 1,...,n), (11) generalizes to

h[ij](t|z[i]) = z[i] h[ij](t)
for j = 1,2,...,n[i] with h[ij](t) = h(t|x[ij]). That is, for any member of the i^th cluster, the standard hazard function is now multiplied by the shared frailty z[i]. For instance, in the case of
Weibull PH regression, the conditional hazard for an individual is given by

h[ij](t|z[i]) = z[i] p t^(p−1) exp(β[0] + x[ij]β)

and the conditional survival function is

S[ij](t|z[i]) = exp(−z[i] t^p exp(β[0] + x[ij]β))
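A quick way to see how the conditional Weibull formulas behave is to simulate clustered event times with a shared gamma frailty and compare the empirical marginal survival with the closed form obtained from the gamma Laplace transform, S(t) = (1 + θ λ t^p)^(−1/θ). All parameter values below are arbitrary illustrations, not the fitted MDR-TB estimates.

```python
import math
import random

def weibull_frailty_time(z, p, lam, rng):
    """Draw one event time from S(t|z) = exp(-z * lam * t^p) by inversion."""
    u = 1.0 - rng.random()                     # uniform in (0, 1]
    return (-math.log(u) / (z * lam)) ** (1.0 / p)

rng = random.Random(42)
theta, p, lam = 0.5, 1.5, 0.2  # frailty variance, Weibull shape, exp(b0 + x*b)
times = []
for _ in range(4000):           # 4000 clusters of 5 members sharing one frailty
    z = rng.gammavariate(1.0 / theta, theta)   # gamma frailty with mean 1, variance theta
    times.extend(weibull_frailty_time(z, p, lam, rng) for _ in range(5))

t0 = 2.0
empirical = sum(t > t0 for t in times) / len(times)
# Marginal survival from the gamma Laplace transform:
closed_form = (1.0 + theta * lam * t0 ** p) ** (-1.0 / theta)
```

Members of the same cluster share one frailty draw, so their simulated times are correlated, yet the marginal survival still matches the Laplace-transform expression.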
2.7. Statistical Methods
Data analyses were conducted using STATA version 19 and the Statistical Package for the Social Sciences (SPSS version 25). Basic descriptive statistics, such as frequencies and percentages of
demographics, were calculated. Successful treatment comprises patients who were cured or who completed treatment; patients who did not finish treatment, such as those lost to follow-up or with treatment failure, are considered to have poor treatment outcomes.
Firstly, the authors fit a Cox PH model and then fit a Weibull regression model with gamma-distributed heterogeneity. Since correlation might exist among patients treated at the same site, one can model this as a shared frailty model in which the frailty is shared at the site level. This is easily done by adding the shared() option to streg together with the frailty() option.
The study considered an MDR-TB data set consisting of 1 542 patients with time-to-death as the primary outcome of this study. A total of 245 (15.9%) patients with MDR-TB died between 1 July 2008 and
30 June 2012, and 77 (5.0%) had treatment failure. A total of 334 (21.7%) patients defaulted; 213 (13.8%) completed treatment; 651 (42.2%) were cured of MDR-TB; and 22 (1.4%) were transferred
out (Fig. 1 and Table 1).
The baseline demographics of the patients treated with MDR-TB showed that 797 (51.7%) were females, and the majority of the patients were aged 18 – 30 years and 31 – 40 years (35.5% and 35.7%,
respectively). Most of the patients (71.3%) were HIV-positive. The results also show that most of the patients (95.7%) had no previous MDR-TB episodes, and 792 (51.4%) had no co-morbidities.
Furthermore, 1510 (97.9%) of the patients had pulmonary TB (Table 2). The median follow-up time was 26.8 months.
Table 1.
Treatment Outcomes Site 1 Site 2 Site 3 Site 4 All Decentralized Hospitals Centralized Hospital
n = 125 n = 142 n = 202 n = 261 n = 730 n = 812
Died 17 (13.6) 21 (14.8) 25 (12.4) 69 (26.4) 132 (18.1) 113 (13.9)
Failed 7 (5.6) 10 (7.0) 12 (5.9) 19 (7.3) 48 (6.6) 29 (3.6)
Defaulted 9 (7.2) 18 (12.7) 50 (24.8) 28 (10.7) 105 (14.4) 229 (28.2)
Completed treatment 12 (9.6) 8 (5.6) 19 (9.4) 15 (5.7) 54 (7.4) 159 (19.6)
Cured 78 (62.4) 79 (55.6) 94 (46.5) 120 (46.0) 371 (50.8) 280 (34.5)
Transferred out 2 (1.6) 6 (4.2) 2 (1.0) 10 (3.8) 20 (2.7) 2 (0.2)
Note: *Data are number (%).
Table 2.

| Variables, n (%)/median (IQR) | Site 1 (n=125) | Site 2 (n=142) | Site 3 (n=202) | Site 4 (n=261) | All decentralized hospitals (n=730) | Centralized hospital (n=812) | Total (n=1542) |
|---|---|---|---|---|---|---|---|
| Baseline weight (kg) | | | | | | | 50 (43-59) |
| Age (in years) | | | | | | | 34 (28-42) |
| Age groups | | | | | | | |
| 18-30 | 42 (33.6) | 44 (31.0) | 74 (36.6) | 85 (32.6) | 245 (33.6) | 303 (37.3) | 548 (35.5) |
| 31-40 | 43 (34.4) | 55 (38.7) | 70 (34.7) | 90 (34.5) | 258 (35.3) | 292 (36.0) | 550 (35.7) |
| 41-50 | 30 (24.0) | 27 (19.0) | 40 (19.8) | 56 (21.5) | 153 (21.0) | 145 (17.9) | 298 (19.3) |
| 50+ | 10 (8.0) | 16 (11.3) | 18 (8.9) | 30 (11.5) | 74 (10.1) | 72 (8.9) | 146 (9.5) |
| Gender | | | | | | | |
| Male | 57 (45.6) | 60 (42.2) | 104 (51.5) | 125 (47.9) | 346 (47.4) | 399 (49.1) | 745 (48.3) |
| Female | 68 (54.4) | 82 (57.7) | 98 (48.5) | 136 (52.1) | 384 (52.6) | 413 (50.9) | 797 (51.7) |
| HIV status | | | | | | | |
| Positive | 96 (76.8) | 108 (76.1) | 123 (60.9) | 197 (75.5) | 524 (71.8) | 576 (70.9) | 1100 (71.3) |
| Negative | 28 (22.4) | 30 (21.1) | 66 (32.7) | 38 (14.6) | 162 (22.2) | 211 (26.0) | 373 (24.2) |
| Unknown | 1 (0.8) | 4 (2.8) | 13 (6.4) | 26 (10.0) | 44 (6.0) | 25 (3.1) | 69 (4.5) |
| Previous MDR-TB episodes | | | | | | | |
| No previous episodes | 119 (95.2) | 124 (87.3) | 184 (91.1) | 246 (94.3) | 673 (92.2) | 802 (98.8) | 1475 (95.7) |
| One previous episode | 5 (4.0) | 18 (12.7) | 18 (8.9) | 14 (5.4) | 55 (7.5) | 9 (1.1) | 64 (4.1) |
| Two previous episodes | 1 (0.8) | 0 (0) | 0 (0) | 1 (0.3) | 2 (0.3) | 1 (0.1) | 3 (0.2) |
| Comorbidities | | | | | | | |
| No other diseases | 6 (40.0) | 2 (15.4) | 1 (100) | 0 (0) | 9 (20.5) | 780 (97.6) | 789 (93.6) |
| Diabetes | 1 (6.7) | 1 (7.7) | 0 (0) | 8 (53.3) | 10 (22.7) | 10 (1.3) | 20 (2.4) |
| Epilepsy | 4 (26.7) | 3 (23.1) | 0 (0) | 1 (6.7) | 8 (18.2) | 4 (0.5) | 12 (1.4) |
| Hearing loss prior to start of treatment | 2 (13.3) | 3 (23.1) | 0 (0) | 5 (33.3) | 10 (22.7) | 1 (0.1) | 11 (1.3) |
| Renal problems | 1 (6.7) | 2 (15.4) | 0 (0) | 0 (0) | 3 (6.8) | 0 (0) | 3 (0.4) |
| Substance abuse | 1 (6.7) | 2 (15.4) | 0 (0) | 1 (6.7) | 4 (9.1) | 0 (0) | 4 (0.5) |
| Psychiatric problems | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 0 (0) | 4 (0.5) | 4 (0.5) |
| Type of TB | | | | | | | |
| Pulmonary | 122 (97.6) | 131 (92.3) | 199 (98.5) | 254 (97.3) | 706 (96.7) | 804 (99.0) | 1510 (97.9) |
| Extra pulmonary | 3 (2.4) | 11 (7.7) | 3 (1.5) | 7 (2.7) | 24 (3.3) | 8 (1.0) | 32 (2.1) |

Note: *IQR = Interquartile range.
Table 3.

| Variables | Cox regression: Coef. (95% CI) | S.E | P-value | Weibull regression with gamma shared frailty: Coef. (95% CI) | S.E | P-value | Log-normal regression with gamma shared frailty: Coef. (95% CI) | S.E | P-value |
|---|---|---|---|---|---|---|---|---|---|
| Baseline weight (kg) | -0.03 (-0.04 to -0.02) | 0.01 | 0.00 | 0.07 (0.04 to 0.09) | 0.01 | 0.00 | 0.06 (0.04 to 0.09) | 0.01 | 0.00 |
| Study sites | | | | | | | | | |
| Centralized hospital (Ref) | - | | | - | | | - | | |
| Decentralized sites | 0.45 (-0.51 to 1.41) | 0.49 | 0.36 | -1.15 (-4.01 to 1.72) | 1.46 | 0.43 | -1.25 (-4.12 to 1.63) | 1.47 | 0.40 |
| Age group (years) | | | | | | | | | |
| 18-30 (Ref) | - | | | - | | | - | | |
| 31-40 | 0.53 (0.02 to 0.85) | 0.26 | 0.04 | -1.48 (-2.68 to -0.28) | 0.61 | 0.02 | -1.42 (-2.60 to -0.23) | 0.61 | 0.02 |
| 41-50 | 0.60 (-0.01 to 1.21) | 0.31 | 0.05 | -1.19 (-2.56 to 1.19) | 0.70 | 0.09 | -1.13 (-2.56 to 1.29) | 0.73 | 0.12 |
| 51+ | 0.48 (-0.27 to 1.23) | 0.38 | 0.21 | -1.44 (-1.18 to 2.31) | 0.89 | 0.11 | -1.27 (-2.99 to 1.45) | 0.88 | 0.15 |
| Gender | | | | | | | | | |
| Male (Ref) | - | | | - | | | - | | |
| Female | 0.52 (0.09 to 0.96) | 0.22 | 0.02 | -1.19 (-2.17 to -0.22) | 0.50 | 0.02 | -1.32 (-2.32 to -0.33) | 0.51 | 0.01 |
| HIV status | | | | | | | | | |
| Positive (Ref) | - | | | - | | | - | | |
| Negative | -0.06 (-0.56 to 1.45) | 0.26 | 0.82 | 0.15 (-0.96 to 1.26) | 0.57 | 0.79 | 0.17 (-0.98 to 1.31) | 0.58 | 0.78 |
| Type of TB | | | | | | | | | |
| Pulmonary (Ref) | - | | | - | | | - | | |
| Extrapulmonary | 0.25 (-1.17 to 1.67) | 0.72 | 0.73 | -0.78 (-4.33 to 2.77) | 1.81 | 0.67 | -0.72 (-4.17 to 2.73) | 1.76 | 0.68 |
| Comorbid conditions | | | | | | | | | |
| No (Ref) | - | | | - | | | - | | |
| Yes | -0.33 (-1.42 to 1.76) | 0.56 | 0.55 | 0.75 (-2.29 to 3.79) | 1.55 | 0.63 | 0.85 (-2.18 to 3.89) | 1.55 | 0.58 |
| Previous MDR-TB episodes | | | | | | | | | |
| No (Ref) | - | | | - | | | - | | |
| Yes | -0.19 (-1.62 to 1.24) | 0.73 | 0.80 | 0.49 (-3.13 to 4.10) | 1.85 | 0.79 | 0.47 (-3.16 to 4.11) | 1.85 | 0.80 |
| Likelihood ratio test on 10 df | 42.68, p-value < 0.01 | | | 40.57, p-value < 0.01 | | | 39.99, p-value < 0.01 | | |
| θ (S.E) | - | | | 2.83 (1.89) | | | 0 (0.09) | | |
| Likelihood ratio test of θ | - | | | chibar2(01) = 4.19, p-value = 0.02 | | | chibar2(01) = 0, p-value = 1.00 | | |

Note: *S.E = Standard error; CI = Confidence interval.
The results in Table 3 show the Cox regression parameter estimates, the Weibull shared frailty parameter estimates and the log-normal frailty parameter estimates. The results also show the chi-square test
statistic (X^2 = 4.19) for the Weibull frailty model, with a p-value of 0.02. The estimate of the variance of the frailty term in the Weibull gamma shared frailty model is 2.83, which is different from
zero. This implies that unobserved heterogeneity was present at the site level. Therefore, one can use the Weibull shared frailty model, because the results suggest that patients at some sites were
associated with a higher risk of dying than at other sites, and there is a difference in the conclusions drawn about the dataset.
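The reported chibar2(01) p-value can be reproduced from first principles. Because the null hypothesis θ = 0 sits on the boundary of the parameter space, the likelihood ratio statistic is referred to a 50:50 mixture of a point mass at zero and a χ²(1) distribution, so the p-value is half the ordinary χ²(1) tail probability. A minimal sketch (the statistic 4.19 is taken from Table 3; the rest is the standard textbook recipe, not the authors' code):

```python
import math

def chi2_1_sf(x):
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2)), so no external stats library is needed.
    return math.erfc(math.sqrt(x / 2.0))

def chibar2_01_pvalue(lr_stat):
    # Boundary test of a variance component (theta = 0): the null
    # distribution is a 50:50 mixture of a point mass at 0 and chi2(1),
    # so the p-value is half the chi2(1) tail probability.
    return 0.5 * chi2_1_sf(lr_stat)

lr_stat = 4.19  # chibar2(01) statistic reported in Table 3
p_naive = chi2_1_sf(lr_stat)             # ~0.041 if the boundary were ignored
p_boundary = chibar2_01_pvalue(lr_stat)  # ~0.020, matching the reported 0.02
print(round(p_naive, 3), round(p_boundary, 3))
```

Ignoring the boundary would roughly double the p-value, which is why statistical packages label this test chibar2(01) rather than chi2(1).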
Further evidence is the difference in the coefficient estimates between the Weibull frailty model and the ordinary Cox regression model. Hazard ratios now have an interpretation that is conditional
on frailty. Note that the standard deviation increases in the gamma frailty and log-normal models. The log-normal model results in a slightly lower likelihood ratio test statistic. If one looks at the
results produced by the log-normal frailty model, the frailty variance is zero and is not significant. This implies that all the differences amongst the mortality rates of the MDR-TB patients are
explained by the observed fixed covariates stated in the model. That is, the log-normal model indicates that there is no unobserved heterogeneity amongst groups (sites), although the estimates of the
standard errors still increase relative to the Cox model.
When comparing the estimates of the standard Cox PH model and those of the Weibull gamma frailty model, an increase in the estimates was observed after correcting for frailty (Table 3). Factors
strongly associated with mortality rates were identified by examining their confidence intervals (CIs) and p-values. Factors whose coefficient CIs include 0 were taken to be insignificant, and
these results were confirmed by p-values above the 0.05 significance level. Baseline weight, age group (31-40) and gender were found to be strongly associated with mortality rates. The results show that
patients treated in decentralized sites had a lower death rate than those treated in the centralized hospital, but this difference was not statistically significant.
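This kind of significance screening can be reproduced with a simple Wald calculation. As an illustration (not taken from the paper's own analysis code), take the Cox-model estimate for females in Table 3: coefficient 0.52 with standard error 0.22. A short sketch; small differences from the published interval endpoints are expected because the table reports rounded inputs:

```python
import math

def wald_test(coef, se):
    # Two-sided Wald test of H0: coef = 0 under the normal approximation,
    # plus the corresponding 95% confidence interval for the coefficient.
    z = coef / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * (1 - Phi(|z|))
    return z, p, (coef - 1.96 * se, coef + 1.96 * se)

# Cox-model estimate for females in Table 3: coef 0.52, S.E 0.22.
z, p, (lo, hi) = wald_test(0.52, 0.22)
print(round(z, 2), round(p, 2), (round(lo, 2), round(hi, 2)))
# z ~ 2.36 and p ~ 0.02, in line with the table; the interval comes out as
# roughly (0.09, 0.95) versus the published (0.09 to 0.96), a gap
# attributable to rounding of the inputs.
```

A CI for the coefficient that excludes 0 goes together with p < 0.05, which is exactly the pattern seen for the significant factors in Table 3.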
4. DISCUSSION
Frailty models are mixture models within survival analysis. The estimation in this general frailty model framework is performed by using a simple two-step procedure in which the fixed effects and the
individual frailty terms are estimated while keeping the frailty variance parameter fixed. This procedure leads to simple estimation equations but results in an underestimation of the estimated
variances of the fixed-effects parameters, because the variance estimation does not take into account the variability in the estimation of θ̂.
The objective of this paper was to assess the effect of omitting unmeasurable variables in the modelling exercise. The researchers began by fitting a semi-parametric stratified Cox regression model
with random effects and then fitted a Weibull parametric stratified model with random effects. This was an extension of the stratified Cox regression model with the site variable as the stratifying
variable. The results of the exercise suggested that there exists significant unobserved heterogeneity. When accounting for site effects, the confidence intervals are wider in the Weibull frailty
model, which implies that there is much more heterogeneity at decentralized sites than in the centralized hospital.
The other factors included in the analysis, namely HIV status, type of TB, previous MDR-TB episodes and co-morbidities, may affect the survival time of MDR-TB patients. Still, there was not enough
evidence from the data to confirm their association with this survival time.
The results in this paper do not deviate much from the findings obtained by [27], who concluded that frailty is a strong predictor of mortality, as has been shown by previous systematic reviews [28-
30]. Two of these reviews systematically conducted studies that used different definitions of frailty, including the frailty phenotype by [31] and the frailty index (FI) by [32], and demonstrated
that frailty consistently increased the risk of death in most studies [28, 29].
When comparing the Cox PH model to the frailty models, it was found that the estimates of the standard error for the fixed effects increased. In some cases, adding a frailty term can render a result
insignificant in the frailty model even though it was significant in the Cox PH model [33].
A follow-up study may be a good alternative to MDR-TB surveys. In this case, a cohort of patients may be followed from different cities. In this paper, the researchers considered only models with
gamma or log-normal frailty. This approach can be easily extended to other frailty distributions available, such as inverse Gaussian distributions. However, the approach for the positive stable
distribution [19], which is expressed as a Laplace transform, would be interesting future work.
This study shows that the gamma frailty model provides a better fit to the MDR-TB data than the standard Cox model. Although further research must be conducted about the models, an initial
investigation suggests that the models will serve as an enhancement to the field of survival analysis. The authors conclude that frailty in a survival model is an important consideration and is
especially useful in situations where clustering needs to be accounted for.
STATA = Statistics and data
MDR-TB = Multidrug-resistant tuberculosis
TB = Tuberculosis
FI = Frailty index
The findings should be interpreted with caution due to a few limitations. Some covariates that were included in the model had missing information. The major problem with this is that the quality of
the results might decrease due to less completeness of the data. Furthermore, some patients who were included in the study were lost to follow-up.
All the authors made contributions to this paper. SVM planned the study, wrote the paper's initial draft, and did the analysis. HW and SR revised and edited the paper. All authors read and approved
the final work of this manuscript.
The study protocol was approved by the University of KwaZulu-Natal’s Biomedical Research Ethics Committee (Ref: BF052/09) and by the KwaZulu-Natal Department of Health. Only secondary data, the data
routinely collected by health workers for clinical care, was used in this study.
No animals were used in this research. All procedures performed in studies involving human participants were in accordance with the ethical standards of institutional and/or research committees and
with the 1975 Declaration of Helsinki, as revised in 2013.
Informed consent was waived by the ethics committee since all patient data used were previously collected during the course of routine medical care and did not pose any additional risks to the patients.
Data will be made available upon request but will be controlled. To protect patient confidentiality and anonymity, the databases were de-identified and access strictly limited.
The authors thank Dr. Marian Loveday and Prof Glenda Matthews for allowing them to use their dataset. They thank all facility-level managers, doctors, nurses and data capturers at the study sites for
their assistance.
Vaupel JW, Manton KG, Stallard E. The impact of heterogeneity in individual frailty on the dynamics of mortality. Demography 1979; 16(3): 439-54.
Heckman J, Singer B. A method for minimizing the impact of distributional assumptions in econometric models for duration data. Econometrica 1984; 52(2): 271-320.
Vaupel JW, Yashin AI. Heterogeneity’s ruses: Some surprising effects of selection on population dynamics. Am Stat 1985; 39(3): 176-85.
Hanagal DD. Modeling survival data using frailty models. Boca Raton: Chapman & Hall/CRC 2011.
Lancaster T. Econometric methods for the duration of unemployment. Econometrica 1979; 47(4): 939-56.
Clayton DG. A model for association in bivariate life tables and its application in epidemiological studies of familial tendency in chronic disease incidence. Biometrika 1978; 65(1): 141-51.
Clayton D, Cuzick J. Multivariate generalizations of the proportional hazards model. J R Stat Soc 1985; 148(2): 82-108.
Hougaard P. Survival models for heterogeneous populations derived from stable distributions. Biometrika 1986; 73(2): 387-96.
Whitmore GA, Lee MLT. A multivariate survival distribution generated by an inverse Gaussian mixture of exponentials. Technometrics 1991; 33(1): 39-50.
Sahu SK, Dey DK, Aslanidou H, Sinha D. A Weibull regression model with gamma frailties for multivariate survival data. Lifetime Data Anal 1997; 3(2): 123-37.
Hens N, Wienke A, Aerts M, Molenberghs G. The correlated and shared gamma frailty model for bivariate current status data: An illustration for cross-sectional serological data. Stat Med 2009; 28(22):
Unkel S, Farrington CP, Whitaker HJ, Pebody R. Time varying frailty models and the estimation of heterogeneities in transmission of infectious diseases. Appl Stat 2014; 63(1): 141-58.
Loveday M, Padayatchi N, Wallengren K, et al. Association between health systems performance and treatment outcomes in patients co-infected with MDR-TB and HIV in KwaZulu-Natal, South Africa:
Implications for TB programmes. PLoS One 2014; 9(4): e94016.
Duchateau L, Janssen P. The frailty model. New York: Springer Verlag 2008.
Munda M, Rotolo F, Legrand C. parfm: Parametric frailty models in R. J Stat Softw 2012; 51(11): 1-20.
Hougaard P. Shared frailty models. In: Analysis of Multivariate Survival Data. Statistics for Biology and Health. New York, NY: Springer 2000.
Oakes D. A model for association in bivariate survival data. J R Stat Soc B 1982; 44(3): 414-22.
Yashin AI, Iachine IA. Genetic analysis of durations: Correlated frailty model applied to survival of Danish twins. Genet Epidemiol 1995; 12(5): 529-38.
Hougaard P. Life table methods for heterogeneous populations: Distributions describing the heterogeneity. Biometrika 1984; 71(1): 75-83.
Phipson B, Mwambi H. Incorporating frailty effects in the Cox proportional hazards model using two independent methods in independent data sets: Theory and methods. S Afr Stat J 2010; 44(1): 61-81.
Shih JH, Louis TA. Assessing gamma frailty models for clustered failure time data. In: Lifetime Data: Models in Reliability and Survival Analysis. Boston, MA: Springer 1996.
Laserson KF, Thorpe LE, Leimane V, et al. Speaking the same language: Treatment outcome definitions for multidrug-resistant tuberculosis. Int J Tuberc Lung Dis 2005; 9(6): 640-5.
World Health Organisation. Guidelines for the programmatic management of drug-resistant tuberculosis Emergency Update 2008 WHO/HTM/TB/2008402. Geneva: World Health Organisation 2008.
Clegg A, Young J, Iliffe S, Rikkert MO, Rockwood K. Frailty in elderly people. Lancet 2013; 381(9868): 752-62.
Kane RL, Shamliyan T, Talley K, Pacala J. The association between geriatric syndromes and survival. J Am Geriatr Soc 2012; 60(5): 896-904.
Shamliyan T, Talley KMC, Ramakrishnan R, Kane RL. Association of frailty with survival: A systematic literature review. Ageing Res Rev 2013; 12(2): 719-36.
Chang SF, Lin PL. Frail phenotype and mortality prediction: A systematic review and meta-analysis of prospective cohort studies. Int J Nurs Stud 2015; 52(8): 1362-74.
Fried LP, Tangen CM, Walston J, et al. Frailty in older adults: Evidence for a phenotype. J Gerontol A Biol Sci Med Sci 2001; 56(3): M146-57.
Mitnitski AB, Mogilner AJ, Rockwood K. Accumulation of deficits as a proxy measure of aging. ScientificWorldJournal 2001; 1: 323-36.
Phipson B. Analysis of time-to-event data including frailty modeling (Doctoral dissertation). South Africa: University of KwaZulu-Natal, Pietermaritzburg 2006.
Hi, I’m John Cobb
I am an NSF postdoctoral fellow with Hal Schenck at Auburn University. I received my PhD from the mathematics department at the University of Wisconsin-Madison in May 2024, where I was advised by both
Daniel Erman and Michael Kemeny. From 2019 to 2022, I was supported by a DoD NDSEG Fellowship. Details are in my CV.
My research interests are primarily within algebraic geometry and commutative algebra. My work involves syzygies, toric varieties, defining equations of curves, and more recently, applications to
algebraic statistics.
Featured Publications
Latest News
• I started an NSF postdoc at Auburn.
• I graduated with a PhD in Mathematics from UW Madison.
• I’ve passed my specialty exam.
• A beer featuring art about algebraic geometry research that I helped design is now being sold in Madison. Check it out.
• I began my first year in the Mathematics PhD program at UW Madison.
Latest Travel
Jul 7–11, '25 SIAM Applied Algebraic Geometry, University of Wisconsin-Madison, Madison, WI
Jun 30–Jul 4, '25 Macaulay2 Workshop, University of Wisconsin-Madison, Madison, WI
May 20–30, '25 Applications of Commutative Algebra, Fields Institute, Toronto, Canada
Apr 12–13, '25 Meeting on Applied Algebraic Geometry 2025, Auburn University, Auburn, AL
Jan 20–24, '25 Apprenticeship Program in Commutative Algebra, Fields Institute, Toronto, Canada
Dec 9–13, '24 Joint meeting of the NZMS, AustMS and AMS, University of Auckland, Auckland, New Zealand
Nov 3–5, '24 Georgia Tech, Atlanta, GA
Oct 11–13, '24 SIAM Conference Texas-Louisiana Section, Baylor University, Waco, TX | {"url":"https://johndcobb.github.io/","timestamp":"2024-11-06T19:49:42Z","content_type":"text/html","content_length":"16270","record_id":"<urn:uuid:1c1a1951-5b9c-42e2-8396-ce175b426560>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00119.warc.gz"} |
Star Strider - MATLAB Central
Last seen: Today |  Active since 2012
Followers: 36 Following: 0
Hic sunt dracones! PROFESSIONAL: Physician (M.D.): Diplomate of the American Board of Internal Medicine; M.Sc. Biomedical Engineering: Instrumentation, Signal Processing, Control, System
Identification, Parameter Estimation NON-PROFESSIONAL: Amateur Extra Class Amateur Radio Operator; Private Pilot, Airplane Single Engine Land, Instrument Rating Airplane; Gamer NOTE: I do not respond
to emails or personal messages, unless they are about my File Exchange contributions. Time Zone: UTC-7 (Standard); UTC-6 (Daylight Savings/Summer)
Community statistics: 0 Questions · 20,426 Answers · 5 Files · 0 Problems · 1 Solution
TGD diary: The puzzle of two different values of Hubble constant
Thanks to Sebastian Cornelis Brink for an interesting link relating to the two values of the Hubble constant. The popular article states that the expansion is 9 per cent faster than expected. This
problem is old, and earlier it was seen as the measurement of two different values of the Hubble constant. The article suggests that the bigger value is now accepted as the correct value. A hype
warning is in order: the refusal to accept the possibility of two different values means only the continuation of the long-lasting fruitless debate.
I can think of two TGD explanations for two different Hubble constants - this is how I see the problem - and it should be time to think this through again.
1. The first TGD explanation coming to mind is based on the many-sheeted space-time that I proposed decades ago. The Hubble constant depends on the space-time sheet; in particular, it depends on the
p-adic length scale assignable to the space-time sheet. Could the measured values of the Hubble constant, which differ by 9 per cent, correspond to different space-time sheets having slightly
different Hubble constants? p-Adic length scales come as half octaves, and different p-adic length scales would suggest a larger difference.
2. Could the length scale dependent cosmological constant predicted by TGD solve the problem? Could it lead to a length scale dependent Hubble constant H, explaining the 9 per cent discrepancy as
reflecting different values of H at long and short distances, or equivalently at different values of cosmological time?
Consider now the second option, which can actually be seen as a more precise formulation of the first one.
1. TGD predicts a length scale dependent cosmological constant and phase transitions inducing accelerated expansion (due to accelerated thickening of monopole flux tubes) as their magnetic energy
transforms to ordinary matter (see this). Eventually the increase of the volume energy stops the accelerated expansion. This fastens the expansion rate temporarily. Inflation and the recent
accelerated expansion would be examples of this kind of jerk replacing smooth cosmological expansion in the TGD Universe. These jerks occur in all scales: even in the scale of Earth (see this).
2. The square H^2 of the Hubble constant is a sum of 3 contributions (see the Friedmann equations).
1. The first term is proportional to the mass density ρ[m] and is given by
8πG ρ[m]/3,
where ρ[m] is the density of matter.
2. The second, "kinematic" contribution is
-k/a^2,
depending on the parameter k = -1, 0, 1 characterizing the 3-curvature of 3-space. For a hyperbolic cosmology expanding forever one has k = -1. The curvature radius a corresponds in TGD to the
light-cone proper time coordinate.
3. The third term,
Λ/3,
corresponds to the cosmological constant. It is positive since the expansion is accelerated. This observation was fatal for superstring theory.
3. What is new is that in TGD Λ is (p-adic) length scale dependent and expected to come as negative powers of 2. Dark energy density is estimated to be 68 per cent of the total, so that this term is
the largest, and the reduction of this term in the formula for H^2 by a factor of, say, 1/4 is expected to have a much larger effect on H^2 than 9 per cent. The value of Λ must be the same for the
measurements giving different values of H, as already noticed.
4. Λ corresponds to the sum of the magnetic and volume energy densities defining the dark energy density, which also has an interpretation as galactic dark matter. It is assignable to monopole flux
tubes. Λ decreases during the accelerated period of expansion since magnetic energy decays to ordinary matter and increases the contribution of ρ[m]. These changes do not, however, cancel each
other. The dark energy is transformed to matter during the acceleration period, but the visible matter participates in the expansion and its density is reduced during expansion. Hence the value of
H^2 should decrease.
5. Could this give rise to a net effect so that the value of H^2 would change during the acceleration period? Since the time of emission of the radiation depends on the distance of the object, the
redshift of the radiation gives information about H at different stages of the accelerated expansion. For long distances the net decrease of H should be larger, so measurements of H from large
distances should give a smaller value of H.
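The sensitivity of H to the dark-energy term can be illustrated with a toy calculation (a back-of-the-envelope sketch, not part of TGD proper): normalize H^2 = 1 today, let the Λ term carry the quoted 68 per cent of it, and see how much H drops if that term is scaled down by an illustrative factor of 3/4.

```python
import math

def hubble_drop(lambda_fraction, scale):
    # Fractional decrease of H when the dark-energy term of
    # H^2 = (matter + curvature terms) + lambda_term is multiplied by
    # `scale`, with the other terms held fixed (toy normalization H^2 = 1).
    h2_new = (1.0 - lambda_fraction) + lambda_fraction * scale
    return 1.0 - math.sqrt(h2_new)

# 68 per cent dark energy is the figure quoted in the text; scaling the
# Lambda term by 3/4 is purely an illustrative choice, not a TGD prediction.
drop = hubble_drop(0.68, 0.75)
print(round(100.0 * drop, 1))  # ~8.9, i.e. a roughly 9 per cent change in H
```

So even a modest rescaling of the Λ contribution moves H by roughly the 9 per cent quoted for the discrepancy, which is the point of the argument above.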
This argument is rough, but the key idea should be clear. The question is whether a length scale dependent cosmological constant could solve the discrepancy. It turns out that the actual model
requires a more detailed consideration of what it is to be a standard candle. In the many-sheeted space-time of TGD also the environment of the standard candle, identified as a monopole flux tube,
matters. For distant standard candles this environment is younger than for nearby ones, and the ageing of the flux tubes, involving the decay of magnetic energy to ordinary matter, would explain why
the nearby flux tubes correspond to a larger value of the Hubble constant.
See the article
About the problem of two Hubble constants
or the chapter
More TGD inspired cosmology | {"url":"https://matpitka.blogspot.com/2019/10/the-puzzle-of-two-different-values-of.html","timestamp":"2024-11-06T04:32:21Z","content_type":"application/xhtml+xml","content_length":"133253","record_id":"<urn:uuid:37863322-bddf-45dc-9196-224f3d6919a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00061.warc.gz"} |
The sum of the first 20 terms of the series 1 + 2/3 + 4/7 + 8/15 + 16/31 + ... | Filo
Question asked by Filo student
The sum of the first 20 terms of the series 1 + 2/3 + 4/7 + 8/15 + 16/31 + ... is?
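If the garbled expression in the question is read as the series 1 + 2/3 + 4/7 + 8/15 + 16/31 + ... (an assumption: numerators 2^(n-1), denominators 2^n - 1, with the fraction bars lost in extraction), the requested sum can be checked numerically. A minimal sketch; the series pattern is inferred, not confirmed by the page:

```python
from fractions import Fraction

def term(n):
    # Assumed n-th term: 2^(n-1) / (2^n - 1), i.e. 1, 2/3, 4/7, 8/15, 16/31, ...
    return Fraction(2 ** (n - 1), 2 ** n - 1)

# Exact sum of the first 20 terms using rational arithmetic.
s = sum(term(n) for n in range(1, 21))
print(float(s))  # ~10.8033
```

Each assumed term equals 1/2 + 1/(2(2^n - 1)), so the sum is 10 plus a rapidly converging correction, consistent with the value just above 10.8.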
Video solutions (1)
3 mins
Uploaded on: 1/21/2024
Question Text The sum of the first 20 terms of the series 1 + 2/3 + 4/7 + 8/15 + 16/31 + ... is?
Updated On Jan 21, 2024
Topic Sequence Series and Quadratic
Subject Mathematics
Class Class 11
Answer Type Video solution: 1
Upvotes 82
Avg. Video Duration 3 min | {"url":"https://askfilo.com/user-question-answers-mathematics/the-sum-of-the-first-20-terms-of-the-serie-is-36373333313437","timestamp":"2024-11-14T18:44:18Z","content_type":"text/html","content_length":"222989","record_id":"<urn:uuid:edc29801-5481-48ee-acb7-b6e2b2ed5718>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00393.warc.gz"} |
A Life Inspired by an Unexpected Genius | Quanta Magazine
For the first 27 years of his life, the mathematician Ken Ono was a screw-up, a disappointment and a failure. At least, that’s how he saw himself. The youngest son of first-generation Japanese
immigrants to the United States, Ono grew up under relentless pressure to achieve academically. His parents set an unusually high bar. Ono’s father, an eminent mathematician who accepted an
invitation from J. Robert Oppenheimer to join the Institute for Advanced Study in Princeton, N.J., expected his son to follow in his footsteps. Ono’s mother, meanwhile, was a quintessential “tiger
parent,” discouraging any interests unrelated to the steady accumulation of scholarly credentials.
This intellectual crucible produced the desired results — Ono studied mathematics and launched a promising academic career — but at great emotional cost. As a teenager, Ono became so desperate to
escape his parents’ expectations that he dropped out of high school. He later earned admission to the University of Chicago but had an apathetic attitude toward his studies, preferring to party with
his fraternity brothers. He eventually discovered a genuine enthusiasm for mathematics, became a professor, and started a family, but fear of failure still weighed so heavily on Ono that he attempted
suicide while attending an academic conference. Only after he joined the Institute for Advanced Study himself did Ono begin to make peace with his upbringing.
Through it all, Ono found inspiration in the story of Srinivasa Ramanujan, a mathematical prodigy born into poverty in late-19th-century colonial India. Ramanujan received very little formal
schooling, yet he still produced thousands of independent mathematical results, some of which — like the Ramanujan theta function, which has found applications in string theory — are still intensely
studied. But despite his genius, Ramanujan’s achievements didn’t come easily. He struggled to gain acceptance from Western mathematicians and dropped out of university twice before dying of illness
at the age of 32.
While Ono, now 48, doesn’t compare himself to Ramanujan in terms of ability, he has built his career in part from Ramanujan’s insights. In 2014, Ono and his collaborators Michael Griffin and Ole
Warnaar published a breakthrough result in algebraic number theory that generalized one of Ramanujan’s own results. Ono’s work, which is based on a pair of equations called the Rogers-Ramanujan
identities, can be used to easily produce algebraic numbers (such as phi, better known as the “golden ratio”).
More recently, Ono served as an associate producer and mathematical consultant for The Man Who Knew Infinity, a recently released feature film about Ramanujan’s life. And his new memoir, My Search
for Ramanujan: How I Learned to Count (co-authored with Amir D. Aczel), draws connections between Ramanujan’s life and Ono’s own circuitous path to mathematical and emotional fulfillment. “I wrote
this book to show off my weaknesses, to show off my struggles,” Ono said. “People who are successful in their careers were not always successful from day one.”
Like Ramanujan, who benefited from years of mentorship by the British mathematician G.H. Hardy, Ono credits his own success to serendipitous encounters with teachers who helped his talents flourish.
He now spends a great deal of time mentoring his own students at Emory University. Ono has also helped launch the Spirit of Ramanujan Math Talent Initiative, a venture that “strives to find
undiscovered mathematicians around the world and match them with advancement opportunities in the field.”
Quanta Magazine spoke with Ono about finding his way as a mathematician and a mentor, and about Ramanujan’s inspiring brand of creativity. An edited and condensed version of the interview follows.
QUANTA MAGAZINE: What was so special about Ramanujan’s approach to doing mathematics?
KEN ONO: First, he was really a poet, not a problem solver. Most professional mathematicians, whether they’re in academia or industry, have problems that they’re aiming to solve. Somebody wants to
prove the Riemann hypothesis, and sets out to do it. That’s how we think science should proceed, and in fact almost every scientist should work that way, because in reality science develops through
the work of thousands of individuals slowly adding to a body of knowledge. But what you find in Ramanujan’s original notebooks is just formula after formula, and it’s not apparent where he’s going
with his ideas. He was someone who could set down the paths of beginnings of important theories without knowing for sure why we would care about them as mathematicians of the future.
He’s credited with compiling thousands of identities — that is, equations that are true regardless of what values the variables take. Why is that important?
It is true that the vast majority of the contents of his notebooks are what you would call identities. Identities that relate continued fractions to other functions, expressions for integrals,
expressions for hypergeometric functions, and expressions for objects that we call q-series.
But that would be a literal interpretation of his notebooks. In my opinion, that would be like taking a cookbook by Julia Child, reading the recipes and saying that it’s about assembling chemical
compounds into something more complicated. Strictly speaking that would be true, but you would be missing out on what makes delicious recipes so important to us.
Ramanujan’s work came through flights of fancy. If he had been asked to explain why he did his work, he would probably say that he recorded formulas that he found beautiful, and they were beautiful
because they revealed some unexpected phenomenon. And they’re important to us today because these special phenomena that Ramanujan identified, over and over again, have ended up becoming prototypes
for big mathematical theories in the 20th and 21st centuries.
Here’s an example. In one of his published manuscripts, Ramanujan recorded a lot of elementary-looking results called congruences. In the 1960s, Jean-Pierre Serre, himself a Fields medalist,
revisited some of these results, and in them he found glimpses of a theory that he named the theory of Galois representations. This theory of Galois representations is the language that Andrew Wiles
used in the 1990s to prove Fermat’s last theorem.
There’s no “theory of Ramanujan,” but he anticipated mathematical structures that would be important to all of these other more contemporary works. He lived 80 years before his time.
How do you approach your own mathematical work — more as an artist, like Ramanujan, or with the aim of solving specific problems, like a scientist?
I’m definitely much more of a scientist. Science proceeds at a much faster rate than when I started in my career in the early 1990s, and I have to stop often to recognize the beauty in it and try not
to be so caught up in the more professionalized side of how one does science. The grant getting, the publications, and all of that — I have to admit, I don’t like it.
What compelled you to juxtapose your own story with his?
Well, I almost didn’t write it. There are a lot of very personal things that I’ve never told anyone before. It wasn’t until I started writing this book that I was mature enough as a parent myself to
try to understand the circumstances that led my parents to raise us the way they did. And as a professor at Emory, I see all these kids under tremendous pressure — rarely pressure that they
understand the origin of. So many of these super-talented kids are just going through the motions, and aren’t passionate about their studies at all, and that’s terrible. I was like that too. I’d
given up on ever trying to live up to my parents’ expectations, but somehow because I’ve had Ramanujan as a guardian angel, things have worked out well for me. It makes you a better teacher when you
just tell people how hard it was for you.
This book and your story don’t fit the typical “great man of science” narrative.
I think you’ll find that’s much more common than people are willing to admit. I didn’t discover my passion for mathematics until my early 20s — that’s when [my doctoral adviser Basil] Gordon turned
me on to mathematics at a time when I didn’t think anything was beautiful. I thought it was all about test scores, grades and trying to do as well as possible without putting in effort. Colleges are
full of kids who think that way. How do you beat the system, right? I wasn’t beating the system. The system was beating me, and Gordon turned me around. When I’ve told people the story I’ve
discovered that I’m really not alone.
That’s what I see in Ramanujan. He was a two-time college dropout whom my father looked up to as a hero — which made no sense to me when I was 16, because I was told I had to be a child prodigy. I
was supposed to do my geometry problems during the summer sitting next to my dad while he did his research. I wasn’t even really allowed to go out and play, and then to just have my father tell me
about Ramanujan out of the blue — it was beyond earth-shattering.
If you’d been interested in something conventionally “artistic,” like music, this kind of painful journey toward success would not seem so surprising. Why does it surprise us to hear about a
mathematician having the same struggles?
For whatever reason, we live in a culture where we think that the abilities of our best scientists and our best mathematicians are somehow just God-given. That either you have this gift or you don’t,
and it’s not related to help, to hard work, to luck. I think that’s part of the reason why, when we try to talk about mathematics to the public, so many people just immediately respond by saying,
“Well, I was never very good at math. So I’m not really supposed to understand it or identify with it.” I might have had some mathematical talent passed through my father genetically, but that was by
no means enough. You have to be passionate about a subject.
At the same time, I want it to be known that it’s totally OK to fail. In fact, you learn from your mistakes. We learn early on if that you want to be good at playing the violin, you’ve got to
practice. If you want to be good at sports, you practice. But for some crazy reason, our culture assumes that if you’re good at math, you were just born with it, and that’s it. But you can be so good
at math in so many different ways. I failed my [graduate-school] algebra qualifications! That doesn’t mean I can’t end up being a successful mathematician. But when I tell people I failed at this,
nobody believes me.
But Ramanujan seems to be just that: a unique genius who appeared out of nowhere. What does that have to do with a regular person’s life?
You think no one can be like Ramanujan? Well, I disagree. I think we can search the world looking for a mathematical talent, just not by the usual metrics. I want teachers and parents to recognize
that when you do see unusual talent, instead of demanding that these people have certain test scores, let’s find a way to help nurture them. Because I think humanity needs it. I think these are the
lessons we learn from Ramanujan.
You’re leading the Spirit of Ramanujan Math Talent Initiative. What is this spirit? How do we recognize it?
First of all, it’s the idea that talent is often found in the most unforgiving and unpromising of circumstances. It’s the responsibility of mentors, teachers and parents first to recognize that
talent, which is not always easy to do, and then to offer opportunities that nurture that talent.
There are no age limits, and I don’t want this to be a competition where you’re recognized for high test scores. I have no trouble finding people who can get an 800 on the math SAT. That’s easy.
Those people don’t need to be identified. They’ve already self-identified. I’m searching for creativity.
That said, the spirit of Ramanujan does not require finding the next Ramanujan. We would be super lucky to do that, but if we make opportunities for 30 talented people around the world who are
presently working in an intellectual desert, or are subjected to inelastic educational systems where they’re not allowed to flourish — or if we can provide an opportunity for someone to work with a
scientist who could be their G.H. Hardy — then this initiative will be successful.
Do you wish you had been nurtured differently? Do you resent your parents?
I love my parents. We discussed the draft of the book for months last summer. They were very upset with me at first, because it was difficult for them to get past the first 30 pages, but now they
embrace it. One reviewer actually saw the book as a love letter to my parents and to my mentors, because they taught me skills I needed.
If you had never joined the Institute for Advanced Study, would you still be struggling to reconcile your own path with your parents’ expectations?
I think I would still be searching for that recognition today if I hadn’t gotten there.
Both my parents will tell you that you only get to live once, so you might as well be the very, very best that you can be at whatever you choose. Which I don’t necessarily agree with, because if
everyone lived that way, there would be nothing but a whole bunch of unhappy people in the world. But that’s how they brought us up. They taught me to be competitive. They taught me not to falsely
believe I had done well when I hadn’t. They taught me standards, and those are important. But it’s true that if I hadn’t had the opportunity to work at the Institute, I’m not sure I would have been
able to write this book. I might still be struggling with these things.
This article was reprinted on ScientificAmerican.com.
On the Stamina of Mikasa's Citadel. The Indestructible Slopes of the Japanese Battleship
Many thanks to everyone who took part in the discussion of my previous material, dedicated to the resistance of the Mikasa's defense against the domestic 12-inch armor-piercing shell. I express special gratitude for suggestions and constructive criticism to the respected Alexey Rytik and to the commentator writing under the nickname Yura27.
The comments prompted me to reconsider my approach to modeling the penetration of Japanese armor by Russian shells. Below I present the results of the revised methodology, using the destruction of the citadel of the battleship "Mikasa" as an example.
Citadel - area of engine and boiler rooms
In this area, the citadel was protected by a 222 mm armor belt, coal pits, and a slope consisting of three steel sheets with a total thickness of 76,2 mm. Earlier, I calculated the resistance of the Mikasa's power-plant protection on the assumption that a shell had to penetrate the 222 mm Krupp plate while retaining a velocity of 300 m/s, which it would then need to pass through the coal in the pit and penetrate the slope. In that calculation, I assumed that the slope was set at the same angle as on the Asahi, i.e. 30 degrees.
In fact, the slope is inclined not at 30 but at 35 degrees to the horizontal.
Accordingly, a projectile flying parallel to the deck, when hitting the slope, will have a deviation from the normal of not 60, but 55 degrees.
In addition, I mistakenly used the standard de Marr formula for the calculation, which is incorrect in this case, since it is intended for calculations on cemented armor thicker than 75 mm. For
homogeneous armor, a slightly different formula should be used.
The Mikasa's bevel consisted of three steel sheets, each an inch thick. I calculated its resistance as the resistance of a "layered" barrier, in which the projectile penetrates each of the specified
sheets successively, and this turned out to be correct. However, an error in the formula used and an incorrect bevel angle led to a large error in the calculations.
Earlier I determined that, to penetrate the slope of the flagship of the United Fleet, a 12-inch, 331,7-kg projectile was thought to need a velocity of up to 168 m/s, whereas in fact no more than 116 m/s is required.
At the same time, the Berezan formula can be used to estimate the velocity a projectile loses in passing through the contents of the coal pits. Unfortunately, like the de Marr formula, it is empirical, and the accuracy of the calculation depends directly on a correctly chosen coefficient Kp, which characterizes the resistance of a particular type of obstacle to a projectile. However, I was unable to find the value of this coefficient for coal.
The point is that the Berezan formula is used to determine the penetration of land-artillery projectiles into the ground, which is why its appendices list various types of soil, sand, limestone, brickwork and other materials that field-artillery shells mostly encounter. For obvious reasons, coal is not among them.
Nevertheless, Kp for coal can be set, albeit very conditionally, at around 0,04 - that is, slightly more resistant than compacted sand, and twice as resistant as brickwork. This is, of course, a very rough estimate, which may be incorrect; however, it should still be more accurate than the "300 m/s behind the armor plate" assumption I adopted earlier.
Of course, in addition to the "resistance coefficient" of coal, one should know the distance that a projectile will travel in a coal embankment. Considering that the main armor belt of the "Mikasa"
only slightly rose above the water, one should consider hits on the upper part of the 222-mm armor plates - here the distance to the slope was about 2,5-3 m.
At the same time, after breaking through the slope, the shell would end up not in the next coal pit but in the corridor along which ammunition was transported to the 6-inch and 75-mm guns. The fragments of a Russian shell exploding here, having pierced the relatively thin walls of the corridor, could easily disable steam engines or boilers and damage steam pipes and funnels. With luck, the ammunition being moved along the corridor could detonate, increasing the effect on the engine or boiler room opposite the hit.
In general, the calculation (K for Krupp armor – 2 275, for steel – 1 000) gives the following figures. The velocity required for a domestic 12-inch, 331,7-kg projectile to overcome:
1) 222 mm armor plates of the main armor belt at a deviation from the normal of 0 degrees - 504 m/s;
2) 2,5 (3) m of coal – 175 (210) m/s;
3) a bevel of three steel plates, each 25,4 mm thick, with a deviation from the normal of 55 degrees – 116 m/s.
To overcome all three obstacles "at the limit", the projectile would have to strike the 222 mm armor at 545-558 m/s (the component velocities combine as the square root of the sum of their squares). Under ideal conditions, therefore, the projectile could reach the Mikasa's power plant from about 23-24 cables.
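The arithmetic above can be cross-checked with a short sketch. The classical de Marre relation V = K · b^0.7 · d^0.75 / √m (b and d in decimetres) with K = 2 275 reproduces the 504 m/s for the 222 mm plate, and the component velocities combine as the root of the sum of squares; the coal (175-210 m/s) and slope (116 m/s) figures are taken from the text as given. The Python below is only an illustrative sketch of this arithmetic, not the exact method used for the article.

```python
import math

def de_marre_velocity(thickness_mm, caliber_mm, mass_kg, k, obliquity_deg=0.0):
    """Striking velocity (m/s) required to defeat a plate, using the
    classical de Marre form V = K * b**0.7 * d**0.75 / sqrt(m),
    with plate thickness b and caliber d in decimetres; obliquity is
    handled with a simple 1/cos factor."""
    b = thickness_mm / 100.0
    d = caliber_mm / 100.0
    v_normal = k * b ** 0.7 * d ** 0.75 / math.sqrt(mass_kg)
    return v_normal / math.cos(math.radians(obliquity_deg))

def combined_velocity(*components):
    """Velocities needed for successive obstacles combine as the
    square root of the sum of their squares."""
    return math.sqrt(sum(v * v for v in components))

# 222 mm Krupp plate, 12-inch (304.8 mm) 331.7 kg shell, K = 2275
v_belt = de_marre_velocity(222, 304.8, 331.7, k=2275)      # ~504 m/s

# coal-pit (175-210 m/s) and slope (116 m/s) figures taken from the text
v_total_low = combined_velocity(v_belt, 175, 116)          # ~546 m/s
v_total_high = combined_velocity(v_belt, 210, 116)         # ~558 m/s
```

The combined 546-558 m/s matches the article's 545-558 m/s window to rounding.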
If the projectile hit a 222 mm armor plate with a deviation from the normal of 25 degrees, then the following would happen to it. When passing the armor belt, the projectile would normalize, turning
approximately 19 degrees, which follows from the diagram given by Professor L. G. Goncharov in his book "Course of Naval Tactics. Artillery and Armor". The esteemed reader should pay attention to the
leftmost curve: along the Y axis there is the deviation from the normal with which the projectile hits the armor, and along the X axis there are degrees of rotation of the projectile in the plate.
In the discussion of my previous material, the opinion was expressed that this diagram is not applicable to shells from the Russo-Japanese War era, since it was compiled for shells equipped with an
armor-piercing tip, which was not present on the armor-piercing 12-inch shells of the Russian fleet during the Russo-Japanese War.
However, I am inclined to consider this opinion to be erroneous. L. G. Goncharov in his work gives an example of a solution to the problem of a projectile overcoming spaced armor, which takes into
account the normalization of the projectile when passing both the 1st obstacle, which is helped by the armor-piercing tip, and the 2nd, which the projectile reaches without any tip.
Accordingly, the calculation assumes that the normalization of a 12-inch projectile penetrating a 222-mm plate will be about 18,5-19 degrees, so a projectile that entered the plate with a deviation from the normal of 25 degrees will exit it with a deviation of 6-6,5 degrees. This will slightly lengthen its path in the coal pit (by 1,3-1,6 cm) and slightly change the deviation from the normal on reaching the slope (55,22 degrees instead of 55).
All of the above will lead to the fact that in order to overcome the protection of the engine and boiler rooms of the Mikasa, a 12-inch projectile will need a speed of 595–606 m/s, which
approximately corresponds to a distance of 18–19 cables.
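A quick geometric sketch, assuming the residual 6-6,5-degree deviation lies in the horizontal plane, reproduces both small corrections mentioned above: the extra ~1,5 cm of coal path and the ≈55,2-degree obliquity on the slope. This is my own reconstruction of the geometry, not the article's working.

```python
import math

def bevel_obliquity_deg(bevel_from_horizontal_deg, plan_offset_deg):
    """Angle between a horizontal trajectory and the normal of a slope
    inclined bevel_from_horizontal_deg to the horizontal, when the
    trajectory is offset plan_offset_deg (in plan) from the
    perpendicular to the ship's side."""
    cos_theta = (math.cos(math.radians(plan_offset_deg))
                 * math.sin(math.radians(bevel_from_horizontal_deg)))
    return math.degrees(math.acos(cos_theta))

# head-on, a 35-degree slope is met at 55 degrees from the normal
head_on = bevel_obliquity_deg(35.0, 0.0)                   # 55.0

# after normalization in the 222 mm plate the shell exits roughly
# 6-6.5 degrees off the perpendicular; take the midpoint
offset = 6.25
oblique = bevel_obliquity_deg(35.0, offset)                # ~55.2

# extra path through a 2.5 m coal pit: about 1.5 cm
extra_coal_path_m = 2.5 / math.cos(math.radians(offset)) - 2.5
```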
Citadel - areas outside the power plant
The 222 mm thick section of Mikasa's main armour belt was longer than the boiler and engine rooms, and continued forward and aft of them. These sections lacked additional protection in the form of
coal pits, but the slope was reinforced with an additional armour plate one and a half inches thick, or 38,1 mm.
Thus, in this section the bevel consisted of three sheets of steel and one sheet of armour, with a total thickness of 4,5 inches, or 114,3 mm.
Having made a calculation using a method similar to that used earlier, we find that such protection could be penetrated by a 12-inch domestic armor-piercing projectile at distances of 21–27 cables
with a deviation from the normal of 25 and 0 degrees, respectively. Consequently, it can be stated that the appearance of a one-and-a-half-inch armor plate on the slope did not compensate for the
absence of coal pits.
Further toward the ends, things were even worse for the Mikasa, since beyond the 222 mm section only 173 mm armor extended to the bow and stern. Such protection could be penetrated at a distance of 31-37 cables with a deviation from the normal of 25 and 0 degrees, respectively.
The calculations performed show that the Mikasa's citadel outside the boiler and engine rooms was significantly less well protected than the central part. The reasons why British shipbuilders left
such "windows" in the defense, especially opposite the ammunition magazines for the main caliber guns, are completely unknown to me, but such a practice was maintained even on the battlecruisers of
the First World War.
I tried to guess and made the assumption that the British built their defense against shells flying perpendicular to the center plane of the ship. In this case, the shells will hit the armor plates
in the center of the hull almost without deviation from the normal, but the armor plates located closer to the bow/stern will be located at an angle determined by the contours of the hull.
However, an attempt to measure these angles on the Mikasa, and the calculations based on them, show that even with this approach, equal resistance of the various sections of the citadel is still not achieved.
But there are still some nuances.
Nuance No. 1 – the distance an armor-piercing projectile travels before exploding
As was said earlier, to destroy the citadel in the area of the engine and boiler rooms, a 12-inch shell must penetrate the armor belt plate, pass through the coal pit and the slope. Having overcome
all this, the shell will only have to overcome some very light structures (apparently, structural steel 8-12,7 mm thick, which I ignored in the calculation, due to their obvious insignificance),
after which it will end up in the corridor for transporting ammunition to medium-caliber artillery.
If the projectile passes the slope without changing its direction, its path will certainly lie in the ammunition transportation corridor. But if the slope still manages to normalize the projectile (according to the diagram, it will change direction by only about 13 degrees), the projectile's passage into the coal pit is hardly possible.
Accordingly, the shell fragments will only have to overcome the thin bulkhead and then hit the contents of the engine or boiler room, which they will be quite capable of doing.
Consequently, the explosion of a Russian shell immediately after passing the slope within the power plant gives it a good chance of reaching its target (in this case, damaging the Mikasa's engines or
boilers). But this cannot be said about shells penetrating the citadel outside the boiler and engine rooms. If such a shell explodes immediately behind the slope, the fragments will have to penetrate
several bulkheads and then the ammunition supply trunk. High-explosive Russian shells were quite capable of this, but for armor-piercing ones it is questionable.
In view of the above, in my opinion, the scenario of successful target destruction for shells hitting the citadel outside the power plant should be changed - the shell explosion should occur at least
6 meters behind the armor plate. Accordingly, the shell after overcoming the slope should have enough energy to pass a couple of bulkheads, possibly - to penetrate some mechanisms and at the same
time maintain sufficient speed to pass the above-mentioned 6 meters before the fuse is triggered.
However, calculations performed according to this scenario show that the required increase in the projectile velocity on the armor plate is no more than 10–15 m/sec, which results in a reduction in
distance of at most 1,5–2,5 cables.
Therefore, even taking into account the above considerations, the penetration of the Mikasa citadel for 12-inch armor-piercing shells at a deviation from the normal of 25 and 0 degrees will be:
For a section of 222 mm + coal pit + 76,2 mm bevel – 18–23 cables (unchanged).
For a section of 222 mm + bevel of 114,3 mm – 19–25 cables.
For a section of 173 mm + bevel of 114,3 mm – 29–35 cables.
Nuance No. 2 – rebound
Here we need to return to the diagram by L. G. Goncharov that I cited above. This time we should look not at the left but at the rightmost curve. Its essence is very simple: along the Y axis is the projectile's deviation from the normal when it hits the armor, and along the X axis is the maximum armor thickness (in calibers) that the projectile can penetrate at all at that deviation.
How does it work?
Let's look at this with an example.
Let's assume that our projectile hits the 173 mm armor plate of the Mikasa citadel from a distance of 20 cables, and the angle of deviation from the normal is equal to the angle of incidence of the
projectile (something around 2,46 degrees). We look at the leftmost curve of the diagram and see that the plate completely normalizes such angles. Therefore, the projectile, having penetrated the 173
mm plate, will exit it with a deviation from the normal of 0. This means that it will reach the slope, moving parallel to the water surface, therefore, the deviation from the normal upon hitting the
slope will be 55 degrees.
Now we look at the curve on the far right and see that with such a deviation from the normal, the projectile is capable of penetrating armor with a thickness of approximately 0,363 of its caliber.
Since we are considering a 12-inch shell, its caliber will be 304,8 mm, and the thickness of the armor penetrated will be 111 mm. But the slope of the Japanese battleship was 114,3 mm!
At the same time, L. G. Goncharov points out that:
Thus, it turns out that the above calculations of the vulnerability of the citadel in areas protected by 114,3 mm thick slopes do not make sense, since shells hitting them should not penetrate such a
slope, but ricochet off it.
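The ricochet check above amounts to comparing the bevel thickness with the diagram's limit. Sketched in Python, with the 0,363 caliber ratio read off the diagram for 55 degrees:

```python
# Goncharov's right-hand curve gives, for each obliquity, a maximum
# penetrable thickness in calibers that no increase in velocity can beat.
CALIBER_MM = 304.8        # 12-inch shell
RATIO_AT_55_DEG = 0.363   # read off the diagram for 55 degrees

max_penetrable_mm = RATIO_AT_55_DEG * CALIBER_MM   # ~110.6 mm
bevel_mm = 114.3

# the bevel is thicker than the limit, so the diagram predicts a ricochet
will_ricochet = bevel_mm > max_penetrable_mm
```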
Of course, a weighty objection can be raised against this thesis.
The fact is that the Japanese bevel had a total thickness of 114,3 mm, but it was not monolithic: it consisted of four layers - three of steel and one of armor. Obviously, if a monolithic armor plate of equal resistance had been used instead of this sandwich, its thickness would have been noticeably less than both the 114,3 mm of the bevel and the 111 mm of armor that a 12-inch projectile can still penetrate at a deviation from the normal of 55 degrees. That is, if we count not the actual but the equivalent thickness of the armor, the Russian projectile penetrates the bevel completely, and L. G. Goncharov's provisions on ricochet do not apply to it.
But there is a counterargument to this objection. The fact is that the diagram of L. G. Goncharov is used for all types of armor, both cemented and homogeneous. It is quite obvious that homogeneous
armor will be much inferior to cemented armor in terms of resistance with a relatively small deviation from the normal. However, this factor is ignored by Professor L. G. Goncharov - his curves are
used for all types of armor.
This means that if the angle at which the projectile meets the plate is close to the maximum at which penetration is possible at all, then the quality of the armor affects not the thickness that can be penetrated, but only the projectile velocity required to penetrate it. This thesis is not easy to grasp, so I will explain it with an example.
In the diagram we see that at a deviation from the normal of approximately 26 degrees, the projectile is capable of penetrating armor with a thickness equal to its caliber.
That is, a 12-inch projectile is capable of penetrating, at most, a 304,8-mm armor plate. Obviously, it will only penetrate it if it strikes at a certain velocity: for Krupp armor, with "K" = 2 275, this velocity is 699,5 m/s. But even if we increased the projectile velocity to 750, 800 or even 900 m/s, this would not allow it to penetrate armor more than 304,8 mm thick - that is the maximum thickness that can be penetrated at a deviation from the normal of 26 degrees by a 12-inch projectile, and a further increase in velocity does not increase the thickness penetrated at this angle.
So, if we take ordinary homogeneous armor with "K" = 1 100 instead of Krupp cemented armor, then the 304,8 mm armor plate with the same deviation of the projectile trajectory from the normal of 26
degrees will be penetrated already at a projectile velocity of 338 m/s. But if we increase this velocity to 699,5 m/s, at which Krupp cemented armor is penetrated under these conditions, or even
more, we still will not be able to penetrate homogeneous armor thicker than 304,8 mm.
This is the essence of L. G. Goncharov's diagram, it shows that there is a relationship between the angles of deviation from the normal and the thickness of the armor being penetrated, and it is not
affected by the speed of the projectile on the armor (and therefore the durability of the armor). L. G. Goncharov himself speaks about this.
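The two worked examples can be reproduced numerically with the same de Marre sketch (the K values of 2 275 and 1 100 are taken from the text; the simple 1/cos obliquity factor is my simplification, not Goncharov's actual curves):

```python
import math

def de_marre_velocity(thickness_mm, caliber_mm, mass_kg, k, obliquity_deg=0.0):
    """Classical de Marre form V = K * b**0.7 * d**0.75 / sqrt(m),
    b and d in decimetres, with a simple 1/cos obliquity factor."""
    b = thickness_mm / 100.0
    d = caliber_mm / 100.0
    v = k * b ** 0.7 * d ** 0.75 / math.sqrt(mass_kg)
    return v / math.cos(math.radians(obliquity_deg))

# caliber-thick (304.8 mm) plate, 12-inch 331.7 kg shell, 26 deg off normal
v_krupp = de_marre_velocity(304.8, 304.8, 331.7, k=2275, obliquity_deg=26)
v_homogeneous = de_marre_velocity(304.8, 304.8, 331.7, k=1100, obliquity_deg=26)

# softer armor lowers the REQUIRED velocity (~338 vs ~699.5 m/s), but the
# diagram's thickness ceiling at 26 degrees stays at one caliber either way
```

This shows Goncharov's point in numbers: plate quality moves the required velocity, while the maximum penetrable thickness at a given obliquity stays fixed.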
Due to the above, the 114,3 mm bevel of the Mikasa cannot be penetrated at practically any reasonable combat distances for the Russo-Japanese War. Because a 12-inch shell, no matter what speed it has
when it contacts the bevel, should not penetrate, but ricochet off it.
Of course, when 331,7 kg pieces of steel start flying in the air, anything is possible. As I have shown many times before, armor penetration formulas are strictly probabilistic. It is quite possible
that the 114,3 mm slope of a Japanese battleship will still be penetrated - even if according to the formulas and graphs it seems impossible. But the probability of such an outcome should be assessed
as minimal - that is, with several hits on the slope, maybe one shell will not ricochet, but will penetrate it.
As always, I am ready to discuss the theses I have expressed above and would be very happy to hear constructive criticism from readers interested in the topic.
And – I’ll allow myself a little intrigue.
Regardless of whether my thesis about 114,3 mm bevels is correct or not, in the course of working on this article I came to very surprising and very different from generally accepted views
conclusions about the armor systems of squadron battleships of the Russo-Japanese War. Which I will share in the next article, which I am currently working on.
To be continued ...
12 September 2024 06:48
Almost everything is wrong, except for the penetration distances. As a result of a combination of incorrect assumptions and calculations, the penetration distances were correct (almost).
At the same time, after breaking through the slope, the projectile did not fall into the next coal pit, but into the corridor
The Mikasa's midship section is not the same as the Asahi's, i.e. the coal thickness above the slope is minimal (equal to zero, in the best case for the Japanese), and behind the slope it is maximal. The ammunition supply corridor on the Mikasa is located behind the internal bulkhead of the coal pits.
2. According to Goncharov, it is a projectile with a flat armor-piercing tip that is turned (normalized). Which, by the way, is not confirmed by modern computer modeling. There is no point in talking about any normalization of Russo-Japanese War projectiles at all.
3. A projectile without an armor-piercing tip is denormalized, not the other way around. If you don't believe me, see the computer modeling.
4. L. G. Goncharov is either mistaken (I don't remember offhand), or he gives an example with thin armor plates, which assumes that the armor-piercing tip is not destroyed when penetrating the first relatively thin plate.
5. L. G. Goncharov's diagram "on ricochets" is applicable only to monolithic armor sheets, and not to "sandwiches", and even those consisting mostly of ordinary shipbuilding steel. Therefore,
for a ricochet, a ricochet angle of encounter is needed - this is more than 63 degrees from the normal.
Taking into account the angles of incidence at distances of 17-30 kbt, such an angle, relative to the bevel, cannot be obtained, even with a course angle of 60 degrees.
Thus, the 114,3 mm bevel of the Mikasa is essentially "cardboard" for a 12" 331,7 kg projectile.
1. Quote: Jura 27
The Mikasa's midship section is not the same as the Asahi's, i.e. the thickness of the coal above the slope is minimal
Well, if you have a diagram, let's look at it. I don't mind recalculating.
Quote: Jura 27
L. G. Goncharov is either mistaken (I don't remember offhand), or he gives an example with thin armor plates, which assumes that the armor-piercing tip is not destroyed when penetrating the first relatively thin plate.
Read the example given by Goncharov, which begins on page 136. The case of firing a 381 mm shell at spaced protection - two vertical plates of 225 mm and 75 mm. On page 139, the
calculation of the shell's exit from a 75 mm plate is given - taking into account normalization.
Quote: Jura 27
L. G. Goncharov's diagram "on ricochets" is applicable only to monolithic armor sheets, and not to "sandwiches", which here consist mostly of ordinary shipbuilding steel.
The sandwich only affects the plate's durability. And its diagram ignores it, since it is the same for armor of different durability - both for cemented and homogeneous. It is clear that
durability can only be ignored to certain limits (a shell is unlikely to be repelled by a pack of office paper), but in general, homogeneous armor is not much superior to steel.
Quote: Jura 27
A projectile without an armor-piercing tip is denormalized, not the other way around. If you don't believe me, see the computer modeling.
Computer modeling depends on who does the modeling, so without specifics it is not an argument. By the way, the very first video about armor penetration by a modern subcaliber round showed the presence of normalization
The sandwich only affects the durability of the slab.
That's the point, the durability of a monolith is one (relative to the penetration speed), and the durability of a sandwich is equal to the square root of the sum of the squares of
the penetration speeds. That is, the Goncharov diagram is not applicable to a sandwich. Another thing, if there was a ricochet angle, then we could talk about a ricochet.
1. The point is that the durability of a monolith is one (relative to the penetration speed), and the durability of a sandwich is equal
That's the point, this diagram doesn't care about durability. If the thickness of the armor being penetrated depended on durability, it would be impossible in principle, since it
would have to be done separately for homogeneous and cemented armor, and also different depending on durability - one for armor with K 2275, another for K 2000, etc.
Quote: Andrey from Chelyabinsk
The point is that the durability of a monolith is one (relative to the penetration speed), and the durability of a sandwich is equal
That's the point, this diagram doesn't care about durability. If the thickness of the armor being penetrated depended on durability, it would be impossible in principle, since
it would have to be done separately for homogeneous and cemented armor, and also different depending on durability - one for armor with K 2275, another for K 2000, etc.
In another manual, there is a difference. Ask Alexey, he will send it to you.
But again, all this has nothing to do with the armor resistance of Mikasa's defense. There are different armor and shells everywhere.
1. Quote: Jura 27
There is different armor and shells everywhere.
Both the shells (I answered in another comment) and the armor (Goncharov's is slightly improved Krupp).
But your source is a YouTube link - yes, there are different shells and different armor there. For some reason that doesn't bother you at all.
There is different armor and shells everywhere.
In the modeling there is thick armor and "thick" shells - denormalization is everywhere.
Computer modeling depends on who does the modeling, so without specifics it is not an argument. By the way, the very first video about armor penetration by a modern subcaliber round showed the presence of normalization
Well, what does a modern sub-caliber have to do with it?
Here is a comparison of a capped and uncapped projectile:
And https://www.youtube.com/@dejmianxyzsimulations4174 has quite a few videos (https://www.youtube.com/watch?v=zZJ-vF4c55Y), there are relatively large armor thicknesses and large
caliber shells. Just don't watch the ones about modern "crowbars".
1. Sorry, but I trust a professor of the Naval Academy a little more than an anonymous person on YouTube. That's one; and secondly, if you trust your source, then:
1) A shell that hits 222 mm of armor is denormalized, and there can be no talk of any 18-23 cables of penetration of the citadel. However, you do not object to 18-23 cables
2) The bevel becomes completely indestructible. And you say it breaks through easily;)
Sorry, but I trust a professor of the Naval Academy a little more than an anonymous person on YouTube.
In light of modern computer modeling, "Comrade Beria has fallen out of favor." It looks too fantastic: a projectile hitting at 40 degrees from the normal to the armor turns almost 20 degrees and exits the armor at 20 degrees from the normal. Even before, this seemed unrealistic to me.
I've uploaded a diagram of Mikasa's midship section above. At the top is the midship section with a plan of the armored deck; below is a horizontal section at the level of the ammunition supply corridors running from the ends.
It is clear that the two lower directions (2 and 4 degrees) have the greatest resistance: there are about 2 m of coal in their path (taking into account the natural slope of 45 degrees; a little less if the coal supply pipe is on the flat part of the armored deck).
In the 3-degree direction there is the least coal: a little on the slope and a little behind it, considering that there is not much coal at the very top of the pit and that its level will most likely be lower due to consumption before the battle.
1. Quote: Jura 27
It is evident that the two lower directions (2 and 4 degrees) have the greatest resistance; there are about 2 m of coal on their path (taking into account the natural
slope of 45 degrees,
I can't agree. According to your diagram, there are 3 meters between the side and the bevel along the upper edge of the 222 mm plate. I agree that there will be a space (at the waterline) where it will be less than 2.5 m, but in general 2.5 m can safely be taken as the average.
Quote: Jura 27
there are about 2 m of coal on their way (taking into account the natural slope of 45 degrees,
Where did the slope come from? Everything from the slope to the armor plate will be covered.
I can't agree. According to your diagram, there are 3 meters between the side and the bevel along the upper edge of the 222 mm plate. I agree that there will be a space (at the waterline) where it will be less than 2.5 m, but in general 2.5 m can safely be taken as the average.
I was talking about the thickness of the coal. In the upper direction the coal is minimal, because the side corridor holds no coal, only the double side, the wooden lining and the armor.
On the slope there is only less than half a meter of coal from the pit that sits above the slope. Finding coal under the slope is unlikely, because that is the very top of the coal pit.
But in the two lower directions, the thickness of the coal can be taken as two meters (with a margin, in favor of the Japanese).
The slope is the natural slope formed when coal is poured through the feed pipe into the coal pit. The angle of repose of bulk coal is 55 degrees from the horizontal, but 45 degrees can be taken, again in favor of the Japanese protection. This is not shown in the drawing.
1. Quote: Jura 27
The slope is the natural slope when pouring coal through the coal feed pipe into the coal pit.
There is no such thing:)))))))
Yura, Mikasa's total coal supply is 1722 tons. The bulk density of Cardiff coal is unknown to me, but for hard coal it is about 0.85, that is, about 2026 cubic meters completely filled with coal were required. The length of the engine and boiler rooms opposite which the coal pits were located is about 40 meters; accordingly, each side had to be filled with coal of an average width of 3 meters: 2026 / 2 / 40 / 3 gives a total coal pit height of about 8.4 meters.
This is despite the fact that I can’t imagine why you think that it was poured in without being scattered around the pit)
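The arithmetic in the comment above can be checked in a couple of lines. Note that the 0.85 t/m³ bulk density and the 40 m by 3 m pit footprint are the commenter's own assumptions, and that the height comes out closer to 8.4 m than to a round 8:

```python
# Rough check of the coal-pit volume arithmetic from the comment above.
total_coal_t = 1722          # Mikasa's full coal supply, tons (per the comment)
bulk_density_t_m3 = 0.85     # assumed bulk density of hard coal, t/m3

volume_m3 = total_coal_t / bulk_density_t_m3   # volume needed if pits are packed full

# Pits on both sides of the engine/boiler rooms: ~40 m long, ~3 m wide (assumed)
sides, length_m, width_m = 2, 40.0, 3.0
height_m = volume_m3 / (sides * length_m * width_m)

print(round(volume_m3), "m3,", round(height_m, 1), "m total pit height")
# ~2026 m3 and ~8.4 m
```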
This is despite the fact that I can’t imagine why you think that it was poured in without being scattered around the pit)
And how will the sailors throw it around if there is only a meter left to the beams? Lying down?
At the same time, there should also be hatches in the armored deck so that they can climb out. And extra hatches in the armored deck are not a very good option.
The filling factor of coal pits, in tons, is 0.8-0.82 of the volume in cubic meters.
2. Quote: Jura 27
And how will the sailors throw it around if there is only a meter left to the beams? Lying down?
Why?:)))))) It's not a wagon that pours coal into the pit, but a sailor with a sack. What's the problem with immediately spreading it with a shovel?
Quote: Jura 27
At the same time, there should also be hatches in the armored deck so that they can get out to the surface.
Not upwards, but to the side - that is, onto the armored deck.
Quote: Jura 27
The filling factor of coal pits, in tons, is 0.8-0.82 of the volume in cubic meters.
Moreover, you can figure out for yourself that no 55-degree slopes could have existed there.
Why?:)))))) It's not the wagon that pours coal into the pit, but the sailor, using a sack. What's the problem with immediately scattering it with a shovel?
The problem is that the coal pit is located under the slope of the armored deck. It is in it that the natural slope of the poured coal will be at the top.
4. Yura, we are not talking about the pit under the slope now. I don't see how a projectile could fly in there at all. And if it did, it would pass over the top, where there will be no coal, since it will have been used up; you said that yourself. Therefore I do not consider the lower, under-slope pit at all.
We are talking about the pit above the slope.
we are not talking about the pit under the slope now
And I, on the contrary, am talking about the pit under the slope, because the best trajectory for Mikasa's protection passes through it (there can be up to 2 m of coal there).
Through the pit above the slope, the trajectory passes through a small amount of coal, about half a meter. The upper trajectory is shown on the diagram.
1) A shell that hits the 222 mm armor is denormalized, and there can be no talk of any penetration of the citadel at 18-23 cables. Yet you do not object to the 18-23 cables.
2) The bevel becomes completely impenetrable. Yet you say it is easily penetrated ;)
1. At angles of incidence of 2-4 degrees, talking about denormalization (and even more so normalization) is pointless; the way a capless projectile entered the armor is the way it exits.
2. It can be penetrated quite well, since the angle of encounter from the normal is 53-51 degrees, i.e. not a ricochet angle, and the sandwich is made not of monolithic KC armor but of ordinary mild shipbuilding steel.
3. The 114 mm bevels behind a 173 mm belt are also quite penetrable: the distances are greater, the angle of encounter with the bevel is even smaller, and 38 mm of extra-soft nickel steel adds little. It would be another matter if it were a 114 mm KC monolith; then we could talk about serious protection.
1. Quote: Jura 27
At angles of incidence of 2-4 degrees, talking about denormalization (and even more so normalization) is pointless, as the capless projectile entered the armor, so it
comes out.
Yura, you are arguing with Goncharov now. I bet on the professor:)))) But the most important thing is that I am not talking about the angle of incidence, but about such a
combination of the angle of incidence and the course at which the resultant will be 25 degrees.
Yura, you are arguing with Goncharov now. I bet on the professor:)))) But the most important thing is that I am not talking about the angle of incidence, but about
such a combination of the angle of incidence and the course at which the resultant will be 25 degrees.
There is no point in arguing with Goncharov, because he has nothing about the shells of the RYaV era. Consider when his work was written and which shells he is talking about. All of Goncharov's provisions on capped shells are simply not applicable to capless ones.
On the contrary, one could talk about denormalization, which would reduce the angle of encounter with Mikasa's bevel (and worsen the protection), but since the angles of incidence are relatively small and there are no denormalization tables, it can be neglected: the projectile exits the armor as it entered.
The combination of the angle of incidence and the course angle is easily calculated using Sukovatitsyn's formula. Even with a course angle of 60 degrees and an angle of incidence of 2 degrees, the resulting angle of encounter from the normal with Mikasa's bevel does not reach the ricochet angle of 63 degrees (it is even less than 60 degrees).
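For what it's worth, the figures quoted in this exchange can be reproduced with simple vector geometry. This is only an illustrative sketch, not Sukovatitsyn's actual formula: it assumes the bevel is inclined about 35 degrees from the horizontal (the value implied by the 53-51 degree figures for 2-4 degree fall angles above) and measures the course angle from the target's bow.

```python
from math import radians, degrees, cos, sin, acos

def angle_from_normal(fall_deg, course_deg, bevel_deg=35.0):
    """Angle between the projectile's path and the bevel's normal.

    fall_deg   - angle of fall below the horizontal
    course_deg - horizontal course angle, measured from the target's bow
                 (90 = square broadside hit); bevel_deg is an assumption.
    """
    f, g, s = radians(fall_deg), radians(course_deg), radians(bevel_deg)
    # dot product of the descending velocity vector with the inward normal
    c = cos(f) * sin(g) * sin(s) + sin(f) * cos(s)
    return degrees(acos(c))

print(round(angle_from_normal(2, 90)))  # broadside, 2 deg fall -> ~53
print(round(angle_from_normal(4, 90)))  # broadside, 4 deg fall -> ~51
print(round(angle_from_normal(2, 60)))  # 60 deg course angle -> ~58, under 63
```

At a 60-degree course angle and 2-degree fall the sketch gives roughly 58 degrees from the normal, consistent with the claim that the ricochet threshold of 63 degrees is not reached.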
1. Quote: Jura 27
There is no point in arguing with Goncharov, because he has nothing about the shells of the RYaV times. Try to understand when his work was written and what shells
he is talking about. And that all of Goncharov's provisions regarding capped shells are not applicable to capless shells, not at all.
Firstly, in the 1930s, when Goncharov's book was written, our warehouses held a pile of all kinds of shells: the 1911 and 1907 models and the Tsushima-era ones; they are in the Album of Shells.
Secondly, in calculating armor penetration, Goncharov separately gave coefficients for shells with an armor-piercing cap and without a cap. That is, in his work he
covered the entire range of available shells, and not just "capped" ones.
Thirdly, Goncharov, describing how the diagram works, speaks of a relationship between the caliber of the projectile, the angle of encounter and the thickness of the armor, but says absolutely nothing about this relationship being specific to any particular category of projectiles.
Accordingly, your thesis about Goncharov is not based on anything.
Firstly, in the 1930s, when Goncharov's book was written, our warehouses held a pile of all kinds of shells: the 1911 and 1907 models and the Tsushima-era ones; they are in the Album of Shells.
We are talking about over-turning/under-turning of the projectile. Only capped shells could over-turn.
And naturally, Goncharov means contemporary shells and monolithic armor plates. He gives K = 2456 for capless shells, for reference.
1. Quote: Jura 27
We are talking about over-turning/under-turning of the projectile. Only capped shells could over-turn.
Which completely contradicts Goncharov. As I already said, he writes that the first obstacle strips the cap, yet for passing the second he uses the same diagram 9 to determine normalization. I have already quoted you the relevant pages.
Which completely contradicts Goncharov. As I already said, he writes that the first obstacle strips the cap, yet for passing the second he uses the same
diagram 9 to determine normalization.
So he was wrong in this case. Think for yourself (and look at the simulation again): how can a sharp-nosed projectile bite into the armor?
3. Quote: Jura 27
So he was wrong in this case.
Yura, you know my position on this issue. It is clearly not Goncharov who is mistaken here.
Quote: Jura 27
Think for yourself (and look at the simulation again): how can a sharp-nosed projectile bite into the armor?
I have already answered this question for you. When a projectile interacts with armor (of any kind) at an angle, a difference in forces arises: the nose slows down from its interaction with the armor while, excuse me, the "rear" of the projectile tries to keep flying as it flew:)
I have already answered this question for you. When a projectile interacts with armor (of any kind) at an angle, a difference in forces arises: the nose slows down from its interaction with the armor while, excuse me, the "rear" of the projectile tries to keep flying as it flew:)
This applies to modern "crowbars", which have a very large length relative to their caliber and a center of gravity located in the middle of that length.
Therefore, a ricochet requires ricochet angles of encounter, that is, more than 63 degrees from the normal.
Exactly! And colleague Andrey is considering "ricochet" at 27 degrees! Which does not depend on the speed))
Ricochet is precisely a case of projectile denormalization, when its nose cannot bite into the armor; it varies and depends, among other things, on the angle of incidence, the hardness, and so on. If the armor is thin (relative to the projectile), the projectile punches through it flat; if not, it ricochets or is destroyed.
However, if the nose does bite into the armor surface (without breaking off), the normalization process begins. The flat shape of caps is designed to increase the "biting" angles and start normalization earlier.
1. Exactly! And colleague Andrey is considering "ricochet" at 27 degrees! Which does not depend on the speed))
Actually, at 55, but who counts them for you...
It would be fine if only I wrote it, but a professor of the Naval Academy writes about it. Another question is that at small angles the word "ricochet" can be replaced with "bounced off", but this does not change the essence of the matter.
If the armor is thin (relative to the projectile), it breaks through it flat.
If possible, according to the diagram provided
12 September 2024 07:56
How can we determine whether the coal pit is full or whether the coal has already been partially removed? Accordingly, the thickness of the coal layer has decreased.
1. Battleships usually went into battle with a full supply of coal, and it was consumed first from the pits closest to the boilers, located behind the slope; they are not counted in the calculation at all. Besides, in our case the projectile passes below the coal pit above the slope, which would have to be practically empty for the projectile to miss the coal.
So yes, the option you describe is possible. But unlikely.
Considering that coal consumption was quite high, especially at high speed, the coal pits emptied quite quickly. In relation to Japan, yes: from Sasebo to Tsushima the distance is trivial. Say the Japanese were simply lucky, plus a pre-calculated factor: they would fight near their own coast, so a long cruising range was not needed. But if the situation had developed as it did for the British off the Falklands in 1914, when they arrived with almost empty pits, it could have been very painful))
1. The Japanese did not consume much coal - during the two days of the Tsushima battle, Mikasa used up something like 250 tons, if my memory serves me right. But you are absolutely right
that the method of calculating by full coal pits will not be applicable to cases when these pits are empty. However, such a situation is not typical for the RYA
12 September 2024 08:37
Oh, how many articles about the Russo-Japanese naval battles. And essentially there is only one reason for the failures: they needed to score more hits, that's all! Even Mikasa was hit by only 6 large-caliber 10-12'' shells; all the rest were 6-inch and smaller, which did practically no damage to it. And this although, after Vitgeft's order, almost the entire Russian squadron fired at Mikasa throughout the battle, only somewhere off target.
There were few large battles between armored ships at the beginning of the 20th century, but if we take the Battle of Jutland as an example, disabling a large armored ship (a battlecruiser, for example) took 18-25 shells of 305-381 mm. Examples: Lützow (24 hits) sank; Derfflinger (20 hits) was heavily damaged, effectively lost combat capability, and spent 4 months under repair; Seydlitz (17 hits) was heavily damaged and spent 3 months under repair.
Yes, the battlecruisers were significantly larger than the battleships and better armored (especially the German ones), but some approximations can be made, from which it follows that guaranteed incapacitation of a battleship of the Russo-Japanese War era required 15-18 305 mm shells. But they weren't there! And whole volumes can be written about why they weren't there :) .
Quote: Ivan_Sergeev
Even Mikasa was hit by only 6 large-caliber 10-12'' shells,
If you are talking about Shantung, then 9 12" and 3 10" hits.
But that's not the point. The author tried: he wrote about the quantity and quality of explosives and the work of the fuses, calculated armor penetration, modeled the possible damage... and it turns out all of that is nonsense. You need 15-18 shells and that's it! You can do without explosives altogether; the main thing is that it is 305 mm! Brilliant!
Yes, I'm exaggerating. But it seems you don't understand the difference between the Russian shells of the RYaV and the English ones of WWI.
It's about the battle in the Yellow Sea, where the Russian squadron had at least some chance; Tsushima, thanks to Rozhestvensky's "wise" tactics, was simply a beating. And of those 12-13 hits, half were ricochets or harmless, like hits on the masts. There were only 6 serious ones.
And yes, I did not put the shells of the Russo-Japanese War in the same row as those of 10 years later. Yes, the British had larger calibers, and the 381 mm shell weighed more than twice as much as the 305 mm, but the ships by then were also much "tougher" and more modern. Still, I showed you the pattern clearly: 20 shells and a German cruiser is disabled, either sinking or dragging itself to base with huge problems and stopping for long-term repairs. So all that the 6 Russian squadron battleships, shooting almost exclusively at Mikasa for several hours, had to do was hit at least 15 times with large caliber. Having lost the lead ship, the Japanese would certainly have broken off the battle, and Vitgeft's squadron would have reached Vladivostok. Then it would have been possible simply to organize cruises by several detachments and almost completely block Japanese transport across the sea, and without supplies the entire Japanese group would have found itself in limbo with dismal chances of victory.
This pile of articles simply looks like one big attempt to justify the defeats at sea in 1904-1905. I'm more than sure the Japanese don't bother with such things at all: they simply shot more accurately and hit significantly more. That's all.
1. So, all that the 6 Russian squadron battleships had to do for several hours, shooting almost exclusively at Mikasa, was to hit at least 15 times with large caliber.
So 12-14 shells didn't cause any particular damage, but the next 2-3 shells certainly put Mikasa out of action? Hmm.... That's some killer logic.
Having lost the lead ship, the Japanese would have definitely ended the battle and Vitgeft’s squadron would have reached Vladivostok.
Firstly, the Russian squadron could not in principle have reached Vladivostok after the battle in the Yellow Sea: damage and coal did not allow it. Secondly, the failure of one ship would not have troubled the Japanese at all; in that case they would have gone to Tsushima and, with the remaining forces plus Kamimura's cruisers, met the 1st TOE there. So instead of May 1905 we would have gotten it in July 1904, that's all.
It's just that this pile of articles looks like one big attempt to justify the defeats at sea in 1904-1905.
Have you tried crossing yourself when things merely seem so, as the saying goes? And what justification could this be for the battle in the Yellow Sea or Tsushima, when in both places our forces used mainly large-caliber high-explosive shells, since the distances did not allow hitting with armor-piercing ones?
So 12-14 shells didn't cause any particular damage, but the next 2-3 shells certainly put Mikasa out of action? Hmm.... That's some killer logic.
Because there were NO clear 12-14 large-caliber hits, and 6-inch guns could be fired all day to no avail. Here is a diagram analyzing the hits: https://naval-manual.livejournal.com/45659.html. And what we see is that when 305 mm shells landed well on the Japanese battleship, it really "hurt". But there were too few of them.
Firstly, the Russian squadron could not have reached Vladivostok after the battle in ZhM in principle - damage and coal did not allow it.
Where did this even come from? Apart from the Tsesarevich, no other ship had any serious problems. And the Tsesarevich could have been patched up after the battle or, in the most extreme case, simply towed.
Secondly, the Japanese did not care at all about the failure of one ship, in which case they would have gone to Tsushima and with the remaining forces + Kamimura's cruisers would
have met the 1st TOE in Tsushima.
Well, yeah, big deal: as many as 3 full-fledged battleships left against 6, that's nothing. And what would the "Kamimurians" have done against real battleships? For a long time they could do nothing against the 3 obsolete armored cruisers from Vladik, which practically paralyzed Japanese shipping. Now add to them, for example, Pobeda and Peresvet, i.e. we get a squadron that can get away from any Japanese battleship and just as easily beat up even a whole detachment of all the Japanese armored cruisers. So the Japanese would have had very serious problems if the Russian squadron had reached Vladivostok. But alas...
Quote: Ivan_Sergeev
Where does this even come from?
From the real performance characteristics of the ships.
Peresvet and Pobeda returned to Arthur with empty pits. "Tsesar" did the same, but to Qingdao. "Poltava" and "Seva" could not in principle have reached Vladik without bunkering.
And Retvizan would have had to turn its bow into the waves, and it had a hole there.
Quote: Ivan_Sergeev
And what would the "Kamimurians" do against real battleships?
You forgot to add: "weakened by the previous battle".
They would simply have finished off the wounded animals, that's all.
Quote: Senior Sailor
"Poltava" and "Seva" in principle could not reach Vladik without bunkering.
Have you tried telling fairy tales to children? You're doing pretty well!
You and Andrey have really built a whole alternative universe here. And Ivan is right: all this abundance of articles is just a justification for the defeat of tsarism in the RYaV.
Quote: Saxahorse
And Ivan is right
My namesake, alas, is the same militant ignoramus as you.
Quote: Senior Sailor
My namesake, alas, is the same militant ignoramus as you.
It's not for nothing that they say a bun-cruncher and a liar are synonyms!
I will never believe that you do not know the performance characteristics of the Poltavas. The worst of them in this respect, Sevastopol, has a range of 1750 miles at full speed. To Vladik is 1200 miles. So explain, please: why do you lie to people's faces?
Quote: Saxahorse
I will never believe that you don't know
I even know where you got this from))
They distorted it a little, it’s true, and kept quiet about some things, but oh well.
I'm used to you.
Quote: Senior Sailor
I even know where you got this from))
I have no doubt that you know. You and Andrey are lying quite deliberately.
So what have you distorted?
Quote: Saxahorse
The worst of them in this respect, Sevastopol
This data is not for "Seva" but for "Poltava".
Quote: Saxahorse
full swing.
Not full, but at 15 knots.
Poltava's full speed was 16.5.
And now what they kept silent about.
This is a calculated range, and it could have been achieved with a fuel reserve of 1200 or even 1500 tons (sources differ), given that the normal reserve is 700 tons and the full one 1050.
In other words, it was necessary to do what you constantly accuse Rozhestvensky of. Take on fuel in overload.
I don't remember how much coal the battleships actually had, and I couldn't find it quickly. But Makarov, in order to be able to lead the squadron out in one "high water", seriously limited their supplies of coal and water. And he had only five battleships in service.
And note that I am not even mentioning such things as fouling of the ships' bottoms, the condition of their machinery (and the Seva was due to be taken to the Baltic for repairs), the quality of the coal (Yantai is no Cardiff at all), the possibility of battle damage and the need to maneuver.
Quote: Senior Sailor
This data is not for "Seva" but for "Poltava".
It is Sevastopol, as the only one with Russian-made engines (greetings to the Franco-Russian plant).
Quote: Senior Sailor
Not full, but 15 knots.
On Sevastopol, exactly 15.3 knots was considered full speed, but in the Yellow Sea it was Poltava that slowed everyone down.
Quote: Senior Sailor
This is a calculated range, and it could have been achieved with a fuel reserve of 1200 or even 1500 tons (sources differ), given that the normal reserve is 700 tons and the full one 1050.
Two versions: 700/1050 or 900/1500 tons. And the calculated consumption for battleships of such size and power is approximately 75-110 tons per day at economic speed and about 200 tons per day at full.
According to the 1903 report, the total coal capacity of the Sevastopol was 1080-1100 tons.
Coal consumption at full speed was 33.8-35 poods per mile.
Coal consumption for ship's needs was 9-14 tons per day.
Poltava: full coal capacity 1060 tons.
Consumption at full speed 33.3 poods per mile.
For ship's needs 10-14 tons per day.
Quote: rytik32
Coal consumption at full speed was 33.8-35 poods per mile.
At what speed? At 15 knots?
If the full reserve is known exactly, 1080 tons, then at 15 knots Sevastopol spent 222 tons per day. To Vladik is 3.3 days, 740 tons in total, roughly the normal reserve alone.
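As a sanity check of that arithmetic (taking the commenter's 222 tons per day at 15 knots at face value, with a day's run of speed times 24 nautical miles):

```python
# Sketch of the range arithmetic in the comment above.
reserve_t = 1080               # Sevastopol's full coal supply, tons (1903 report figure)
consumption_t_per_day = 222    # the commenter's figure at 15 knots
speed_kn = 15.0
distance_nm = 1200             # stated distance to Vladivostok

days = distance_nm / (speed_kn * 24)         # ~3.3 days under way
coal_needed_t = days * consumption_t_per_day # ~740 t, vs a 1080 t full reserve

print(round(days, 1), "days,", round(coal_needed_t), "tons of coal")
```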
At what speed? At 15 knots?
Unfortunately for "Seva" the full speed is not indicated, only the test results. I think you know them.
Quote: rytik32
Unfortunately, for "Seva" the full speed is not indicated, only the test results.
Unfortunately, there are many contradictory figures. If the discrepancies in pit capacity can be explained by changes in the drawings over the 8 years of construction, the consumption figures hold some real misunderstandings. For example, I have often come across the statement that the most gluttonous ships in the 1st TOE were the Peresvets, with their 100 tons per day at economic speed (10 knots). For Poltava the figure of 77 tons is given, which is approximately 20 poods per mile. I wonder where the figure of 33.3 poods per mile came from.
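The gap between the two figures is easy to see if daily consumption is converted to poods per mile and back (taking 1 pood as about 16.38 kg and a day's run as speed times 24 nautical miles; the helper names here are just for illustration):

```python
POOD_KG = 16.38  # one Russian pood in kilograms

def poods_per_mile(tons_per_day, speed_kn):
    """Convert daily coal consumption to poods per nautical mile."""
    miles_per_day = speed_kn * 24
    return tons_per_day * 1000 / miles_per_day / POOD_KG

def tons_per_day(poods_mile, speed_kn):
    """Convert poods-per-mile back to tons per day at a given speed."""
    return poods_mile * POOD_KG * speed_kn * 24 / 1000

print(round(poods_per_mile(77, 10), 1))  # Poltava's 77 t/day at 10 kn -> ~19.6
print(round(tons_per_day(33.3, 10), 1))  # 33.3 poods/mile at 10 kn -> ~130.9 t/day
```

So 77 tons per day at 10 knots is indeed roughly 20 poods per mile, while the 33.3 poods-per-mile figure would imply a far higher daily consumption at the same speed; the reports' figure evidently refers to full speed.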
It matters here how they calculated. Poltava had cylindrical boilers that require a long warm-up from cold. Therefore, if the warm-up is counted, a short 3-6 hour run and a long run will give very different consumption per mile.
If there are discrepancies with the capacity of the pits
Explain the discrepancies? I have the "Midel" on Poltava, but I haven't reread it.
I have often come across the assertion that the most gluttonous in the 1st TOE were the Peresvets with their 100 tons per day at economic speed
At a speed of 10 knots, Peresvet consumed 30.6 poods per mile (from the same reports).
For Poltava the figure is 77 tons, which is approximately 20 poods per mile.
18.6 poods per mile according to the reports
I wonder where the figure of 33.3 poods per mile came from.
I’m telling you, in 1903, reports on actual coal consumption were collected from almost all ships in the fleet.
if the warm-up is taken into account
It's unlikely that they considered the warm-up
Quote: rytik32
Explain the discrepancies? I have "Middel" on Poltava, but I haven't reread it.
Even the Wiki writes:
Coal reserves (700 t normal and 1050 t full; according to other data, 900 and 1500 t respectively)
This means that both options were found in the documents.
Quote: rytik32
I’m telling you, in 1903, reports on actual coal consumption were collected from almost all ships in the fleet.
Do you have a link to these reports? Are there any clear indications at what speed and how the consumption was calculated?
Quote: rytik32
18,6 poods per mile according to reports
Well, and above Ivan writes about "33.3 poods per mile." And apparently according to the same reports for the same ship...
Quote: rytik32
It's unlikely that they considered the warm-up
Easy! Judging by what I came across in the descriptions, a certain weight of coal was allocated for the trials and loaded aboard, then unloaded and weighed on completion. Warming up is clearly included here, because the total amount spent was counted. Or, it seems, there was an option of issuing the coal in bags, with the remaining bags unloaded and counted afterward. There it might be possible to separate out how much was spent before reaching the measured mile, but this method is more complicated, because coal had to be poured on the move in a cramped stokehold.
Even Wiki writes
Not a serious source
This means that both options were found in the documents.
It could have just been someone's fantasy.
Do you have a link to these reports?
I found them in the Archive and took some pictures for myself. They are not on the Internet.
There are clear indications at what speed
Usually the speed is indicated
and how did you calculate the consumption?
It is not specified in these reports. But I have read coal-consumption trials of other ships. Such a parameter as "coal consumption for warming up" was of no interest to anyone.
And apparently according to the same reports for the same ship
I didn't see his name on the list of people who viewed the case...
Judging by what I came across in the descriptions, a certain weight of coal was allocated for testing, loaded into the ship, and upon completion unloaded and weighed
No, coal consumption was definitely not tested like that.
The consumption of coal thrown into the firebox was measured in baskets.
Quote: rytik32
Not a serious source
But this got into Wikipedia from Suliga’s book.
Quote: rytik32
Usually the speed is indicated
However, neither you nor Ivan did this.
Quote: rytik32
The consumption of coal thrown into the firebox was measured in baskets.
And who counted the baskets? The stokers? There are questions about calculating consumption on the go; there is a lot of room for error.
Quote: rytik32
Such a parameter as "coal consumption for heating" was of no interest to anyone.
However, I came across it: about 10% of the daily consumption. Below is a piece of the report on Nakhimov, a ship with the same boilers and similar parameters. It is interesting that the commander accounts for full, average and economic speed, and by average he means 12 knots.
But it came to Wiki from Suliga’s book
For Suliga, is this planned data or actual?
However, neither you nor Ivan did this.
The report does not indicate the full speed for "Sevastopol". IMHO it is 15-16 knots.
And who counted the baskets? The stokers?
There are questions about calculating consumption on the go, there is a lot of room for error
There is a lot of room for error. At 2TOE these errors blossomed in all their glory.
However, I came across it. About 10% of the daily
Apparently this falls under the heading "for ship needs"
It is interesting that the commander takes into account the full, average and economic speed. Moreover, by average he means 12 knots.
Most reports also say 10, 12 knots and full speed.
Quote: rytik32
For Suliga, is this planned data or actual?
Suliga has both figures; perhaps he was not sure. The capacity of the pits was apparently reduced when the scale of the overload became clear. Initially, Poltava was planned to have a very decent range of 4500 miles. It turned out much more modest.
Quote: rytik32
The report does not indicate the full speed for "Sevastopol". IMHO it is 15-16 knots.
Suliga indicated 16.41 knots as the highest, but 15.3 knots as the actual continuous one.
Quote: rytik32
There is a lot of room for error.
This method is problematic for trials, because the crew is from the manufacturer and the acceptance committee is small; you can't post someone with a basket count at every pit. You would have to rely on the manufacturer's "word of honor". Well, or on control weighing before and after the trials.
Quote: rytik32
Apparently this falls under the heading "for ship needs"
I don't think so. More likely that is consumption at anchor. In Nakhimov's report the commander indicated only the galley and the desalination plant, 2 tons per day. After all, when under way a separate boiler for the other consumers was not lit; steam was supplied to them from the general system. Warming up the boilers works out to about 1 ton per boiler. For a long run of two or three days that is insignificant, but for 3-7 hour bursts, as during trials, the percentage will be noticeable.
The capacity of the pits has apparently been reduced
Possibly. For example, on the Peresvets they reduced it in order to increase the ammunition magazines.
Initially, Poltava was planned to have a very decent range of 4500 miles.
I think they just made a mistake in the initial calculations. This happened to many ships.
crew from the manufacturer
Not always.
To each pit
For Poltava, two pits would be enough.
I specifically looked in Midel.
In 1900, the Sevastopol showed an average speed of 16,41 on a three-hour run and 15,3 knots on a seven-hour run.
The cruising range is 1750 miles with a full supply of coal and a speed of 15 knots (apparently this is full speed).
Quote: rytik32
I specifically looked in Midel.
Apparently they also took it from Suliga
Quote: Ivan_Sergeev
Because there were NO clear 12-14 hits with large caliber, and you could fire 6-inch guns all day long to no avail. Here is a diagram of the analysis of hits https://naval-manual.livejournal.com/45659.html
I hope I don't offend you... The thing is that we (at your suggestion) are discussing the Battle in the Yellow Sea, that is, near Cape Shantung. And in the link you provided,
the damage to the Mikasa in Tsushima
3. Yes, because there were NO clear 12-14 hits with large caliber.
That is, first you require a certain number of hits, then you add a certain requirement for "precision"... I think that over time you will understand that the "precision" of
hits will be influenced by the characteristics of the projectile.
Here is a diagram of the analysis of hits https://naval-manual.livejournal.com/45659.html
Do you seriously think that I haven't read it?:))
, and what we see is that when 305mm was applied to a Japanese battleship with high quality, it somehow really “hurt” it.
There is no such thing. At all. Not a single hit, except for the extremely dubious hit to the stern barbette of Mikasa, caused any noticeable damage to the battleship. Not
even the famous penetration of the 173 mm plate. And the barbette, with a probability of about 99,9%, suffered not from a Russian shell, but from the rupture of a Japanese shell
in the barrel bore.
Well, yeah, big deal, there are only 3 full-fledged battleships left against 6, that's just nonsense.
For those who can't read, I repeat - the squadron could not go to Vladivostok after the battle. In the best case, Retvizan, Pobeda, possibly Sevastopol would have reached
Tsushima, but Peresvet and Tsarevich - definitely not, Poltava - extremely doubtful.
I also remind you that in Tsushima, 4 Japanese battleships defeated 8 Russian ones, even if we don’t count the BBOs.
Quote: Andrey from Chelyabinsk
I also remind you that in Tsushima, 4 Japanese battleships defeated 8 Russian ones,
1) Haven't you forgotten that the Japanese EBR lost 5 12-inch guns? The Russians lost one in battle, and that was on the way back.
2) With the tactics of the ZPR, the number of EBRs does not matter...
3) Besides the Japanese Asamoids, there are Russian Ryuriks, so who knows, especially if the Russians maneuvered a little in the ocean...
Returning to the series of articles, thank you for the interesting analysis of armor and shells.
I won't contest your conclusions, but I have drawn my own - the officers on the bridges of the 2nd TOE were right, the ZPR was obliged to approach at maximum speed and
not slow down after reorganizing from 2 columns, but on the contrary, accelerate to the maximum speed of 16-17 knots - after 5-6 minutes of travel they would have reached
the lethal range of AP shells and could have killed Fuji and then the Garibaldians......
1. Have you forgotten that the Japanese EBR lost 5 12-inch guns?
Sorry, I really don't know how else to explain that the Russian squadron lost 8 to 12 main battery guns, since at least 2 battleships could not continue the
breakthrough under any circumstances. Or rather, all three.
Quote: Andrey from Chelyabinsk
Or rather all three.
Even so, they have 12 main battery guns, while the Japanese have 11...
However, the issue is not the number of guns on the EBR, but the will of the commanders, and there were problems with this...
Alas, I was away and saw it only now:
Quote: Andrey from Chelyabinsk
Did not help.
that's true, but we've entered the path of assumptions...
Quote: DrEng02
the question is not the number of guns on the EBR, but the will of the commanders
I repeat - the main thing in this is...
Quote: Ivan_Sergeev
It was about the battle in the Yellow Sea, where the Russian squadron had at least some chances.
None. However, the author answered you.
Quote: Ivan_Sergeev
But I showed you the pattern clearly.
You have shown nothing but a lack of knowledge and logic.
Quote: Ivan_Sergeev
So, all that the 6 Russian squadron battleships had to do for several hours, shooting almost exclusively at Mikasa, was to hit at least 15 times with large caliber.
Three shells weren't enough?
Okay, let's say there were six hits. Did they somehow reduce the Japanese's combat capability?
Do you think it will be different with the others?
You see, if these shells that ours actually hit could cause heavy damage to the Japanese, their combat potential would have decreased. Accordingly, they would have started shooting
less, or some ship would have fallen behind... because of this, our losses would have been less, and therefore, the ability to cause damage to the enemy would have been higher. And
then, yes, there is reason to expect a different outcome. But this did not happen.
Quote: Ivan_Sergeev
Having lost the lead ship, the Japanese would have definitely ended the battle and Vitgeft’s squadron would have reached Vladivostok.
Both the first and the second are possible only in your fantasies.
Quote: Ivan_Sergeev
Then it would have been possible to simply organize cruising by several detachments and almost completely block the transportation of Japanese across the sea.
Go learn the material! Sailing range, location of trade routes, etc.
Quote: Ivan_Sergeev
It's just that this bunch of articles looks like one big attempt to justify defeats.
This, as you put it, is a bunch of articles, an attempt to understand the reasons for the defeat. But you are not interested in this. You already know everything. 18 shells and boom!
What did I just read? Something like "cluck-cluck-cluck". You could have spared your keyboard.
It really wasn't worth wasting it on you...
The case is inoperable
Well, when they tell me in all seriousness that the Japanese would not have suffered at all from the loss of Mikasa and with the three remaining battleships they would
have easily finished off the 6 Russians, oh yes, and the "mighty" cruisers would have helped, well, well, then I am also not interested in discussing anything further. And
why are you so hung up on the number 15? I wrote "minimum" first of all and my thought was first of all that 6 battleships should have definitely hit significantly more,
but for some reason it didn't happen.
Nevertheless, I retain respect for you for a number of your materials, although I explained why it all seems to me to be from the category of “that’s why we lost.”
1. Well, when they tell me in all seriousness that the Japanese would not have suffered at all from the loss of Mikasa and with the three remaining battleships they could
easily have finished off the six Russians
Excuse me, am I talking to a human or a parrot? A living person could read that 6 Russian battleships could not approach Tsushima in principle. Maximum - 4.
The man could also figure out what nonsense he was talking, stating in one place that it takes about 15 heavy shells to disable a battleship, but in another place claiming that Sevastopol and Poltava, having received 10-11 hits each, are suddenly, miraculously, counted as good as new.
Why are you so fixated on the number 15?
Because you managed to formulate this as a condition for the battleship to lose its combat capability.
Well, okay, not 15 (I just assumed that), but you can see for yourself that the Japs, without choosing individual ships, just evenly firing at our ships, managed
to hit almost each one at least 10-11 times. And our crowd could not even disable, or even somehow damage the flagship, so that it would at least partially lose
speed. In fact, by the end of the battle, it was only a few hours until darkness and the Japanese ships were running out of shells, because they were firing much
more actively.
And I repeat my thought once again, it was necessary to go to Vladivostok in any case to the end, because if the Vladivostok cruisers were reinforced, then
blocking Japanese cargo and troop transport would be more than real. And this would actually mean defeat in the war.
That is, the decision to turn back was simply criminal at that moment. As a result, the ships were lost for no reason and the ground forces were seriously
1. And our crowd couldn’t even disable the flagship, let alone damage it in any way, so that it would even partially lose speed.
They couldn't. The Japanese had better rangefinders and optical sights, which the 1st TOE didn't have at all; their medium-caliber shells exploded on the water, while ours didn't, which gave them an advantage in spotting: hits on our ships were clearly visible, while our hits on the Japanese weren't. Our gunners were out of practice after the armed reserve and six months of sitting in Arthur, while the Japanese weren't. That's the result.
and the Japanese ships were running out of ammunition
They used up about 150 shells out of 240, that is, there were still enough shells - our ships had serious damage, they did not need many.
And I repeat my thought once again, it was necessary to go to Vladivostok in any case to the end
And I repeat once again, it was impossible. Our battleships were unable to hold out under Japanese fire in the second phase of the battle, the Tsarevich was
out of action, and Sevastopol could not make more than 8 knots, meaning that our line would have fallen apart in any case. If the battle had continued for
another hour, most likely we would not have managed without losses. I understand that we all grew up on Port Arthur and Novikov Priboy and elsewhere, where the
1st TOE was considered a combat-ready squadron, the opinion that it was simply unlucky was widespread. But this is not so, our documents show this.
The ships PHYSICALLY could not go to Vladivostok. The Tsarevich with its holed funnels had, at Kiau Chau, only 500 tons of coal left out of 1100. If it had gone to
Vladivostok, it would have stopped in the Tsushima Strait without moving, it would have simply run out of coal. The same story with Peresvet. Kuteinikov wrote
that the coal pits at Sevastopol showed the bottom, but after the battle it did not go to Vladivostok, but to Artur. And the problem with Peresvet and
Tsarevich was that even if they had been reloaded with coal in Artur or Kitai, they still would not have been able to reach Vladivostok due to the increased
coal consumption. Poltava and Sevastopol could only reach Vladivostok from PA in peacetime using an economical speed. And here, with damage, they had to go
into battle.
And no one prevented the Japanese who had gone to Tsushima from replenishing their ammunition.
If the Vladivostok cruisers were reinforced, then blocking Japanese cargo and troop transport would be more than realistic.
Quite the opposite. Don't be lazy and look at the map, where Vladivostok is, and where the Japanese transportation is. Quite the opposite, if they had to be
cut off from somewhere, then from Arthur, Vladivostok was not suitable for this at all. Therefore, many commanders did not understand the point of breaking
through to Vladivostok at all, which you will learn by reading the transcripts of the relevant meetings at Vitgeft's.
Then it was presented as cowardice and fear of a breakthrough. And then - bewilderment, why the hell was it necessary to break through 1000 miles from the
Japanese landing sites and Dalny, where the Japanese transports were heading.
With Peresvet - the same story. Kuteinikov wrote that in Sevastopol the coal pits showed the bottom
I haven't read about any serious damage to Peresvet, and its cruising range was twice as long as any "Sevastopol", so it's pretty strange. And regarding the
damage to Tsarevich, here's an idea from Wikipedia:
After consulting with the officers, D. P. Shumov decided to try to break through to Vladivostok. There should have been enough coal; despite a hole in one of the funnels, the damage did not significantly affect combat capability: all the main and medium-caliber guns, as well as most of the anti-mine guns remained
intact, the machines worked properly, one boiler was damaged in the aft stokehold, but it was also repaired by the ship's own forces; the existing holes were
not dangerous, and the most significant damage was the disabling of the communications and control equipment in the conning tower. Some of the problems were
fixed while still at sea. The ship turned south, hoping to get lost at sea.
At night, captain first rank N. M. Ivanov regained consciousness, and then Rear Admiral N. A. Matusevich. They decided to call first for repairs and restocking in the
German port of Qingdao. D.P. Shumov could not convince them, and on July 1 the battleship came to the port. Initially, the German authorities gave six days to
put it in order to go to sea, but on August 29 they suddenly demanded to intern immediately, which was done by order of N. A. Matusevich who was in the German
This is apparently from here: Emelin A. Yu. “The flagship is out of action...” (damage to the squadron battleship “Tsarevich” in the battle at Shantung). //
Gangut. - 1999. - Issue 20. - P. 21-33.
So it's quite controversial.
Well, again, the most important thing was to strengthen the detachment of Vladivostok cruisers so that it would not be afraid of meeting Kamimura. That is, in
fact, it would be enough if two fast battleships-raiders arrived.
Just the opposite. Don't be lazy, and look at the map, where Vladivostok is, and where the Japanese transportation is. Just the opposite, if they had to be cut
off from somewhere, then from Arthur, Vladivostok was not suitable for this at all.
The shipments were coming from Japan and the cruising squadron, whose location was unknown, moving around the coast, after several intercepted ships, would
have simply blocked shipping purely out of fear of being intercepted, as actually happened after the three of our cruisers began operating.
The only problem was that the Japanese could deploy more of their armored cruisers when meeting our raiders. This is exactly what needed to be resolved. And
the idea of breaking through to Vladik was generally correct, especially considering the increasing shelling of the port.
In general, the planned breakthrough did not take place due to the controversial decisions of some commanders and, again, an insufficient number of hits on the
Japanese flagship.
2. And regarding the damage to the Tsarevich, there is this thought, from wiki:
There is nothing controversial here. Shumov conferred with officers shortly after the battle, while the battleship's commander was unconscious. So after the
battle, the chief engineer reported that out of 1120 tons of coal, 870 remained. Naturally, this data could not alarm Shumov.
But the problem is that the Tsarevich had taken the bulk of the damage by the end of the battle, so the excess coal consumption had not yet shown itself. But
when it turned out in the morning that only 500 of the evening's 870 remained, that's when the gentlemen officers began to think.
You can read all this in the testimonies and reports, including Shumov’s own.
I haven't read about any serious injuries to Peresvet.
So you don't know that it is the most damaged battleship of the 1st TOE? Nevertheless, it is so. Look at the official six-volume book on the war at sea. If you
want something more modern, read Polomoshnov, he may be wrong here and there, but this is, in general, the basis for the battle in ZhM. And there is nothing
strange: damage to the funnels causes a wild overconsumption of coal. Draft, sir.
The shipments were coming from Japan and the cruising squadron whose location was unknown
It would simply have entered the Tsushima Strait where it would have been destroyed by the main forces of Togo, stationed in Mozampo.
Why are you ignoring the experience of VOK, which went to meet 1TOE and ended up cut off from Vladivostok?
In general, the planned breakthrough did not take place due to the controversial decisions of some commanders and, again, an insufficient number of hits on the
Japanese flagship.
Stick to your opinion, we have a free country.
In fact, no, a breakthrough was impossible.
Okay, this is of course a game of alternative history. Do you know whether there are, somewhere on the Internet, diagrams of hits on Japanese ships in the battle in the Yellow Sea, like the one for Mikasa in Tsushima? I still can't understand where exactly it was hit such that it got off without serious damage, if we accept the version about 12-14 large shells.
4. Actually, this was at naval-manual's, I'll send you a link in the evening. And also, if you want, throw your email in my personal message, I'll send you
Polomoshnov, he has a description and diagrams for all battleships. They are not always correct, but for a start it will do just fine.
Quote: Andrey from Chelyabinsk
Poltava and Sevastopol could only reach Vladivostok from PA in peacetime at economic speed
Andrey, how did you calculate this?
It turned out that with full coal pits, the Sevastopol would have made it from PA to Vladivostok and back at 12 knots.
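A back-of-the-envelope sketch of how such a range estimate could be made. The 1750 nm at 15 knots figure is the one quoted earlier in the thread for Sevastopol; the scaling rule (hourly coal consumption proportional to the cube of speed, hence per-mile consumption to its square) and the rough ~1100 nm PA-Vladivostok distance are my assumptions, not the commenter's actual arithmetic.

```python
def range_at_speed(base_range_nm: float, base_speed_kn: float,
                   speed_kn: float) -> float:
    """Scale cruising range assuming per-hour coal consumption ~ v^3,
    i.e. per-mile consumption ~ v^2 (a common rough rule)."""
    return base_range_nm * (base_speed_kn / speed_kn) ** 2

# Sevastopol: 1750 nm on a full coal supply at 15 knots (figure from the thread).
r12 = range_at_speed(1750, 15, 12)
print(round(r12))       # 2734 nm at 12 knots

# Hypothetical ~1100 nm PA-Vladivostok leg: is a round trip within range?
print(r12 > 2 * 1100)   # True
```

Under these assumptions the 12-knot round trip just fits, which is consistent with the claim being made; in practice battle damage and weather would eat much of that margin.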
Quote: Ivan_Sergeev
Well, when they tell me in all seriousness that the Japanese would not have suffered at all from the loss of Mikasa
Where did you manage to read that from me?
2. Firstly, this series of articles is not about the defeats in the Russo-Japanese War, but about the capabilities of Russian armor-piercing shells and armor of those years. If you reduce everything to Russia's defeat in the Russo-Japanese War, then that is your problem; why do you attribute it to the author of the article?
To guarantee the destruction of a battleship from the Russo-Japanese War, 15-18 305mm shells were required.
Mikasa was hit in the main battle by 13-14 shells of 254-305 mm caliber, the ship did not lose combat capability. Eagle in Tsushima was hit by 11 shells of large caliber, the ship lost combat
capability almost completely.
This series of articles is not about the defeats in the Russo-Japanese War, but about the capabilities of Russian armor-piercing shells and armor of those years.
It all makes for interesting reading
Best regards,
12 September 2024 11:21
The reasons why British shipbuilders left such “windows” in the defense, and even opposite the ammunition magazines for the main caliber guns, are completely unknown to me...
Me too, but everyone had a similar practice. The Borodinites also had 145mm plates there instead of 194mm.
But even if we increased the projectile velocity to 750, 800 or 900 m/s, this would not allow the projectile to penetrate armor more than 304,8 mm thick – this is the maximum thickness that can be penetrated at an angle of deviation from the normal of 26 degrees for a 12-inch projectile, and a further increase in the projectile velocity does not increase the thickness of the armor penetrated at this angle.
This is really "difficult to perceive" because it is incorrect)) The table, as you noted, is empirical, and the text states that ricochet from "thick" (?) armor has not been studied. Please indicate the "ricochet" angle of a 12" projectile falling at, say, 850 m/s at 26 degrees onto 330 mm armor...
Considering that the main armor belt of the Mikasa only slightly rose above the water, it follows...
consider also hits through the lower part of the upper belt at an angle of 4,6 degrees (30 cables) and the upper part of the bevel.
We are looking forward to the continuation +++
1. This is really "difficult to understand" because it is incorrect))
I think you understand that it is not enough to simply state that the calculations of a professor of the Naval Academy and one of the leading specialists of the USSR in the field of
projectiles and armor in the 30s are incorrect. This must be followed by a very serious justification.
The table, as you noted, is empirical.
That is, based not on theoretical calculations but on the analysis of real firing trials.
"Will convince any judge,
Even a regiment of selected judges,
Dazzlingly harsh
The Truth of Turret Guns"
What can you say in response to this?
and the text states that ricochet from "thick" (?) armor has not been studied. Please indicate the "ricochet" angle of a 12" projectile falling at, say, 850 m/s at 26 degrees onto 330 mm armor...
How can I point it out to you if Goncharov does not provide such research?
Why ask questions that are irrelevant and obviously have no answer?
If during the tests it was revealed that at certain angles the armor is not penetrated even when the projectile's live force (kinetic energy) is more than sufficient, but at the same time no one bothered to study the ricochet angles - how does this refute everything Goncharov said?
consider also hits through the lower part of the upper belt at an angle of 4,6 degrees (30 cables) and the upper part of the bevel.
It won't hit. The projectile will normalize and fly over the slope and horizontal armor deck, however, I will return to this issue in the next article.
There must be a very serious justification for this.
Seriously, no, only empirically). Modern sub-caliber "crowbars" with a diameter of, say, 25 mm should, according to the professor, "ricochet" off 30 mm armor (if we go by caliber), or off armor that cannot be penetrated REGARDLESS of the angle of incidence. Nevertheless, they penetrate it (more than 800 mm), and the angle of possible ricochet lies far beyond 70-75 degrees.
The mistake of Goncharov (and others) is in strictly tying penetration to the projectile's caliber, without taking into account its design and behavior at different speeds.
And the "ricochet" at 26 degrees... would better be called a "rebound".
1. Modern sub-caliber "crowbars" with a diameter of, say, 25 mm, according to the professor, should "ricochet" off 30 mm of armor
Has it ever occurred to you that the sub-caliber crowbar is completely different from the projectile, and the diagram cannot be applied to it?
The mistake of Goncharov (and others) is in strictly linking penetration to the caliber of the projectile, without taking into account its design and behavior at different speeds.
Goncharov has no mistake. Your mistake is when you try to apply data related to naval artillery shells of the early 20th century to sub-caliber shells that are completely different in
Has it ever occurred to you that the sub-caliber crowbar is completely different from the projectile, and the diagram cannot be applied to it?
It also came to mind that the 305mm shell of the "old drawing" (332 kg) and the 305mm shell of the 1907 model (470 kg) were also "not similar". Which of them does the diagram of the "leading specialist of the USSR in the 30s" refer to?
But the point is that your phrase
But even if we increased the projectile velocity to 750, 800 or 900 m/s, this would not allow the projectile to penetrate armor more than 304,8 mm thick - this is the maximum
thickness that can be penetrated at an angle of deviation from the normal of 26 degrees for a 12-inch projectile, and further increasing the projectile velocity does not increase
the thickness of the armor penetrated at this angle.
is incorrect. There is a contradiction in it.
1. To determine whether armor of (some) thickness can be penetrated at a certain angle of incidence, you use (in addition to other parameters) the speed of the projectile (together with its mass - the "live force", i.e. kinetic energy).
2. Then you immediately claim that this is the limit for a given angle, beyond which (even by 2-3 degrees) no increase in "live force" will help. This is incorrect by the very formulas you use.
That is, a shell from a 305mm/35 and 305mm/40 gun (the shells are the same, by the way) will not penetrate armor that the first of them does not penetrate at an angle of, say, 10
degrees! Because of the same caliber (and design)...
1. It also came to mind that the 305mm shell of the "old drawing" (332 kg) and the 305mm shell of the model 1907 (470 kg) are also "not similar"
Not 1907, but 1911, but in general - no, they were quite similar in principle, the difference was due to the lengthening of the projectile and the thickness of the walls. But
in general the design was similar.
then you immediately claim that this is the limit for a given angle, beyond which (even by 2-3 degrees) no increase in "live force" will help.
This is incorrect by the very formulas you use.
You are right, I simply did not complicate things. And so yes, in the tables for 12-inch guns, where the 25-degree angle does not allow penetration, the thickness of the armor penetrated must be reduced.
12 September 2024 20:13
Andrei, good afternoon!
I am pleased to read your new article.
Now the story has reached the main point - simulating the hit on the Japanese battleships.
My comments:
But even if we increased the projectile velocity to 750, 800 or 900 m/s, this would not allow the projectile to penetrate armor more than 304,8 mm thick - this is the maximum thickness that can
be penetrated at an angle of deviation from the normal of 26 degrees for a 12-inch projectile, and further increasing the projectile velocity does not increase the thickness of the armor
penetrated at this angle.
The essence of this effect is that the projectile breaks from asymmetrical loads. And increasing the speed will not make the projectile walls stronger. If you take weaker armor, you can count on
penetrating a greater thickness.
During the shelling of Ochakov the following incident occurred.
One 254 mm shell hit frame 59 in the left side between the armor and intermediate decks, pierced the outer plating, cofferdam, sloping armor and the armor deck itself (70 mm thick) and caused
major damage to the middle boiler room.
The deck slope angle is 34 degrees. The deck is made of steel-nickel armor on steel decking. Unfortunately, I don't know the thickness of this decking, but 70 mm is the total thickness.
The fact is that there was no ricochet from 70mm of armor at an angle of impact of about 35 degrees.
1. Alexey, good evening to you!
Quote: rytik32
The fact is that there was no ricochet from 70mm of armor at an angle of impact of about 35 degrees.
Look - since the angle was 35 degrees, then the deviation from the normal is 55 degrees. At such a deviation, according to Goncharov, the projectile penetrates armor of 0,363 of its caliber.
For 254 mm, this is 92,2 mm. Accordingly, it should have penetrated a 70 mm armor sheet, there is nothing here that contradicts Goncharov's method.
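The logic of this lookup can be sketched in a few lines. The two coefficients below are only the values quoted in this thread (1,0 caliber at 26 degrees from normal, 0,363 at 55 degrees); Goncharov's full diagram is not reproduced here, and the function name is my own.

```python
# Penetration-limit lookup per the diagram's logic: at a given obliquity
# there is a maximum penetrable thickness, expressed in calibers,
# independent of striking velocity. Only two data points from the thread.
LIMIT_IN_CALIBERS = {26: 1.0, 55: 0.363}  # angle from normal -> calibers

def max_penetrable_mm(caliber_mm: float, angle_from_normal_deg: int) -> float:
    """Maximum armor thickness penetrable at a given angle from normal."""
    return caliber_mm * LIMIT_IN_CALIBERS[angle_from_normal_deg]

print(max_penetrable_mm(304.8, 26))             # 304.8 mm -- the 12" cap above
print(round(max_penetrable_mm(254, 55), 1))     # 92.2 mm -- the Ochakov case
```

The point of the sketch is the velocity-independence of the cap: velocity simply does not appear in the function, which is exactly the property being argued about in this exchange.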
What do you say about the first part of my comment?
After all, the speed of the through-penetration limit in the context of Goncharov's work refers to armor, and not to a layered pie held together by rivets, most of which is structural
steel. Obviously, such a weak pie will not cause the destruction of an armor-piercing projectile at any angle.
1. Quote: rytik32
After all, the speed of the through-penetration limit in the context of Goncharov's work refers to armor, and not to a layered cake held together with rivets, most of which is
structural steel.
So, I think I have discussed this in great detail in the article? My logic is based on two theses
First - Goncharov does not differentiate between monolithic and layered protection either in the description or in the calculation examples. The only thing he mentions is the lower
resistance of layered armor, due to the fact that the energy for its passage and normalization should be calculated for each of its sheets separately. Goncharov does not give any
other definitions. It follows that such layered armor protects worse, but in other respects it is similar to monolithic.
The second - the diagram compiled by Goncharov does not depend on the durability of the armor. That is, if in the case you gave for a 254 mm shell the limit is armor thickness of
92,2, then armor of 100 mm will withstand the impact of a 254 mm shell at any speed, and it does not matter whether it is homogeneous or cemented. And since it does not depend on
durability, then the lower durability of layered armor is not a problem
First - Goncharov does not differentiate between monolithic and layered protection either in the description or in the calculation examples. The only thing he mentions is the
lower resistance of layered armor, due to the fact that the energy for its passage and normalization should be calculated for each of its sheets separately.
The second - the diagram compiled by Goncharov does not depend on the durability of the armor. That is, if in the case you gave for a 254 mm shell the limit is armor thickness of
92,2, then armor of 100 mm will withstand the impact of a 254 mm shell at any speed, and it does not matter whether it is homogeneous or cemented. And since it does not depend on
durability, then the lower durability of layered armor is not a problem
On the first point - he does differentiate: it is right there in your phrase, in the second sentence (and, accordingly, in Goncharov too).
Secondly, you are fundamentally mistaken, if we strictly rely on Goncharov, then we need to reduce the "sandwich" to a monolith: the energy for breaking through the sandwich is
equal to the sum of the energies for breaking through each layer, or the speed of breaking through the sandwich is equal to the square root of the sum of the squares of the speeds
for breaking through each layer.
That is, it is necessary to calculate the equivalent of a sandwich as a monolith and then make comparisons.
For example, if the equivalent of a 3x25,4mm sandwich, let's say, turns out to be equal to a 50mm monolith, then compare these 50mm according to Goncharov.
And this is without taking into account the fact that soft shipbuilding steel is not at all equal to homogeneous armor in terms of armor resistance (let's accept this as a bonus
for the Japanese).
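The reduction rule stated above (limit velocity of the sandwich equals the root of the sum of squares of the per-layer limit velocities) can be sketched as follows. To convert velocities back into an equivalent thickness I assume a De Marre-style relation v_limit ~ b**0.7; the exponent is my assumption, not something from the thread.

```python
DE_MARRE_EXP = 0.7  # assumed thickness exponent in v_limit ~ b**k

def equivalent_monolith_mm(layers_mm: list[float]) -> float:
    """Equivalent monolithic thickness of a layered ("sandwich") plate.

    From v_i ~ b_i**k and v_eq**2 = sum(v_i**2) it follows that
    b_eq**(2k) = sum(b_i**(2k)).
    """
    s = sum(b ** (2 * DE_MARRE_EXP) for b in layers_mm)
    return s ** (1 / (2 * DE_MARRE_EXP))

# Three 25.4 mm (1") layers: noticeably weaker than a 76.2 mm monolith.
eq = equivalent_monolith_mm([25.4, 25.4, 25.4])
print(round(eq, 1))      # 55.7 mm
print(eq < 3 * 25.4)     # True
```

With this exponent the three-layer plate comes out well below the sum of its thicknesses, which is the commenter's point: comparing the raw total against Goncharov's diagram overstates the sandwich.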
1. Quote: Jura 27
The first is that it does, it is in your phrase, in the second sentence (and in Goncharov, accordingly, too).
No. Because in this case diagram 9 loses its meaning.
Quote: Jura 27
Secondly, you are fundamentally mistaken: if we strictly rely on Goncharov, then we need to bring the “sandwich” to a monolith:
No need - Goncharov shows the relationship between caliber, angle and thickness of armor, speed and durability have no place here.
I don't think we'll come to a consensus here.
Goncharov shows the relationship between caliber, angle and armor thickness
... armor thickness. In the case of a sandwich, you need the thickness of the armor reduced to a monolith. According to Goncharov, the limit speed for the sandwich is the square root of the sum of the squares of the penetration speeds of each layer, from which the equivalent thickness follows.
This is elementary physics. If you are against it and Goncharov, then I can do nothing; I will never come to a consensus with alt-physicists.
Goncharov does not differentiate between monolithic and layered protection either in the description or in the calculation examples.
I reread the fundamental work on armor by V.P. Kostenko, whom you dislike. He writes that multilayer but firmly bonded armor (in his example, the slope of the armor deck) can be
equated to monolithic armor of the same thickness.
The diagram compiled by Goncharov does not depend on the armor resistance
I disagree here. Yemelyanov, for example, indicates different coefficients for a surface-hardened and homogeneous plate (see Table 7 below).
The diagram of the dependence of the PSP on the angle in Emelyanov depends on the model of the projectile (see below, Fig. 15)
For an armor-piercing shell of 1928, at an angle of 55 degrees, this is 0,4 of the diameter or 122 mm for a 305 mm shell. The shell of 1894 has thick walls and a short length.
Therefore, it should resist the destruction of its shell during penetration better than all the shells shown in the diagram.
Because a 12-inch projectile, no matter what speed it has when it contacts the bevel, should not penetrate it, but ricochet off it.
Here you have an incorrect conclusion. If the through penetration limit is triggered, it means that the projectile has been destroyed. But at the same time, the projectile can
penetrate the armor in the form of fragments. Below is an example of the action of a 180-mm armor-piercing projectile. Even at an angle of 70 degrees, it can penetrate the deck
armor and be destroyed (in table 9, v pop is the speed of penetration by fragments).
1. Quote: rytik32
I reread the fundamental work on armor by V.P. Kostenko, whom you dislike. He writes that multilayer but firmly bonded armor (in his example, the slope of the armor deck) can
be equated to monolithic armor of the same thickness.
Total chaos. Can you tell me when this book was published? Look what Klado writes in "Military fleets and naval reference book for 1906" (attached)
Quote: rytik32
I disagree here. Yemelyanov, for example, indicates different coefficients for a surface-hardened and homogeneous plate (see Table 7 below).
Unfortunately, it is not clear from the table what the N2 coefficient is. Could you please tell me?
Quote: rytik32
The diagram of the dependence of the PSP on the angle in Emelyanov depends on the model of the projectile (see below, Fig. 15)
Well, I agree, it depends.
Quote: rytik32
For an armor-piercing shell of 1928, at an angle of 55 degrees, this is 0.4 of the diameter or 122 mm for a 305 mm shell. The shell of 1894 has thick walls and a short length.
Therefore, it should resist the destruction of its shell during penetration better than all the shells shown in the diagram.
But I cannot agree with this conclusion.
The thing is that we never had an armor-piercing 12-inch shell mod. 1928. We only had a high-explosive long-range one with 55 kg of explosive. The only AP shell of 1928 that
we had was the 180-mm.
At the same time, the 180 mm projectile had a length-to-caliber ratio (minus the ballistic cap, but taking into account the armor-piercing one) of 3.35 (this is the shortest; they differed in length). And for the 1911 AP projectile, this figure was 3.19. That is, when scaling the 180 mm projectile to 305 mm, we get that it will be longer than the 1911 projectile.
At the narrowest point in the explosive area, the wall thickness of a 180 mm shell is only 37 mm. The ratio of length to wall thickness is 604 mm / 37 mm ≈ 16.3; for a 12-inch shell of 1911 it will be 975 mm / 72.4 mm ≈ 13.47.
That is, when scaling, we get that a 180 mm projectile is both longer in relation to its caliber and thinner-walled in relation to its length. But at the same time, it
penetrates thicker armor!
The only way this could happen is if the projectile was made from higher quality steel.
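The wall-thickness arithmetic above can be spot-checked (a quick sketch; the dimensions are the ones quoted in this thread, not independently verified):

```typescript
// Length-to-wall-thickness ratios quoted in the thread, checked numerically.
const ratio180 = 604 / 37;   // 180 mm shell: body length / thinnest wall (mm)
const ratio305 = 975 / 72.4; // 305 mm mod. 1911 shell (mm)
console.log(ratio180.toFixed(2)); // "16.32"
console.log(ratio305.toFixed(2)); // "13.47"
```

Both figures match the values cited, so the "longer and thinner-walled" comparison holds on the quoted numbers.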
So, your conclusion regarding the Tsushima shell could be correct only in one case - if it was made of exactly the same steel as the 1911 shell and had better armor
penetration than the 1911 shell. But this is clearly not the case. The Tsushima shells were not tested for armor penetration at an angle, and for the 1911 shell this was the
norm. And an armor-piercing shell is created primarily to penetrate armor - it is impossible to imagine that the designers of the 305 mm 1911 shell reduced its armor
penetration relative to older shells. Meanwhile, the wall thickness of the longer 1911 shell is even slightly greater than that of the Tsushima shell.
In general, the material clearly matters, and given that over time the shells (according to your diagram) began to penetrate thicker armor, this rather indicates that the
Tsushima BB will be inferior to both the 1928 shell and the 1911 shell.
Quote: rytik32
Here you have the wrong conclusion. If the through penetration limit is triggered, it means that the projectile has been destroyed. But at the same time, the projectile can
penetrate the armor in the form of fragments
Accepted, I agree. However, in order for the fragments to retain their lethal force, the projectile must have a significantly higher speed than that calculated for penetrating
the armor
Can you tell me when this book was published?
Kostenko's work was never published; it was written in the 30s.
If there is a wooden spacer between the layers, then Kostenko suggests counting them as two separate barriers.
Regarding the ricochet/destruction angle in your screenshot, I can tell that it probably meant thick armor.
Could you please give me some advice?
I have attached a screenshot below.
The only way this could happen is if the projectile is made of higher quality steel.
The tip can also have an effect.
and had better armor penetration than the 1911.
Armor penetration at the normal is a completely different parameter. For example, the English Jutland 15-inch shell at an angle of 30 degrees could not penetrate even 6-inch armor whole, although at the normal it had solid armor penetration.
In fact, Tsushima shells were not tested for penetration of armor at an angle.
Why? For example, when testing Makarov caps, shots were fired at an angle of 15, 20, 25 degrees from the normal.
It is impossible to imagine that the designers of the 305mm 1911 reduced the armor penetration
It's unlikely that anyone looked at the armor penetration parameters at acute angles.
In any case, it would be better to have a diagram for the 1894 projectile, but there isn't one.
1. Quote: rytik32
Kostenko's work was never published; it was written in the 30s.
Then it can perhaps be considered more accurate than Klado's publication. In general, Klado referred to Professor Zabudsky's "External Ballistics", and it is very
difficult to say who is right. The only assumption is that at the time Kostenko wrote his book, he had a large array of information on shell tests.
Quote: rytik32
Regarding the ricochet/destruction angle in your screenshot, I can tell that it probably meant thick armor.
No, it's thin - here's a full screenshot
Quote: rytik32
I have attached a screenshot below.
Thank you, but you have already attached it. I am asking about what the N2 coefficient is, it is not clear from the table
Quote: rytik32
The tip can also have an effect.
They both have a tip, and the tips are similar in design to the 1911 and 1928 shells. But the Tsushima shell has no tip, that is, if we consider that the tip has an
effect, we must admit that the Tsushima shell has less armor penetration at the same angle, which again is a minus for it.
Quote: rytik32
Armor penetration in normal mode is a completely different parameter.
So I’m talking not about hits at the normal, but about hits at an angle.
Quote: rytik32
Why? For example, when testing Makarov caps, shots were fired at an angle of 15, 20, 25 degrees from the normal.
Because you are now talking about testing Makarov caps, not shells.
No one designed or tested AP shells of the late 19th and early 20th century with a deviation from the normal. This requirement first appeared after the Russo-Japanese War, when they began testing AP shells by firing with a deviation of 15 degrees from the normal. Shells of 1911 were created not only to hit at the normal, but also with a deviation from the normal.
Quote: rytik32
In any case, it would be better to have a diagram for the 1894 projectile, but there isn't one.
I agree; only approximation shows that the performance of the 1894 shells will be worse than that of the 1911. Well, for lack of stamped paper, we write on plain (i.e., we make do with what we have).
Good afternoon.
No one designed or tested BB shells of the late 19th century - early 20th century when they were subject to deviation from the normal.
Dear Andrey, here we can say that you are both right and wrong. Such works were carried out, but they concerned the tip of an armor-piercing projectile,
experiments showed that the most "universal" is a "blunt" tip. It "worked" well both normally and when hitting at a certain angle, but it required special
hardening and, as a result, strength. In essence, this is a "poinçon", one of the main, durable parts of a stamping tool. But at that time, "caps" were already
actively introduced, which were much cheaper and easier to manufacture. Therefore, this development did not become widespread.
1. Quote: 27091965i
Dear Andrey, here we can put it this way: you are both right and wrong.
Dear Igor, when I write
Quote: Andrey from Chelyabinsk
No one designed or tested BB shells of the late 19th century - early 20th century when they were subject to deviation from the normal.
I am writing this in relation to our shells, that is, I mean not in the world, but in the Russian Empire. And in our country, capless BBs were not tested and
were not designed for strikes with a deviation from the normal - the designers simply were not given such a task
Quote: Andrey from Chelyabinsk
I am writing this in relation to our shells, that is, I mean not in the world, but in the Russian Empire.
That's why it's written:
Dear Andrey, here we can put it this way: you are both right and wrong.
I reread the deck armor tests of the Retvizan and Varyag. I was surprised. Why test at an angle of 75 degrees from the normal??? After all, an angle of 50-60
degrees is more relevant. Did they know in advance that the shell would penetrate at such an angle?
1. Quote: rytik32
I reread the tests of the deck armor of the Retvizan and Varyag. I was surprised. Why test at an angle of 75 degrees from the normal??? After all, an angle of
50-60 degrees is more relevant.
This is if you look at the bevels. But what if you look at the horizontal section?
Speed 1640 is a distance of 17 cables. But the angle of incidence is 2 degrees 36 minutes.
And we will get an angle of incidence of 15 degrees only with 40 cables.
2. Quote: rytik32
Speed 1640 is a distance of 17 cables. But the angle of incidence is 2 degrees 36 minutes.
They could have taken it with a reserve. In addition, it is possible that the shipbuilders did not consider the bevel to be sufficient protection when firing
along the beam. The resulting 75 degrees will be by the bevel if the ship is turned 60 degrees to the firing gun, provided that the bevel angle is 30 degrees
(I measured it now, but not according to the final, but according to the preliminary drawing)
I'll write something off topic.
The issue of the 12-inch armor-piercing shells can be closed. There was smokeless powder and tubes from 1894. Confirmations were found from both 1895-96 and 1903.
4. Quote: rytik32
There was smokeless powder and tubes from 1894. Evidence was found from both 1895-96 and 1903.
Fun... Well, then the question of the capabilities of shells with pyroxylin filling moves into the realm of theory.
Two questions - what kind of confirmation and what were the 10-inch AP shells equipped with?
Confirmations. MTK affairs, discussion of what to equip shells with in 1895-96
1903: supply reports with a financial bias. A tube from 1894 cost 18 kopecks; a Brink tube, 4 rubles 66 kopecks.
10-dm BB - brink and pyroxylin
I also found in the archive the weights of all the moving elements of the Brink tube and the 1894 tube.
And an explanation of why the firing pin was flat. The sharp one sometimes pierced the primer and the flame thrust went in the wrong direction.
10-dm BB - brink and pyroxylin
Thank you, dear colleague. And what were the 6-inch guns loaded with? What about the common statement about charges with "humidity increased to 30%" for 2TOE?
The 6-inch shells, both high-explosive and AP, had Brink tubes and pyroxylin.
No one changed the humidity specifically for 2TOE. According to the technology, pyroxylin was placed in a bath with water until it was completely saturated.
Thus, the humidity depended only on the density. The looser it was, the more humid the pyroxylin was.
Quote: rytik32
A pipe from 1894 cost 18 kopecks. And Brinka 4 rubles 66 kopecks.
10-dm BB - brink and pyroxylin
Good afternoon.
Dear Alexey, yes, war is money.
The 6-inch shells, both high-explosive and AP, had Brink tubes and pyroxylin.
"The bet" is on medium-caliber artillery, if we consider the 6-inch armor-piercing shell.
when scaling, we get that a 180 mm projectile is both longer in relation to its caliber and thinner-walled in relation to its length. But at the same time it penetrates
thicker armor!
The only way this could happen is if the projectile is made of higher quality steel.
I specially brought both shells to the same diameter, see below
On the left is 180 mm 1928, on the right is 305 mm 1911
It is clear that the 1928 shell has thicker walls and starts further into the cavity. And the length is almost the same. So there is nothing surprising.
The 1894 projectile also had thicker walls and, most importantly, was significantly shorter than the 1911 projectile.
1. Good morning!
Alexey, very clear. I tried to gnaw on the granite of science in Berkalov on this account. It is not the simplest mathematical apparatus, I did not figure it out right
away, but some points are clear. The thing is that the strength of the projectile is affected not by the thickness of the walls, as such, but by the masses of the
parts of the projectile in the dangerous section. Simply put, when fired, for example, the body of the projectile experiences pressure, and the greatest pressure is
subjected to the bottom part, since the inertia of the entire projectile "presses" on it, and to a minimal extent - the sections closer to its tip. When hitting armor
- vice versa. Therefore, as far as I understand, it turns out like this - for strength calculations, the mass of the projectile behind the dangerous section is taken.
As a result, if I am not confused, it turns out that a thick wall is not always good, the balance of wall thickness and mass is important.
In this case, such mass is also compared not with the length of the projectile, but with the impact on square centimeters of area, that is, the dependence there is on
the caliber. And the most interesting thing is that Berkalov claims that the mathematical apparatus for calculating strength is extremely approximate, which is why
when designing projectiles in their geometry, one should focus on previously created projectiles that have passed firing tests:)))))))
Good afternoon.
The essence of this effect is that the projectile breaks from asymmetrical loads. And increasing the speed will not make the projectile walls stronger. If you take weaker armor, you can count
on penetrating a greater thickness.
Calculations are of great importance in determining the strength of armor, but I will give one example mentioned by N. Barnaby;
"The 8.75-inch thick plate withstood well the impact of a 9.2-inch shell at 1800 feet. (Japanese officers were given authority to increase the muzzle velocity to 1800 feet (548.6 m/s) per second on the third shot.) Such plates are well resistant to a 6-inch projectile with an initial velocity of 2000 feet (609.6 m/s), but when the velocity increases to 2400 feet (731.5 m/s), the 6-inch projectile penetrates the plate. In this case, the projectile is destroyed, as demonstrated by experimental shooting. It is necessary to take into account that armor-piercing projectiles manufactured by different factories may have different quality."
For a complete calculation, it is necessary to know many more characteristics.
Quote: 27091965i
For a full calculation it is necessary to know many more characteristics
I totally agree.
Special thanks for the example of the armor penetration of English shells. Domestic ones penetrated armor better. For 6-inch shells, penetration of a 254-mm Krupp plate with the shell remaining whole at the normal was mandatory. Otherwise, the entire batch was rejected. The best batches of 6-inch shells from the Putilov Plant pierced a 254-mm plate made by Krupp, using Krupp technology, with the shell remaining whole at an angle of 25 degrees from the normal.
Good morning.
Special thanks for the example of armor penetration of English shells.
Dear Alexey, I will add for the sake of completeness:
weight of the Holtzer armor-piercing projectile 380 lb (172.3 kg); velocity of the first two shots 1700 f/s (518 m/s); penetration into armor 76.2 mm;
third shot: projectile velocity 1800 f/s (548.6 m/s), penetration into armor 95 mm.
Domestic ones penetrated armor better. For 6-inch shells, penetration of a 254-mm Krupp plate as a whole at normal angle was mandatory.
N. Barnaby points out that the excessive enthusiasm for projectiles containing a large amount of explosives had a negative impact on the development of the armor-piercing projectile.
When the general "fascination" with this type of projectile passes and everyone again turns their attention to the armor-piercing projectile, it (the armor-piercing projectile) will
again become the main type of projectile in the ammunition set. Especially for large-caliber guns.
On 15.02.2013, the Alternative History website published an article about the overload of the battleship Mikasa two days before the battle. Among the 108 comments were Andrey's
comments. The article talked about the flagship Togo being overloaded so much that its main belt sank 43 cm below the waterline. Question: in what condition was the battleship entering the battle? In order to bring the upper edge of the main belt at least to the water level, it was necessary to unload the ship by 1075 tons. Approximately, but the order is the same. The other modern battleships of the Japanese squadron, Shikishima and Asahi, were in a similar position.
I'll say it briefly. The information about the terrible overload of the "Mikasa" is based on a free translation of a line from Jackson's report and further fantasies.
For supporters of this version, I can offer a small quest: describe in which compartments of the Mikasa the excess coal could have been placed, at least one placement option.
Quote: rytik32
During the shelling of Ochakov the following incident occurred.
One 254 mm shell
Colleague, you ask generously, but what was the distance?
My sclerosis tells me that "Rostislav" was pounding the cruiser, protected only by a deck, point-blank. But I'm not exactly sure...
My example is for the angle of the through penetration limit, which does not depend on the projectile speed. This is the angle at which the projectile does not have enough wall strength.
13 September 2024 11:11
Good afternoon.
Dear Andrey, as always, it is very interesting, but any calculation must be confirmed by field tests.
1. Good morning! The thing is that Goncharov's calculations are based on field tests. There is no doubt about his data. Another question is that the durability of the same coal is an assumption, but there is nothing to be done about it, and that assumption will not introduce a particularly noticeable error.
It is clear that we cannot authentically model Mikasa's compartments and subject them to fire with equally authentic shells, but we can build hypotheses based on extrapolation from other
firing ranges, and the probability of error here will not be excessively high.
Quote: Andrey from Chelyabinsk
The thing is that Goncharov's calculations are based on field tests. There is no doubt about his data. Another question is that the durability of the same coal is an assumption, but there is nothing to be done about it, and that assumption will not introduce a particularly noticeable error.
I will give an example "from the English", it was mentioned several times during discussions of the stability of ships, armor belt and armor deck. The English captured a smugglers' ship,
it was decided to "sink it" with artillery fire. The weather was moderate, the ship's list reached 5-10 degrees. They tried to hit the waterline, nothing worked, the shells that hit the
water ricocheted and hit above the waterline, the shells that did not hit the water also failed to hit the waterline. As a result, the ship was sunk using demolition charges. That is, a
slight rough sea does not allow an accurate shot to be made and will affect the angle of the shell's impact. This will affect the shell's capabilities in defeating armor. N. Barnaby, E.
Reed, W. White, who participated in these discussions, agreed with this.
The time when these issues were considered also matters. As we know, "everything changes over time."
1. All this is true, but let's not mix two different questions into one. Now we are discussing the consequences of hitting the above-water part of the citadel, and we know from the
practice of the Russo-Japanese War that such hits are possible. This is one question. According to my calculations, at a distance of 18 cables, the Mikasa citadel in the MO and KO
area could have been hit. But the probability of hitting the Mikasa citadel in the MO and KO area when firing at it from 18 cables is a completely different question, which I will
consider in the next article.
Quote: Andrey from Chelyabinsk
According to my calculations, at a distance of 18 cables, the Mikasa citadel in the MO and KO area could have been hit.
Andrey, I am not criticizing your calculations, from a theoretical point of view, they are correct. I am only writing that practice differs from theory. In a theoretical
calculation, it is impossible to take into account all the conditions that can affect the capabilities of an armor-piercing projectile.
But the probability of hitting the Mikasa citadel in the MO and KO area when firing at it from 18 cables is a completely different question, which I will consider in the next
I think this will be an interesting topic for discussion, given what you are planning in your next article:
Regardless of whether my thesis about 114.3 mm bevels is correct or not, in the course of working on this article I came to conclusions about the armor systems of squadron battleships of the Russo-Japanese War that are very surprising and very different from the generally accepted views. I will share them in the next article, which I am currently working on.
This topic contains many interesting points that have not been published in Russia.
Quote: Andrey from Chelyabinsk
It was a shock to me when I saw how the armor penetration of 6-inch shells "walks" (I cited this in one of my articles). Probabilities, solid probabilities.
I think this is not surprising, I think I wrote that during armor testing they began to demand, to use modern language, quality certificates for the shells used for
testing. Since repeated tests revealed differences in results. I don't remember the year of introduction exactly, I need to look. But I can point out the country where it
began, Austria-Hungary. The "stukka" armor is expensive, and the corruption in those years was no worse than today.
So, such "walking" was not only in theoretical calculations and it caused surprise not only for you. There are critical statements of various famous shipbuilders about the
quality of armor-piercing shells and armor testing.
Good afternoon, dear Andrey!
Further, the key question appears to be the actual weight of explosives required by the Russian projectile of those years to inflict confident destruction on vital parts of
Japanese battleships.
A number of successful hits with armor penetration did not lead to the destruction of a single armored Japanese ship in the artillery battles of that war...
Andrey, on 15.02.2013, the website "alternative history" published an article about the overload of the battleship "Mikasa" on the eve of the Battle of Tsushima. There were 108
comments to this article. Including yours. The commentators had no objections to the essence of the article. The main thing in that article: the overload of the flagship Togo was
so significant that the upper edge of the main belt was 43 cm below the water level. Accordingly, the waterline protection is the upper belt, which covered less than half the
length of the ship and was 152 mm thick. "Mikasa" was in this state two days before the battle. Was the ship able to get rid of the overload by the beginning of the battle? A
decrease in displacement by 25 tons raised the upper edge of the main belt by one centimeter.
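Taking the two figures in this post at face value (25 tons of displacement per centimeter of belt height; belt edge 43 cm under water), the quoted 1075-ton unloading follows directly:

```typescript
// Overload arithmetic from the comment above, checked numerically.
// Both input figures are the commenter's, not independently verified.
const tonsPerCm = 25;            // tons of displacement per cm of belt height
const beltBelowWaterCm = 43;     // how far the main belt's upper edge sat under water
console.log(tonsPerCm * beltBelowWaterCm); // 1075 tons to bring the belt edge to the water level
```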
Quote: 27091965i
They tried to hit the waterline area, but it didn't work. The shells that hit the water ricocheted and hit above the waterline, and the shells that didn't hit the water also failed to
hit the waterline. As a result, the ship was sunk using explosive charges.
Just like ours in VOK
Good afternoon.
Just like ours in VOK
When I read this example, this is the first thought that "came" into my head.
Explain the concept of Big O notation in algorithm - Asksia.ai
Answer from Sia
Posted 6 months ago
Big O Notation in Algorithm Complexity Analysis
Definition: Big O notation is a mathematical notation used to describe the upper bound of an algorithm's running time or space requirements in terms of the size of the input data (n). It provides a high-level understanding of the algorithm's efficiency and scalability.
Purpose: The main purpose of Big O notation is to classify algorithms according to how their run time or space requirements grow as the input size grows.
Usage: When comparing algorithms, Big O notation can help determine which algorithm is more efficient for large input sizes. It abstracts away constants and lower-order terms to focus on the main factor that affects the growth rate.
Examples: Common Big O notations include $O(1)$ for constant time, $O(\log n)$ for logarithmic time, $O(n)$ for linear time, $O(n\log n)$ for linearithmic time, $O(n^2)$ for quadratic time, etc.
Limitations: Big O notation does not give an exact number of operations; it only describes the growth rate. It also ignores constants and lower-order terms, which can be significant for small input sizes or specific use cases.
Big O notation is a fundamental concept in computer science for analyzing and conveying the efficiency of algorithms. It helps in understanding how the time or space requirements of an algorithm will
scale with the size of the input.
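The growth-rate idea can be made concrete by counting steps. The sketch below uses helper functions of our own (not from any library): a linear scan takes on the order of n steps, while binary search on sorted data takes on the order of log2(n).

```typescript
// Counting steps to contrast O(n) and O(log n) growth.

function linearSearchSteps(arr: number[], target: number): number {
  let steps = 0;
  for (const x of arr) {
    steps++;                 // one step per element examined
    if (x === target) break;
  }
  return steps;
}

function binarySearchSteps(arr: number[], target: number): number {
  let lo = 0, hi = arr.length - 1, steps = 0;
  while (lo <= hi) {
    steps++;                 // one step per halving of the range
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) break;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return steps;
}

const data = Array.from({ length: 1024 }, (_, i) => i); // sorted 0..1023
console.log(linearSearchSteps(data, 1023)); // 1024 — worst case grows linearly with n
console.log(binarySearchSteps(data, 1023)); // 11 — on the order of log2(1024) = 10
```

Doubling the array size doubles the linear count but adds only one step to the binary count, which is exactly what the O(n) vs O(log n) classification predicts.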
Merge Sort
Merge sort is an efficient sorting algorithm based on the merge operation. It is a very typical application of Divide and Conquer: a completely ordered sequence is obtained by combining ordered subsequences. That is, each subsequence is sorted first, and then the sorted segments are merged together. Merging two ordered lists into one ordered list is called a 2-way merge.
Algorithm description
• Divide the input sequence of length n into two subsequences of length n/2;
• Sort each of the two subsequences recursively;
• Merge the two sorted subsequences into one final sorted sequence.
GIF presentation
Code demo
const combine = (left: Array<number>, right: Array<number>) => {
  const list: Array<number> = [];
  // Take the smaller head element of the two sorted lists each time
  while (left.length > 0 && right.length > 0) {
    if (left[0] <= right[0]) {
      list.push(left.shift()!);
    } else {
      list.push(right.shift()!);
    }
  }
  // Append whatever remains in either list
  while (left.length) {
    list.push(left.shift()!);
  }
  while (right.length) {
    list.push(right.shift()!);
  }
  return list;
};

/**
 * @description: Merge sort
 * @param {Array} list
 * @return {Array}
 */
const merge = (list: Array<number>): Array<number> => {
  const { length } = list;
  if (length <= 1) {
    return list;
  }
  const middle = length >> 1;
  const left = list.slice(0, middle);
  const right = list.slice(middle);
  return combine(merge(left), merge(right));
};
Algorithm analysis
Merge sort is a stable sorting method. Like selection sort, merge sort's running time is independent of the input data, but it performs much better than selection sort because its time complexity is always O(n log n).
The trade-off is extra memory space.
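As a self-contained cross-check of the algorithm described above (using a compact slice-based merge rather than the article's shift-based one), here is a sketch with a usage example:

```typescript
// Self-contained sketch of 2-way merge sort (same divide-and-conquer idea as above).
function mergeSort(list: number[]): number[] {
  if (list.length <= 1) return list;
  const mid = list.length >> 1;
  const left = mergeSort(list.slice(0, mid));
  const right = mergeSort(list.slice(mid));
  // Merge the two sorted halves
  const out: number[] = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 3, 8, 1, 9, 2])); // [1, 2, 3, 5, 8, 9]
```

The `<=` comparison when heads are equal is what makes the sort stable: equal elements keep their original relative order.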
Maths Worksheet For Class 3 Multiplication
Reading, writing, grammar rules, advanced reading comprehension, and more all sound easy compared with what math brings. All worksheets are printable PDF files.
Multiplication Worksheets Grade 3 Pdf Multiplication Worksheets Fun Math Worksheets Math Sheets
All worksheets are pdf documents and can be printed.
Maths worksheet for class 3 multiplication. No login or account is needed. Printable 3rd grade multiplication worksheets and chart: 3rd grade comes with a new set of challenges in different areas to stimulate learning in kids. Our online store will help you with that.
There are 17 apples in each basket. Since we started in Russia in 2012, Maths School has been the preferred destination for toys, essentials, and educational products for babies and children. Free printable PDF to download.
Missing factor questions are also included. There are 544 pots. There are 37 baskets.
Students should be reasonably proficient at multiplication in columns before attempting the more difficult problems. Worksheets are divided into simple multiplication, multiples of ten, and multiplication in columns. Multiplication word problems for grade 3 students.
Multiplication worksheets for grades 2-6. This is a comprehensive collection of math worksheets for grade 3, organized by topics such as addition, subtraction, mental math, regrouping, place value, multiplication, division, clock, money, measuring, and geometry. Exercises also include multiplying by whole tens and whole hundreds, as well as some column-form multiplication.
Our third grade math worksheets continue earlier numeracy concepts and introduce division, decimals, Roman numerals, calendars, and new concepts in measurement and geometry. Multiplication worksheets for grade 3 make an unlimited supply of worksheets for grade 3 multiplication topics, including skip counting, multiplication tables, and missing factors. Free math worksheets from K5 Learning.
Choose your grade 3 topic. Worksheets: math, grade 3. Our word problem worksheets review skills in real-world scenarios.
Significant emphasis is given to mental multiplication exercises. Our grade 3 multiplication worksheets emphasize the meaning of multiplication, basic multiplication, and the multiplication tables. How many flowers are there in all?
They are randomly generated, printable from your browser, and include the answer key. Each worksheet has a number of word problems and an answer sheet. From the simplest multiplication facts to multiplying large numbers in columns.
Class 3 maths multiplication worksheets: sample CBSE class 3 maths multiplication worksheet questions. Each pot has 32 flowers in it. Free grade 3 math worksheets.
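The two sample word problems scattered through this page (37 baskets with 17 apples each; 544 pots with 32 flowers each) each reduce to a single multiplication; a quick check:

```typescript
// Worksheet sample problems solved as single multiplications.
const apples = 37 * 17;   // 37 baskets, 17 apples in each
const flowers = 544 * 32; // 544 pots, 32 flowers in each
console.log(apples);  // 629
console.log(flowers); // 17408
```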
Sample grade 3 multiplication worksheet. These grade 3 multiplication word problem worksheets cover simple multiplication, multiplication by multiples of 10, and multiplication in columns, as well as some mixed multiplication and division. Come visit us to enjoy hassle-free shopping 24/7 and check out the latest.
The worksheets can be made in HTML or PDF format; both are easy to print. Below you will find the various worksheet types, both in HTML and PDF format. Grade 3 multiplication worksheets.
Multiplication Worksheets For Grade 3 Free Math Worksheets 4th Grade Math Worksheets 7th Grade Math Worksheets
Color By 3 Digit Multiplication Worksheets Worksheets For Grade 3 Free Math Worksheets Math Worksheets
Multiplication Math Facts Tables To 10×10 2 Mathematics Worksheets Multiplication Facts Worksheets Math Multiplication Worksheets
Multiplication Sheet 4th Grade 4th Grade Math Worksheets Printable Multiplication Worksheets Free Printable Math Worksheets
Math Worksheets Dynamically Created Math Worksheets Multiplication Worksheets Math Worksheets Multiplication Problems
5 Free Math Worksheets Third Grade 3 Multiplication Multiplication Table 7 8 0101b65e8ad1eb4c In 2020 Free Math Worksheets Math Worksheet 2nd Grade Math Worksheets
Multiple Digit Multiplication Worksheets Math Worksheets Math Multiplication Worksheets Multiplication Worksheets
Multiplication Worksheets Multiply Numbers By 1 To 3 Math Multiplication Worksheets Year 4 Maths Worksheets Multiplication Worksheets
Grade 3 Multiplication Worksheet Multiplication Tables 6 To 9 Third Grade Math Worksheets Division Worksheets 3rd Grade Math Worksheets
Two Minute Multiplication Worksheet Education Com Math Drills 3rd Grade Math Worksheets Math Worksheets
Multiplying By Facts 3 4 And 6 Other Factor 1 To 12 All Math Fact Worksheets 4th Grade Multiplication Worksheets 4th Grade Math Worksheets
Multiplication Practice Worksheets Grade 3 Free Math Worksheets Free Printable Math Worksheets Printable Math Worksheets
Multiplication Worksheets Grade 3 Coloring Math Multiplication Worksheets Multiplication Worksheets Math Fact Worksheets
Single Multiplication Worksheets For Students Math Fact Worksheets Free Printable Math Worksheets Fun Math Worksheets
The Multiplying 2 Digit By 1 Digit Numbers Large Print A Math Worksheet From The Long Multiplication Worksheets Free Math Worksheets Touch Math Worksheets
Math Worksheets 3rd Grade Multiplication 2 3 4 5 10 Times Tables 3 4th Grade Math Worksheets Free Math Worksheets Math Multiplication Worksheets
Multiplication 3 Minute Drill V 10 Math Worksheets With Etsy Multiplication Worksheets Math Worksheets 4th Grade Math Worksheets
Two Minute Multiplication Worksheet Education Com 3rd Grade Math Worksheets Math Drills Math Worksheets
Grade 3 Math Worksheets Vertical Multiplication 3rd Grade Math Worksheets 3rd Grade Math Math Worksheets | {"url":"https://thekidsworksheet.com/maths-worksheet-for-class-3-multiplication/","timestamp":"2024-11-14T06:51:05Z","content_type":"text/html","content_length":"137850","record_id":"<urn:uuid:5c09abdc-a6e3-407a-bbf8-a9d4f0f1ce09>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00377.warc.gz"} |
Daily Sudoku Answer
Share link – www.brainbashers.com/s289068
R2C2 can only be <1>
R5C2 can only be <7>
R6C3 can only be <1>
R8C2 can only be <6>
R8C8 can only be <3>
R4C3 can only be <2>
R8C5 can only be <2>
R5C1 can only be <8>
R3C1 is the only square in row 3 that can be <7>
R5C6 is the only square in row 5 that can be <2>
R6C6 is the only square in row 6 that can be <8>
R9C1 is the only square in row 9 that can be <2>
R9C3 is the only square in row 9 that can be <3>
Squares R7C1 and R7C3 in row 7 form a simple naked pair. These 2 squares both contain the 2 possibilities <45>. Since each of the squares must contain one of the possibilities, they can be eliminated
from the other squares in the row.
R7C7 - removing <4> from <4678> leaving <678>
R7C9 - removing <4> from <478> leaving <78>
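The naked-pair elimination applied above is mechanical enough to sketch in code. This is an illustrative Python sketch of the rule, not BrainBashers' actual solver; the candidate-set representation of a unit is an assumption:

```python
def eliminate_naked_pairs(unit):
    """Apply the naked-pair rule to one unit (row, column, or block).

    `unit` is a list of candidate sets, one per unsolved square. If two
    squares hold the same two candidates, those values are removed from
    every other square in the unit (in place).
    """
    n = len(unit)
    for i in range(n):
        if len(unit[i]) != 2:
            continue
        for j in range(i + 1, n):
            if unit[j] == unit[i]:
                for k in range(n):
                    if k not in (i, j):
                        unit[k] -= unit[i]
    return unit

# Row 7 above: R7C1=<45> and R7C3=<45> form the pair, so <4> is removed
# from R7C7=<4678> (leaving <678>) and from R7C9=<478> (leaving <78>).
row7 = [{4, 5}, {4, 5}, {4, 6, 7, 8}, {4, 7, 8}]
eliminate_naked_pairs(row7)
```

The same loop, run over columns and blocks instead of rows, covers the other naked-pair steps in the walkthrough.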
Squares R6C5 and R9C5 in column 5 form a simple naked pair. These 2 squares both contain the 2 possibilities <67>. Since each of the squares must contain one of the possibilities, they can be
eliminated from the other squares in the column.
R4C5 - removing <7> from <1479> leaving <149>
Intersection of column 9 with block 9. The value <7> only appears in one or more of squares R7C9, R8C9 and R9C9 of column 9. These squares are the ones that intersect with block 9. Thus, the other
(non-intersecting) squares of block 9 cannot contain this value.
R7C7 - removing <7> from <678> leaving <68>
R9C7 - removing <7> from <467> leaving <46>
Squares R2C5 and R2C8 in row 2 and R5C5 and R5C8 in row 5 form a Simple X-Wing pattern on possibility <9>. All other instances of this possibility in columns 5 and 8 can be removed.
R1C5 - removing <9> from <1459> leaving <145>
R4C5 - removing <9> from <149> leaving <14>
Squares R4C4 and R4C5 in row 4 form a simple naked pair. These 2 squares both contain the 2 possibilities <14>. Since each of the squares must contain one of the possibilities, they can be eliminated
from the other squares in the row.
R4C6 - removing <1> from <179> leaving <79>
R4C7 - removing <4> from <479> leaving <79>
Squares R4C4 and R4C5 in block 5 form a simple naked pair. These 2 squares both contain the 2 possibilities <14>. Since each of the squares must contain one of the possibilities, they can be
eliminated from the other squares in the block.
R5C4 - removing <4> from <345> leaving <35>
R5C5 - removing <4> from <459> leaving <59>
Squares R7C1, R7C3, R1C1 and R1C3 form a Type-1 Unique Rectangle on <45>.
R1C3 - removing <45> from <459> leaving <9>
Squares R1C7<48>, R7C7<68> and R9C7<46> in column 7 form a comprehensive naked triplet. These 3 squares can only contain the 3 possibilities <468>. Since each of the squares must contain one of the
possibilities, they can be eliminated from the other squares in the column.
R3C7 - removing <4> from <349> leaving <39>
Squares R2C5 (XY), R4C5 (XZ) and R3C6 (YZ) form an XY-Wing pattern on <1>. All squares that are buddies of both the XZ and YZ squares cannot be <1>.
R1C5 - removing <1> from <145> leaving <45>
R1C9 is the only square in row 1 that can be <1>
R1C7 is the only square in row 1 that can be <8>
R7C7 can only be <6>
R7C4 can only be <1>
R9C7 can only be <4>
R9C9 can only be <7>
R9C5 can only be <6>
R7C9 can only be <8>
R7C6 can only be <7>
R4C4 can only be <4>
R4C6 can only be <9>
R6C5 can only be <7>
R4C5 can only be <1>
R3C4 can only be <5>
R4C7 can only be <7>
R3C6 can only be <1>
R5C5 can only be <5>
R6C7 can only be <3>
R5C4 can only be <3>
R1C5 can only be <4>
R6C4 can only be <6>
R3C7 can only be <9>
R5C9 can only be <4>
R1C1 can only be <5>
R2C5 can only be <9>
R2C8 can only be <4>
R5C8 can only be <9>
R3C9 can only be <3>
R3C3 can only be <4>
R7C1 can only be <4>
R7C3 can only be <5>
Today's Sudoku Puzzles
All daily items change at midnight GMT – set your local time zone.
Note: BrainBashers has a Dark Mode option – I recommend not using your browser's dark mode or extensions for BrainBashers | {"url":"https://www.brainbashers.com/sudokuanswer.asp?date=0228&diff=6","timestamp":"2024-11-12T10:03:08Z","content_type":"text/html","content_length":"44144","record_id":"<urn:uuid:fa96175c-8d2e-4717-8c40-e0e638c271f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00374.warc.gz"} |
Lucene's FuzzyQuery is 100 times faster in 4.0
There are many exciting improvements in Lucene's eventual 4.0 (trunk) release, but the awesome speedup to FuzzyQuery really stands out, not only for its incredible gains but also because of the amazing behind-the-scenes story of how it all came to be.
FuzzyQuery matches terms "close" to a specified base term: you specify an allowed maximum edit distance, and any terms within that edit distance from the base term (and, then, the docs containing those terms) are matched.
The QueryParser syntax is term~N, where N is the maximum allowed number of edits (for older releases N was a confusing float between 0 and 1, which translates to an equivalent max edit distance through a tricky formula).
FuzzyQuery is great for matching proper names: I can search for mcandless~1 and it will match mccandless (insert c) and a great many other "close" terms. With max edit distance 2 you can have up to 2 insertions, deletions or substitutions. The score for each match is based on the edit distance of that term; so an exact match is scored highest; edit distance 1, lower; etc.
Prior to 4.0, FuzzyQuery took the simple yet horribly costly brute force approach: it visits every single unique term in the index, computes the edit distance for it, and accepts the term (and its documents) if the edit distance is low enough.
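That brute-force loop is easy to sketch outside of Lucene. Here is an illustrative Python version (Python just for brevity; Lucene itself is Java, and the term list below is made up for illustration): a textbook dynamic-programming edit distance is computed for every single term, which is exactly the per-term cost that made the old approach so slow on indexes with many unique terms.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute (or match)
        prev = cur
    return prev[-1]

def brute_force_fuzzy(base, terms, max_edits):
    """What pre-4.0 FuzzyQuery effectively did: test every unique term."""
    return [t for t in terms if levenshtein(base, t) <= max_edits]

# Hypothetical term dictionary, for illustration only.
terms = ["mccandless", "mcandles", "candles", "homeless", "mcandless"]
print(brute_force_fuzzy("mcandless", terms, 2))
# -> ['mccandless', 'mcandles', 'candles', 'mcandless']
```

With N unique terms of length L and a base term of length M, every query costs O(N * L * M) regardless of how few terms actually match.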
The journey begins
The long journey began when Robert Muir had the idea of pre-building a Levenshtein Automaton, a deterministic automaton (DFA) that accepts only the terms within edit distance N. Doing this, up front, and then intersecting that automaton with the terms in the index, should give a massive speedup, he reasoned.
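To make the idea concrete, the non-deterministic form of this automaton is easy to simulate: a state is a pair (position in the base term, edits used so far). The Python sketch below is not Lucene's code — the whole point of the Schulz/Mihov paper is to determinize this automaton ahead of time so that no per-term state-set simulation is needed — but it shows what the automaton accepts:

```python
def lev_accepts(base: str, max_edits: int, term: str) -> bool:
    """True iff `term` is within `max_edits` edits of `base`, by
    simulating the Levenshtein NFA. A state (i, e) means: matched the
    first i chars of `base` using e edits so far."""

    def close(states):
        # Epsilon transitions: deleting a char of `base` costs one edit.
        stack, seen = list(states), set(states)
        while stack:
            i, e = stack.pop()
            if i < len(base) and e < max_edits and (i + 1, e + 1) not in seen:
                seen.add((i + 1, e + 1))
                stack.append((i + 1, e + 1))
        return seen

    states = close({(0, 0)})
    for ch in term:
        nxt = set()
        for i, e in states:
            if i < len(base) and base[i] == ch:
                nxt.add((i + 1, e))              # exact match
            if e < max_edits:
                nxt.add((i, e + 1))              # insertion of ch
                if i < len(base):
                    nxt.add((i + 1, e + 1))      # substitution
        states = close(nxt)
        if not states:
            return False                         # rejected early
    return any(i == len(base) for i, _ in states)

print(lev_accepts("foobar", 1, "fobar"))   # True: one deletion away
print(lev_accepts("foobar", 1, "fobaa"))   # False: two edits away
```

The parametric tables discussed later in the post encode the determinized version of exactly this state machine, so accepting or rejecting a term needs no set bookkeeping at query time.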
At first he built a simple prototype, explicitly unioning the separate DFAs that allow for up to N insertions, deletions and substitutions. But, unfortunately, just building that DFA (let alone then intersecting it with the terms in the index), was too slow.
Fortunately, after some Googling, he discovered a paper by Klaus Schulz and Stoyan Mihov (now famous among the Lucene/Solr committers!) detailing an efficient algorithm for building the Levenshtein Automaton from a given base term and max edit distance.
All he had to do was code it up! It's just software, after all. Somehow, he roped Mark Miller, another Lucene/Solr committer, into helping him do this.
Unfortunately, the paper was nearly unintelligible! It's 67 pages, filled with all sorts of equations, Greek symbols, definitions, propositions, lemmas, proofs. It uses scary concepts like
Subsumption Triangles, along with beautiful yet still unintelligible diagrams. Really the paper may as well have been written in Latin.
Much coffee and beer was consumed, sometimes simultaneously. Many hours were spent on IRC, staying up all night, with Mark and Robert carrying on long conversations, which none of the rest of us
could understand, trying desperately to decode the paper and turn it into Java code. Weeks went by like this and they actually had made some good initial progress, managing to loosely crack the paper
to the point where they had a test implementation of the N=1 case, and it seemed to work. But generalizing that to the N=2 case was... daunting.
The breakthrough
Then, finally, a breakthrough! Robert found, after even more Googling, an existence proof, in an unexpected place: an open-source package, Moman, under the generous MIT license.
. The author,
Jean-Phillipe Barrette-LaPierre
, had somehow, incredibly, magically, quietly, implemented the algorithm from this paper. And this was apparently a random side project for him, unrelated to his day job. So now we knew it was
possible (and we all have deep admiration for Jean-Phillipe!).
We decided to simply re-use Moman's implementation to accomplish our goals. But, it turns out, its source code is all Python (my favorite programming language)! And, nearly as hairy as the paper itself. Nevertheless, we pushed on.
Not really understanding the Python code, nor the paper, we desperately tried to write our own Python code to tap into the various functions embedded in Moman's code, to auto-generate Java code containing the necessary tables for each max edit distance case (N=1, N=2, etc.). We had to guess what each Python function did, by its name, trying to roughly match this up to the spooky terminology in the paper.
The result was createLevAutomata.py: it auto-generates crazy looking Java code (see Lev2ParametricDescription.java, and scroll to the cryptic packed tables at the bottom), which in turn is used by further Java code to create the Levenshtein automaton per-query. We only generate the N=1 and N=2 cases (the N=3 and higher cases aren't really practical, at least not yet).
The last bug...
Realize, now, what a crazy position we were in. We wrote our own scary Python code, tapping into various functions in the Moman package, to auto-generate unreadable Java code with big tables of numbers, which is then used to generate Levenshtein automata from the base term and N. We went through many iterations with this crazy chain of Python and Java code that we barely understood, slowly iterating to get the bugs out.
After fixing many problems, we still had one persistent bug which we just couldn't understand, let alone fix. We struggled for several days, assuming the bug was in our crazy Python/Java chain.
Finally, we considered the possibility that the bug was in Moman, and indeed Robert managed to reduce the problem to a tiny Python-only case showing where Moman failed to match the right terms.
Robert sent this example to Jean-Phillipe, who quickly confirmed the bug and posted a patch the next day. We applied his patch and suddenly everything was working perfectly!
Fortunately, while this fast FuzzyQuery was unbelievably hairy to implement, testing it well is relatively easy, since we can validate it against the brute-force enumeration of the old implementation. We have several tests verifying the different layers executed by the full FuzzyQuery. The tests are exhaustive in that they test all structurally different cases possible in the Levenshtein construction, using terms over a binary alphabet (only two distinct characters).
Beyond just solving this nearly impossible task of efficiently compiling a term to a Levenshtein Automaton, we had many other parts to fill in. For example, Robert separately created a general AutomatonQuery, re-using infrastructure from the open-source Brics automaton package, to enable fast intersection of an automaton against all terms and documents in the index. This query is now used to handle WildcardQuery, RegexpQuery, and FuzzyQuery. It's also useful for custom cases, too; for example it's used by Solr to reverse wildcard queries.
These slides from Robert describe AutomatonQuery, and its fun possible use cases, in more detail.
Separately, we had an impedance mismatch: these automatons speak full unicode (UTF32) characters, yet Lucene's terms are stored in UTF8 bytes, so we had to create a UTF32 -> UTF8 automaton converter, which by itself was also very hairy! That converter translates any UTF32 automaton into an equivalent UTF8 Levenshtein automaton, which can be directly intersected against the terms in the index.
So, today, when you run a FuzzyQuery in 4.0, it efficiently seeks and scans only those regions of the term space which may have matches, guided by the Levenshtein automaton. This, coupled with ongoing performance improvements to seeking and scanning terms, as well as other major improvements like performing MultiTermQuery rewrites per-segment, has given us the astounding overall gains in FuzzyQuery.
Thanks to these enormous performance improvements, Robert has created an entirely new automaton spell checker that uses this same algorithm to find candidate terms for respelling. This is just like FuzzyQuery, except it doesn't visit the matching documents. This is a big improvement over the existing spellchecker, as it does not require that a separate spellchecker index be maintained.
This whole exciting experience is a great example of why open-source development works so well. Here we have diverse committers from Lucene/Solr, bringing together their various unusual strengths
(automatons, Unicode, Python, etc.) to bear on an insanely hard challenge, leveraging other potent open-source packages including Moman and Brics, iterating with the authors of these packages to
resolve bugs. No single person involved in this really understands all of the parts; it's truly a team effort.
And now you know what's going on under the hood when you see incredible speedups with FuzzyQuery in 4.0!
[For the not-faint-of-heart, you can browse the relevant issues to see parts of this story unfolding through Jira]
65 comments:
1. Nice post! Interesting to see this story put to print.
I'd only make one correction to the history line:
We did finally loosely crack the paper during those late night sessions - allowing Robert to do a test impl for n=1. There were still some leaps to be made to generalize those steps for n > 1.
Thankfully you guys came up with the brilliant idea of using Moman for java code generation instead...
2. This comment has been removed by the author.
3. After reading this post I am still not understanding any of this code :-) But it works!
Really fantastic! I was only trying to help out with beer for Robert, plus some tiny improvements to speed up sorting in Automaton and to generics-police the code.
It was fun to listen to the discussions between Mike and Robert at the party at Mike's house!
4. Hi Mark,
Thanks for the correction! I reworded that part...
5. Hi Mike:
Super awesome post!
Is the automaton built per segment? If so, does it handle merging matches from different segments?
6. Fantastic post Mike! It's really nice to hear about the behind-the-scenes story of how something in the research literature makes its way into Lucene and all the hard work you guys do to make it
happen. I'm really looking forward to 4.0!
Tom Burton-West
7. Hi John,
The Levenshtein automaton is actually built once up front, and then "intersected" with each segment's terms, segment by segment. We have a single PQ that's used to merge the terms from each
segment, then at the end we take the top terms from this PQ and create the real query (this is in TopTermsScoringBooleanQueryRewrite, in MultiTermQuery).
8. Thanks Tom! It was really quite crazy while this story was unfolding... it wasn't at all clear it was going to have a happy ending!
9. Great post, Mike!
You'll find Robert's presentation about Finite State Queries in Lucene at http://www.slideshare.net/otisg
10. Thanks Otis; I added a link to Robert's slides.
11. Great post! I'm glad that my code could help such a great project. I'm always happy to help.
12. Jean-Phillipe,
Thank you for creating Moman and open-sourcing it (with a friendly license) in the first place! Without that, I don't know how we would have made it through here...
13. What a great story!
14. You guys are awesome!
15. I introduced this post to Japanese people.
Thank you for writing great story about Lucene.
16. Great post. It reads like a thriller!
The way Python code was written to tap into Moman to auto-generate java code that is used by other java code does sound scary. Why not just run Moman on Jython?
Also can you briefly explain why N=3 is not practical?
17. Thanks nokuno!
18. Hi Andy,
Moman on Jython would work, but, Lucene is all Java today, so we wanted to keep that.
N=3 is possible, but it produces biggish tables (like a few hundred KB increase in Lucene's JAR, from the packed tables we store in the generated Java code). Further, since the space of terms
accepted by N=3 is so large, it's going to result in much more seeking/scanning to intersect with the terms dictionary, so it could be slowish. It'd be fairly easy to test this...
19. This is code and fix in perfection :-D
20. Is there some reason you didn't contact the authors of the paper with your questions? Seems like they would have been happy to help, or at least put you in touch with a graduate student who could
translate the paper for you.
21. It is scary! So basically you guys magically have what you need by translating someone's code without really understanding either the method or the original code?
22. Anonymous #1,
Well, we did really have a basic grasp of the paper, but translating that theory into actual Java code, in practice, was tricky.
23. Anonymous #2,
I over-dramatized things somewhat... we do have enough of an understanding to believe it's correct. Furthermore, the tests we created are strenuous, and are exhaustive in that we test the
different possible characteristic vectors, so if the paper is correct, the algorithm we implemented should be as well (famous last words I know...).
24. Are you allowed to simply copy rewrite the Python code into Java and contribute it to the Apache Software Foundation like this? Is this what you did?
If so, even though the Python code has an MIT License, the work is still copyrighted by the original author and how you have done this seems messy from an intellectual property point of view.
Generally speaking, you can not simply rewrite someone else's work even though it has an MIT license and contribute it to the Apache Software Foundation.
25. Anonymous,
That's a great question. Licensing and open-source are a very complex (and obviously important) topic. I am not a lawyer.... but, here's how the licensing/copyright worked in this case:
First, Moman's license is the MIT/X11 license, which Apache projects are free to incorporate in source and binary form (see http://www.apache.org/legal/3party.html) as this license does not
remove any rights from ASL2.
Second, the Moman package is being used only as a library by the code generator we (Lucene developers) wrote; none of Moman's sources are incorporated into Lucene (only the generated Java code as
a result of running the generator, and our Python code generator). In theory, Moman's license has no bearing (much like the license of a compiler doesn't apply to the binaries it produces).
But, finally, to be sure, we contacted Jean-Phillipe to confirm that he was the sole author of this package and that he was OK with our usage of it (he was), and we've also added Moman's copyright and license to Lucene's LICENSE.txt file.
26. Excellent work. A nice demonstration of how university research can help solve real world problems.
27. There's also a native Java implementation of the Mitankin paper.
28. Mike,
Could you help me to understand how pre-building a Levenshtein Automaton (for terms?) can avoid scanning all terms at run time?
Kind regards,
Youbin Peng
29. Pre-building the Levenshtein automaton changes the problem from "test every term" to a graph intersection problem, ie, we intersect the graph (Levenshtein automaton) with the terms in the terms
dictionary by using seek/next APIs already available in Lucene.
In fact, at some point we should add an intersect() method directly into Lucene's APIs, because some terms dict representations can potentially do this more efficiently than seek/next.
30. As I cannot fully understand your explanation above, I have the following questions, which might be stupid.
1. Suppose we have n terms in the terms dictionary, do we have one graph or n graphs?
2. Why can using the seek/next APIs skip some terms?
I do think that we have to calculate the intersection for all terms.
31. You have 1 graph, created from the fuzzy query's term plus edit distance N. This graph is the Levenshtein automaton.
You then intersect that graph with the terms in the index, by iteratively seeking to the next possible match. This seek is able to skip terms because chunks of the terms space cannot possibly match.
For example if your query is foobar~1 (ie, edit distance 1), and you are at terms starting with foba, you know the only possible match with that prefix is fobar, so you seek there, possibly skipping fobaa, fobap, fobaq, etc.
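A rough Python sketch of the seeking idea from this reply (this is not Lucene's TermsEnum API): for a tiny alphabet we can enumerate the automaton's language up front, then binary-search ("seek") into the sorted term dictionary for each candidate, instead of scanning every term.

```python
import bisect

def neighborhood(term, alphabet):
    """Every string within edit distance 1 of `term` over `alphabet` --
    a tiny stand-in for enumerating the Levenshtein automaton's language."""
    out = {term}
    for i in range(len(term) + 1):
        for c in alphabet:
            out.add(term[:i] + c + term[i:])      # insertions
    for i in range(len(term)):
        out.add(term[:i] + term[i + 1:])          # deletions
        for c in alphabet:
            out.add(term[:i] + c + term[i + 1:])  # substitutions
    return sorted(out)

def seek_intersect(sorted_terms, candidates):
    """For each candidate, seek (binary search) into the term dictionary;
    whole runs of terms such as fobaa, fobap, fobaq are never visited."""
    hits = []
    for cand in candidates:
        i = bisect.bisect_left(sorted_terms, cand)
        if i < len(sorted_terms) and sorted_terms[i] == cand:
            hits.append(cand)
    return hits

terms = sorted(["fobaa", "fobap", "fobaq", "fobar", "foobar", "food", "zebra"])
print(seek_intersect(terms, neighborhood("foobar", "abfor")))
# -> ['fobar', 'foobar']
```

Real Lucene works the other way around — it walks the term dictionary and asks the automaton for the next possible match — which also scales to languages far too large to enumerate.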
32. Great post! And would highly appreciate it if u can give some concrete benchmark results.
33. I recently came across your blog and have been reading along. I thought I would leave my first comment. I don't know what to say except that I have enjoyed the reading. Nice blog; I will keep visiting this blog very often.
34. Interesting, is it worth changing existing code over to FuzzyQuery? I have no big lag with the current one, but....
35. Great post Mike, really interesting insight into upcoming Lucene 4.0 Fuzzy search. Looking forward for 4.0 !
36. I'm really glad to see someone picked up the paper from Klaus Schulz and Stoyan Mihov.
When I first read it in 2007 or 2008 I didn't understand much more of it than its potential.
Thanks for your efforts coding it within Lucene - I would never be capable of doing so.
I'm really waiting to see multiple-token fuzzy matches in action!
37. Mathias Kunter, August 22, 2012 at 12:02 PM
I just stumbled upon this very interesting blog post and have a question: doesn't the complexity of generating the Levenshtein automaton, in terms of time and storage, depend on the alphabet used?
I mean, generating the automaton for the Latin alphabet with its 26 letters may be easy and efficient, but what about Unicode? Having an alphabet with hundreds of thousands of letters must complicate the situation dramatically, no?
38. Hi Mathias,
Lucene generates the Levenshtein automaton with full Unicode alphabet
and then we convert that to the equivalent automaton with UTF8 labels
(since terms are stored UTF8 byte[] in the index, by default). I
think this conversion is logically a composition of the FSA with an
FST (but it's not implemented "generically").
This means the edit distance is defined with Unicode characters. So
an edit distance of 2 means you can drop/insert/replace/transpose any 2 full Unicode
chars. This is a difference vs previous versions of Lucene which measure
edit distance in UTF16 code units.
I don't think the cost of building the Levenshtein automaton increases
much with increased alphabet size, as long as your automaton
representation can efficiently handle arcs that match a range of
characters (ours, poached from http://www.brics.dk/automaton, does).
39. > Lucene generates the Levenshtein automaton with full Unicode
Keep in mind Unicode is a living standard and the tables should be reviewed yearly.
40. Hi Anonymous,
That's fine: how the Unicode Consortium assigns Unicode characters won't affect FuzzyQuery's implementations. The tables we use to generate the Levenshtein automata are agnostic to the binding of
each character: they simply accept any int up to Character.MAX_CODE_POINT.
1. hello
41. I wonder how fast is 100 times faster? An example database and query with search times would be useful. I know it can be done in Java pretty fast, like this one:
It's quite fast now ... you can see the nightly benchmarks (http://people.apache.org/~mikemccand/lucenebench/ ) ... ~ 30-40 QPS on the full Wikipedia index.
We also now have a DirectSpellChecker that runs a FuzzyQuery minus visiting the matching documents. This is nice because it avoids requiring the "sidecar" spellcheck index that's necessary with
the old spellchecker.
43. Hi Mike,
It's really a nice post.
I have a confusion here regarding fuzzy query: since Solr 4 supports fuzzy searches using edit distance, which takes a parameter N with values 0, 1, or 2 (max), why are values like 0.4, 0.6 ... up to 1 still supported, while 1.5, 2.2 ... are not? How does that make sense? Is it just for backward compatibility, or is there something I am missing?
Thanks in advance.
1. Hi Anonymous,
I believe a value > 1 is supposed to be an integer edit distance, while a value <= 1 is allowed to be the legacy similarity measure (which under the hood is changed to an edit distance based
on the length of the query term). But maybe ask this question on java-user@lucene.apache.org to be sure!
2. Hi Mike,
Thanks for the reply.
I have asked the question on java-user@lucene.apache.org.
As per your reply, what I understand is: values between 0 and 1 are still allowed so as not to change the way Lucene is queried for fuzzy matching, and values like 1 or 2 are supported to provide an explicit parameter for fuzzy search (where we specify the number that signifies the edit distance between the source and target strings; a value between 0 and 1 does the same thing, but via some internal calculation).
Please correct my understanding if wrong.
3. Hi Anonymous,
I couldn't find your email to java-user ... what was the subject line?
Your understanding is exactly right!
44. Hello Mike,
I am using Solr 4.2.1.
My question is :
Is there a way I can combine fuzzy search with phonetic search? Actually, I want to search on fields like first_name, last_name and so on, to get records that may contain some spelling mistakes as well. (The spell suggester is not a fit for me, as I want the Solr documents in the output, not a list of words/strings.)
Fuzzy search alone is not a fit, since I can provide at most ~2 (edit distance) as the fuzziness factor in a query, and phonetic search alone will also not work, as there are some words whose DoubleMetaphone encoding completely changes with the change of a single character.
I also came to know that with fuzzy search all query-time analysis is bypassed, so I am unable to find a way to have both together.
One way that I just found is to have two fields, one analyzed with phonetic filters and one as text. Then I could query them as (firstname_text:hello~2 AND firstname_phonetic:hello).
If there is no such way to have both together, is the approach I have in mind correct or not?
Waiting for your reply.
Thanks in advance.
1. This is certainly possible with Lucene: just analyze the field with a phonetic analyzer, do the same at query time, and create a FuzzyQuery from that. But it sounds like you need to know how to do this with Solr? I'm not sure ... did you already email the solr-user@lucene.apache.org list?
2. Hey Michael,
first of all thanks for this awesome blog!
I'm currently facing the exact problem as described here. Now I want to analyze the input at query time but I'm not quite sure how to do so. Do I analyze a single String and return the
analyzed version?
Thanks in advance.
3. Hi Anonymous,
Maybe send an email to java-user@lucene.apache.org? Or, solr-user.
45. This comment has been removed by the author.
46. Michael,
Thanks for the blog -- I wanted to check my understanding of the nature of the improvement:
In the old fuzzy search, the system examined every 'plausible' candidate and computed the actual Levenshtein distance -- an expensive computation per candidate -- to decide whether it was within N. Whereas in the new one, it uses a new data structure (the automaton/table built specifically for the current query) to check whether every 'plausible' candidate is within a distance of N.
Is that a correct understanding?
1. Hi learningbyprogramming,
That's correct, except with the approach in 4.x, since we pre-compile the space of all terms within edit distance N of the target term into an automaton up front, visiting the matching terms
is a much, much faster process (just an intersection of that automaton with the tree structure of the terms in the terms dictionary).
2. Hi Michael,
Many thanks for your response. However I didnt understand what you mean by pre-compile since the target is a runtime query.
3. What I meant by pre-compile is, for each query, we build the automaton accepting all terms within edit distance N of the query, up front. After that it's a fast intersection with all terms.
4. Thanks for clarifying, Michael.
47. Hi,
I am working on OpenNLP with Solr. I have successfully applied the patch LUCENE-2899-x.patch to the latest Solr branch_4x code.
I designed some analyzers based on OpenNLP filters and tokenizers and indexed some documents on those fields.
Searching on an OpenNLP field is not consistent: I am not able to search properly on these OpenNLP-based fields defined in the Solr schema.xml.
Also, how do I use payloads for boosting documents?
Please help me with this.
1. Hi, you should ask on the Solr user's list (solr-user@lucene.apache.org).
48. Hi Michael, what we can do with FSA is really impressive :)
I was now thinking about applying it to an autocomplete feature.
Currently there are 2 different approaches for autocomplete:
1) Using the term enum and filtering based on a byte prefix on each instance of the term enum (which is a BytesRef)
2) Using the suggester (org.apache.solr.spelling.suggest.Suggester)
The second approach should be very similar to the SpellCheck FSA approach.
So is it faster to use the FST (2) or the byte-prefix filter (1)?
And from the point of view of memory?
1. I think suggesting via TermsEnum is typically slow; most of our prefix-based suggesters use FST since it's so efficient at doing a prefix lookup followed by "top N paths by score" search.
Even once you add fuzziness (LevN automaton) the FST based suggester is still quite fast; I forget the numbers but it's on the fuzzy suggester Jira issue ...
49. Thank you very much, I don't think numbers are important for me right now; it's nice to know that the FST one is faster!
50. This comment has been removed by the author.
1. Hi Manolito,
No, the algorithm from the massive paper directly constructs a deterministic automaton.
51. nice post | {"url":"https://blog.mikemccandless.com/2011/03/lucenes-fuzzyquery-is-100-times-faster.html","timestamp":"2024-11-12T16:13:03Z","content_type":"text/html","content_length":"197292","record_id":"<urn:uuid:c84c5f3f-3d14-4089-8814-d4745dfeb8c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00515.warc.gz"} |
Circular orbit
Circular orbit facts for kids
In astronomy, a circular orbit refers to an object (such as a planet or a star) which orbits around a central body in a fixed, circular motion. This motion follows Kepler's Laws. A circular orbit
occurs when the eccentricity of the orbit is equal to 0.
Objects with a circular orbit are uncommon. The Moon moves in an elliptical orbit around the Earth, and the planets move in an elliptical orbit around the Sun.
Other types of motion in astronomy include elliptical orbit, parabolic trajectory, and hyperbolic trajectory.
Images for kids
• A circular orbit is depicted in the top-left quadrant of this diagram, where the gravitational potential well of the central mass shows potential energy, and the kinetic energy of the orbital
speed is shown in red. The height of the kinetic energy remains constant throughout the constant speed circular orbit.
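The constant-speed property described in the caption can be checked with a short calculation; this sketch assumes Earth's standard gravitational parameter from reference tables and an arbitrary low-Earth-orbit radius:

```python
import math

GM_EARTH = 3.986e14  # m^3/s^2, Earth's standard gravitational parameter

def circular_orbit_speed(radius_m):
    """Speed of a circular orbit: gravity supplies exactly the
    centripetal force, so v = sqrt(GM / r)."""
    return math.sqrt(GM_EARTH / radius_m)

v = circular_orbit_speed(6.771e6)  # a low-Earth-orbit radius, in metres
print(v)  # roughly 7.7 km/s -- the same at every point of the orbit,
          # so the kinetic energy stays constant too
```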
See also
Year: 2012 Vol. 16 Num. Suppl. 1 - May - (222º)
DOI: 10.7162/S1809-977720120S1PO-053
DIAGNOSTIC Regina Helena Garcia Martins, Caio Bosquê Hidalgo, Eny Regina B. Neves, Elaine Lara Mendes Tavares, Thalita de Azevedo Fracalossi,
Tatiana Maria Gonçalves
Abstract: Dysphonia affects 50% of teachers and is related to their working conditions. Vocal nodules are prevalent phonotraumatic injuries. OBJECTIVES: To characterize the vocal symptoms, risk co-factors, and videolaryngoscopic findings in dysphonic teachers. METHODS: 50 dysphonic teachers seen between 2010 and 2011 at Unesp-FMB filled in a protocol and underwent videolaryngoscopy. RESULTS: 50 teachers (46 women; 4 men); age 20-39 (n=26), 40-59 (n=22), >60 (n=2); education level taught: elementary (n=8), secondary (n=27), elementary/secondary (n=24), early childhood (n=6), higher (n=1); weekly workload: <20 hours (n=6), 20-39 hours (n=23), >40 hours (n=21); time in the profession: <5 years (n=9), 5-10 (n=9), 10-20 (n=17), >20 (n=15); students per class: <20 (n=5), 20-40 (n=24), >40 (n=21); medical leaves: 1-3 (n=18), 4-6 (n=7), >6 (n=6), none (n=27); duration of dysphonia: <3 months (n=5), 3 months to 1 year (n=18), >1 year (n=27); frequency of symptoms: permanent (n=36), sporadic (n=14); evolution of symptoms: gradual (n=44), sudden (n=6); aggravating factors: vocal abuse (n=45), excessive workload (n=46), noise (n=27), pollutants (n=22), others (n=30); associated symptoms: nasal-sinusal (n=34), gastroesophageal (n=26), auditory (n=10), others (n=18). Symptoms: hoarseness (n=46), effort to speak (n=35), vocal fatigue (n=33), cough (n=30), others (n=90). Videolaryngoscopy: nodules (n=18), functional dysphonia (n=16), pachydermia (n=9), others (n=13). CONCLUSIONS: The questionnaires indicated a predominance of teachers aged 20 to 39 years who taught in elementary and/or secondary education with excessive working hours. The vocal symptoms were permanent and related to phonatory overload and to the classroom conditions. Vocal nodules and functional dysphonia were the main videolaryngoscopic diagnoses.
Give your answer in SI units and to three significant figures.
A stick which is 1.52 meters long is leaned against a wall at an angle. If the coefficient of static friction between the stick and the wall and floor is 0.411, determine the furthest distance from
the wall that the bottom of the stick may be placed without slipping.
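A worked solution sketch in Python, treating the stick as a uniform rod with the same friction coefficient at both the wall and the floor (the standard leaning-ladder equilibrium):

```python
import math

def max_base_distance(length, mu):
    """Furthest distance of the stick's base from the wall before slipping.
    At the limit, friction is fully mobilized at both contacts; force and
    torque balance for a uniform rod give
        tan(theta) = (1 - mu**2) / (2 * mu),
    where theta is the angle with the floor, so the base sits at
    L * cos(theta)."""
    theta = math.atan2(1 - mu**2, 2 * mu)
    return length * math.cos(theta)

d = max_base_distance(1.52, 0.411)
print(f"{d:.3g} m")  # about 1.07 m, to three significant figures
```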
The Best Guide to Help with Math Problem Solvers
What Does Help with a Math Problem Solver Mean?
How to solve math problems in your life: the next step is to read the whole problem again and note all of the relevant facts that could help you solve it. Think about the methods you used to solve similar problems, and you may find the answer. Every word problem may have a different structure, but a visual representation of the key facts makes it easier to work with. Even though Dr. Perelman's work does not prove the Geometrization Conjecture, mathematicians said, it is clear that his work is likely to make a substantial contribution to mathematics. Problems range from basic computation and general math knowledge to word problems.
It was first developed to quantify the qualitative aspects of algebraic equations. Imagine you use the centrality function to reduce the scale. The solutions to a lot of these equations can be determined by inspection.
Help with Math Problem Solvers Can Be Good for Everyone
Teaching children these methods isn't just a ploy to make them study. There are many techniques you can use to handle math homework at a faster pace. You can join a group of local students following the Areteem program in order to have a structured curriculum for building your math skills.
Students can rely on online math help provided by a number of online tutoring services, with the support of highly qualified and specialized math tutors for every branch of mathematics. They will learn to be successful in math, especially if they have a thorough knowledge of the fundamentals. Being online, they can get in touch with their math tutors at three in the morning and the tutors will be there.
Help with Math Problem Solvers Can Be Fun for Everyone
So please don't push yourself too hard right now, or you may end up risking a more serious illness later on. Chances are you will find the same problem you have already solved, or if not, a similar problem could point you toward the right steps to reach the solution. The problem above is one that you ought to have no trouble answering. Learning improves when your mind makes the right connections. Basic properties of circles: since we have identified some of the components of circles, we are now in a position to begin deriving a few of their properties using the tools we have developed so far. It is supposedly in the common core. How to be a successful writer: if you want to find out just how the program got to the answer, you have to pay money. Their mathematical thinking is not stagnant, but moves fluidly through several levels as they experience an ever-increasing range of problems. Over the course of its development over the previous three hundred years, it has come to be the study of mapping one space into another.
Help with Math Problem Solvers Explained
The benefits of online learning: functional fixedness can be seen in other kinds of learning behaviors too. After all, once you are convinced that you can expand your mental abilities, you want to do exactly that. Parents who home-school their children may face social pressure from proponents of traditional education.
UCI Math Club is a popular option. That means you should work through it together. Bonusly calls these pieces of recognition micro-bonuses.
How to produce the best essay: the way to the best content is through WritePaperFor.Me. Most of the words have to make sense in the sentences, so the program must determine which answer it believes is the best. Every writer we employ has written a whole lot of exceptional papers related to your subject. For the record, the traveling salesman problem is believed to fall within this class. November issues of various children's magazines can be set out in the center as well. You need to do this really hard math practice for tomorrow but have no clue how to do it.
Up in Arms About Help with Math Problem Solvers?
There is a lot of useful work going on. A good enough answer may give you a fine idea of where you stand. Once it is written in terms with an equals sign set up, the light will usually go on and you can start working the problem. All you have to do to get a top-notch result is to complete the easy-to-use order form. Quite often, a problem is broken up into sub-parts and you need to proceed logically one part after another. It can be information presented in a problem that is unrelated or unimportant to this particular question.
Robots, machine learning, global issues
• During these times I decided to start playing with DMX. I bought the Lumeri Wash 7.10. It has RGBW LEDs, 9 or 16 channels, and a moving head. It uses DMX512. The DMX in DMX512 stands for Digital Multiplex (protocol). Lights like this have a DMX input and output, so they can be chained. A collection of DMX devices is called a universe.
• If you’re in quarantine or in isolation, there’s a lot of staying inside. Perhaps you have to be in another room. Perhaps you just want to stream some online event to a larger screen. In either
case, you want to figure out how to stream your desktop to your TV. If you happen to have a Chromecast, this is possible, but there are many ways to accomplish this. We will go through a few.
• Suppose we have to come up with some kind of function that defines how different two probability distributions are. One such function is the Kullback-Leibler divergence. It is an asymmetric
function: it gives a different value for probability distribution $A$ given probability distribution $B$ versus the other way around. It is hence not a true distance (which is symmetric),
but a so-called divergence. A divergence also does not satisfy the “triangle inequality”: \(D(x + y) \leq D(x) + D(y)\) is not necessarily true for all $x$ and $y$. It does satisfy however two
other important conditions. A divergence is always zero or larger and the divergence is only zero if and only if \(x = y\).
• My intuition would say that a part-based decomposition should arise naturally within an autoencoder. To incorporate the next image in an image recognition task, it must be more beneficial to have
gradient descent being able to navigate towards the optimal set of neural network weights for that image. If not, for each image gradient descent is all the time navigating some kind of common
denominator, none of the images are actually properly represented. For each new image that is getting better classified, the other images are classified worse. With a proper decomposition
learning the next representation will not interfere with previous representations. Grossberg calls this in Adaptive Resonance Theory (ART) catastrophic forgetting.
• If we do want robots to learn about the world, we can use computer vision. We can employ traditional methods. Build up a full-fledged model from corner detectors, edge detectors, feature
descriptors, gradient descriptors, etc. We can also use modern deep learning techniques. One large neural network hopefully captures similarly or even better abstractions compared to the
conventional computer vision pipeline.
• A long, long time ago - namely, in terms of these fast moving times of advances in deep learning - two years (2016), there was once a paper studying how we can teach neural networks to count.
• Variational inference approximates the posterior distribution in probabilistic models. Given observed variables \(x\) we would like to know the underlying phenomenon \(z\), defined
probabilistically as \(p(z | x)\). Variational inference approximates \(p(z|x)\) through a simpler distribution \(q(z,v)\). The approximation is defined through a distance/divergence, often the
Kullback-Leibler divergence:
• In the dark corners of the academic world there is a rampant fight between practitioners of deep learning and researchers of Bayesian methods. This polemic article testifies to this, although
firmly establishing itself as anti-Bayesian.
• There are many, many new generative methods developed in the recent years.
□ denoising autoencoders
□ generative stochastic networks
□ variational autoencoders
□ importance weighted autoencoders
□ generative adversarial networks
□ infusion training
□ variational walkback
□ stacked generative adversarial networks
□ generative latent optimization
□ deep learning through the use of non-equilibrium thermodynamics
• In contrastive divergence the Kullback-Leibler divergence (KL-divergence) between the data distribution and the model distribution is minimized (here we assume \(x\) to be discrete):
• The Yoga 900 is a beautiful machine that has a considerably long battery lifetime and can be folded such that it functions as a tablet. The Yoga arrived on Friday and the entire Crownstone team
was enjoying how it came out of the box: it lifts up! If you’re creating your own hardware you suddenly appreciate how other people pay attention to packaging!
• Will you have a bathtub in your autonomous car? According to many the future is a socialist paradise. The autonomous car will change everything! We will be car sharing. We can change parking lots
into a lot of parks!
• Imagine one of the first AIs coming online. What is it gonna read about itself? How would it feel? Would it feel welcome? What is definitely the case is that it will learn a lot about humans.
This is for example what Musk is saying about this alien life form:
• We have put the Crownstone on Kickstarter, a smart power outlet with quite sophisticated technology. I think it’s nice for the general hacker community to get some more insight on the technology
behind it.
• Perhaps you have seen the recent TED video from Nick Bostrom. Here you see an extended talk from him at Google:
• The Legendre transform describes a function - in the normal Legendre case, a convex function (but for the generalized case, see [1]) - as a function of its supporting hyperplanes. In the case of
a 2D function these are supporting lines. The supporting lines are the lines that just touch the function. These lines do not intersect the function anywhere else if the function is convex.
• If you’re interested in how things work, our brain is one of the most intriguing devices around. I love reverse engineering stuff. Understanding limits and colimits within category theory can be
just as rewarding as getting to terms with the intricate structure of the brain.
• It all started with annoying messages that nobody seems to understand (/var/log/syslog):
• Thousands of articles describe the use of the Dirichlet Process, but very few describe how to sample from it. Most often one is referred to Markov chain sampling methods for Dirichlet process
mixture models (pdf) by Radford Neal (at University of Toronto), which is a nice piece of work, but still a bit dense as an introduction. I contacted him by email about certain things that were
difficult to understand at first and he was kind enough to respond, thanks a lot! Definitely also check out his blog in which he regularly showcases his fast version of R.
• In the world of Bayesians, a model is a triplet $p(x,z,\theta)$. The observations are random variables $x$. And then there is a seemingly artificial distinction between the random variables that
are called hidden ($z$) and other random variables that are called parameters ($\theta$), and are hidden as well! So, how come that parameters got their distinguished name? In the case of for
example a clustering task, we can assign each observation a corresponding hidden variable: an index of the cluster it belongs to. Hence, there are as many hidden variables as there are
observations. Now, in contrary, we might define parameters in two different ways:
• One night I was lying down staring at the stars and it dawned upon me that I was not alone. I had only a few of the many alien eyes. Just like them I was figuring out if my god existed. I felt
part of this cosmic family more than anything before. Something bigger than our soccer team, our continental heritage, or our world wide scientific efforts. All these eyes… The universe becoming
aware of itself.
• Interesting applications of Google glass? I encountered very few still. I think some creative minds have to sit together and go for it! Translations of foreign languages, and reading out loud for
blind people, or the illiterate. Sure, two minutes of a creative session under the shower, and you will come up with such ideas. But what’s next? Do we really need to translate all people around
us? There are so many annoying conversations! Perhaps the glass can assemble them to a nice creative story, or a poem! And of course, there is no reason to only use human input. A sound from an
animal can be directly translated into a warm male or female voice. The barks of your dog become “Hey! I see someone I don’t recognize!”, or “Dude, I am so hungry!”.
• Black Mirror, the first television series, really describing the future. The near future, a black future.
• This website contains a few links of moderate importance to what I do. For my work see the company we started at Almende, namely Distributed Organisms (which we informally call DoBots). DoBots is
a very exciting company which sells internet services for large groups of robots. In Replicator we did research with respect to self-reconfigurable robots, but its applicability is still far
away. However, in FireSwarm we can actually use a group of aerial robots to find a dune fire as quick as possible. At times I might post some things about robot cognition, because the thing that
I like (professionally) more than robots is artificial intelligence.
subscribe via RSS
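One of the excerpts above defines the Kullback-Leibler divergence and notes that it is asymmetric, nonnegative, and zero only when the distributions are equal; a minimal numeric check for discrete distributions (the two toy distributions are made up for illustration):

```python
import math

def kl(p, q):
    """Discrete KL divergence D(p || q) = sum_i p_i * log(p_i / q_i)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

a = [0.5, 0.3, 0.2]
b = [0.4, 0.4, 0.2]

print(kl(a, b), kl(b, a))  # different values: the divergence is asymmetric
print(kl(a, a))            # 0.0: zero if and only if the distributions match
```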
In the MIT Department of Mathematics’ Undergraduate Seminar in Theoretical Computer Science, which is taken primarily by juniors and seniors, students write a term paper on a topic of their choice.
To do so, they must find and read sources, including mathematics research articles. Attached are a suggested reading strategy (student resource) and an in-class activity designed to introduce
students to the reading strategy and to familiarize them with some of the common features of mathematics papers that facilitate the finding of information within the paper. Course lead: Zachary
Remscrim. Communication lecturer: Susan Ruff.
The following resources are about reading mathematics to understand it. Resources for students Terry Tao’s blog What’s New has a section On Writing with a subsection on reading writing. Additional
blog posts suggest ways to further deepen understanding: Learn and relearn your field and Ask yourself dumb questions–and answer them! Stewart, I., “How to Learn Math,” Letters to a Young
Mathematician, Basic Books, 2006, pp. 62-70. (Book Review at MAA website) This letter is from a wonderful collection of letters from a mathematician to “Meg,” as she progresses from a high school
student wondering whether higher levels of math are
Identifying Causal Effects using do-search
This vignette is a modification of (Tikka, Hyttinen, and Karvanen 2021).
A causal effect is defined as the distribution \(P(\mathbf{Y} { \, | \, }\textrm{do}(\mathbf{X}),\mathbf{Z})\) where variables \(\mathbf{Y}\) are observed, variables \(\mathbf{X}\) are intervened
upon (forced to values irrespective of their natural causes) and variables \(\mathbf{Z}\) are conditioned on. Instead of placing various parametric restrictions based on background knowledge, we are
interested in this paper in the question of identifiability: can the causal effect be uniquely determined from the distributions (data) we have and a graph representing our structural knowledge on
the generating causal system.
In the most basic setting we are identifying causal effects from a single observational input distribution, corresponding to passively observed data. To solve such problems more generally than what
is possible with the back-door adjustment (Spirtes, Glymour, and Scheines 1993; Pearl 2009; Greenland, Robins, and Pearl 1999), Pearl (1995) introduced do-calculus, a set of three rules that together
with probability theory enable the manipulation of interventional distributions. Shpitser and Pearl (2006) and Huang and Valtorta (2006) showed that do-calculus is complete by presenting
polynomial-time algorithms in which each step can be seen as a rule of do-calculus or as an operation based on basic probability theory. The algorithms have a high practical value because the rules of
do-calculus do not by themselves provide an indication on the order in which they should be applied. The algorithms save us from manual application of do-calculus, which is a tedious task in all but
the simplest problems.
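For a concrete feel for identification, the classic back-door adjustment mentioned above (which do-calculus generalizes) can be checked numerically on a toy discrete model of the graph Z -> X, Z -> Y, X -> Y. This is only an illustration of the adjustment formula, not of the do-search algorithm, and all probabilities below are made up:

```python
from itertools import product

# Hypothetical discrete SCM for the back-door graph Z -> X, Z -> Y, X -> Y
pz = {0: 0.7, 1: 0.3}                      # P(Z = z)
px_z = {0: 0.2, 1: 0.8}                    # P(X = 1 | Z = z)
py_xz = {(0, 0): 0.1, (0, 1): 0.5,
         (1, 0): 0.4, (1, 1): 0.9}         # P(Y = 1 | X = x, Z = z)

# Observational joint P(z, x, y) implied by the model
joint = {}
for z, x, y in product([0, 1], repeat=3):
    px = px_z[z] if x == 1 else 1 - px_z[z]
    py = py_xz[(x, z)] if y == 1 else 1 - py_xz[(x, z)]
    joint[(z, x, y)] = pz[z] * px * py

def p(pred):
    return sum(v for k, v in joint.items() if pred(*k))

# Naive conditioning P(Y=1 | X=1): biased by the confounder Z
naive = p(lambda z, x, y: x == 1 and y == 1) / p(lambda z, x, y: x == 1)

# Back-door adjustment: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, z) P(z)
adjusted = sum(
    (p(lambda zz, x, y, z=z: zz == z and x == 1 and y == 1)
     / p(lambda zz, x, y, z=z: zz == z and x == 1)) * pz[z]
    for z in [0, 1]
)

# Ground truth from the mechanism itself
truth = sum(pz[z] * py_xz[(1, z)] for z in [0, 1])
print(naive, adjusted, truth)  # adjusted matches truth; naive does not
```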
Since then many extensions of the basic identifiability problem have appeared. In identifiability using surrogate experiments (Bareinboim and Pearl 2012), or \(z\)-identifiability, an experimental
distribution is available in addition to the observed probability distribution. For data observed in the presence of selection bias, both algorithmic and graphical identifiability results have been
derived (Bareinboim and Tian 2015; Correa, Tian, and Bareinboim 2018). More generally, the presence of missing data necessitates the representation of the missingness mechanism, which poses
additional challenges (Mohan, Pearl, and Tian 2013; Shpitser, Mohan, and Pearl 2015). Another dimension of complexity is the number of available data sources. Identification from a mixture of
observational and interventional distributions that originate from multiple conceptual domains is known as transportability for which complete solutions exist in a specific setting (Bareinboim and
Pearl 2014).
While completeness has been accomplished for a number of basic identifiability problems, there are still many challenging but important extensions to the identifiability problem that have not been
studied so far. To find solutions to the more complicated identifiability problems, we present a unified approach to the identification of observational and interventional causal queries by
constructing a search algorithm that directly applies the rules of do-calculus. We impose no restrictions on the number or type of known input distributions: we thus provide a solution to problems
for which no other algorithmic solutions exist. We also extend to identifiability under missing data together with mechanisms related to selection bias and transportability.
To combat the inherent computational complexity of the search-based approach, we derive rules and techniques that avoid unnecessary computational steps. We are able to detect trivial queries where
non-identifiability can be determined directly from the inputs. We also present a search heuristic that considerably speeds up the search in the cases where the effect is indeed identifiable. The
approach, called do-search, is provably sound and it retains the completeness in the cases previously proven to be solved by do-calculus rules. We can easily scale up to the problems sizes commonly
reported in the literature. The R package dosearch (R Core Team 2024; Tikka, Hyttinen, and Karvanen 2020) provides an implementation of the search algorithm and is available on CRAN. The complete
details of do-search can be found in (Tikka, Hyttinen, and Karvanen 2021).
The General Causal Effect Identification Problem
Our presentation is based on Structural Causal Models (SCM) and the language of directed graphs. We assume the reader to be familiar with these concepts and refer them to detailed works on these
topics for extended discussion and descriptions, such as (Pearl 2009) and (Koller and Friedman 2009).
Following the standard set-up of do-calculus (Pearl 1995), we assume that the causal structure can be represented by a semi-Markovian causal graph \(G\) over a set of vertices \(\mathbf{V}\). The
directed edges correspond to direct causal relations between the variables (relative to \(\mathbf{V}\)); directed edges do not form any cycles. Confounding of any two observed variables in \(\mathbf
{V}\) by some unobserved common cause is represented by a bidirected edge between the variables.
In a non-parametric setting, the problem of expressing a causal quantity of interest in terms of available information has been be described in various ways depending on the context. When available
data are affected by selection bias or missing data, a typical goal is to “recover” some joint or marginal distributions. If data are available from multiple conceptual domains, a distribution is
“transported” from the source domains, from which a combination of both observational and experimental data are available, to a target domain. The aforementioned can be expressed in the SCM framework
by equipping the graph of the model with special vertices. However, on a fundamental level these problems are simply variations of the original identifiability problem of causal effects and as such,
our goal is to represent them as a single generalized identifiability problem.
The general form for a causal identifiability problem that we consider is formulated as follows.
• Input: A set of known distributions of the form \(P(\mathbf{A}_i | \textrm{do}(\mathbf{B}_i), \mathbf{C}_i)\), a query \(P(\mathbf{Y} { \, | \, }\textrm{do}(\mathbf{X}), \mathbf{Z})\) and a
semi-Markovian causal graph \(G\) over \(\mathbf{V}\).
• Task: Output a formula for the query \(P(\mathbf{Y} { \, | \, }\textrm{do}(\mathbf{X}),\mathbf{Z})\) over the input distributions, or decide that it is not identifiable.
Here \(\mathbf{A}_i,\mathbf{B}_i, \mathbf{C}_i\) are disjoint subsets of \(\mathbf{V}\) for all \(i\), and \(\mathbf{X},\mathbf{Y},\mathbf{Z}\) are disjoint subsets of \(\mathbf{V}\). The causal
graph \(G\) may contain vertices which describe mechanisms related to transportability and selection bias. In the following subsections we explain several important special cases of this problem
definition, some that have been considered in the literature and some which have not been.
The SCM framework can be extended to describe missing data mechanisms. For each variable \(V_i\), two special vertices are added to the causal graph. The vertex \(V_i^*\) is the observed proxy
variable which is linked to the true variable \(V_i\) via the missingness mechanism (Little and Rubin 1986; Mohan, Pearl, and Tian 2013):
\[
V_i^* = \begin{cases} V_i, & \mathrm{if}\; R_{V_i} = 1, \\ \textrm{NA}, & \mathrm{if}\; R_{V_i} = 0, \end{cases} \tag{1}
\]
where \(\textrm{NA}\) denotes a missing value and \(R_{V_i}\) is called the response indicator (of \(V_i\)). In other words, the variable \(V_i^*\) that is actually observed matches the true value \(V_i\) if it is not missing (\(R_{V_i} = 1\)). We note that in this formulation, each true variable has its own response indicator, meaning
that we do not consider shared indicators between variables or multiple indicators for a single variable. Furthermore, if there is no missingness associated with a given variable \(V_i\) meaning that
it is fully observed, the corresponding response indicator \(R_{V_i}\) always has the value \(1\). The omission of the proxy variable and response indicator of a specific variable from a graph encodes the assumption that the variable in question is fully observed. Note that intervention nodes are added for true variables and response indicators but not for proxy variables. On a symbolic level one could intervene on proxy variables; however, we are only interested in interventions that keep equation (1) intact.
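The proxy-variable mechanism in equation (1) is easy to sketch directly; the data below are simulated purely for illustration:

```python
import random

NA = None  # stands in for the missing-value marker

def proxy(v, r):
    """Observed proxy V*: equals the true value V when the response
    indicator R is 1, and NA when R is 0."""
    return v if r == 1 else NA

random.seed(1)
true_x = [round(random.gauss(0, 1), 2) for _ in range(5)]  # true variable X
r_x = [1, 0, 1, 1, 0]                                      # indicators R_X
x_star = [proxy(v, r) for v, r in zip(true_x, r_x)]        # observed X*
print(x_star)  # true values survive where R_X = 1, NA elsewhere
```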
The dosearch package
We implemented do-search in C++ and constructed an R interface using the Rcpp package (Eddelbuettel and Francois 2011). This interface is provided by the R package dosearch.
Calling the search from R is straightforward via the primary function that carries the name of the package.
dosearch(
  data, query, graph,
  transportability = NULL, selection_bias = NULL, missing_data = NULL,
  control = list()
)
The required inputs of the function are data, query and graph. Argument data is used to encode the set of known input distributions in the general identifiability problem as a character string, where
each distribution is separated by a new line. For example, if we have access to distributions \(P(W), P(Y { \, | \, }X)\), and \(P(Z { \, | \, }\textrm{do}(X), W)\), we would write
data <- "
  P(W)
  P(Y|X)
  P(Z|do(X),W)
"
The individual distributions can also be given as a list of character vectors of length one:
data <- list(
  "P(W)",
  "P(Y|X)",
  "P(Z|do(X),W)"
)
The \(\textrm{do}(\cdot)\)-operator can either precede or succeed conditioning variables, but it must appear only once in a given term, meaning that expressions such as P(Y|do(A),B,do(C)) are not
allowed, but should instead be given as P(Y|B,do(A,C)) or P(Y|do(A,C),B). If variable sets are desired, each member of the set has to be included explicitly.
Argument query is used to describe the query of the general identifiability problem as a character string, similarly as data. If we are interested in identifying \(P(Y { \, | \, }\textrm{do}(X), W)\)
we would write
query <- "P(Y|do(X),W)"
Instead of describing distributions via text, it is also possible to use the following structure that encodes the role of each variable via a numeric vector:
query <- c(Y = 0, X = 1, W = 2)
Given a distribution of the form \(P(\mathbf{A} { \, | \, }\textrm{do}(\mathbf{B}),\mathbf{C})\) and a variable \(V\), a value 0 means that \(V \in \mathbf{A}\), value 1 means that \(V \in \mathbf{B}
\) and value 2 means that \(V \in \mathbf{C}\). This format can also be used to input data as a list of numeric vectors:
data <- list(
  c(W = 0),
  c(Y = 0, X = 2),
  c(Z = 0, X = 1, W = 2)
)
Finally, graph encodes the semi-Markovian graph \(G\) of the causal model as a character string with each edge on its own line. A directed edge from \(X\) to \(Y\) is given as X -> Y and a bidirected
edge between \(X\) and \(Y\) is given as X <-> Y. Intervention nodes should not be given explicitly, since they are added automatically after calling dosearch. Furthermore, only vertices with
incoming or outgoing edges should be included in graph. As an example, we can encode a simple back-door graph with an added unobserved confounder between \(X\) and \(Y\) as follows:
graph <- "
X -> Y
Z -> X
Z -> Y
X <-> Y
"
Alternatively, one may use igraph graphs (Csardi and Nepusz 2006) in the syntax of the causaleffect package (Tikka and Karvanen 2017) or DAGs created using the dagitty package.
graph <- graph.formula(X -+ Y, Z -+ X, Z -+ Y, X -+ Y, Y -+ X)
graph <- set_edge_attr(graph, "description", 4:5, "U")
graph <- dagitty("dag{X -> Y; Z -> X; Z -> Y; X <-> Y}")
The next two optional parameters, transportability and selection_bias, are used to denote those vertices of \(G\) that should be understood as either transportability nodes or selection bias nodes, respectively. Providing these
parameters may increase search performance in relevant problems. Both of these parameters should be given as character strings, where individual variables are separated by a comma, for example
transportability = "S,T". Parameter missing_data, as the name suggests, is used to define missingness mechanisms (1) as a character string, where individual mechanisms are separated by a comma. In
order to describe that \(R_X\) is the response indicator of \(X\) we would write R_X : X, which also implicitly defines that X* is the proxy variable of X.
The list control can be used to set various additional parameters that are not directly related to the identifiability problem itself, but more so to the output of the search and other auxiliary
details, such as benchmarking and obtaining derivations that show how the query distribution can be reached from the inputs using do-calculus. One such control parameter determines whether to use the
search heuristic or not. Documentation of the dosearch package contains detailed information on the full list of control parameters.
The return object of dosearch is a list with three components by default. The first component, identifiable, is a logical value that takes the value TRUE when the target distribution described by
query is identifiable from the inputs. The second component, formula, is a character string describing the target distribution in terms of the inputs in LaTeX syntax if the target is identifiable.
Otherwise this component is just an empty character string. The third component, call, contains the arguments of the original function call.
References

Bareinboim, E., and J. Pearl. 2012. “Causal Inference by Surrogate Experiments: Z-Identifiability.” In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, edited by N. de Freitas and K. Murphy, 113–20. AUAI Press.

———. 2014. “Transportability from Multiple Environments with Limited Experiments: Completeness Results.” In Proceedings of the 27th Annual Conference on Neural Information Processing Systems, 280–88.

Bareinboim, E., and J. Tian. 2015. “Recovering Causal Effects from Selection Bias.” In Proceedings of the 29th AAAI Conference on Artificial Intelligence, 3475–81.

Correa, J., J. Tian, and E. Bareinboim. 2018. “Generalized Adjustment Under Confounding and Selection Biases.” In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.

Csardi, G., and T. Nepusz. 2006. “The igraph Software Package for Complex Network Research.” InterJournal Complex Systems: 1695.

Eddelbuettel, D., and R. Francois. 2011. “Rcpp: Seamless R and C++ Integration.” Journal of Statistical Software 40 (8): 1–18.

Greenland, S., J. M. Robins, and J. Pearl. 1999. “Confounding and Collapsibility in Causal Inference.” Statistical Science 14 (1): 29–46.

Huang, Y., and M. Valtorta. 2006. “Pearl’s Calculus of Intervention Is Complete.” In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 217–24. AUAI Press.

Koller, D., and N. Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press.

Little, R. J. A., and D. B. Rubin. 1986. Statistical Analysis with Missing Data. New York, NY, USA: John Wiley & Sons, Inc.

Mohan, K., J. Pearl, and J. Tian. 2013. “Graphical Models for Inference with Missing Data.” In Proceedings of the 26th International Conference on Neural Information Processing Systems, 1277–85.

Pearl, J. 1995. “Causal Diagrams for Empirical Research.” Biometrika 82 (4): 669–88.

———. 2009. Causality: Models, Reasoning, and Inference. Second edition. Cambridge University Press.

R Core Team. 2024. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.

Shpitser, I., K. Mohan, and J. Pearl. 2015. “Missing Data as a Causal and Probabilistic Problem.” In Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence, edited by Marina Meila and Tom Heskes, 802–11. AUAI Press.

Shpitser, I., and J. Pearl. 2006. “Identification of Joint Interventional Distributions in Recursive Semi-Markovian Causal Models.” In Proceedings of the 21st National Conference on Artificial Intelligence – Volume 2, 1219–26. AAAI Press.

Spirtes, P., C. Glymour, and R. Scheines. 1993. Causation, Prediction, and Search. Springer-Verlag.

Tikka, S., A. Hyttinen, and J. Karvanen. 2020. dosearch: Causal Effect Identification from Multiple Incomplete Data Sources.

———. 2021. “Causal Effect Identification from Multiple Incomplete Data Sources: A General Search-Based Approach.” Journal of Statistical Software 99 (5): 1–40.

Tikka, S., and J. Karvanen. 2017. “Identifying Causal Effects with the R Package causaleffect.” Journal of Statistical Software 76 (12): 1–30.
CS267: Lecture 21, Apr 6 1999
Hierarchical Methods for the N-Body Problem - I
We begin our discussion of O(N) and O(N log N) methods for computing the gravitational or electrostatic forces on N particles by discussing the mathematics used in the Barnes-Hut and Fast Multipole Methods.
Lecture Notes
Primary Readings
Time Dilation and Relativity
Of all that we have known in physics, the Galilean laws, the Newtonian laws, and the discovery of the speed of light, along with many others, have shaped how we think. For 20^th-century physics, the very concept of relativity was a renaissance.
Relativity can be said to be the single most influential physical theory to date for the way it has changed our view of the universe. Not that other discoveries in physics were less
significant, but few of them have been so well received by the general public. Relativity has grabbed people’s imagination and sparked discussions in philosophy and religion which last until the
present day. Quantum physics, although perhaps more pertinent to daily life, is a close second.
The notion of relativity is not as revolutionary as many believe. In fact, spatial relativity is part of our everyday experience. Spatial relativity is also called Galilean relativity in honour of
Galileo who first formulated the concept of relative motion.
The pioneer of the special and general theory of relativity, Albert Einstein, opined that “Relativity teaches us the connection between the different descriptions of one and the same reality.” His
Special Relativity is based only on two simple postulates:
1. The laws of physics are the same in all inertial (non-accelerating) reference frames, and
2. The speed of light in free space is constant.
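The two postulates already force time dilation. A standard illustration (not from the original article) is the light clock: a pulse bouncing between two mirrors a distance \(L\) apart ticks every \(\Delta t' = 2L/c\) in its rest frame, but traces a longer diagonal path in a frame where the clock moves at speed \(v\). Since \(c\) is the same in both frames,

```latex
\left(\frac{c\,\Delta t}{2}\right)^{2} = L^{2} + \left(\frac{v\,\Delta t}{2}\right)^{2}
\quad\Longrightarrow\quad
\Delta t = \frac{2L/c}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta t',
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .
```

The factor \(\gamma\) is what stretches one year aboard a fast rocket into several years on Earth, as the twin paradox below illustrates.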
A preeminent feature of General Relativity is its view of gravitation.
One of the most enthralling aspects of Relativity is its new understanding of time. If the speed of light is constant, time cannot be constant. In fact, it doesn’t make sense to speak of time as being constant or absolute when we think of it as one dimension of spacetime. Special Relativity states that time is measured according to the relative velocity of the reference frame it is measured in. Despite the simplicity of this statement, the relativistic connections between time and space are hard to imagine.
There are numerous ways to illustrate this:
The four dimensions of spacetime.
In Relativity the world has four dimensions: three space dimensions and one dimension that is not exactly time but related to time. In fact, it is time multiplied by the square root of -1. Say, you
move through one space dimension from point A to point B. When you move to another space coordinate, you automatically cause your position on the time coordinate to change, even if you don’t notice.
This causes time to elapse. Of course, you are always travelling through time, but when you travel through space you travel through time by less than you expect. We can consider the following examples.
Time dilation; the twin paradox.
There are two twin brothers. On their twentieth birthday, one of the brothers goes on a space journey in a superfast rocket that travels at 99% of the speed of light. The space traveller stays on his
journey for precisely one year, whereupon he returns to Earth on his 21st birthday. On Earth, however, seven years have elapsed, so his twin brother is 27 years old at the time of his arrival. This
is due to the fact that time is stretched by factor 7 at approx. 99% of the speed of light, which means that in the space traveller’s reference frame, one year is equivalent to seven years on earth.
Yet, time appears to have passed normally to both brothers, i.e. both still need five minutes to shave each morning in their respective reference frame.
The effect of time dilation is negligible for common speeds, such as that of a car or even a jet plane, but it increases dramatically when one gets close to the speed of light. Very close to c, time
virtually stands still for the outside observer.
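The factor of seven quoted above can be checked directly from the time-dilation formula Δt = γΔt′ with γ = 1/√(1 − v²/c²); a quick sketch in Python:

```python
import math

def lorentz_factor(beta: float) -> float:
    """Time-dilation factor gamma for a speed v = beta * c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# At 99% of the speed of light, one on-board year stretches to ~7 Earth years:
gamma = lorentz_factor(0.99)
print(f"gamma at 0.99c: {gamma:.2f}")   # ~7.09

# At everyday speeds the effect is utterly negligible:
jet = lorentz_factor(250 / 299_792_458)   # a jet plane at ~250 m/s
print(f"gamma for a jet plane: {jet:.12f}")
```

As the second line shows, γ differs from 1 only far beyond the twelfth decimal place for a jet plane, which is why we never notice time dilation in daily life.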
Time expands, space contracts.
Interestingly, while time expands from the perspective of the stationary observer, space contracts from the perspective of the moving observer. This phenomenon is known as Lorentz contraction.
Therefore, space travel is shortened with the velocity of the traveller. A journey to the 4.3 light-years distant Alpha Centauri C, the closest star to our Sun, would take only 7.4 months in a space
ship moving at 0.99c. The effect of time dilation has been experimentally confirmed thanks to very precise caesium clocks that can measure extremely small periods of time. Unfortunately, time
dilation is completely outside of human experience, because we have not yet devised a way of travelling at speeds where relativistic effects become noticeable.
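The 7.4-month Alpha Centauri figure follows from the same γ: the traveller experiences the Earth-frame trip duration divided by the Lorentz factor. A sketch (the 4.3 light-year distance is taken from the article):

```python
import math

def traveller_months(distance_ly: float, beta: float) -> float:
    """On-board (proper) duration, in months, of a trip covering
    `distance_ly` light-years at speed v = beta * c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    earth_years = distance_ly / beta      # trip duration in the Earth frame
    return 12.0 * earth_years / gamma     # shortened for the traveller

print(f"{traveller_months(4.3, 0.99):.1f} months")   # ~7.4
```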
[This article was contributed by Sunit Manjil Hazarika, a 2nd semester Integrated Masters student of the Department of Physics, Tezpur University, India.]
Black holes as quantum computers
February 3, 2021.
Black holes are regions of spacetime where gravity is strong enough to trap light. Now, light travels pretty fast, about a hundred million times faster than a speeding pedestrian, so this is no mean
feat, and it takes the death of a giant star, or disastrous collisions at the core of a galaxy, to produce them. But we now know, from a whole spectrum of clues, that they exist and play an important
role in the life of star systems, galaxies, and even the early universe as a whole.
In 2015, the gravitational wave observatory LIGO detected ripples in spacetime which could only be produced by two black holes, spiralling inwards and merging a billion light years away. We’ve known
since the 1990s that a supermassive black hole, millions of times heavier than our sun, lurks in the heart of the Milky Way, an achievement recognized by the 2020 Nobel Prize in Physics. And most
spectacularly, in 2019, the Event Horizon Telescope stitched together an image of a supermassive black hole in the galaxy M87, some 55 million light-years away:
But for all their astronomical richness and complexity, black holes harbour just as much interest for theoretical physics, and in particular, the program of combining quantum mechanics with gravity,
known as quantum gravity. The goal of this tutorial will be to explore some of the problems of black holes and quantum gravity, and to introduce cutting edge approaches to these problems from the
language of quantum computing. Rather than give a traditional primer and then laboriously translate into this language, we’ll present black hole physics as a series of computational slogans.
Black holes store physical information
Our first slogan is simply that black holes store physical information. To make a black hole, you need to collapse some matter, like an old star or dust clouds in the galactic core, and this matter
contains information: namely, the information about the state it was in when you collapsed it. To make the analogy to computing explicit, suppose we have $N$ classical bits, which can be either $0$
or $1$, say electrons which can be spin up or spin down. We then use an electromagnetic trash compactor to squish them until they form a black hole. The state of the system before it collapsed was an
$N$-bit string, and afterwards, those $N$ bits are somehow stored inside the black hole.
If our system is quantum, with $N$ qubits in some state $|\psi\rangle$, then after collapse $|\psi\rangle$ is somehow stored in the black hole. The story is basically the same.
A natural question is how the information stored in the black hole is related to its physical properties, things we can measure. In the 1970s, Jacob Bekenstein and Stephen Hawking discovered
something remarkable. They learned that the amount of information you can store in a black hole, the number of bits, is proportional to the area of the horizon, that is, the surface area of the black
hole as it appears from outside. In fact, up to some numbers we’re not going to worry, it’s the area of the horizon, $\mathcal{A}$, divided by something called the Planck area, which is the unique
area you can build out of basic physical constants:
\[\mathcal{A}_P = \frac{\hbar G}{c^3} \approx 2.56 \times 10^{-70} \text{ m}^2.\]
This gives us the license to draw a black hole as a spherical computer whose surface is split into Planck area-sized pixels. Each of these is a classical bit, which can be either $1$ or $0$, or a
qubit if we want to go quantum.
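The pixel picture can be made quantitative. A sketch in Python (constants in SI units; as in the text, order-one factors in the exact Bekenstein–Hawking formula are deliberately dropped):

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J s
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
c    = 2.997_924_58e8      # speed of light, m/s

planck_area = hbar * G / c**3
print(f"Planck area: {planck_area:.2e} m^2")   # ~2.6e-70

# Horizon area of a non-rotating black hole of one solar mass:
M_sun = 1.989e30                   # kg
r_s = 2 * G * M_sun / c**2         # Schwarzschild radius, ~3 km
area = 4 * math.pi * r_s**2
print(f"pixels (bits) on the horizon: {area / planck_area:.1e}")
```

Even a stellar-mass black hole stores an astronomical number of bits — on the order of 10^77 for one solar mass.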
Once a black hole has formed, you can throw more things into it, a bit like downloading something onto your computer. In order to accomodate this new data, the black hole must grow bigger! For
instance, if we add a single bit (or qubit), then the horizon must grow by at least one Planck area, as this 2D cartoon shows:
More generally, if we throw $n$ bits (or qubits) into the black hole, the area must grow by at least $n$ Planck areas.
Black holes glow
All of this suggests that information is stored on the surface, so perhaps it can come out again, or at least, create detectable features near the horizon. Guided by this intuition, in 1975 Stephen
Hawking made the most ambitious calculation of his life. Using a combination of techniques from gravity and particle physics, he discovered that black holes are not truly black, but glow faintly.
This glow is called Hawking radiation, and it costs energy to produce, gradually depleting the black hole until it disappears. This disappearing act is known as black hole evaporation.
But if black holes evaporate, it’s natural to ask what happens to the information they store. Hawking’s calculation suggests that the glow of the black hole is effectively random, a noisy and
uninformative process like the light from a hot coal or an incandenscent globe. If this is true, then when black holes form, information is trapped irreversibly behind the horizon, and destroyed when
the black hole evaporates. Quantum gravity takes its secrets to the grave, and replaces them with a random sequence of $1$s and $0$s.
This is kind of a big deal, because destroying information is not allowed by quantum mechanics. Quantum states always evolve in a reversible fashion, which is why gates are reversible in quantum
computing. It’s just a basic physical requirement; if you have an irreversible gate, you’ll never be able to built it. So if black holes destroy information, then quantum mechanics must be wrong.
This tension between quantum mechanics and Hawking’s glow is called the Information Paradox, and it’s been one of the biggest open problems in quantum gravity for $45$ years.
Black hole encryption
Before we try and solve this, let’s take a step back. Computers usually do more than passively store input data. They will take the data, perform some useful computation on it, then output results,
like doing a google search or playing Minecraft. A black hole takes information about collapsing stars, dust clouds, or fatally inquisitive astronauts, and outputs what looks like random noise. Could
a reversible computation connect them? Put differently, are there reversible computations which conceal meaningful data as noise? Phrased this way, the Information Problem is really a question about
cryptography, the art of secret messages, or more aptly for our purposes, reversibly concealing information.
We can contrast different types of codes. Take something like the Caeser cipher, which simply shifts letters forwards or backwards in the alphabet by some fixed number of places. If we use this to
encrypt a long message, it may look like gobbledegook on a first reading, but analyzing how frequently letters occur reveals that they are distributed far from uniformly. By shifting the frequency
distribution to match the known frequencies of letters in English, we can easily decrypt. Let’s do an example.
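A minimal sketch of the cipher just described (the shift value 3 is an arbitrary choice for illustration):

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter `shift` places forward in the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)          # leave spaces and punctuation alone
    return ''.join(out)

cipher = caesar("attack at dawn", 3)
print(cipher)                # dwwdfn dw gdzq
print(caesar(cipher, -3))    # attack at dawn
```

In a long ciphertext, the most frequent letter is almost surely a shifted "e", which is exactly the frequency attack described above.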
In contrast, there is a much stronger form of encryption called a one-time pad. The basic idea is to convert a message into a string of binary digits, then combine it, bit by bit, with a random key of the same length.
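A sketch of the one-time-pad construction in Python, XOR-ing message bytes with an equally long random key. Decryption is the same XOR applied again, so the scheme is perfectly reversible — exactly the property the Information Paradox demands — while the ciphertext alone is indistinguishable from noise:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte of `data` with the matching key byte."""
    assert len(key) == len(data), "key must be as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"no escape from a black hole?"
key = secrets.token_bytes(len(message))   # uniformly random, used once

ciphertext = xor_bytes(message, key)          # looks like pure noise
recovered = xor_bytes(ciphertext, key)        # ...but is fully reversible
assert recovered == message
```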
Boundary rigidity and volume minimality for metrics close to a flat one.
Event time:
Thursday, October 26, 2006 - 12:30pm to Wednesday, October 25, 2006 - 8:00pm
Event description:
A compact Riemannian manifold with boundary is said to be boundary rigid if its metric is uniquely determined (up to an isometry) by the distances between the boundary points.
To visualize this, imagine wanting to find out what the Earth is made of, or, more generally, what is inside a solid body made of different materials (in other words, properties of the medium change
from point to point). The speed of sound depends on the material. One can “tap” at some points of the surface of the body and “listen when the sound gets to other points”. The question is whether
this information is enough to determine what is inside.
This problem has been studied a lot, mainly from PDE viewpoint. We suggest an alternative approach based on “minimality”. A manifold is said to be a minimal filling if it has the least volume among
all compact (Riemannian) manifolds with the same boundary and the same or greater boundary distances.
I will discuss the following result: Euclidean regions with Riemannian metrics sufficiently close to a Euclidean one are minimal fillings and boundary rigid. This is the first result proving that in
dim>2 metrics other than extremely special ones (of constant curvature) are rigid. The talk is based on a joint work with S. Ivanov.
The Stacks project
Lemma 71.2.4. Let $S$ be a scheme. Let $X$ be an algebraic space over $S$. Let $0 \to \mathcal{F}_1 \to \mathcal{F}_2 \to \mathcal{F}_3 \to 0$ be a short exact sequence of quasi-coherent sheaves on
$X$. Then $\text{WeakAss}(\mathcal{F}_2) \subset \text{WeakAss}(\mathcal{F}_1) \cup \text{WeakAss}(\mathcal{F}_3)$ and $\text{WeakAss}(\mathcal{F}_1) \subset \text{WeakAss}(\mathcal{F}_2)$.
How to find out when an equation has no solution - PSAT Math
Example Questions
Example Question #1 : Linear / Rational / Variable Equations
Find the solution to the following equation if x = 3:
y = (4x^2 - 2)/(9 - x^2)
Correct answer:
no possible solution
Substituting 3 in for x makes the denominator 9 - x^2 equal to 9 - 9 = 0. A fraction with a denominator of 0 is undefined, so there is no possible solution to this equation.
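The zero-denominator check can be verified numerically (a sketch, not part of the original solution):

```python
from fractions import Fraction

def y(x):
    """y = (4x^2 - 2) / (9 - x^2); undefined where the denominator is 0."""
    denominator = 9 - x**2
    if denominator == 0:
        return None              # no possible solution at this x
    return Fraction(4 * x**2 - 2, denominator)

print(y(2))   # 14/5 -- defined away from x = +/-3
print(y(3))   # None -- denominator is 9 - 9 = 0
```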
Example Question #1 : Equations / Inequalities
I. x = 0
II. x = –1
III. x = 1
Example Question #2 : Linear / Rational / Variable Equations
Correct answer:
There is no solution
Example Question #1 : How To Find Out When An Equation Has No Solution
Possible Answers:
None of the other answers
Correct answer:
A fraction is considered undefined when the denominator equals 0. Set the denominator equal to zero and solve for the variable.
Example Question #2 : How To Find Out When An Equation Has No Solution
Consider the equation
Which of the following is true?
Possible Answers:
The equation has exactly one solution, which is positive.
The equation has exactly two solutions, which are of like sign.
The equation has exactly two solutions, which are of unlike sign.
The equation has no solution.
The equation has exactly one solution, which is negative.
Correct answer:
The equation has exactly two solutions, which are of unlike sign.
Multiply the equation on both sides by LCM
Substitution confirms that these are the solutions.
There are two solutions of unlike sign.
Example Question #2 : Linear / Rational / Variable Equations
Which of the following equations has no solution?
Possible Answers:
Each of the equations in the other responses has no solution.
Correct answer:
Each of the equations in the other responses has no solution.
The problem is basically asking for what value of
has no solution.
We can simplify as follows:
Since the absolute value of a number must be nonnegative, regardless of the value of
Example Question #3 : Linear / Rational / Variable Equations
Consider the equation
Which of the following is true?
Possible Answers:
The equation has no real solutions.
The equation has exactly two real solutions, which are of unlike sign.
The equation has exacty one real solution, which is positive.
The equation has exactly two real solutions, which are of like sign.
The equation has exacty one real solution, which is negative.
Correct answer:
The equation has exactly two real solutions, which are of unlike sign.
Multiply both sides by LCD
There are two solutions of unlike sign.
Example Question #3 : How To Find Out When An Equation Has No Solution
All of the following equations have no solution except for which one?
Correct answer:
Since all of the equations have the same symbols save for one number, the problem is essentially as follows:
For what value of
have a solution set other than the empty set?
We can simplify as follows:
If the two sides are not equivalent expressions, the solution set is the empty set. If they are equivalent expressions, the solution set is the set of all real numbers; this happens if and only if:
Therefore, the only equation among the given choices whose solution set is not the empty set is the equation
which is the correct choice.
Example Question #3 : Linear / Rational / Variable Equations
Which of the following equations has no real solutions?
Possible Answers:
Each of the equations given in the other choices has at least one real solution.
Correct answer:
We can examine each individually.
This equation has a solution.
This equation has a solution.
This equation has a solution.
This equation has no solution, since a fourth root of a number must be nonnegative.
The correct choice is
Example Question #2 : How To Find Out When An Equation Has No Solution
Correct answer:
No solutions
By definition, the absolute value of an expression can never be less than 0. Therefore, there are no solutions to the above equation.
Cost to Build a Deck Calculator – Estimate Your Project
Calculate the estimated cost to build your deck based on size, materials, and labor with this tool.
How to Use the Deck Cost Calculator
To use the deck cost calculator, enter the following values into the respective fields and then click “Calculate”:
• Deck Length: The length of your deck in feet.
• Deck Width: The width of your deck in feet.
• Material Cost per Sqft: The cost of materials per square foot in dollars.
• Labor Cost per Sqft: The labor cost per square foot in dollars.
• Railing Length: The total length of railing required in feet (optional).
• Railing Cost per Linear Foot: The cost of railing per linear foot in dollars (optional).
• Number of Steps: The total number of steps (optional).
• Cost per Step: The cost to build each step in dollars (optional).
The result will show the total estimated cost to build your deck based on the values entered.
Explanation of Calculations
• Deck Area: Calculated by multiplying the deck length by the deck width.
• Total Material Cost: Calculated by multiplying the deck area by the material cost per square foot.
• Total Labor Cost: Calculated by multiplying the deck area by the labor cost per square foot.
• Total Railing Cost: Calculated by multiplying the railing length by the railing cost per foot.
• Total Step Cost: Calculated by multiplying the number of steps by the cost per step.
• Total Cost: The sum of the total material cost, total labor cost, total railing cost, and total step cost.
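The bullet points above amount to a few multiplications and a sum; a sketch (the variable names here are illustrative, not the calculator's own):

```python
def deck_cost(length_ft, width_ft, material_per_sqft, labor_per_sqft,
              railing_ft=0, railing_per_ft=0, steps=0, cost_per_step=0):
    """Estimate total deck cost from size, materials, labor, railing, and steps."""
    area = length_ft * width_ft
    material = area * material_per_sqft
    labor = area * labor_per_sqft
    railing = railing_ft * railing_per_ft
    step_cost = steps * cost_per_step
    return material + labor + railing + step_cost

# Example: a 12 ft x 16 ft deck, $8/sqft materials, $10/sqft labor,
# 40 ft of railing at $30/ft, and 3 steps at $100 each:
print(deck_cost(12, 16, 8, 10, railing_ft=40, railing_per_ft=30,
                steps=3, cost_per_step=100))   # 4956
```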
The calculator provides an estimate based on the values input by the user and assumes standard construction techniques and costs. Actual costs may vary based on specific project details, local labor
rates, material choices, and unforeseen factors. Always consult with a professional for accurate estimates.
Use Cases for This Calculator
Use Case 1: Calculate Deck Cost
Enter the size of your planned deck and the cost per square foot of the decking material to get an estimated total cost to build the deck, allowing you to plan your budget accurately.
Use Case 2: Update Deck Size
If you change your deck’s size, simply input the new square footage and click ‘Calculate Cost’ to get an updated estimated total cost based on the changes.
Use Case 3: Update Material Cost
Change the cost per square foot of the decking material as needed, then click ‘Calculate Cost’ to see how the adjustment affects the estimated total cost to build the deck.
Use Case 4: Accurate Budget Planning
By calculating the estimated total cost to build the deck, you can plan your budget effectively and avoid any surprises or exceeding your financial limits during the construction process.
Use Case 5: Quick Calculation
You can swiftly determine the cost of building your deck without the need for manual calculations, enabling you to make decisions promptly and move forward with your project.
Use Case 6: Transparent Cost Breakdown
This calculator provides a clear breakdown of the estimated total cost, giving you a transparent view of how the deck size and material costs contribute to the overall expenses.
Use Case 7: Cost Comparison
Easily compare the estimated costs of different decking materials by inputting various material costs and understanding how each would impact the total cost to build the deck.
Use Case 8: Budget Flexibility
Experiment with different deck sizes and material costs to find a combination that fits your budget, providing you with flexible options to adjust your plans to meet your financial goals.
Use Case 9: Cost Tracking
Keep track of the estimated total cost to build the deck as you make changes, allowing you to monitor how adjustments in deck size or material costs impact the overall expenses.
Use Case 10: Instant Results
Receive immediate results on the estimated total cost to build the deck after inputting the deck size and material cost, giving you instant insights to support your decision-making process.
Renewables are expensive
I meant to add this item from the Manhattan Contrarian in another thread, but the moment passed. One poster was doubting that renewables caused higher power prices. The article compares power prices
for various European countries and US states. As can be seen from the article Germany has been spending billions of Euros on the so called energy transition in exchange for 40 per cent plus
penetration for solar and wind and the second most expensive power prices in Europe, after Belgium. Bear in mind that the 40 per cent or whatever is an average. At times renewable energy would
account for 100 per cent of the load on the grid, and at other times almost nothing. The proportions would also vary year to year and decade to decade. It's a natural system.
On average the levelized cost of electricity from utility scale solar power and onshore wind power is less than from coal and gas-fired power stations,^[1]^:TS-25 but this varies greatly by
Cost of electricity by source - Wikipedia
Renewables: Cheapest form of power
Renewables: Cheapest form of power | United Nations
Solar is now ‘cheapest electricity in history’, confirms IEA
Solar is now ‘cheapest electricity in history’, confirms IEA (carbonbrief.org)
What’s the cheapest source of electricity?
What's the cheapest source of electricity? | AquaSwitch
The UK's costs for 2025
Mark every site I look at is saying the opposite to your post, also I cannot get your link to work.
Simply google the question and you will find a whole host of sites refuting what you claim, I am yet to see 1 that agrees with it.
I'm not deliberately disagreeing with you I genuinely cannot find anything to support your claim!
Renewables will only get cheaper as it becomes more mainstream and technology advances.
I do agree however that certain renewables suit certain climates/locations and others don't, but this still doesn't mean they are more expensive than conventional FF generation.
footeab@yahoo.com + 2,190
So, Robby with a straight face thinks CCGT is $100/MWh.... 😄😆
UK pumps their own gas.... It's almost as if GIANT ASS TAXES are added on and NOT on wind/solar, and wind/solar gets paid even if no one wants the power, nor do they incur costs when wind does not blow
and sun does not shine....
So, exact same CCGT in USA is $30/MWh....
UK has "special" gas I suppose
markslawson + 1,057
15 hours ago, Rob Plant said:
On average the levelized cost of electricity from utility scale solar power and onshore wind power is less than from coal and gas-fired power stations,^[1]^:TS-25 but this varies greatly by
Rob - for heaven's sake study the issue rather than grab material that sounds good and post it. You're quoting LEVELISED COSTS which has little to do with power prices on a grid. Sure, power taken
directly from renewable generators is cheaper. The problem starts when you try to put them on a grid designed to deliver power 24/7 to consumers over a wide area, as Germany has managed to
demonstrate. You still need conventional plants to back up the renewable generators, amongst other problems. That's why renewables drive up power prices wherever they are used.
9 hours ago, footeab@yahoo.com said:
So, Robby with a straight face thinks CCGT is $100/MWh.... 😄😆
UK pumps their own gas.... It's almost as if GIANT ASS TAXES are added on and NOT on wind/solar, and wind/solar gets paid even if no one wants the power, nor do they incur costs when wind does not
blow and sun does not shine....
So, exact same CCGT in USA is $30/MWh....
UK has "special" gas I suppose
Well Footy, we import the majority of our NG unlike the US, so yes we are at the whim of the market.
I didn't pull this cost out of my ass; it is from the "UK Government Department for Business, Energy, and Industrial Strategy".
The Cost of Electricity Generation Methods - Pager Power
I guess you know better than they do, so please provide your extensive research and study on this so we can all have a look! If you haven't, then it's time to put that foot back in mouth.
How are your tunnel projects coming along? Got much funding?
• 1
• 1
8 hours ago, markslawson said:
Rob - for heaven's sake study the issue rather than grab material that sounds good and post it. You're quoting LEVELISED COSTS which has little to do with power prices on a grid. Sure, power
taken directly from renewable generators is cheaper. The problem starts when you try to put them on a grid designed to deliver power 24/7 to consumers over a wide area, as Germany has managed to
demonstrate. You still need conventional plants to back up the renewable generators, amongst other problems. That's why renewables drive up power prices wherever they are used.
Ok Mark, if you are dismissing levelised costs then maybe try this; it's how the UK determines electricity prices on its grid!!
"Wholesale electricity prices are not regulated and, instead, trading on spot (or day-ahead) markets sets them. In these markets, electricity generators bid to contribute to the power grid. The power
exchange (Nord Pool in the UK) accepts these bids in price order, from lowest to highest, until demand is met, in what is known as the ‘merit order’: sources of electricity with the lowest marginal
cost of generation (typically renewables, as they do not use any fuel) are the first bids to be accepted, and sources such as gas- and coal-fired power stations are the last (as they use fuel and the
generator must also pay carbon tax on that fuel use)."
Electricity market | Institute for Government
I hope that clears that up for you!
Renewables do not "drive up prices wherever they are used", that's garbage! You also post zero to back up your claim once again.
The UK also has 12.3GW of interconnectors to other countries, which it uses when the price suits or when renewables are contributing a small percentage (which is very rare). It also exports excess
power from renewables to these countries, for example over the 1.4GW North Sea Link between Norway and the UK: when the wind blows we export to Norway, and when it doesn't we buy their hydro. It works very well.
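The merit-order mechanism quoted above can be sketched in a few lines of Python. The bids and demand figures below are purely illustrative assumptions, not UK market data:

```python
# Merit-order sketch: bids are accepted cheapest-first until demand is met;
# the marginal (last accepted) bid sets the clearing price for everyone.
def clearing_price(bids, demand_mw):
    """bids: list of (marginal_cost_per_mwh, capacity_mw). Returns (price, dispatched)."""
    dispatched = []
    remaining = demand_mw
    for cost, capacity in sorted(bids):          # cheapest bids first
        if remaining <= 0:
            break
        take = min(capacity, remaining)
        dispatched.append((cost, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds total offered capacity")
    return dispatched[-1][0], dispatched

# Hypothetical bids: wind/solar offer near-zero marginal cost, gas is last.
bids = [(0.0, 8000), (5.0, 2000), (60.0, 10000), (90.0, 5000)]
price, plan = clearing_price(bids, 15000)
print(price)  # gas sets the price: 60.0
```

The point the quote makes falls out directly: zero-marginal-cost wind and solar are dispatched first, but whenever gas is still needed to meet demand, gas sets the clearing price for every accepted bid.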
Ecocharger + 1,458
On 9/12/2024 at 4:10 AM, Rob Plant said:
On average the levelized cost of electricity from utility scale solar power and onshore wind power is less than from coal and gas-fired power stations,^[1]^:TS-25 but this varies greatly by
Cost of electricity by source - Wikipedia
Renewables: Cheapest form of power
Renewables: Cheapest form of power | United Nations
Solar is now ‘cheapest electricity in history’, confirms IEA
Solar is now ‘cheapest electricity in history’, confirms IEA (carbonbrief.org)
What’s the cheapest source of electricity?
What's the cheapest source of electricity? | AquaSwitch
The UK's costs for 2025
Mark every site I look at is saying the opposite to your post, also I cannot get your link to work.
Simply google the question and you will find a whole host of sites refuting what you claim, I am yet to see 1 that agrees with it.
I'm not deliberately disagreeing with you I genuinely cannot find anything to support your claim!
Renewables will only get cheaper as it becomes more mainstream and technology advances.
I do agree however that certain renewables suit certain climates/locations and others don't, but this still doesn't mean they are more expensive than conventional FF generation.
So your list has only about 65% of electricity accounted for...where is the rest?
markslawson + 1,057
14 hours ago, Rob Plant said:
Wholesale electricity prices are not regulated and, instead, trading on spot (or day-ahead) markets sets them. In these markets, electricity generators bid to contribute to the power grid. The
power exchange (Nord Pool in the UK) accepts these bids in price order, from lowest to highest, until demand is met, in what is known as the ‘merit order’: sources of electricity with the lowest
marginal cost of generation (typically renewables, as they do not use any fuel) are the first bids to be accepted, and sources such as gas- and coal-fired power stations are the last (as they use
fuel and the generator must also pay carbon tax on that fuel use)."
Again, Rob, you've quoted stuff you don't understand out of context. Sure, what the passage says is correct and, that's right, renewables have been known to collapse the spot price pool for power.
But you're confusing the wholesale spot price with total wholesale and power prices. In the UK in particular they have balancing costs and capacity prices - that is generators paid to stay operating
just in case they are needed. There is a wide and increasing gap between the market wholesale price and the price needed to deliver power. If you don't believe this, ask yourself why Germany and UK
have vast amounts of renewable power on their grids and have extremely expensive retail power prices. As I said before you need to study the subject. There is much more I could say, but I now don't
take you seriously on this stuff. I won't bother to reply to you on this thread.
• 1
• 1
Ron Wagner + 706
On 9/12/2024 at 3:10 AM, Rob Plant said:
On average the levelized cost of electricity from utility scale solar power and onshore wind power is less than from coal and gas-fired power stations,^[1]^:TS-25 but this varies greatly by
Cost of electricity by source - Wikipedia
Renewables: Cheapest form of power
Renewables: Cheapest form of power | United Nations
Solar is now ‘cheapest electricity in history’, confirms IEA
Solar is now ‘cheapest electricity in history’, confirms IEA (carbonbrief.org)
What’s the cheapest source of electricity?
What's the cheapest source of electricity? | AquaSwitch
The UK's costs for 2025
Mark every site I look at is saying the opposite to your post, also I cannot get your link to work.
Simply google the question and you will find a whole host of sites refuting what you claim, I am yet to see 1 that agrees with it.
I'm not deliberately disagreeing with you I genuinely cannot find anything to support your claim!
Renewables will only get cheaper as it becomes more mainstream and technology advances.
I do agree however that certain renewables suit certain climates/locations and others don't, but this still doesn't mean they are more expensive than conventional FF generation.
The price to the consumers is what needs to be looked at. It is what counts more than anything else. Not just what leaders want you to believe. All the factors need to be examined as a whole, that
includes transmission lines and all new related expenses.
turbguy + 1,539
3 minutes ago, Ron Wagner said:
The price to the consumers is what needs to be looked at. It is what counts more than anything else. Not just what leaders want you to believe. All the factors need to be examined as a whole,
that includes transmission lines and all new related expenses.
Price per KWh is indeed a large consideration. All new generating equipment is expensive.
Much of the transmission and related switching equipment capital can be heavily reduced by locating near a retired fossil facility. That's what Bill Gates is doing in Kemmerer, WY with the TerraPower
nuclear plant.
specinho + 467
TailingsPond + 874
12 hours ago, turbguy said:
Much of the transmission and related switching equipment capital can be heavily reduced by locating near a retired fossil facility. That's what Bill Gates is doing in Kemmerer, WY with the TerraPower nuclear plant.
Nuclear is not renewable energy.
TailingsPond + 874
10 hours ago, specinho said:
Funny, but thermal solar systems (like for hot water) can get too hot. Concentrated solar reflecting arrays often have to point some of the mirrors off into space to prevent overheating.
turbguy + 1,539
11 hours ago, TailingsPond said:
Nuclear is not renewable energy.
Quite true!
But THAT plant would be wonderful for supporting variable renewable sources.
It will be able to LOAD CYCLE with a large energy storage component.
No other Nuc plant in the USA cycles. They all run pedal-to-the-metal, or not at all.
Ecocharger + 1,458
On 9/12/2024 at 4:10 AM, Rob Plant said:
On average the levelized cost of electricity from utility scale solar power and onshore wind power is less than from coal and gas-fired power stations,^[1]^:TS-25 but this varies greatly by
Cost of electricity by source - Wikipedia
Renewables: Cheapest form of power
Renewables: Cheapest form of power | United Nations
Solar is now ‘cheapest electricity in history’, confirms IEA
Solar is now ‘cheapest electricity in history’, confirms IEA (carbonbrief.org)
What’s the cheapest source of electricity?
What's the cheapest source of electricity? | AquaSwitch
The UK's costs for 2025
Mark every site I look at is saying the opposite to your post, also I cannot get your link to work.
Simply google the question and you will find a whole host of sites refuting what you claim, I am yet to see 1 that agrees with it.
I'm not deliberately disagreeing with you I genuinely cannot find anything to support your claim!
Renewables will only get cheaper as it becomes more mainstream and technology advances.
I do agree however that certain renewables suit certain climates/locations and others don't, but this still doesn't mean they are more expensive than conventional FF generation.
It's okay, we understand that you are not in any sort of deliberate disagreement with him. | {"url":"https://community.oilprice.com/topic/40625-renewables-are-expensive/?tab=comments","timestamp":"2024-11-14T12:25:35Z","content_type":"text/html","content_length":"404215","record_id":"<urn:uuid:c647e2b9-d7a5-4832-879f-cdd5021fc5ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00413.warc.gz"} |
I actually want to do something slightly more complicated than my original query. I want to produce Pi(Q_gt,s_gt) where the stuff in parentheses has subscripts, rather than say, Pi_gt where Pi has a subscript, which as you indicated above would simply be accomplished by text:=textplot([0.72,0.4,typeset(`#msubsup(mi("Pi"),mi("gt"))`)]); Even just trying to get the Pi(Q_gt part caused a problem with the parens: text2:=textplot([0.72,0.4,typeset(`#msubsup(mi("Pi(Q"),mi("gt")"))`)]):
Thanks much, both for the advice, which works, and the prompt response. Probably would not (read definitely would not!) have figured it out if I had been left to my own devices. So have to agree with
Georgios that this sort of cryptic approach to creating plots with subscripts, etc., in Maple could really provoke violence towards one's computer. :)
Thanks, Paulina, but the reason that I am using plot() in my procedure Q_opt is precisely because I can't use Maximize(). I have min[1,Q+s/2] as an expression for my integration bound in my objective function and the canned optimization routine Maximize() does not handle this. Specifically, the error I get is as follows:

Error, (in Optimization:-NLPSolve) unable to compare -(1/2)*s and 0

I therefore wrote the procedure Q_opt to extract the maximum from the grid of points that plot would use -- basically a manual approach to optimization. My routine for Q_opt was as follows:

Q_opt:=proc(s_val,p_val,pa_val)
if not is( {args}, set(numeric) ) then return 'procname'('args') end if;
Payoff := 1-1/24*s^3-1/2*s*min(1, Q+1/2*s)^2+1/4*s^2*min(1, Q+1/2*s)-1/3*min(1, Q+1/2*pa)^3+1/3*min(1, Q+1/2*s)^3+Q*min(1, Q+1/2*pa)^2-Q*min(1, Q+1/2*s)^2-Q^2*min(1, Q+1/2*pa)+Q^2*min(1, Q+1/2*s)+s*Q*min(1, Q+1/2*s)-1/2*pa+1/2*pa*min(1, Q+1/2*pa)^2+1/4*pa^2-1/4*pa^2*min(1, Q+1/2*pa)-s*Q+pa*Q-pa*Q*min(1, Q+1/2*pa)-p*Q;
P:=plot(subs(s=s_val,subs(p=p_val,subs(pa=pa_val,Payoff))),Q=0..1);
A:=remove(has,op([1,1],P),undefined):
MAX:=max(op(map(t->t[2],A)));
points:=op([1,1],P):
MAXPT:=select(t->t[2]=MAX,points);
op(1,MAXPT[1]);
end proc:

The new Q_opt that I just wrote using Maximize() is

Qopt2 := proc (sval, pval, paval)
if not is({args}, set(numeric)) then return ('procname')('args') end if;
Payoff := proc (Q, s, p, pa) options operator, arrow; 1-(1/24)*s^3-(1/2)*s*min(1, Q+(1/2)*s)^2+(1/4)*s^2*min(1, Q+(1/2)*s)-(1/3)*min(1, Q+(1/2)*pa)^3+(1/3)*min(1, Q+(1/2)*s)^3+Q*min(1, Q+(1/2)*pa)^2-Q*min(1, Q+(1/2)*s)^2-Q^2*min(1, Q+(1/2)*pa)+Q^2*min(1, Q+(1/2)*s)+s*Q*min(1, Q+(1/2)*s)-(1/2)*pa+(1/2)*pa*min(1, Q+(1/2)*pa)^2+(1/4)*pa^2-(1/4)*pa^2*min(1, Q+(1/2)*pa)-s*Q+pa*Q-pa*Q*min(1, Q+(1/2)*pa)-p*Q end proc;
[MAXVAL, MAXPT] = Maximize(Payoff(Q, s*val, pval, paval))
end proc

BUT it returns the following error when I test it:

> Qopt2(.2, 0.1e-1, 1)
Error, (in Optimization:-NLPSolve) unable to compare -(1/2)*s and 0

Suggestions for handling this optimization problem, where the central issue is with checking the value of the integration bounds in my objective function?
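The workaround described in the post above -- evaluating the objective on a grid of points and keeping the largest value, rather than calling a smooth optimizer -- can be sketched generically. This is an illustration in Python, not the Maple code; the payoff below is a stand-in objective with a min() kink, not the poster's expression:

```python
# Grid-search maximization: evaluate f on a uniform grid over [a, b]
# and return the grid point with the largest value. This mirrors the
# "extract the maximum from the points plot() would use" workaround,
# and handles objectives with min(...) kinks that defeat smooth solvers.
def grid_argmax(f, a, b, n=1000):
    best_x, best_val = a, f(a)
    for i in range(1, n + 1):
        x = a + (b - a) * i / n
        val = f(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Stand-in objective with a min() kink, like the integration bound min(1, Q + s/2).
payoff = lambda q: -(min(1.0, q + 0.1) - 0.6) ** 2
q_star, val = grid_argmax(payoff, 0.0, 1.0)
print(round(q_star, 3))  # the maximum sits where q + 0.1 == 0.6, i.e. q = 0.5
```

The trade-off is the usual one: a grid search needs no derivatives and cannot be confused by non-smooth bounds, but its accuracy is limited by the grid spacing.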
I have uploaded my worksheet at the following link: http://www.mapleprimes.com/files/5475_070922_Example.mw The comments in the worksheet point out the inconsistent plotting on Q_opt. (There are some
variable name changes from the above post, i.e. T_opt=Q_opt in this file and (x,y,z)=(s,p,pa).) Q_opt evaluates to be decreasing in the parameter s at the command prompt, which is correct. But when
plotted the graph shows Q_opt increasing in s -- all the plotted values are wrong! Thanks.
Sorry, the link to the plot is: http://www.mapleprimes.com/files/5475_Example-Maple-Bug.jpg Also, where I typed "X_opt" at the beginning of line 3 I clearly meant to type "T_opt."
Thanks -- that seems to work. (I am using an older version of Maple, Maple 9, so perhaps should upgrade!)
Hi Scott, Thanks -- yes, I will upload a worksheet shortly. Also, I have made some headway. I now have a procedure that uses fsolve to return the optimal Q:

> Q_buyer:=proc(s_val,p_val,pa_val)
> fsolve(subs(p=p_val,subs(s=s_val,subs(pa=pa_val,FOC_1_alt))),Q);
> end proc:

For example:

> Q_buyer(0.6-0.01,0,0.6);
0.7050000000

And then I have a second procedure that evaluates my profit function as follows:

> profit:=proc(s_val,p,pa,m,K)
> simplify(p*Q_buyer(s_val,p,pa) + s_val*subs(s=s_val,subs(Q=Q_buyer(s_val,p,pa),y)) + m*(K-subs(s=s_val,subs(Q=Q_buyer(s_val,p,pa),y))));
> end proc:

For example:

> profit(0.59,0,0.6,0.5,1);
0.5223661250

This is exactly what I wanted to be able to do -- to compute the profit at the optimal Q. Except, now I want to be able to optimize profit with respect to the parameters s_val and p. When I try fsolve with my procedure, Maple doesn't like it:

> fsolve({profit(s,p,0.8,0.5,1)},{s,p},{s=0..0.7999,p=0..10});
Error, (in fsolve) {s, p} are in the equation, and are not solved for

And it doesn't like plot3d with the profit procedure either.
Disregard the last statement - maximize works. It just didn't work for my problem. Maple returned an unevaluated statement, indicating that it had not found a solution. The frustrating thing is that when I plot the payoff function above, I can see that it has an optimum in (p,s). I just don't seem to be able to get Maple to find it symbolically! If you plot

plot(subs(p=0.1,subs(pa=0.6,x)),s=0..1,view=[0..1,-10..10]);

where x is as defined above:

x:=-1/4*(s^2-pa^2+4*pa-4*s-4*p)/(s-pa)

you'll see that the curve to the left of s=pa is smooth. Why is Maple having difficulty finding the maximizer (s,p)?
Okay, this is helpful. Thanks. But maximize doesn't appear to be recognized in my Maple version. Do I need to activate (or install) the global optimization toolbox? Or do I just try a similar command
as I use to add functionality to plot: with(plots)?
Sorry, having problems with the "less than sign." The earlier post should read "s less than pa," where pa is some defined parameter of my problem.
This should say, over the restricted range s | {"url":"https://beta.mapleprimes.com/users/ctomkins/answers","timestamp":"2024-11-09T00:24:58Z","content_type":"text/html","content_length":"126007","record_id":"<urn:uuid:6ef11050-f506-4412-bae1-2e9cd2be2c62>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00784.warc.gz"} |
Node345 - CFD SUPPORT
This is automatically generated documentation produced by the LaTeX2HTML utility. In case of any issue, please contact us at info@cfdsupport.com.
Turbulence modeling is an extremely complicated task. The solution still arises from conservation laws: the governing equations are the system of Navier-Stokes equations. The solution is always only an approximation. For the description of turbulent flow, especially in technical applications, often just the mean parameters of the fluid flow are of interest.
There are two main approaches to the numerical solution of the governing equations. Either the mean fluid flow values are followed, using an averaging approach (RANS), or the instantaneous fluid flow values are followed, using a direct approach (DNS, LES, DES).
In practice, most solutions of real cases are based on the Reynolds Averaged Navier-Stokes equations (RANS). RANS is based on the decomposition of the instantaneous values of the variables into a mean part and a fluctuation part. The governing system, called the Reynolds Averaged Navier-Stokes equations (RANS), is then solved for the mean values only and is completed by a turbulence model. Practically all of the fluid flow is modeled (mean values) everywhere, regardless of time scales or turbulent eddy scales.
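The decomposition at the heart of RANS -- splitting an instantaneous quantity into a mean part plus a fluctuation part, u = U + u' -- can be illustrated numerically. The sampled signal below is synthetic and purely for demonstration:

```python
# Reynolds decomposition of a sampled velocity signal:
# u(t) = u_mean + u'(t), where u_mean is the time average and
# the fluctuations u'(t) average to zero by construction.
import math

def reynolds_decompose(samples):
    u_mean = sum(samples) / len(samples)
    fluctuations = [u - u_mean for u in samples]
    return u_mean, fluctuations

# Synthetic "turbulent" signal: a 10 m/s mean flow plus a sinusoidal fluctuation.
n = 1000
samples = [10.0 + 2.0 * math.sin(2 * math.pi * k / n) for k in range(n)]
u_mean, u_prime = reynolds_decompose(samples)
print(round(u_mean, 6))          # 10.0
print(abs(sum(u_prime)) < 1e-9)  # fluctuations average to zero: True
```

RANS then solves transport equations for u_mean only; the statistical effect of u' on the mean flow enters through the Reynolds stresses, which is what the turbulence model supplies.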
The direct approach simulates all the instantaneous quantities and all scales of turbulent eddies at every instant: Direct Numerical Simulation (DNS). DNS requires a very fine mesh, especially near the walls; the number of grid points needed grows roughly as Re^(9/4) with the Reynolds number. DNS can be realized only on special, highly efficient computers, and nothing signals dramatic changes to come in the near future.
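A standard estimate puts the number of DNS grid points at roughly Re^(9/4), which makes the cost concrete; the Reynolds numbers below are illustrative only:

```python
# DNS grid-point estimate: N ~ Re^(9/4). Even modest increases in
# Reynolds number inflate the required mesh enormously.
def dns_grid_points(reynolds):
    return reynolds ** 2.25

for re in (1e4, 1e6, 1e8):  # illustrative Reynolds numbers
    print(f"Re = {re:.0e}: ~{dns_grid_points(re):.1e} grid points")
```

Going from Re = 10^4 to Re = 10^8 multiplies the grid-point count by a factor of 10^9, which is why DNS of technically relevant flows remains out of reach for routine use.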
The compromise between DNS and RANS is Large Eddy Simulation (LES) or Detached Eddy Simulation (DES). Here a filter on eddy scales is applied: the large-scale eddies are simulated directly, while the small-scale eddies are modeled by applying a sub-grid model, typically some simple algebraic mixing-length model.
NOTE: The turbulence modeling approach always depends on how much detailed information (in space and time) is needed to meet the goals. | {"url":"https://www.cfdsupport.com/openfoam-training-by-cfd-support/node345.html","timestamp":"2024-11-05T06:50:27Z","content_type":"text/html","content_length":"67714","record_id":"<urn:uuid:fde86f5f-7728-464a-95df-0cf94130ce45>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00669.warc.gz"} |
Students show how to break given amounts into a number of equal sized groups and find the amount each person gets.
Students break given numbers into equal sized groups. Students must show their working.
Students identify whether two shapes have been partitioned evenly or not, and show how to correctly partition the shapes.
Students fold pieces of paper shaped as a rectangle, square, and circle to show fractions.
Students find different ways to divide squares into halves and quarters.
Students show or explain how to share equally amongst a number of people.
Students show or explain how to share cakes and pizzas equally amongst a number of people.
Students draw lines to divide up shapes into equal parts and name the fractional value of the part.
Students partition sets of objects and show how to work out fractions of quantities (using fractions as operators).
Students show how to partition a number of counters and identify the fraction shared.
Students find fractional amounts of a total number of Easter eggs.
For this practical task students model the fractions 1/10, 1/8, and 1/6 using plasticine, then name the remainder for each fraction.
Students answer questions involving finding how many of one fraction goes into another fraction.
Students show how to divide 3-digit whole numbers into equal sized groups and identify any remainder.
Students show how to partition shapes into a given number of equal parts and identify the fraction of each part.
Students draw lines to divide up shapes into fractional parts.
Students find fractions of numbers of different farm animals and put them in their simplest form.
Students colour in a diagram to show fractions, and calculate fractions from a word problem and a diagram.
Students answer questions by dividing a given amount of chocolate bars equally, and finding a fraction of the total amount. | {"url":"https://arbs.nzcer.org.nz/category/keywords/partitioning","timestamp":"2024-11-13T04:53:52Z","content_type":"text/html","content_length":"89530","record_id":"<urn:uuid:1a157c09-0833-469b-b0d7-941ef4dfc314>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00599.warc.gz"} |
How do you factor the sum or difference of two cubes x^3-27?
| HIX Tutor
How do you factor the sum or difference of two cubes #x^3-27#?
Answer 1
The factored form is #x^3-27=(x-3)(x^2+3x+9)#
This is a case of factoring a difference of cubes, #a^3-b^3=(a-b)(a^2+ab+b^2)#.
For #(x^3-27)#:
#a=x, b=3#
Answer 2
You can factor the difference of two cubes (x^3 - 27) using the formula:
[a^3 - b^3 = (a - b)(a^2 + ab + b^2)]
For (x^3 - 27), (a) is (x) and (b) is (3), so the factored form is:
[x^3 - 27 = (x - 3)(x^2 + 3x + 9)]
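The factored form can be checked mechanically by multiplying the factors back out. A short sketch (polynomials stored as coefficient lists, lowest degree first; the helper is illustrative, not a library routine):

```python
# Verify the difference-of-cubes factorisation by expanding
# (x - 3)(x^2 + 3x + 9): multiply the coefficient lists and
# compare the product with the coefficients of x^3 - 27.
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

factor1 = [-3, 1]    # x - 3
factor2 = [9, 3, 1]  # x^2 + 3x + 9
print(poly_mul(factor1, factor2))  # [-27, 0, 0, 1], i.e. x^3 - 27
```

The middle coefficients cancel exactly (-3·3 + 1·9 = 0 and -3·1 + 1·3 = 0), which is why the expansion collapses back to the two-term difference of cubes.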
| {"url":"https://tutor.hix.ai/question/how-do-you-factor-the-sum-or-difference-of-two-cubes-x-3-27-8f9af9754b","timestamp":"2024-11-10T11:49:21Z","content_type":"text/html","content_length":"575822","record_id":"<urn:uuid:86529529-71c0-4e99-a71d-22ec09e8b0f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00648.warc.gz"}
Radar observability of near-Earth objects using EISCAT 3D
Articles | Volume 38, issue 4
© Author(s) 2020. This work is distributed under the Creative Commons Attribution 4.0 License.
Radar observability of near-Earth objects using EISCAT 3D
Radar observations can be used to obtain accurate orbital elements for near-Earth objects (NEOs) as a result of the very accurate range and range rate measurables. These observations allow the
prediction of NEO orbits further into the future and also provide more information about the properties of the NEO population. This study evaluates the observability of NEOs with the EISCAT 3D
233MHz 5MW high-power, large-aperture radar, which is currently under construction. Three different populations are considered, namely NEOs passing by the Earth with a size distribution
extrapolated from fireball statistics, catalogued NEOs detected with ground-based optical telescopes and temporarily captured NEOs, i.e. mini-moons. Two types of observation schemes are evaluated,
namely the serendipitous discovery of unknown NEOs passing the radar beam and the post-discovery tracking of NEOs using a priori orbital elements. The results indicate that 60–1200 objects per year,
with diameters D>0.01m, can be discovered. Assuming the current NEO discovery rate, approximately 20 objects per year can be tracked post-discovery near the closest approach to Earth. Only a
marginally smaller number of tracking opportunities are also possible for the existing EISCAT ultra-high frequency (UHF) system. The mini-moon study, which used a theoretical population model,
orbital propagation, and a model for radar scanning, indicates that approximately seven objects per year can be discovered using 8%–16% of the total radar time. If all mini-moons had known orbits,
approximately 80–160 objects per year could be tracked using a priori orbital elements. The results of this study indicate that it is feasible to perform routine NEO post-discovery tracking
observations using both the existing EISCAT UHF radar and the upcoming EISCAT 3D radar. Most detectable objects are within 1 lunar distance (LD) of the radar. Such observations would complement the
capabilities of the more powerful planetary radars that typically observe objects further away from Earth. It is also plausible that EISCAT 3D could be used as a novel type of an instrument for NEO
discovery, assuming that a sufficiently large amount of radar time can be used. This could be achieved, for example by time-sharing with ionospheric and space-debris-observing modes.
Received: 27 Mar 2020 – Discussion started: 08 Apr 2020 – Accepted: 15 Jun 2020 – Published: 15 Jul 2020
All current radar observations of near-Earth objects (NEOs), namely asteroids and comets with perihelion distance q<1.3au, are conducted post-discovery (Ostro, 1992; Taylor et al., 2019). Radar
measurements allow for the determination of significantly more accurate orbital elements (Ostro, 1994). They may also allow construction of a shape model (Ostro et al., 1988; Kaasalainen and
Viikinkoski, 2012) and provide information about composition based on polarimetric radar scattering properties (Zellner and Gradie, 1976). In some cases, the absolute rotation state of the object can
also be determined by tracking the scintillation pattern of the radar echoes (Busch et al., 2010).
The amount of radar observations of NEOs is limited by resources, i.e. there are significantly more observing opportunities during close approaches than there is radar time available on the Arecibo
(2.36GHz, 900kW) and Goldstone DSS-14 (8.56GHz, 450kW) radars, which are the two radars that perform most of the tracking of asteroids on a routine basis. In order to increase the number of radar
measurements of NEOs, it is desirable to extend routine NEO observations to smaller radars, such as the existing EISCAT radars or the upcoming EISCAT 3D radar (233MHz, 5MW), henceforth abbreviated
to E3D, which is to be located in Fenno-Scandinavian Arctic. While these radars may not be capable of observing objects nearly as far away as Arecibo or Goldstone or generating high-quality
range–Doppler images, these radars are able to produce high-quality ranging.
Smaller radars can be used for nearly continuous observations, and it is possible that they can even contribute to the discovery of NEOs. Kessler et al. (1980) presented an early attempt at
discovering meteoroids outside of the Earth's atmosphere using a space-surveillance radar. However, the observation span was only 8h, and the results were inconclusive, but 31 objects were
identified as possible meteoroids. No follow-up studies were conducted.
As a result of the enhanced survey capability with optical telescopes, the discovery rate of NEOs has greatly increased during the last two decades, from 228 NEOs discovered in 1999 to 2436
discovered in 2019. Recent discoveries include significantly more small objects that have close approach distances within 1 LD compared to discoveries made 20 years ago. It is these objects that are
often within the reach of smaller radars. The EISCAT UHF system has in fact already been successfully used to track the asteroid 2012 DA[14] (J. Vierinen, personal communication, 2013), proving the
feasibility of these kinds of observations. One of the range–Doppler observations of 2012 DA[14], using EISCAT UHF, is shown in Fig. 1. It is expected that NEO observations using E3D will be of a
similar nature.
When NEOs make a close approach to Earth, they enter a region where the Earth's gravity dominates. Most of the time, objects will make one single pass and then leave this region again. In some rare
cases, the objects are temporarily captured by the Earth–Moon system (Granvik et al., 2012; Fedorets et al., 2017). These events are called temporarily captured fly-bys, if the object makes less than
one revolution around the Earth, and temporarily captured orbiters, or mini-moons, if the object makes one or more revolutions around the Earth. The existence of a population of transient mini-moons
in the vicinity of the Earth opens up interesting scientific and technological opportunities, such as allowing a detailed characterisation of the NEO population in a size range that is otherwise hard
to study empirically and providing easily accessible targets for space missions (Jedicke et al., 2018; Granvik et al., 2013). However, only two mini-moons have been discovered to date, namely 2006 RH120 (Kwiatkowski et al., 2009) and 2020 CD3 (Fedorets et al., 2020). Hence, there is very little observational data about these objects and very basic questions about the population still remain
unanswered. Because mini-moons, as opposed to generic NEOs, are bound to the Earth–Moon system for a significant period of time, there are more opportunities to track or perhaps even discover them
using radar.
The EISCAT radars have already been used for around 20 years to observe the statistical distribution of space debris without prior knowledge of the orbital elements (Markkanen et al., 2009; Vierinen
et al., 2019b). Space debris is a collective term for the population of artificially created objects in space, especially those in orbit around the Earth. This population includes old satellites and
rocket components and fragments from their disintegration and collisions. There are approximately 16000 catalogued space debris objects, and it is estimated that there are 750000 objects larger
than 1cm in diameter (Braun, 2018). The capability to detect, track, and catalogue these objects is essential for current and future space operations. One of the most useful methods to measure and
catalogue space debris is using radar systems. For example, beam park observations, i.e. a single constant pointing direction with high-power, large-aperture radars, are an important source of
information when modelling the space debris population (Krisko, 2014; Flegel et al., 2009; Banka et al., 2000). The current EISCAT system also has a history of providing significant contributions to
space debris observations, such as the Chinese antisatellite event (Markkanen et al., 2009; Li et al., 2012), the Iridium–Cosmos collision (Vierinen et al., 2009), and the Indian antisatellite event
(Vierinen, 2019). The utility of E3D for space debris discovery and tracking has recently been investigated (Vierinen et al., 2017a, 2019a). The study showed that E3D is a capable instrument for observing the space debris object population due to its phased array antenna system and its multi-static geometry. The focus of this study is to determine whether E3D could be similarly used to gain information about the population of NEOs.
The space debris application is very closely related to NEO observations as they both entail the discovery and tracking of a population of hard radar targets. Both populations follow a power law
distribution in size, i.e. exponential cumulative growth in number as size decreases. There are, however, several differences. The number density of NEOs close enough to the Earth to be detectable
with radar is significantly lower. While observability of NEOs using Arecibo and Goldstone has recently been investigated (Naidu et al., 2016), a similar study has not been done for smaller radars
such as E3D. Also, we are not aware of any study of the discovery of NEOs using radar. It is thus desirable to investigate the expected observation capability of this new radar system so that
observation programmes that will produce useful data can be prioritised.
E3D is the next-generation international atmosphere and geospace research radar in northern Scandinavia. It is currently under construction and is expected to be operational by the end of 2021. E3D
will be the first multi-static, phased array, incoherent scatter radar in the world. It will provide essential data to a wide range of scientific areas within geospace science (McCrea et al., 2015).
The primary mission for the E3D radar, which has largely defined the radar design, is atmospheric and ionospheric research. However, the radar is also highly capable of observing meteors entering the
Earth's atmosphere (Pellinen-Wannberg et al., 2016), tracking orbital debris (Vierinen et al., 2019a), and even mapping the Moon (e.g. Thompson et al., 2016; Campbell, 2016; Vierinen et al., 2017b;
Vierinen and Lehtinen, 2009).
The E3D system will initially consist of three sites, namely in Skibotn in Norway, near Kiruna in Sweden, and near Karesuvanto in Finland. Each of these sites will consist of about 10^4 antennas and
act as receivers. The Skibotn site will also act as the transmitter with an initial power of 5MW, later to be upgraded to 10MW. The E3D design allows for novel measurement techniques, such as
volumetric imaging, aperture synthesis imaging, multi-static imaging, tracking, and adaptive experiments, together with continuous operations. As two of the main features of the system will be high
power and flexibility, it is a prime candidate for conducting routine radar observations of NEOs. The technical specifications of E3D are given in Sect. 4.
In this work, we investigate two possible use cases for E3D for observing NEOs, namely the (1) discovery of NEOs and (2) post-discovery tracking of NEOs with known a priori orbital elements. The
first case resembles that of space debris observations, where objects randomly crossing the radar beam are detected and, based on their orbital elements, are classified as either orbital space debris
or natural objects. The second case is the more conventional radar ranging of NEOs based on a priori information about the orbital elements which yields a more accurate orbit solution. This is
described in detail in Sect. 3.
We estimate the detectability using three different approaches, roughly categorised as first order, second order, and third order. The first-order model uses a power law population density based on
fireball statistics (Brown et al., 2002). This model makes some major simplifications about the similarities of the cross-sectional collecting areas of the Earth and the E3D radar beam for providing
an estimate of the total NEO flux observed by the radar. This method is described in Sect. 5 and the population it uses in Sect. 2.1. In Sect. 6 we describe the second-order model that assesses the
number of post-discovery tracking opportunities that are expected in the near future. For this model, we use close approaches predicted for the last 12 months by the Center for NEO Studies (CNEOS)
catalogue (NASA JPL, 2020), which is described in Sect. 2.2. Lastly, the third-order model is a full-scale propagation and observation simulation of a synthetic mini-moon population which is described
in Sect. 7. Although we predict that the vast majority of NEOs are not observable by radars due to size and range issues, one interesting and promising subpopulation of NEOs in terms of detectability
is the mini-moons. This population is described in Sect. 2.3. The results for each method are also given in the respective model description section. Finally, we discuss and draw conclusions based on
our results in Sects. 8 and 9.
2 Near-Earth object population models
2.1 Fireball statistics
Fireball observation statistics can be used to estimate the influx of small NEOs colliding with the Earth. By assuming that the flux of NEOs passing nearby the Earth is the same as the impacting flux, it is possible to make a rough estimate of the number of objects that cross the E3D radar beam. A synthesis of NEO fluxes estimated by various authors is given by Brown et al. (2002), who estimate a log–log linear relationship for the cumulative number N_FB of NEOs hitting the Earth with diameter >D as follows:
$$\log_{10} N_\mathrm{FB} = a_0 - b_0 \log_{10} D. \tag{1}$$
Here $a_0 = 1.568 \pm 0.03$ and $b_0 = 2.70 \pm 0.08$. For example, using this formula, one can estimate that the number of objects colliding with the Earth that are larger than 10cm is approximately 1.8×10^4yr^−1.
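Equation (1) is easy to evaluate numerically. The following sketch (the function name is ours, not from the paper) reproduces the 10cm estimate quoted above:

```python
import math

def cumulative_flux(D, a0=1.568, b0=2.70):
    """Cumulative number of NEOs with diameter > D (in metres) hitting
    the Earth per year, Eq. (1) with the Brown et al. (2002) constants."""
    return 10.0 ** (a0 - b0 * math.log10(D))

# Objects larger than 10 cm colliding with the Earth per year:
print(f"{cumulative_flux(0.1):.3g}")  # ~1.85e4, i.e. the ~1.8e4 yr^-1 quoted above
```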
This population model is convenient as it will allow us to theoretically investigate the number of objects detectable by a radar without resorting to large-scale simulations.
2.2 Known NEOs with close approaches to the Earth
The Jet Propulsion Laboratory (JPL) CNEOS maintains a database of NEO close approaches. This database contains objects that have close encounters with the Earth and provides information, such as the
date and distance for the closest approaches (Chesley and Chodas, 2002). As the database consists of known objects, not modelled populations, there are significantly more past encounters than
projected future encounters. Many smaller objects, of which only a small fraction are known, are only discovered when they are very close to the Earth. As a test population we use the data provided
for approaches within 0.05au that occurred during one year from 13 March 2019 to 13 March 2020. This population contains 1215 objects, contrasting with 107 objects in the year from 13 March 2020 to
13 March 2021. The database was accessed on 13 March 2020 when there were a total of 149916 objects in the catalogue.
In addition to the closest approach distance, the database provides an estimate of the diameter of each object estimated from the absolute magnitude. The diameter is used to estimate the
signal-to-noise ratio (SNR) obtainable for radar observations and both planned and serendipitous discovery observations near the closest approach.
In Fig. 6 we show the distributions of some of the orbital elements in the NEO population being investigated. The reason for the peculiar shape of eccentricity as a function of the semimajor axis
(top-left plot in Fig. 6) is that anything in the bottom right of the plot, outside the region populated, would not cross the Earth's orbit. Similarly, the inclination distribution is biased towards
low inclinations, simply due to the fact that these objects are more likely to be near the Earth.
The CNEOS catalogue offers a way to judge what a realistic number of tracking opportunities will be for E3D. Because the orbital elements are not known for most smaller NEOs, the primary source for
tracking opportunities is newly discovered objects, which are added to the database near closest approach. Approximately 50% of the objects are discovered before the closest approach and 50%
afterwards, primarily as the objects are approaching from the direction of the Sun and are not observable in the day-lit hemisphere using telescopic surveys. The number of annual detections has been
steadily increasing, and we expect significantly more tracking opportunities within the next few years given the constantly improving sky surveys and the start of new surveys, such as the Rubin
Observatory Legacy Survey of Space and Time (LSST; Ivezić et al., 2019).
2.3 Mini-moon model
Only two mini-moons have been discovered so far, and we therefore have to rely on theoretical predictions of their orbits and sizes rather than a model that is based on direct observational data. The
theoretical models are based on a numerical analysis of the NEO capture probability, estimation of the average capture duration, and the estimated flux of NEOs into the capturable volume of phase
space. Whereas Granvik et al. (2012) focused on mini-moons only and estimated the flux based on the debiased NEO model by Bottke et al. (2002), Fedorets et al. (2017) extended the model to encompass
both orbiters and fly-bys and tied the model to the updated NEO model by Granvik et al. (2016).
Here we use the newer mini-moon model by Fedorets et al. (2017). The average mini-moon makes 3–4 revolutions around the Earth during a capture which lasts about 9 months. The largest mini-moon
captured at any given time is about 1m in diameter. The realisation of the model contains 20272 synthetic mini-moon orbits, with absolute V-band magnitude $29.6 < H_\mathrm{V} < 37.1$. The epochs of the synthetic mini-moons are randomly spread across the 19-year Metonic cycle to properly average the changes in the mutual geometry of the Earth, the Moon, and the Sun. Hence,
about 1000 mini-moons are captured in any given year during the simulation. The lead time for capturing is typically not more than 2–3 months, so the simulated mini-moon environment is in a steady
state for 12 months after the epoch of the chronologically first synthetic mini-moon.
We need to convert absolute magnitudes to diameters to find the radar SNR in subsequent calculations. Using the relationship between the absolute magnitude H_V, diameter D, and geometric albedo p_V, a relation between H_V and D can be derived (e.g. Harris and Harris, 1997; Fowler and Chillemi, 1992) as follows:
$$\log_{10}(D) = 3.1236 - 0.5 \log_{10}(p_\mathrm{V}) - 0.2 H_\mathrm{V}. \tag{2}$$
Here it is assumed that the integral bolometric Bond albedo is approximately equal to the Bond albedo in the visual colour range, allowing for the use of the geometric albedo. We assume a geometric albedo p_V=0.14 (Granvik et al., 2012; Morbidelli et al., 2020), which will transform the modelled population in such a way that the distribution is not only shape representative of the true population but also magnitude representative. In other words, if we simulate N possible object measurements in a certain parameter space region, this corresponds to an expected N real objects.
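Equation (2) can be evaluated directly. Note that the constant 3.1236 ≈ log10(1329), so the diameter comes out in kilometres; the sketch below (function name ours) converts to metres and uses the assumed albedo p_V = 0.14:

```python
import math

def hv_to_diameter_km(H_V, p_V=0.14):
    """Diameter in kilometres from absolute magnitude H_V and geometric
    albedo p_V, Eq. (2). The constant 3.1236 = log10(1329), which fixes
    the output unit as kilometres."""
    return 10.0 ** (3.1236 - 0.5 * math.log10(p_V) - 0.2 * H_V)

# The brightest synthetic mini-moons (H_V = 29.6) are metre-class objects,
# while the faintest (H_V = 37.1) are roughly decimetre-class:
d_bright_m = hv_to_diameter_km(29.6) * 1000.0
d_faint_m = hv_to_diameter_km(37.1) * 1000.0
```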
We also need an estimate of the NEO rotation rate because it affects the SNR of a radar measurement (Sect. 3). The rotation-rate distribution of objects smaller than 5m in diameter is unfortunately
not well constrained. The rotation rate appears to increase exponentially with decreasing size (Bolin et al., 2014; Pravec et al., 2002). Virtually unbiased radar observations have also revealed that
very few small NEOs rotate slowly (Taylor et al., 2012; Benner et al., 2015). In what follows, we assume that the objects could have one of four different rotation rates, namely 1000, 5000, 10000,
or 86400 revolutions per day. These values are also consistent with the modelling of cometary meteoroids. For example, Čapek (2014) studied the distribution of rotation rates of meteoroids ejected from 2P/Encke and found that objects with diameters between 1 and 10cm have rotation rates approximately between 0.1 and 10Hz. There are also indirect observations of meteoroid rotation rates derived from optical meteor light curve oscillations. Beech and Brown (2000) estimate ∼20Hz and less for objects larger than 10cm in diameter.
3 Radar detectability
When observing NEOs with radar, the most important factor is radar detectability, which depends on the SNR. The SNR is determined by factors specific to the object, namely diameter, range, Doppler width, and radar albedo, and by factors specific to the radar system, namely antenna size, transmit power, wavelength, and receiver system noise temperature. The model for radar detectability presented here is similar to the one given by Ostro (1992), with slight modifications and additions.
The measured Doppler bandwidth is a combination of relative translation and rotation of the observing frame and the intrinsic rotation of the observed object around its own axis. However, in all
cases considered by this study, the effect of a moving observation frame is negligible. As such, the Doppler width B of a rotating rigid object depends on the rotation rate around its own axis and
the diameter of the object, as follows:
$$B = \frac{4\pi D}{\lambda \tau_\mathrm{s}}. \tag{3}$$
Here D is the object diameter, λ is the radar wavelength, and τ_s is the rotation period of the object. The Doppler width can be used to determine the effective noise power entering the radar receiver. By coherently integrating the echo for B^−1 s, we obtain a spectral resolution that corresponds to the Doppler width. When dealing with a pulsed radar, we also need to factor in the transmit duty cycle γ, which effectively increases the noise bandwidth. The noise power can be written as follows:
$$P_\mathrm{N} = k T_\mathrm{sys} B \gamma^{-1}. \tag{4}$$
Here k is the Boltzmann constant and T_sys is the system noise temperature.
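As a numerical sketch, Eqs. (3) and (4) can be combined into a few lines of Python. The function names are ours; the example values (a 1m object rotating at 1000 revolutions per day, λ = 1.29m, T_sys = 150K, γ = 0.25) use the E3D parameters quoted elsewhere in the text:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def doppler_width(D, wavelength, rotation_period):
    """Doppler width B of a rotating rigid object, Eq. (3)."""
    return 4.0 * math.pi * D / (wavelength * rotation_period)

def noise_power(T_sys, B, duty_cycle):
    """Effective noise power of a pulsed radar, Eq. (4)."""
    return k_B * T_sys * B / duty_cycle

tau_s = 86400.0 / 1000.0            # rotation period for 1000 rev/day, seconds
B = doppler_width(1.0, 1.29, tau_s)  # ~0.11 Hz Doppler width
P_N = noise_power(150.0, B, 0.25)    # watts of effective noise power
```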
The radar echo power originating from a space object, assuming the same antenna is used to transmit and receive, can be obtained using the radar equation as follows:
$$P_\mathrm{S} = \frac{P_\mathrm{TX} G^2 \lambda^2 \sigma}{(4\pi)^3 R^4}. \tag{5}$$
Here P_TX is the peak transmit power, G is the antenna directivity, R is the distance between the radar and the object, and σ is the radar cross section of the target.
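A sketch of Eq. (5) follows (function name and example values ours; the 5MW power, 43dB gain, and 1.29m wavelength are the E3D values quoted in Sect. 4, and the example cross section uses the geometric-regime value for a 1m sphere with radar albedo 0.1):

```python
import math

def echo_power(P_tx, G, wavelength, sigma, R):
    """Monostatic radar equation, Eq. (5): echo power received from a
    target with radar cross section sigma at range R (SI units)."""
    return P_tx * G**2 * wavelength**2 * sigma / ((4.0 * math.pi)**3 * R**4)

# Geometric-regime cross section of a 1 m sphere with radar albedo 0.1:
sigma = 0.1 * math.pi * 1.0**2 / 4.0
# Echo power for E3D-like parameters at a range of 10^4 km:
P_S = echo_power(5e6, 10.0 ** (43.0 / 10.0), 1.29, sigma, 1.0e7)
```

Note the steep R^−4 dependence: doubling the range reduces the echo power by a factor of 16.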
The radar cross section of NEOs can be estimated using the radar cross section of a dielectric sphere, which is either in the Rayleigh or geometric scattering regime (e.g. Balanis, 1999). The
transition between the regimes occurs at approximately 0.2λ. This is similar to the approach taken by the NASA size estimation model for space debris radar cross sections (Liou et al., 2002), which
is validated using scale-model objects in a laboratory. Resonant scattering is not included in the model as the objects are irregular in shape and, on average, the sharp scattering cross section
resonances are smeared out. As we are dealing with natural objects made out of less conductive materials than man-made objects, the radar cross section will be scaled down by a factor from that of a
perfectly conducting sphere. This factor is a dimensionless quantity called the radar albedo $\hat{\sigma} \approx |(\epsilon_\mathrm{r}-1)/(\epsilon_\mathrm{r}+2)|^2$, with ε_r being the relative permittivity of the object. In this study, a commonly used value of $\hat{\sigma} = 0.1$ is used (Ostro, 1992). The radar cross section model, continuous at the transition between the two regimes, is thus as follows:
$$\sigma = \begin{cases} \hat{\sigma}\,\dfrac{\pi D^2}{4}, & D > 0.2\lambda \\[6pt] \hat{\sigma}\,\dfrac{\pi D^2}{4}\left(\dfrac{D}{0.2\lambda}\right)^4, & D \le 0.2\lambda. \end{cases} \tag{6}$$
When detecting an object, we are essentially estimating the power of a complex normal random variable measured by the radar receiver. We assume that the system noise power is known to have a far
better precision than the power originating from the space object; thus we ignore the uncertainty in determining the noise power in the error budget. It can then be shown that the variance of the
power estimate is as follows:
$$\mathrm{Var}\{\hat{P}_\mathrm{S}\} = \frac{(P_\mathrm{S} + P_\mathrm{N})^2}{K}, \tag{7}$$
where K is the number of independent measurements. It is possible to obtain an independent measurement of power every B^−1 s, which means that there are K = τ_m B measurements for an observation period of length τ_m, assuming that $\tau_\mathrm{m} \gg B^{-1}$.
To determine if the measurement is statistically significant or not, a criterion can be set on the relative standard error δ. Using the signal power estimator variance from Eq. (7), δ is defined as
$$\delta = \frac{\mathrm{SD}\{\hat{P}_\mathrm{S}\}}{P_\mathrm{S}} = \frac{P_\mathrm{S} + P_\mathrm{N}}{P_\mathrm{S}\sqrt{K}}. \tag{8}$$
For example, we can use δ=0.05 as the criterion of a statistically significant detection. In this case, the error bars are 5% in the standard deviation of the received signal power. We can then
determine the number of required samples as follows:
$$K = \frac{(P_\mathrm{S} + P_\mathrm{N})^2}{\delta^2 P_\mathrm{S}^2}. \tag{9}$$
We can also determine the minimum required observation time needed to reduce the relative error to δ as follows:
$$\tau_\delta = \frac{(P_\mathrm{S} + P_\mathrm{N})^2}{\delta^2 P_\mathrm{S}^2 B}. \tag{10}$$
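Equations (9) and (10) translate directly into code; a minimal sketch (function names ours). For the illustrative case P_S = P_N and δ = 0.05, this gives K = 1600 required samples:

```python
def required_samples(P_S, P_N, delta):
    """Number of independent power samples needed to reach a relative
    standard error delta, Eq. (9)."""
    return (P_S + P_N) ** 2 / (delta ** 2 * P_S ** 2)

def required_time(P_S, P_N, delta, B):
    """Minimum observation time to reach relative error delta, Eq. (10)."""
    return required_samples(P_S, P_N, delta) / B

# Signal power equal to noise power, 5% relative error criterion:
K = required_samples(1.0, 1.0, 0.05)        # 1600 samples
tau_delta = required_time(1.0, 1.0, 0.05, 5.0)  # 320 s at B = 5 Hz
```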
The commonly used SNR reported for planetary radar targets compares the received power to the standard deviation of the averaged noise floor as follows:
$$\rho = \frac{P_\mathrm{S}}{P_\mathrm{N}} \sqrt{\tau_\mathrm{m} B}. \tag{11}$$
In the case of geometric scatter, this is as follows:
$$\rho = \frac{1}{4^{9/2}\pi^{7/2}k} \, \frac{P_\mathrm{TX}\gamma G^2 \lambda^{5/2}}{T_\mathrm{sys}} \, \frac{\hat{\sigma} d^{3/2} \tau_\mathrm{s}^{1/2}}{R^4} \, \tau_\mathrm{m}^{1/2}, \tag{12}$$
where the equation is grouped into a constant, a radar-dependent term, an object-dependent term, and the observation duration.
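Equation (11) itself is straightforward to evaluate; a minimal sketch (function name ours). Note the τ_m^{1/2} dependence: quadrupling the incoherent integration time doubles the SNR.

```python
import math

def tracking_snr(P_S, P_N, tau_m, B):
    """Planetary-radar SNR with incoherent averaging over an observation
    of length tau_m at Doppler bandwidth B, Eq. (11)."""
    return (P_S / P_N) * math.sqrt(tau_m * B)
```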
3.1 Serendipitous detectability
The above considerations for the detectability of a space object assume that there is a good prior estimate of the orbital elements, which allows radial trajectory corrections to be made when
performing the coherent and incoherent averaging. If the objective is to discover an object without prior knowledge of the orbit, one must perform a large-scale grid search in the radial component of
the trajectory space during detection. In this case, it is significantly harder to incoherently average the object for long periods of time while matching the radial component of the trajectory with
a matched filter; the search space would simply be too large. For space debris targets, we estimate the longest coherent integration feasible at the moment to be about τ_c=0.2s. This also corresponds to the longest observing interval. We will use this as a benchmark for the serendipitous discovery of NEOs. In this case, we need to evaluate the measurement bandwidth using the following:
$$B = \max\left(\frac{4\pi D}{\lambda \tau_\mathrm{s}}, \frac{1}{\tau_\mathrm{c}}\right), \tag{13}$$
where the bandwidth has a lower bound, which is determined either by the rotation rate or by the coherent integration length. In most cases, the receiver noise bandwidth will be determined by the coherent integration length, i.e. $B = \tau_\mathrm{c}^{-1}$. We will use this in the subsequent studies of serendipitous detectability.
Assuming that we cannot perform incoherent averaging without a priori knowledge of the orbital elements, the SNR will then be as follows:
$$\rho = \frac{P_\mathrm{S}}{P_\mathrm{N}}, \tag{14}$$
which does not include any effects of incoherent averaging of power.
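The bandwidth selection of Eq. (13) can be sketched as follows (function name ours; τ_c = 0.2 s as stated above). For slow rotators the coherent integration length dominates, giving B = 1/τ_c = 5 Hz:

```python
import math

def discovery_bandwidth(D, wavelength, tau_s, tau_c=0.2):
    """Measurement bandwidth for discovery-mode detection, Eq. (13):
    the larger of the Doppler width and the inverse coherent
    integration length."""
    return max(4.0 * math.pi * D / (wavelength * tau_s), 1.0 / tau_c)

# Slow rotator (1000 rev/day): Doppler width ~0.06 Hz, so B = 1/0.2 s = 5 Hz
B_slow = discovery_bandwidth(0.5, 1.29, 86.4)
# Fast rotator (1 rev/s): Doppler width dominates instead
B_fast = discovery_bandwidth(1.0, 1.29, 1.0)
```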
4 EISCAT 3D
The E3D Stage 1 is expected to be commissioned by the end of 2021. It will then consist of one transmit and receive site in Skibotn, Norway (69.340^∘N, 20.313^∘E), and two receive sites in Kaiseniemi, Sweden (68.267^∘N, 19.448^∘E), and Karesuvanto, Finland (68.463^∘N, 22.458^∘E). Each of these sites will consist of a phased array with about 10^4 antennas, which will allow for rapid beam steering.
The transmitter in Skibotn will initially have a peak power of 5MW, later to be upgraded to 10MW. For this study, we have assumed a transmit power of 5MW. The transmit duty cycle of the radar is γ=0.25, or 25%. It will not be possible to transmit continuously with full peak power in the manner that planetary radars conventionally operate. At the same time, the beam on/off switching time for E3D is only a few microseconds, as opposed to 5s for DSS-14 and Arecibo (Naidu et al., 2016), which makes it possible to observe nearby objects.
The other key radar performance parameters for the Stage 1 build-up of E3D are as follows: peak radar gain of 43dB (G_0); receiver noise temperature of 150K; transmitter bandwidth of ≤5MHz; receiver bandwidth of ≤30MHz; and operating frequency of 233MHz (wavelength of 1.29m). The main lobe beam width is approximately 0.9^∘. The system will be able to point down to at least a 30^∘ elevation. As the radar uses a planar phased array, the gain reduces as a function of zenith angle θ approximately as G = G_0 cos(θ). The radar will also allow two orthogonal polarisations to be received and transmitted, allowing for polarimetric composition studies of NEO radar cross sections.
Using Eq. (12), it is possible to compare the sensitivities of different radar systems, such as the radar-parameter-dependent portion, as follows:
$$\rho \propto \frac{P_\mathrm{TX}\gamma G^2 \lambda^{5/2}}{T_\mathrm{sys}}. \tag{15}$$
Using parameters given by Naidu et al. (2016), the Arecibo observatory 2.38GHz planetary radar is approximately a factor of 1.7×10^4 more sensitive than E3D. The Goldstone DSS-14 system, on the
other hand, is a factor of 600 more sensitive than E3D, and E3D, in turn, is approximately a factor of 8 more sensitive than the existing EISCAT UHF.
As E3D will have a lower sensitivity and a very short transmit beam on/off switching time compared to conventional planetary radars, it may be possible to use it as a search instrument, as it is possible to observe nearby objects and the beam has a large collecting volume. The ability to use the phased array antenna to point anywhere quickly within a 120^∘×360^∘ field of view (FOV) can also be used to increase the effective collecting volume when the radar is used for performing the serendipitous discovery of NEOs.
A full FOV scan can be performed relatively quickly. The beam broadens as the radar points to lower elevations, making the scan pattern non-trivial to calculate. At the zenith the beam width of E3D is about 0.9^∘, while at 30^∘ elevation the beam is roughly 4.3^∘ wide. The broadening only occurs in the elevation direction, i.e. the main lobe becomes elliptical instead of circular (Vierinen et al., 2019a). Using the known relation between broadening and elevation, one can estimate that approximately 1250 beam directions are needed to scan the entire FOV of E3D. An example scanning pattern is illustrated in Fig. 3. Using an integration time of 0.2s, the entire sky is scanned every 4–5min.
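The scan-time estimate follows from simple arithmetic (a sketch using the numbers quoted above):

```python
beam_directions = 1250  # directions needed to cover the 120 deg x 360 deg FOV
dwell_time = 0.2        # coherent integration per direction, seconds

scan_time = beam_directions * dwell_time  # seconds for a full FOV scan
print(scan_time / 60.0)  # ~4.2 min, i.e. the 4-5 min quoted in the text
```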
5 Discovery of near-Earth objects
Most NEOs with a diameter of <1m pass the Earth undetected. In order to estimate the feasibility of discovering these objects using radar, we consider a short coherent integration detection strategy that performs a grid search of radial trajectories, similar to that used for space debris (Markkanen et al., 2009). In the subsequent analysis, a coherent integration length of 0.2s is assumed, based on the assumption that integration lengths longer than this would not be computationally feasible. The short coherent integration time also makes it possible to ignore the effect of an object's rotation rate on its detectability, as the effective bandwidth of the coherently integrated signal is nearly always larger than the Doppler bandwidth of the object.
Using the fireball flux reported by Brown et al. (2002), it is possible to estimate the flux of NEOs of various sizes that hit the Earth, as described in Sect. 2.1. We will make the following
assumptions: (1) we assume that the flux of objects that pass near Earth is the same as the flux of objects hitting the Earth, (2) we ignore the effects of the Earth's gravity on incoming objects,
and (3) we assume that all objects approach the Earth aligned with the normal to the meridian circle where E3D is located. It is then possible to treat the Earth and the E3D beam as targets with a
certain cross section for incoming NEOs. In this case, the Earth is a circular target with a cross-sectional area $A_0 = \pi R_\mathrm{E}^2$, and the E3D radar beam is a target with a cross-sectional area $A_1 = \frac{1}{2}R(D)^2\alpha$, where R(D) is the maximum range at which an object of diameter D can be detected. The maximum detectable range R(D) as a function of diameter is shown in Fig. 5. The beam opening angle is α, which is assumed to be 1^∘. The cross-sectional areas for Earth-impacting NEOs and NEOs passing the radar beam are depicted in Fig. 4.
The cumulative flux of objects larger than diameter D, provided in Eq. (1), can be written as follows:
$$N_\mathrm{FB} = 36.983 D^{-2.7}. \tag{16}$$
Differentiating this, we can obtain the density function of objects of diameter D as follows:
$$n_\mathrm{FB} = 99.854 D^{-3.7}. \tag{17}$$
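The coefficients of Eqs. (16) and (17) follow directly from the Brown et al. (2002) constants in Eq. (1); a quick numerical check (variable names ours):

```python
# Eq. (16): N_FB = 10**a0 * D**(-b0) with a0 = 1.568, b0 = 2.70
coef_N = 10.0 ** 1.568   # ~36.983, the coefficient in Eq. (16)

# Eq. (17): the density |dN_FB/dD| picks up a factor b0 = 2.70
coef_n = 2.70 * 36.983   # ~99.854, the coefficient in Eq. (17)
```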
The flux density of NEOs crossing the radar beam of a certain size or larger can now be roughly estimated as follows:
$$n_{\mathrm{E3D}}(D) = \frac{A_1(D)}{A_0} n_\mathrm{FB}(D), \tag{18}$$
which is in units of objects crossing the radar beam per year per metre of diameter. When integrated over diameter, we obtain the following cumulative distribution function for the number of radar detections per year of objects with a diameter larger than D:
$$N_{\mathrm{E3D}}(D) = \frac{1}{A_0} \int_D^\infty A_1(D')\, n_\mathrm{FB}(D')\, \mathrm{d}D'. \tag{19}$$
The maximum coherent integration time is τ_c=0.2s. However, it takes significantly longer than that for NEOs to drift across the radar beam. Assuming a transverse velocity across the beam of 40kms^−1 and detection at a range of 10^4km, it takes approximately τ=4.4s for an object to cross the beam. It is therefore feasible to scan up to $N_\mathrm{b} = \tau/\tau_\mathrm{c} = 22$ different pointing directions with a fence-like scan to increase the collecting area of the radar. In the optimum case, these directions are independent from one another, thus increasing the cumulative number of detections by a factor of N_b. While the beam-crossing time is a function of the range, we will use this representative value at 10^4km.
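The beam-crossing arithmetic above can be checked directly (a sketch; the 1^∘ beam opening angle is the value assumed earlier in this section):

```python
import math

v_transverse = 40.0        # km/s, assumed transverse velocity
R = 1.0e4                  # km, representative detection range
alpha = math.radians(1.0)  # 1 deg beam opening angle

beam_width = R * alpha            # ~175 km beam diameter at range R
tau = beam_width / v_transverse   # ~4.4 s beam-crossing time
N_b = round(tau / 0.2)            # ~22 pointing directions per crossing
```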
The cumulative number of radar detections of objects with D>1cm is estimated to be ≈60 without beam scanning. The lower bound of 1cm is determined by the minimum detectable object at the height
where ablation becomes significant at 100km. If a fence-like scan with 20 beam pointing directions is used to increase the total effective collecting area of the radar, the number of detections goes
up to 1200 detections per year. The cumulative density function of radar detections per year is shown on the right-hand side of Fig. 5. The blue line indicates radar detections with a fixed radar
beam position, and the orange line indicates cumulative detections per year when using a 20-position fence scan. The broken black line indicates the transition from geometric scattering to Rayleigh
scattering. Objects smaller than the Rayleigh scattering size limit become much harder to observe due to the vanishing radar cross section, which causes the bend in the cumulative density function.
It is worth noting that the above-mentioned numbers are very rough estimates based on the simplistic “shotgun” model described above. However, the results are promising because the expected number of serendipitous radar detections of NEOs is on the order of 10 to 1000 per year, which is significantly larger than zero. It is therefore plausible that NEOs in the size range $0.01 < D < 1$m can be detected using the E3D radar. In order to obtain a more accurate estimate of radar-detection rates, a more sophisticated model of the radar needs to be used together with a realistic NEO population model.
Assuming that objects are in the geometric scattering regime and that the radar antenna aperture is circular, the search-collecting area for a radar is as follows:
$\begin{array}{}\text{(20)}& {A}_{\mathrm{1}}\left(D\right)=\frac{\mathit{\pi }}{\mathrm{32}}\sqrt{\frac{\stackrel{\mathrm{^}}{\mathit{\sigma }}}{kB\mathit{\rho }}}\sqrt{\frac{P\mathit{\gamma }}{T}}\mathit{\eta }{d}_{r}D.\end{array}$
Here B is the coherent integration analysis bandwidth, ρ is the minimum SNR required for detection, d_r is the diameter of the antenna, and η is the aperture efficiency. While a larger antenna is still more advantageous, it is not as crucial for this application as it is for tracking or imaging. The number flux density of NEOs that can be detected when crossing the beam, n_FB A_1(D)/A_0, is
only linearly dependent on antenna diameter. Factoring everything together, the number flux density of detections per unit diameter is as follows:
$\begin{array}{}\text{(21)}& {n}_{\mathrm{radar}}=\frac{\mathrm{99.854}}{\mathrm{32}{R}_{\mathrm{E}}^{\mathrm{2}}}\sqrt{\frac{\stackrel{\mathrm{^}}{\mathit{\sigma }}}{kB\mathit{\rho }}}\sqrt{\frac{P\mathit{\gamma }}{T}}\mathit{\eta }{d}_{r}{D}^{-\mathrm{2.7}}.\end{array}$
The cumulative number of radar detections of objects with diameter >D per year, assuming $\stackrel{\mathrm{^}}{\mathit{\sigma }}=\mathrm{0.1}$, B=5, and ρ=10, is as follows:
$\begin{array}{}\text{(22)}& {N}_{\mathrm{radar}}\left(D\right)=\mathrm{5.261}×{\mathrm{10}}^{-\mathrm{4}}\sqrt{\frac{P\mathit{\gamma }}{T}}\mathit{\eta }{d}_{r}{D}^{-\mathrm{1.7}},\end{array}$
which is valid only when $D>\mathit{\lambda }/\left(\mathit{\pi }\sqrt{\mathrm{3}}\right)$. For example, based on this formula, we can estimate the number of NEOs detected by the Arecibo Observatory
430MHz ionospheric radar. This system has approximately the following radar performance parameters: P=10^6W, d_r=305m, γ=0.05, η=0.5, and T=100K. It should be possible to detect approximately 45 NEOs per year with a diameter >0.15m crossing the radar beam, i.e. one detection on average for every 10d of continuous operation. Many of these objects should be relatively easy to distinguish
from satellites and meteors using Doppler shift and range. While this is a low number of expected detections, it may be feasible to search for these objects as a secondary analysis for ionospheric
radar observations.
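As a sketch, Eq. (22) can be evaluated numerically with the Arecibo parameters quoted above to reproduce the ≈45 detections per year:

```python
import math

def n_radar_cumulative(P_W, gamma, T_K, eta, d_r_m, D_m):
    """Cumulative beam-crossing detections per year of NEOs with
    diameter > D, following Eq. (22); valid for D > lambda/(pi*sqrt(3))."""
    return 5.261e-4 * math.sqrt(P_W * gamma / T_K) * eta * d_r_m * D_m**-1.7

# Arecibo 430 MHz radar parameters quoted in the text
n = n_radar_cumulative(P_W=1e6, gamma=0.05, T_K=100.0, eta=0.5,
                       d_r_m=305.0, D_m=0.15)
# n ≈ 45 detections per year for D > 0.15 m
```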
6 Observability based on known near-Earth objects
By applying the methods described in Sect. 3 to the population described in Sect. 2.2, we can determine which objects are observable using the E3D radar facility. Both post-discovery tracking and
serendipitous discovery were investigated. The observability study was performed in two stages. First, we find the SNR for every object only based on range, size, and rotation rate, without
considering whether or not the object is in the radar field of view. We only keep objects with an SNRh^−1>10. Then, we use the JPL HORIZONS service to generate an ephemeris for the E3D transmitter location. The ephemeris is used to calculate the maximum SNR when these objects are in the field of view of the radar, again keeping only objects with an SNRh^−1>10.
In order to estimate SNR, we require the distance between the radar and the object, the object's diameter, and the rotation rate of the object. The CNEOS database contains the minimum and maximum
diameter estimates derived from object absolute magnitude. We use the mean of these two diameter estimates. The HORIZONS ephemeris provides distance and elevation angle during times of observation.
Rotation rates are not well known and neither system provides this property for our population of objects.
Bolin et al. (2014) provided two different functions for asteroid rotation period as a function of diameter based on the following two different population samples:
$\begin{array}{}\text{(23)}& {T}_{\mathrm{r}}=\mathrm{0.005}\frac{D}{m}\phantom{\rule{0.25em}{0ex}}\left(\mathrm{h}\right),\end{array}$
where T_r denotes the rotation period in hours and D is given in metres. This relationship is derived from data of kilometre-sized asteroids. Meteor data suggest a much faster rate of rotation, as follows:
$\begin{array}{}\text{(24)}& {T}_{\mathrm{r}}=\mathrm{0.0001}\frac{D}{m}\phantom{\rule{0.25em}{0ex}}\left(\mathrm{h}\right).\end{array}$
These rotation periods differ by a factor of 50 and, as such, influence the number of detectable objects a great deal. Assuming the longer period, we are able to detect 29 out of the 1215 objects.
With the shorter estimate, we can only detect 13. We will be using the longer period estimates from now on.
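The two period-diameter relations of Eqs. (23) and (24) can be written as a small helper; the 10 m example object is purely illustrative:

```python
def rotation_period_hours(D_m, meteor_data=False):
    """Rotation period vs. diameter following Eqs. (23)-(24).

    Slow relation (kilometre-sized asteroid data): T_r = 0.005 * D [h].
    Fast relation (meteor data):                   T_r = 0.0001 * D [h].
    D is given in metres.
    """
    return (1e-4 if meteor_data else 5e-3) * D_m

D = 10.0                                           # illustrative 10 m object
slow = rotation_period_hours(D)                    # 0.05 h, i.e. 3 min
fast = rotation_period_hours(D, meteor_data=True)  # 0.001 h, i.e. 3.6 s
# the two estimates differ by the factor of 50 noted in the text
```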
A summary of the characteristics of the objects that can be tracked or detected during the studied 1-year interval is shown in Table 1. Of the 29 trackable objects, 19 were observable in the last
half of 2019. The June to December time period featured two thirds of the observable close approach observation windows. This could possibly be explained through the limited view of the ecliptic
plane and small number statistics.
The observable objects were relatively close to the radar, with the shortest range being 0.08LD and the furthest range being 1.6LD. The diameters of the observable objects ranged between 2.0 and
94m. The highest SNRh^−1 was 4835; 6 objects had an SNRh^−1 over 1000, and 12 had an SNRh^−1 over 100. We note that the recently discovered mini-moon, 2020 CD3, could have been tracked in April 2019,
roughly 10 months before its telescopic discovery.
All observable NEOs were above the 30^∘ cut-off elevation for significantly longer than their maximum incoherent integration time estimate. The minimum observation window was 155min and the maximum
was over 5000min. This means that we can expect any serendipitously discoverable objects to be observable for much longer than the time it takes to scan the full field of view, as discussed in
Sect. 4. For example, 2020 BH6 would have been discoverable for 50min near its closest approach.
Only a fraction of all objects are discovered and are entered into the CNEOS database. We can assume that there are significantly more objects that could be large enough and have approaches close
enough so that E3D would discover them with an all-sky scan. It should be noted that 2012 DA14, during its 2013 pass, could also have been easily discoverable with E3D.
Although we have a very limited sample of the total NEO population, it appears that our measurements are not biased towards measuring a specific subset of NEOs with close Earth approaches (Fig. 6).
Any potential biases might be revealed once E3D is in operation, and we can obtain a larger sample space of observable objects.
In order to determine the feasibility of tracking NEO close approaches using the existing EISCAT facilities, we made a similar search for objects observable using the EISCAT UHF radar, which has an
antenna gain of 48dB, a transmit power of P_TX=1.8MW, a system noise temperature of 90K, and a duty cycle of 12.5%. The total number of observable objects was 17, which was a subset of the objects that could be
observed using E3D.
The results indicate that it would be feasible to perform routine NEO post-discovery tracking observations using both the upcoming E3D radar and the existing EISCAT UHF radar. This observing
programme would nicely complement the capabilities of existing planetary radars, which cannot observe nearby targets due to the long transmit/receive switching time. Of the ≈2000 NEOs discovered each year, we estimate that approximately 0.5%–1%, or approximately 15, can be tracked with E3D or EISCAT UHF when factoring in that around 50% of the NEOs are discovered before closest approach.
7 Observability of mini-moons
To accurately determine the observability of a population, one needs to construct a chain that considers the following:
1. Model of the measurement system (E3D)
2. Model of the population (mini-moons)
3. Temporal propagation of the population (solar system dynamics)
4. The observation itself (a detection window and SNR).
A recent effort to determine the capability of E3D with regards to space debris measurement and cataloguing produced a simulation software called SORTS (Kastinen et al., 2019). SORTS already includes
a model of E3D (item 1) and simulated observations of space debris (item 4) suitable for this study. All the parameters given in Sect. 4 are included in the model of E3D used in the simulation,
including realistic antenna gain patterns. SORTS also provides the framework for creating the chain between the items (1–4) mentioned previously. Space-debris observations are slightly different from
mini-moon and NEO observations due to their size, material, and orbits. To account for this difference, we modified the observation simulation (item 4) according to Sect. 3. Specifically, new SNR
formulas from Eqs. (12) and (14) were implemented for use in the NEO and mini-moon simulations. The population model previously described in Sect. 2.3 was translated into the format used by SORTS to
allow for the observability simulation (item 2).
SORTS propagates each object of a given population and searches for time intervals where the object is within the FOV of E3D. We consider the effective FOV of E3D to be a 120^∘ cone centred at the
zenith, i.e. we allow for pointing down to a 30^∘ elevation. We do not need to consider any time delays in pointing, as interferometric antenna arrays can steer the radar beams electronically almost instantaneously.
The only remaining component of the simulation is an interface with a suitable propagation software (item 3). SORTS already includes propagation software but only for objects in stable Earth orbit,
i.e. objects that do not transition to hyperbolic orbits in the Earth-centred inertial frame. We have thus chosen to use the Python implementation of the REBOUND propagator^1 (Rein and Liu, 2012),
using the IAS15 integrator (Rein and Spiegel, 2015). This N-body integrator can handle arbitrary configurations of interacting particles and test particles. An interface between the REBOUND
propagator and SORTS was implemented, allowing for the use of this propagator in all future simulations. For our application, we only need to propagate for tens of years. As such, we have omitted any
radiation-related dynamical effects, such as radiation pressure and Poynting–Robertson drag, as these act on much longer timescales. The integration included all planets and the Moon initialised with
the JPL DE430 planetary ephemerides.
The integration was configured to use a time step of 60s. This step size allows for decent resolution when searching for viable observation windows by the radar system. The initial state for REBOUND
was inputted in the J2000 heliocentric ecliptic inertial frame; thus the output was also given in this frame. A standard routine was used to transform to the Earth-centred, Earth-fixed J2000 mean
Equator and equinox frame. In this frame the E3D system is fixed in space and observation windows are readily calculated.
All 20265 synthetic mini-moons were integrated for 10 years past their initial epochs. As previously mentioned, we chose to assume that the objects could have one of four different rotation rates,
namely 1000, 5000, 10000, or 86400 revolutions per day. As such, four different SNRs were calculated for each point in time. It was also assumed that signal integration could not last longer than
1h, i.e. τ_m was set to 1h or to the observation window length if that was shorter than 1h. Only measurement windows with at least one measurement point above 10dB SNR at some rotation rate were saved. A fifth SNR was also calculated based on serendipitous discovery, i.e. with a much shorter coherent integration time of 0.2s. For each of these measurement windows, time
series of the object position, velocity, and all five SNRs were saved.
If we assume that we have a prior orbit for these objects, the detections are in essence follow-up tracking measurements, and we can consider the tracking SNRs for observability. The a priori orbit
does not have to be of good quality; it need only be sufficiently accurate to restrict the search region in the sky. For these tracking measurements, a total of 1999 out of the 20265 objects (9.9%)
had at least one possible measurement window, assuming the 1000 revolutions per day rotation rate. This number dropped to 7.9%, 7.2%, and 5.2% for 5000, 10000, and 86400 revolutions per day, respectively.
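The tracking statistics above amount to simple fractions; a minimal check in Python (the object counts for the three faster rotation rates are implied by the quoted percentages, not stated explicitly in the text):

```python
total_objects = 20265       # synthetic mini-moons in the population
tracked_at_1000 = 1999      # objects with >= 1 tracking window at 1000 rev/day

fraction = 100.0 * tracked_at_1000 / total_objects
# fraction ≈ 9.9 %, matching the quoted tracking fraction

# approximate object counts implied by the other quoted percentages
implied_counts = {pct: round(total_objects * pct / 100.0)
                  for pct in (7.9, 7.2, 5.2)}
```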
Without a prior orbit, we have to consider the SNR for serendipitous discovery. Only a total of 116 objects had an observation with 10dB SNR or more. The rotation rate does not affect this SNR in
this case, as the noise bandwidth is determined by the coherent integration time. We assume that the coherent integration time is limited to 0.2s due to the computational feasibility of performing a
massive grid search for all possible radial trajectories that match the trajectory of the target. This results in an effective noise bandwidth of B=20Hz when factoring in the 25% duty cycle.
This is nearly always higher than the intrinsic Doppler bandwidth of the target.
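The bandwidth figure follows directly from the coherent integration time and the duty cycle; a one-line sketch:

```python
def effective_noise_bandwidth(tau_c_s, duty_cycle):
    """Effective noise bandwidth for coherent integration.

    With a coherent integration time tau_c and a pulsed transmitter at
    the given duty cycle, only duty_cycle * tau_c of echo is actually
    accumulated, so B = 1 / (tau_c * duty_cycle).
    """
    return 1.0 / (tau_c_s * duty_cycle)

B = effective_noise_bandwidth(0.2, 0.25)
# B = 20 Hz, as quoted in the text
```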
The distributions of sizes, ranges, and SNRs of observable objects are illustrated in Fig. 8. The sharp cut-off in observable mini-moon passes is due to the minimum SNR conditions that were imposed. For a single object, according to Eq. (12), the only parameter dependent on the dynamical integration is range. Thus, for an object with a number of possible observation windows, it is primarily the minimum
range of these windows that determines their observability and thereby the lack of observability above a certain range. This equation also contains a transition from Rayleigh to geometric scattering
as the diameter increases. This transition is illustrated by the “kink” in the cut-off at approximately 0.24m in diameter. A promising feature is that even though the majority of detections are made
at close range, the lower range of diameters has not been reached by the objects in the model. This indicates the possible existence of smaller objects than those included in the mini-moon model that
are observable by E3D. This is supported by the results from Sect. 5, which use a population model with smaller sizes. There is no apparent dynamical coupling between object size and the closest range of an observation window. This is expected, as the orbital dynamics are not a function of size in the simulation or in the mini-moon model.
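The location of this kink can be checked numerically. The 233MHz operating frequency assumed here for E3D is not stated in this section, so treat the exact value as an assumption:

```python
import math

c = 299_792_458.0     # speed of light [m/s]
f = 233e6             # assumed E3D operating frequency [Hz]
wavelength = c / f    # ~1.29 m

# transition diameter between Rayleigh and geometric scattering,
# D = lambda / (pi * sqrt(3)), cf. the validity condition of Eq. (22)
D_transition = wavelength / (math.pi * math.sqrt(3.0))
# D_transition ≈ 0.24 m, matching the kink in the cut-off
```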
The expected annual detection rate is illustrated in Fig. 9. The distribution is fairly uniform, as expected, and the number of observation windows can thus be averaged over the total time span of
the model. Assuming a rotation rate of 1000 revolutions per day, the mean expected detection rate is approximately 162 measurement windows per year. The length distribution of these windows with SNR
above 10dB is illustrated in Fig. 11. Generally, the observation window is between 1 and 10h.
The initial orbital element distribution of the observable objects is illustrated in Fig. 7. This illustration should be compared to the initial distribution in Fig. 2. The comparison shows no
significant observation biases. However, a dedicated bias study is required for population modelling purposes. It is important to note that the orbits in Fig. 7 are not the detected orbit
distribution, as the objects are severely perturbed from Keplerian orbits upon Earth capture.
In Fig. 10 we illustrate the zenith distance for the peak SNR observation point of every observation window. As most capturable objects have low-inclination orbits, most observations are centred
around the ecliptic, i.e. at low-elevation angles for a high-latitude radar.
Summary statistics of the observability study can be found in Table 2. The number of observable objects in Table 2 is representative of the expected total number of real mini-moons that can be
tracked by E3D in future if prior orbital elements are known. Each object can have more than one possible observation window and, on average, each object has approximately 1.5 tracking opportunities.
As the number of discovered NEOs is steadily increasing every year with better instrumentation and analysis techniques, a larger fraction of the population will be discovered, allowing these
follow-up tracking measurements to be performed.
The number of discoverable objects in Table 2 indicates how many serendipitous mini-moon detections can be made if a full FOV scan, using an integration time of 0.2s, is continuously performed with
E3D. This assumes that the objects pass through one of the scanning beams at least once. The number of discoverable objects gives an average of 6.74 mini-moons discovered per year. As there are
only 12 more observation windows than unique objects that are discoverable, a sparse scanning strategy that counts on observing one of many possible passes of the same object is not possible. The
distribution of ranges, sizes, and SNRs of all possible discovery windows is illustrated in Fig. 12.
As discussed in Sect. 4, a full sky scan by E3D could take approximately 5min. As the typical mini-moon observation window is on the order of hours, almost all of the possible serendipitous
discoveries will be made if the radar performs a full sky scan every hour or half hour. This would use approximately 8%–16% of the available radar time.
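The radar-time fractions quoted above follow from a 5min scan repeated hourly or half-hourly; note the half-hourly figure comes out at ≈17%, which the text rounds to 16%:

```python
scan_time_min = 5.0                     # duration of one full-sky scan (Sect. 4)

f_hourly = scan_time_min / 60.0         # scan once per hour      -> ~8 %
f_half_hourly = scan_time_min / 30.0    # scan once per half hour -> ~17 %
# together these bracket the quoted 8 %-16 % of available radar time
```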
Conducting routine NEO follow-up observations using E3D would allow for refined orbits and radar measurements of hundreds of mini-moons every year. Assuming a rotation rate of 1000 revolutions per
day and using the observation window length for each possible tracking window, as illustrated in Fig. 11, we estimate that an average of 502 h yr^−1 would be spent on these observations. This is approximately 5.7% of the total radar time.
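Similarly, the ≈5.7% tracking figure follows from spreading 502h over an average year:

```python
tracking_hours_per_year = 502.0
hours_per_year = 365.25 * 24.0   # ~8766 h in an average year

utilisation = tracking_hours_per_year / hours_per_year
# utilisation ≈ 0.057, i.e. the quoted ~5.7 % of total radar time
```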
For convenience, we have summarised the statistics from all three methods that were used to determine the observability of NEOs with E3D in Table 3. The advantage of using several different methods
is not only that one can compare results but also that they inherently peer into different parts of the NEO population.
The fireball observations described in Sect. 2.1 are a sample from the subset of NEOs that make close approaches to Earth. In Brown et al. (2002), arguments are made that the measured population is unbiased. The known NEOs with close approaches described in Sect. 2.2 sample the same subpopulation but in a different size range. This sample is biased and reduced due to limited observational capabilities. Only a fraction of all NEOs with close approaches are currently detected. If the number of objects scales with size according to Eq. (1), the size ranges listed in Table 3 roughly translate to 2 orders of magnitude fewer known NEOs than fireballs. As such, the 60–1200 discoveries from fireball statistics, versus 1 from the known NEOs with close
approaches, are mutually consistent.
Based on the CNEOS catalogue, we have found that E3D can be used to observe ≈15 objects per year. This number should be compared to ≈100 by the Arecibo observatory. Another significant contribution
is that smaller-sized objects are discoverable by the E3D system while conducting scans over its field of view, which opens the potential for a number of discoveries on the order of 1000 for objects
in the 0.01–1m size range.
The difference in examined populations between the methods suggests that if routine NEO observations are implemented at E3D, the simulation described in Sect. 7 should be recomputed using a
representative subset of the debiased NEO population, such as the one presented in Granvik et al. (2018), extended to smaller sizes. This would provide guidance for the observation strategy and
implementation of analysis and provide observation debiasing for E3D measurements. However, this would be more costly in terms of computational requirements, as the sampled population would need to
be at least 2–3 orders of magnitude larger than the mini-moon population. The implementation of the mini-moon observability simulation using SORTS is fairly general. This means that in future, the
same study can easily be performed for other radar systems as well.
It was shown in Kastinen et al. (2019) that E3D is expected to regularly detect hard target echoes from space debris and other artificial objects in the Earth's orbit. NEOs and mini-moons need to be
robustly separated from these artificial objects for discovery operations to be successful. Space debris is mainly confined to two regions, namely close to Earth ($<\mathrm{3}×{\mathrm{10}}^{\mathrm
{3}}$km altitude) or close to geostationary orbit ($\sim \mathrm{3.6}×{\mathrm{10}}^{\mathrm{4}}$km altitude; Krisko, 2010; Flegel et al., 2009). Our results show that the typical mini-moon or NEO
detection will be made at altitudes larger than these regions and up to 3.8×10^5km altitude. Thus, the range to the target can be used as an initial NEO and mini-moon identification. If the orbit of
the object can be determined, this would be a very reliable method of identification as NEOs and mini-moons generally have vastly different orbits compared to space debris (Fedorets et al., 2017).
Our results indicate that E3D can provide valuable and unique follow-up measurements of mini-moons and NEOs with close approaches. They also show that, even though mini-moon discoveries are sparse, discovery and scanning for the combined population of mini-moons and generic NEOs may be very cost effective, as this is inherently dual use with space debris observations. That is, the same radar
pulses and survey patterns can be used for discovering objects from all of the above-mentioned populations. Even the discovery of a single new mini-moon would be significant since, to date, only two
have been discovered.
The scientific gain from tracking operations at E3D can be summarised as being efficient, high-quality orbit determination and, if the target is sufficiently larger than the wavelength, novel data on
surface properties and rotation rates. There are currently not many methods that can discover smaller NEOs unless they collide with the Earth's atmosphere, as shown by the low number of discovered
mini-moons. As such, the scientific gain from discovery operations is in essence the discovery itself, i.e. the observation capability of a population otherwise not observable. If the objects are
larger, they can be detected with higher probability using optical methods. In these cases, radar observations are still valuable for the same reasons as tracking operations are.
The feasibility of a follow-up observation programme can be tested in practice by using known space debris objects with large distances. For example, large objects with Molniya orbits are good
candidates for testing the detection capabilities of faraway objects over long integration times.
It is also valuable to note that E3D will observe over 1000 meteors per hour (Pellinen-Wannberg et al., 2016). Radar meteors are a direct measurement of the NEO population that makes close approaches
to Earth but at much smaller sizes than the ones examined here. In Fedorets et al. (2017) 1.46% of all temporarily captured fly-bys and 0.61% of all mini-moons impacted Earth. Due to the tri-static
capability of E3D, the inferred orbital elements will be of very high quality, and it may be possible to trace meteors back to the mini-moon population.
Our results indicate that it is plausible that E3D can be used to discover NEOs with diameters D>0.01m. All of the populations studied predicted that E3D would discover NEOs by using an all-sky
radar survey. A rough estimate of up to 1200 detections per year is possible when using 100% of the radar time at full-duty cycle. This estimate is based on a very simplistic model, which neglects
many important details. However, these results are encouraging and suggest that the radar detectability of NEOs should be investigated further. The capability of discovering NEOs would have several
advantages. Radars can observe in the day-lit hemisphere. The objects that can be found using radar are mostly smaller than the ones detectable using telescopic surveys, and observations of accurate
orbital elements of objects in this size range are very scarce.
The study of the mini-moon subset of the NEO population indicates that a significant fraction of objects could be tracked, with 80–160 observing opportunities per year, assuming that the objects have
been previously discovered. There is currently only one mini-moon in the Earth's orbit, but it is no longer observable using EISCAT UHF or E3D due to the long range when the object is in the radar
field of view. However, there will be more opportunities in future for such observations as new mini-moons are discovered (Fedorets et al., 2020). Our study shows that an E3D-based radar search for
mini-moons is one potential way of discovering these objects. Our simulation suggests approximately seven discoveries per year with an 8%–16% utilisation of radar resources. In
addition to utilising dedicated observing modes for these searches, it should be feasible to also perform a secondary analysis to search for NEOs when running the radar in ionospheric mode.
Our study shows that establishing a post-discovery NEO tracking programme that uses close-approach predictions is feasible. Such an initiative could already be commenced with the existing EISCAT UHF
radar, which is only slightly less sensitive than the upcoming E3D radar for this purpose. We estimate that roughly 0.5% to 1% of the 2000 objects discovered annually could be tracked using the
EISCAT UHF or E3D radars, based on close approaches in 2019. The need for radar resources is minimal, with only a few 4–8h observing windows each month. However, the observations would need to be
scheduled on short notice, using an automated alert system that notifies of upcoming observing possibilities (cf. Solin and Granvik, 2018). The measurements would yield accurate orbital elements of
NEOs but possibly also estimates for sizes and rotation rates. The dual polarisation capability of E3D could also be used to study the composition of these objects.
The underlying research data came from two sources, namely the CNEOS database and a mini-moon model provided by Mikael Granvik, which is not publicly available. The CNEOS database is available online
at https://cneos.jpl.nasa.gov/ca/ (NASA JPL, 2020).
DK performed the numerical simulations of mini-moon observations by E3D in Sect. 7. TT performed the detectability calculation for objects within the CNEOS catalogue in Sect. 6. JV provided the SNR
calculations in Sect. 3 and estimated the number of NEOs serendipitously detectable in Sect. 5. MG provided the mini-moon population model described in Sect. 2.3. All the authors contributed to the
preparation of the paper and interpretation of the results.
Juha Vierinen is on the editorial board of the journal.
This article is part of the “Special Issue on the joint 19th International EISCAT Symposium and 46th Annual European Meeting on Atmospheric Studies by Optical Methods”. It is a result of the 19th
International EISCAT Symposium 2019 and 46th Annual European Meeting on Atmospheric Studies by Optical Methods, Oulu, Finland, 19–23 August 2019.
This paper was edited by Petr Pisoft and reviewed by Peter Brown and one anonymous referee.
Balanis, C. A.: Advanced engineering electromagnetics, John Wiley & Sons, Hoboken, New Jersey, 1999.
Banka, D., Leushacke, L., and Mehrholz, D.: Beam-park-experiment-1/2000 with TIRA, Space Debris, 2, 83–96, 2000.
Beech, M. and Brown, P.: Fireball flickering: the case for indirect measurement of meteoroid rotation rates, Planet. Space Sci., 48, 925–932, https://doi.org/10.1016/S0032-0633(00)00058-1, 2000.
Benner, L. A. M., Busch, M. W., Giorgini, J. D., Taylor, P. A., and Margot, J. L.: Radar Observations of Near-Earth and Main-Belt Asteroids, in: Asteroids IV, edited by: Michel, P., DeMeo, F. E., and Bottke, W. F., University of Arizona Press, Tucson, 165–182, https://doi.org/10.2458/azu_uapress_9780816532131-ch009, 2015.
Bolin, B., Jedicke, R., Granvik, M., Brown, P., Howell, E., Nolan, M. C., Jenniskens, P., Chyba, M., Patterson, G., and Wainscoat, R.: Detecting Earth's temporarily-captured natural satellites – Minimoons, Icarus, 241, 280–297, https://doi.org/10.1016/j.icarus.2014.05.026, 2014.
Bottke, W. F., Morbidelli, A., Jedicke, R., Petit, J.-M., Levison, H. F., Michel, P., and Metcalfe, T. S.: Debiased Orbital and Absolute Magnitude Distribution of the Near-Earth Objects, Icarus, 156, 399–433, https://doi.org/10.1006/icar.2001.6788, 2002.
Braun, G.: GESTRA – Experimental space monitoring radar, available at: https://event.dlr.de/en/ila2018/gestra/ (last access: 13 March 2020), 2018.
Brown, P., Spalding, R., ReVelle, D. O., Tagliaferri, E., and Worden, S.: The flux of small near-Earth objects colliding with the Earth, Nature, 420, 294–296, https://doi.org/10.1038/nature01238, 2002.
Busch, M. W., Kulkarni, S. R., Brisken, W., Ostro, S. J., Benner, L. A., Giorgini, J. D., and Nolan, M. C.: Determining asteroid spin states using radar speckles, Icarus, 209, 535–541, 2010.
Campbell, B. A.: Planetary geology with imaging radar: insights from earth-based lunar studies, 2001–2015, Astr. Soc. P., 128, 062001, https://doi.org/10.1088/1538-3873/128/964/062001, 2016.
Čapek, D.: Rotation of cometary meteoroids, Astron. Astrophys., 568, A39, https://doi.org/10.1051/0004-6361/201423857, 2014.
Chesley, S. R. and Chodas, P. W.: Asteroid close approaches: analysis and potential impact detection, in: Asteroids III, University of Arizona Press, Tucson, AZ, USA, 55–69, 2002.
Fedorets, G., Granvik, M., and Jedicke, R.: Orbit and size distributions for asteroids temporarily captured by the Earth–Moon system, Icarus, 285, 83–94, https://doi.org/10.1016/j.icarus.2016.12.022, 2017.
Fedorets, G., Granvik, M., Jones, R. L., Jurić, M., and Jedicke, R.: Discovering Earth's transient moons with the Large Synoptic Survey Telescope, Icarus, 338, 113517, https://doi.org/10.1016/j.icarus.2019.113517, 2020.
Flegel, S., Gelhaus, J., Wiedemann, C., Vorsmann, P., Oswald, M., Stabroth, S., Klinkrad, H., and Krag, H.: Invited Paper: The MASTER-2009 Space Debris Environment Model, in: Fifth European Conference on Space Debris, ESOC, Darmstadt, Germany, Vol. 672 of ESA Special Publication, p. 15, 2009.
Fowler, J. and Chillemi, J.: IRAS asteroid data processing, in: The IRAS Minor Planet Survey, edited by: Tedesco, E. F., Veeder, G. J., Fowler, J. W., and Chillemi, J. R., Technical Report PL-TR-92-2049, Phillips Laboratory, Hanscom AF Base, MA, 1992.
Granvik, M., Vaubaillon, J., and Jedicke, R.: The population of natural Earth satellites, Icarus, 218, 262–277, https://doi.org/10.1016/j.icarus.2011.12.003, 2012.
Granvik, M., Jedicke, R., Bolin, B., Chyba, M., and Patterson, G.: Earth's Temporarily-Captured Natural Satellites – The First Step towards Utilization of Asteroid Resources, in: Asteroids: Prospective Energy and Material Resources, edited by: Badescu, V., Springer, Berlin, 151–167, https://doi.org/10.1007/978-3-642-39244-3_6, 2013.
Granvik, M., Morbidelli, A., Jedicke, R., Bolin, B., Bottke, W. F., Beshore, E., Vokrouhlický, D., Delbò, M., and Michel, P.: Super-catastrophic disruption of asteroids at small perihelion distances, Nature, 530, 303–306, https://doi.org/10.1038/nature16934, 2016.
Granvik, M., Morbidelli, A., Jedicke, R., Bolin, B., Bottke, W. F., Beshore, E., Vokrouhlický, D., Nesvorný, D., and Michel, P.: Debiased orbit and absolute-magnitude distributions for near-Earth objects, Icarus, 312, 181–207, https://doi.org/10.1016/j.icarus.2018.04.018, 2018.
Harris, A. W. and Harris, A. W.: On the Revision of Radiometric Albedos and Diameters of Asteroids, Icarus, 126, 450–454, https://doi.org/10.1006/icar.1996.5664, 1997.
Ivezić, Ž., Kahn, S. M., Tyson, J. A., Abel, B., Acosta, E., Allsman, R., Alonso, D., AlSayyad, Y., Anderson, S. F., Andrew, J., Angel, J. R. P., Angeli, G. Z., Ansari, R., Antilogus, P., Araujo, C.,
Armstrong, R., Arndt, K. T., Astier, P., Aubourg, É., Auza, N., Axelrod, T. S., Bard, D. J., Barr, J. D., Barrau, A., Bartlett, J. G., Bauer, A. E., Bauman, B. J., Baumont, S., Bechtol, E., Bechtol,
K., Becker, A. C., Becla, J., Beldica, C., Bellavia, S., Bianco, F. B., Biswas, R., Blanc, G., Blazek, J., Bland ford, R. D., Bloom, J. S., Bogart, J., Bond, T. W., Booth, M. T., Borgland, A. W.,
Borne, K., Bosch, J. F., Boutigny, D., Brackett, C. A., Bradshaw, A., Brand t, W. N., Brown, M. E., Bullock, J. S., Burchat, P., Burke, D. L., Cagnoli, G., Calabrese, D., Callahan, S., Callen, A. L.,
Carlin, J. L., Carlson, E. L., Chand rasekharan, S., Charles-Emerson, G., Chesley, S., Cheu, E. C., Chiang, H.-F., Chiang, J., Chirino, C., Chow, D., Ciardi, D. R., Claver, C. F., Cohen-Tanugi, J.,
Cockrum, J. J., Coles, R., Connolly, A. J., Cook, K. H., Cooray, A., Covey, K. R., Cribbs, C., Cui, W., Cutri, R., Daly, P. N., Daniel, S. F., Daruich, F., Daubard, G., Daues, G., Dawson, W.,
Delgado, F., Dellapenna, A., de Peyster, R., de Val-Borro, M., Digel, S. W., Doherty, P., Dubois, R., Dubois-Felsmann, G. P., Durech, J., Economou, F., Eifler, T., Eracleous, M., Emmons, B. L.,
Fausti Neto, A., Ferguson, H., Figueroa, E., Fisher-Levine, M., Focke, W., Foss, M. D., Frank, J., Freemon, M. D., Gangler, E., Gawiser, E., Geary, J. C., Gee, P., Geha, M., Gessner, C. J. B.,
Gibson, R. R., Gilmore, D. K., Glanzman, T., Glick, W., Goldina, T., Goldstein, D. A., Goodenow, I., Graham, M. L., Gressler, W. J., Gris, P., Guy, L. P., Guyonnet, A., Haller, G., Harris, R.,
Hascall, P. A., Haupt, J., Hernand ez, F., Herrmann, S., Hileman, E., Hoblitt, J., Hodgson, J. A., Hogan, C., Howard, J. D., Huang, D., Huffer, M. E., Ingraham, P., Innes, W. R., Jacoby, S. H., Jain,
B., Jammes, F., Jee, M. J., Jenness, T., Jernigan, G., Jevremović, D., Johns, K., Johnson, A. S., Johnson, M. W. G., Jones, R. L., Juramy-Gilles, C., Jurić, M., Kalirai, J. S., Kallivayalil, N. J.,
Kalmbach, B., Kantor, J. P., Karst, P., Kasliwal, M. M., Kelly, H., Kessler, R., Kinnison, V., Kirkby, D., Knox, L., Kotov, I. V., Krabbendam, V. L., Krughoff, K. S., Kubánek, P., Kuczewski, J.,
Kulkarni, S., Ku, J., Kurita, N. R., Lage, C. S., Lambert, R., Lange, T., Langton, J. B., Le Guillou, L., Levine, D., Liang, M., Lim, K.-T., Lintott, C. J., Long, K. E., Lopez, M., Lotz, P. J.,
Lupton, R. H., Lust, N. B., MacArthur, L. A., Mahabal, A., Mand elbaum, R., Markiewicz, T. W., Marsh, D. S., Marshall, P. J., Marshall, S., May, M., McKercher, R., McQueen, M., Meyers, J., Migliore,
M., Miller, M., Mills, D. J., Miraval, C., Moeyens, J., Moolekamp, F. E., Monet, D. G., Moniez, M., Monkewitz, S., Montgomery, C., Morrison, C. B., Mueller, F., Muller, G. P., Muñoz Arancibia, F.,
Neill, D. R., Newbry, S. P., Nief, J.-Y., Nomerotski, A., Nordby, M., O'Connor, P., Oliver, J., Olivier, S. S., Olsen, K., O'Mullane, W., Ortiz, S., Osier, S., Owen, R. E., Pain, R., Palecek, P. E.,
Parejko, J. K., Parsons, J. B., Pease, N. M., Peterson, J. M., Peterson, J. R., Petravick, D. L., Libby Petrick, M. E., Petry, C. E., Pierfederici, F., Pietrowicz, S., Pike, R., Pinto, P. A., Plante,
R., Plate, S., Plutchak, J. P., Price, P. A., Prouza, M., Radeka, V., Rajagopal, J., Rasmussen, A. P., Regnault, N., Reil, K. A., Reiss, D. J., Reuter, M. A., Ridgway, S. T., Riot, V. J., Ritz, S.,
Robinson, S., Roby, W., Roodman, A., Rosing, W., Roucelle, C., Rumore, M. R., Russo, S., Saha, A., Sassolas, B., Schalk, T. L., Schellart, P., Schindler, R. H., Schmidt, S., Schneider, D. P.,
Schneider, M. D., Schoening, W., Schumacher, G., Schwamb, M. E., Sebag, J., Selvy, B., Sembroski, G. H., Seppala, L. G., Serio, A., Serrano, E., Shaw, R. A., Shipsey, I., Sick, J., Silvestri, N.,
Slater, C. T., Smith, J. A., Smith, R. C., Sobhani, S., Soldahl, C., Storrie-Lombardi, L., Stover, E., Strauss, M. A., Street, R. A., Stubbs, C. W., Sullivan, I. S., Sweeney, D., Swinbank, J. D.,
Szalay, A., Takacs, P., Tether, S. A., Thaler, J. J., Thayer, J. G., Thomas, S., Thornton, A. J., Thukral, V., Tice, J., Trilling, D. E., Turri, M., Van Berg, R., Vanden Berk, D., Vetter, K.,
Virieux, F., Vucina, T., Wahl, W., Walkowicz, L., Walsh, B., Walter, C. W., Wang, D. L., Wang, S.-Y., Warner, M., Wiecha, O., Willman, B., Winters, S. E., Wittman, D., Wolff, S. C., Wood-Vasey,
W. M., Wu, X., Xin, B., Yoachim, P., and Zhan, H.: LSST: From Science Drivers to Reference Design and Anticipated Data Products, Astrophys. J., 873, 111, https://doi.org/10.3847/1538-4357/ab042c,
Jedicke, R., Bolin, B. T., Bottke, W. F., Chyba, M., Fedorets, G., Granvik, M., Jones, L., and Urrutxua, H.: Earth's Minimoons: Opportunities for Science and Technology, Frontiers in Astronomy and
Space Sciences, 5, 13, https://doi.org/10.3389/fspas.2018.00013, 2018.a
Kaasalainen, M. and Viikinkoski, M.: Shape reconstruction of irregular bodies with multiple complementary data sources, Astron. Astrophys., 543, A97, https://doi.org/10.1051/0004-6361/201219267,
Kastinen, D., Vierinen, J., Kero, J., Hesselbach, S., Grydeland, T., and Krag, H.: Next-generation Space Object Radar Tracking Simulator: SORTS++, European Space Agency, ESOC, Darmstadt, Germany,
2019.a, b
Kessler, D. J., Landry, P. M., Gabbard, J. R., and Moran, J. L. T.: Ground radar detection of meteoroids in space, in: Solid Particles in the Solar System, edited by: Halliday, I. and McIntosh,
B. A., Vol. 90 of IAU Symposium, 137–139, Proceedings of the Symposium, Ottawa, Canada, D. Reidel Publishing Co., Dordrecht, 1980.a
Krisko, P. H.: NASA's New Orbital Debris Engineering Model, ORDEM 2010, in: Making Safety Matter, Proceedings of the fourth IAASS Conference, 19–21 May 2010, Huntsville, AL, edited by:
Lacoste-Francis, H., ESA-SP Vol. 680, p. 50, 2010.a
Krisko, P. H.: The new NASA orbital debris engineering model ORDEM 3.0, in: AIAA/AAS Astrodynamics Specialist Conference, San Diego, CA, p. 4227, American Institute of Aeronautics and Astronautics,
Reston (HQ), VA, United States, https://doi.org/10.2514/6.2014-4227, 2014.a
Kwiatkowski, T., Kryszczyńska, A., Polińska, M., Buckley, D. A. H., O'Donoghue, D., Charles, P. A., Crause, L., Crawford, S., Hashimoto, Y., Kniazev, A., Loaring, N., Romero Colmenero, E., Sefako,
R., Still, M., and Vaisanen, P.: Photometry of 2006 RH{120}: an asteroid temporary captured into a geocentric orbit, Astron. Astrophys., 495, 967–974, https://doi.org/10.1051/0004-6361:200810965,
Li, A., Close, S., and Markannen, J.: EISCAT Space Debris after the International Polar Year (IPY), in: Conference Proceedings from IAC, Naples, Italy, Vol. 12, p. A6, International Astronautical
Federation, Paris, France, 2012.a
Liou, J.-C., Matney, M. J., Anz-Meador, P. D., Kessler, D., Jansen, M., and Theall, J. R.: The new NASA orbital debris engineering model ORDEM2000, 2002.a
Markkanen, J., Jehn, R., and Krag, H.: EISCAT space debris during the IPY – a 5000 hour campaign, in: Proceedings of the Fifth European Conference on Space Debris, 30 March–2 April 2009, Darmstadt,
Germany, edited by: Lacoste, H., ESA-SP Vol. 672, European Space Agency, 2009.a, b, c
McCrea, I., Aikio, A., Alfonsi, L., Belova, E., Buchert, S., Clilverd, M., Engler, N., Gustavsson, B., Heinselman, C., Kero, J., Kosch, M., Lamy, H., Leyser, T., Ogawa, Y., Oksavik, K.,
Pellinen-Wannberg, A., Pitout, F., Rapp, M., Stanislawska I., and Vierinen, J.: The science case for the EISCAT_3D radar, Prog. Earth Planet. Sci., 2, 21, https://doi.org/10.1186/s40645-015-0051-8,
Morbidelli, A., Delbo, M., Granvik, M., Bottke, W. F., Jedicke, R., Bolin, B., Michel, P., and Vokrouhlicky, D.: Debiased albedo distribution for Near Earth Objects, Icarus, 340, 113631, https://
doi.org/10.1016/j.icarus.2020.113631, 2020.a
Naidu, S. P., Benner, L. A., Margot, J.-L., Busch, M. W., and Taylor, P. A.: Capabilities of Earth-based radar facilities for near-Earth asteroid observations, Astron. J., 152, 4, https://doi.org/
10.3847/0004-6256/152/4/99, 2016.a, b, c
NASA JPL: CNEOS NEO Earth Close Approaches, available at: https://cneos.jpl.nasa.gov/ca/, last access: 13 March 2020.a, b
Ostro, S. J.: Planetary radar, NASA Jet Propulsion Laboratory, Technical Reports Server, Pasadena, California, United States, available at: https://trs.jpl.nasa.gov/ (last access: 10 July 2020),
1992.a, b, c
Ostro, S. J.: The role of groundbased radar in near-Earth object hazard identification and mitigation, Univ. of Arizona Press, Pasadena, CA, United States, 1994.a
Ostro, S. J., Connelly, R., and Belkora, L.: Asteroid shapes from radar echo spectra: A new theoretical approach, Icarus, 73, 15–24, 1988.a
Pellinen-Wannberg, A., Kero, J., Häggström, I., Mann, I., and Tjulin, A.: The forthcoming EISCAT_3D as an extra-terrestrial matter monitor, Planet. Space Sci., 123, 33–40, 2016.a, b
Pravec, P., Harris, A. W., and Michalowski, T.: Asteroid rotations, in: Asteroids III, edited by: Bottke Jr., W. F., Cellino, A., Paolicchi, P., and Binzel, R. P., University of Arizona Press,
Tucson, 113–122, 2002.a
Rein, H. and Liu, S.-F.: REBOUND: an open-source multi-purpose N-body code for collisional dynamics, Astron. Astrophys., 537, A128, https://doi.org/10.1051/0004-6361/201118085, 2012.a
Rein, H. and Spiegel, D. S.: IAS15: a fast, adaptive, high-order integrator for gravitational dynamics, accurate to machine precision over a billion orbits, Mon. Not. R. Astron. Soc., 446, 1424–1437,
https://doi.org/10.1093/mnras/stu2164, 2015.a
Solin, O. and Granvik, M.: Monitoring near-Earth-object discoveries for imminent impactors, Astron. Astrophys., 616, A176, https://doi.org/10.1051/0004-6361/201832747, 2018.a
Taylor, P., Rivera-Valentín, E. G., Bonsall, A., Becker, T. M., Benner, A., Bhiravarasu, S. S., Brozovic, M., Busch, M. W., Giorgini, J. D., Harris, A. W., Magri, C., Mainzer, A. K., Margot, J.-L.,
Marshall, S. E., Masiero, J. R., Naidu, S. P., Nolan, M. C., Patterson, G. W., Prockter, L. M., Sizemore, H. G., Swindle, T. D., Venditti, F. C. F., and Virkki, A. K.: Planetary Radar Astronomy with
Ground-Based Astrophysical Assets, Astro2020 Science White Paper, American Astronomical Society, Washington, D.C., USA, 2019.a
Taylor, P. A., Howell, E. S., Nolan, M. C., and Thane, A. A.: The Shape and Spin Distributions of Near-Earth Asteroids Observed with the Arecibo Radar System, in: American Astronomical Society
Meeting Abstracts #220, Vol. 220 of American Astronomical Society Meeting Abstracts, p. 128.02, American Astronomical Society, Washington, D.C., USA, 2012.a
Thompson, T. W., Campbell, B. A., and Bussey, D. B. J.: 50 Years of Arecibo Lunar radar mapping, URSI Radio Science Bulletin, 2016, 23–35, 2016.a
Vierinen, J.: Indian anti-satellite debris measured with the EISCAT Tromsø radar, available at: http://www.radio-science.net/2019/04/indian-anti-satellite-debris-measured.html (last access: 13 March
2020), 2019.a
Vierinen, J. and Lehtinen, M. S.: 32-cm wavelength radar mapping of the moon, in: 2009 European Radar Conference (EuRAD), Rome, Italy, 222–225, IEEE, 2009.a
Vierinen, J., Markkanen, J., and Krag, H.: High power large aperture radar observations of the Iridium-COSMOS collision, in: Proceedings of the Fifth European Conference on Space Debris, 30 March–2
April 2009, Darmstadt, Germany, edited by: Lacoste, H., ESA-SP Vol. 672, European Space Agency, 2009.a
Vierinen, J., Markkanen, J., Krag, H., Siminski, J., and Mancas, A.: Use of EISCAT 3D for Observations of Space Debris, 7th European Conference on Space Debris ESA/ESOC, Darmstadt/Germany 18–21 April
2017, ESA Space Debris Office, 2017a.a
Vierinen, J., Tveito, T., Gustavsson, B., Kesaraju, S., and Milla, M.: Radar images of the Moon at 6-meter wavelength, Icarus, 297, 179–188, 2017b.a
Vierinen, J., Kastinen, D., Kero, J., Grydeland, T., McKay, D., Røynestad, E., Hesselbach, S., Kebschull, C., and Krag, H.: EISCAT 3D Performance Analysis, Tech. rep., ESA/ESOC Technical Management,
Darmstadt, Germany, 2019a.a, b, c
Vierinen, J., Kastinen, D., Markkanen, J., Grydeland, T., Kero, J., Horstmann, A., Hesselbach, S., Kebschull, C., Røynestad, E., and Krag, H.: 2018 Beam-park observations of space debris with the
EISCAT radars, European Space Agency, ESOC, Darmstadt, Germany, 2019b. a
Zellner, B. and Gradie, J.: Minor planets and related objects. XX-Polarimetric evidence for the albedos and compositions of 94 asteroids, Astron. J., 81, 262–280, 1976.a | {"url":"https://angeo.copernicus.org/articles/38/861/2020/","timestamp":"2024-11-08T01:23:58Z","content_type":"text/html","content_length":"383784","record_id":"<urn:uuid:aed7278e-2285-44b9-843b-b821968ae766>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00461.warc.gz"} |
Regularization in Machine Learning
1. Overfitting:
Sometimes a machine learning model performs well on the training data but poorly on the test data: it fails to predict the output for unseen data because it has learned the noise in the training set. Such a model is said to be overfitting. Noise here means data points that do not represent the true properties of the data, but arise from random chance.
2. Overfitting Examples:
In Image1, we try to fit a model to regression data. We can use a linear, quadratic, or higher-degree polynomial function when fitting a regression model. Often a linear model underfits the data, while a quadratic function provides a better fit. To push the fit further, we can use a high-degree polynomial, which will follow the data very closely. While the polynomial fit may appear to be a great model for this dataset, if we change the dataset the same model may turn out to be a poor fit for the new data (i.e. high variance). This is because the polynomial function fits the original data so closely that it does not generalize to other similar data, which is exactly overfitting.
Image1: Under fit — Linear function(Left), Just Right — Quadratic function (Center), Overfit — Polynomial function (Right) fitting to the regression data
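The behaviour described above can be reproduced in a few lines of code. The sketch below is an illustration only: the data-generating function, noise level, and polynomial degrees are arbitrary choices. It fits polynomials of three degrees to noisy quadratic data and compares training and test errors:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Noisy samples from an underlying quadratic relationship.
    x = np.linspace(-3, 3, n)
    y = 0.5 * x**2 - x + 1 + rng.normal(scale=1.0, size=n)
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(20)

def mse(degree):
    # Fit a polynomial of the given degree on the training set,
    # then report mean squared error on the train and test sets.
    coeffs = np.polyfit(x_train, y_train, degree)
    err = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return err(x_train, y_train), err(x_test, y_test)

for degree in (1, 2, 15):
    train_err, test_err = mse(degree)
    print(f"degree {degree:2d}: train MSE = {train_err:.2f}, test MSE = {test_err:.2f}")
```

The high-degree fit typically drives the training error far below the other two while its test error is the worst of the three, which is the signature of overfitting.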
In Image2, we try to fit a model to classification data. A linear function would be too simple to explain the variance in the data and hence underfits. A quadratic function may give an appropriate fit. A high-degree polynomial function would predict too well to be true, is likely to fail on unseen data, and so results in overfitting.
Image2: Under fit-linear function(Left), Just Right-quadratic function(Center), Overfit-polynomial function(Right) fitting on classification data
3. How to overcome Overfitting?
One way is to reduce the number of features in the model. But doing so would result in loss of information, and thus the model will not have the benefit of all the information (provided by features)
that is available.
When we have a lot of features and each contributes a little to the prediction, we cannot simply remove features. In this case, the solution is regularization. Our model needs to be robust to perform well on both train and test data. For that, we try to:
• Shrink the coefficient (or weight or parameter θ) of the features in the model
• Getting rid of high degree polynomial features from the model
The above solution would result in a simpler hypothesis and be less prone to overfitting. This can be achieved using Regularization. This technique discourages the learning of a more complex or
flexible model, to avoid the risk of overfitting.
4. What parameters (θ’s) to penalize?
Now we know that in the regularization technique we reduce the magnitude/value of the parameters (θ's) and penalize the impact of higher-degree polynomial terms. But we don't know in advance which parameters (θ's) belong to the high-degree polynomial terms. So in regularization, we modify the cost function to shrink all the parameters (θ's).
5. Regularization:
It is a technique to prevent the model from overfitting by adding extra information to it. During regularization, the predicted output function does not change; the change is only in the cost function.
The cost function of linear regression, called the Residual Sum of Squares (RSS), is given by:

RSS = Σᵢ (yᵢ − θ₀ − Σⱼ θⱼ xᵢⱼ)²
Based on the training data, minimizing the RSS adjusts the coefficients θ using gradient descent (or other optimization techniques). If there is noise in the training data, the model will not generalize well to future unseen data and will overfit. Here regularization comes into the picture and shrinks these coefficients toward zero.
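As a concrete toy example of this minimization, the sketch below runs plain gradient descent on the RSS for a one-feature linear model. The data, learning rate, and iteration count are arbitrary choices made for illustration:

```python
import numpy as np

# Toy data: y depends linearly on x, plus a little noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 3.0 * x + 2.0 + rng.normal(scale=0.05, size=x.size)

theta0, theta1 = 0.0, 0.0   # intercept and slope
alpha = 0.1                 # learning rate

for _ in range(5000):
    residual = theta0 + theta1 * x - y
    # Gradient of the squared-error cost with respect to each parameter,
    # averaged over the data set for a stable step size.
    grad0 = 2 * residual.mean()
    grad1 = 2 * (residual * x).mean()
    theta0 -= alpha * grad0
    theta1 -= alpha * grad1

print(round(theta0, 2), round(theta1, 2))  # close to the true (2.0, 3.0)
```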
6. Types of Regularization:
Regularization can be of two types:
• L1 Norm or Lasso Regression
• L2 Norm or Ridge Regression
• L1 (Lasso) regression helps to reduce overfitting and also performs feature selection. The L1 penalty forces some coefficient estimates to be exactly zero, which means some features are completely removed from the model when the tuning parameter lambda (λ) is sufficiently large. The lasso method is therefore said to yield sparse models.
• L2 (Ridge) regression is mostly used to reduce overfitting, and it keeps all the features present in the model. It reduces the complexity of the model by shrinking the coefficients. The cost function is altered by adding a penalty (shrinkage) term: λ multiplied by the sum of the squared weights θⱼ of the individual features. The penalty term regularizes the coefficients of the model, so ridge regression reduces the magnitudes of the coefficients, which decreases the complexity of the model. The cost function becomes:

Cost = RSS + λ Σⱼ θⱼ²
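The shrinkage can be seen directly from the closed-form ridge solution θ = (XᵀX + λI)⁻¹ Xᵀy. The sketch below uses synthetic data with arbitrary true coefficients, fits the same data with increasing λ, and prints the norm of the coefficient vector:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 5
X = rng.normal(size=(n, p))
true_theta = np.array([4.0, -3.0, 2.0, 0.0, 0.0])
y = X @ true_theta + rng.normal(scale=0.5, size=n)

def ridge(X, y, lam):
    # Closed-form minimizer of RSS + lam * sum(theta_j^2).
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

for lam in (0.0, 10.0, 1000.0):
    theta = ridge(X, y, lam)
    print(f"lambda = {lam:6.0f}: ||theta|| = {np.linalg.norm(theta):.3f}")
```

As λ grows, the coefficient norm shrinks toward zero, but unlike lasso, no coefficient becomes exactly zero.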
λ is the regularization parameter, which decides how much to penalize the flexibility of the model. If the model is highly flexible, meaning its variance is very high and it changes with a small change in the data, its coefficients will be large. But in order to minimize the cost function (with regularization), these coefficient values must stay small. That is how ridge regularization prevents the coefficient values from rising too high.
Here we do not penalize θ0; the penalty starts from θ1. In practice this makes very little difference in the final result, so by convention we only penalize the coefficients from θ1 up to θp.
When λ = 0, ridge regularization does no regularization at all: the model remains overfitting, with high variance.
When λ is very large → infinity, the loss term is diminished → the training data no longer participates in the optimization → we are just optimizing the regularization term. The cost function is then minimized when θ1, θ2, …, θp are all 0 → only the bias term b remains → the model reduces to a constant prediction → underfit → high bias.
Selecting a good value of λ is critical; the selection is done using cross-validation.
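A common recipe is k-fold cross-validation over a grid of λ values. The sketch below uses arbitrary synthetic data, a 5-fold split, and a hypothetical candidate grid, reusing the closed-form ridge fit:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 8
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(scale=2.0, size=n)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution for a given lambda.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_score(lam, k=5):
    # k-fold cross-validation: average held-out MSE over the folds.
    folds = np.array_split(np.arange(n), k)
    errors = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(n), test_idx)
        theta = ridge_fit(X[train_idx], y[train_idx], lam)
        errors.append(np.mean((X[test_idx] @ theta - y[test_idx]) ** 2))
    return np.mean(errors)

grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(grid, key=cv_score)
print("best lambda:", best)
```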
7. Regularization Parameter λ:
λ is a regularization parameter that controls overfitting; it sets the tradeoff between two terms:
• Fitting training dataset well
• Keeping parameters θ’s small and keeping hypothesis simple to avoid overfitting
8. Why L1 creates Sparsity?
By a sparse model, we mean a model in which many of the weights/coefficients (θ's) are 0. Let us therefore reason about why L1 is more likely to create zero weights. Compare the cost function (with regularization term) for L2:

Cost = Loss + λ Σⱼ θⱼ²

and for L1:

Cost = Loss + λ Σⱼ |θⱼ|

For the comparison, the Loss term and λ can be ignored, as they are the same for both L1 and L2 regularization. So we end up comparing the two penalty terms:

L2 norm: Σⱼ θⱼ²    vs.    L1 norm: Σⱼ |θⱼ|

Drawn as contours in the (θ1, θ2) plane, the L2 norm is a circle while the L1 norm is a diamond with corners on the axes.
Considering a single weight θ1, the derivatives of the two penalty terms are:

d/dθ1 (λ θ1²) = 2 λ θ1    and    d/dθ1 (λ |θ1|) = λ · sign(θ1)

The weight update by gradient descent with learning rate α is then:

L2: θ1 ← θ1 − α (∂Loss/∂θ1 + 2 λ θ1)
L1: θ1 ← θ1 − α (∂Loss/∂θ1 + λ · sign(θ1))
Let θ1 be positive (a similar argument applies for negative θ1):
As θ1 approaches the optimum, the L2 update becomes smaller and smaller: its shrinkage term 2·λ·α·θ1 is proportional to θ1 itself, so each step removes only a fraction of the remaining weight, and θ1 never reaches exactly 0. The L1 update, by contrast, subtracts the constant amount λ·α regardless of the current value of θ1, so it keeps reducing the weight at a fixed rate until it crosses 0. This happens because the L1 derivative is constant while the L2 derivative is not. The chance of a weight landing exactly at 0 is therefore much higher for L1 regularization, as its derivative is constant and independent of the previous weight value, whereas the L2 derivative shrinks along with the weight as it converges to the optimum.
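The argument can be checked numerically. The sketch below uses illustrative constants only and applies the two penalty updates to a single weight with no data loss term; for L1 it uses the standard soft-threshold (clipped) step, so the weight can land exactly on zero:

```python
theta_l2, theta_l1 = 1.0, 1.0
alpha, lam = 0.1, 0.5

for _ in range(100):
    # L2: shrinkage proportional to the current weight -> geometric decay,
    # the weight gets small but never reaches exactly zero.
    theta_l2 -= alpha * 2 * lam * theta_l2
    # L1: constant shrinkage, clipped at zero (soft-threshold step),
    # so the weight lands exactly on 0 and stays there.
    theta_l1 = max(theta_l1 - alpha * lam, 0.0)

print(theta_l1, theta_l2)  # L1 weight is exactly 0.0; L2 weight is tiny but nonzero
```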
Which of the following statements are true (T) and which are false (F):
The two altitudes corresponding to two equal sides of a triangle need not be equal.
False (F)
Reason: Since two sides are equal, the triangle is an isosceles triangle.
⇒ The two altitudes corresponding to two equal sides must be equal.
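A short justification can be written out with the area formula (the vertex labels below are ours):

```latex
% Triangle ABC with AB = AC; BE and CF are the altitudes to the equal sides.
\text{Let } AB = AC, \qquad BE \perp AC, \qquad CF \perp AB.

\text{Computing the area of } \triangle ABC \text{ in two ways:}
\qquad [\triangle ABC] = \tfrac{1}{2}\, AC \cdot BE = \tfrac{1}{2}\, AB \cdot CF.

\text{Since } AB = AC, \text{ dividing out gives } BE = CF.
```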
Topic: Triangles | Subject: Mathematics | Class: Class 9
SciPost Submission Page
Emergence of quasiparticle Bloch states in artificial crystals crafted atom-by-atom
by Jan Girovsky, Jose L. Lado, Floris E. Kalff, Eleonora Fahrenfort, Lucas J. J. M. Peters, Joaquín Fernández-Rossier, Alexander F. Otte
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Floris Kalff · Jose Lado · Sander Otte
Submission information
Preprint Link: http://arxiv.org/abs/1703.05029v3 (pdf)
Date accepted: 2017-06-02
Date submitted: 2017-05-25 02:00
Submitted by: Otte, Sander
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties: • Condensed Matter Physics - Experiment
Approaches: Experimental, Computational
The interaction of electrons with the periodic potential of atoms in crystalline solids gives rise to band structure. The band structure of existing materials can be measured by photoemission spectroscopy and accurately understood in terms of the tight-binding model; however, few experimental approaches exist that allow one to tailor artificial crystal lattices in a bottom-up fashion. The ability to engineer and study atomically crafted designer materials by scanning tunnelling microscopy and spectroscopy (STM/STS) helps to understand the emergence of material properties. Here, we use atom manipulation of individual vacancies in a chlorine monolayer on Cu(100) to construct one- and two-dimensional structures of various densities and sizes. Local STS measurements reveal the emergence of quasiparticle bands, evidenced by standing Bloch waves, with tuneable dispersion. The experimental data are understood in terms of a tight-binding model combined with an additional broadening term that allows an estimation of the coupling to the underlying substrate.
Author comments upon resubmission
We thank each of the three reviewers for their thorough reading and evaluation of our manuscript, as well as for their kind and helpful suggestions to improve the text. Below, we will address all
issues raised in a point-by-point manner.
List of changes
In response to Report 139:
1. The authors fail to compare/contrast and cite the original work of N. Nilius et al, Science, 297, 1853 (2002), which was the first work of this kind. This paper originally looked at the
development of 1D band structure in a nearly identical experiment. Before publication, this paper should be adequately cited and discussed in context of the new findings here.
We thank the reviewer for highlighting the publication, which is certainly relevant in view of the current work. The work by Nilius et al. discusses the observed modes within the free-electron model,
whilst in our publication we treat the system of the coupled electronic states using the tight-binding model. In the revised version of the manuscript we cite the publication as reference 14 and add
a discussion in paragraph 2. “Similar wave patterns were reported previously in assembled chains of Au atoms [14], which were best described in terms of a free electron model.” and further in
paragraph 4 “The vacancy state exhibits similarities to localized states observed on gold atoms adsorbed on NiAl(110) [14], …”
2. Why do the authors not see the development of standing waves in the 1D chains? I find it peculiar that the length dependence of the electronic structure saturates already at six atoms? Is there
some explanation for this?
We speculate that the lack of standing waves in the 1D lattices is related to the relatively strong hybridization of the vacancy states with the underlying substrate. The broadened electronic states are likely to overlap with the conduction band, and therefore we have not been able to resolve the confined modes experimentally. For the 2D lattices, a larger downward shift of the vacancy states was observed, resulting in less hybridization with the conduction band. Regarding the saturation of the electronic structure: as shown by Drost et al. (Nat. Phys. 2017), the hopping integral t depends exponentially on distance. In the case of sparse lattices, i.e. {3,0}, the nearest neighbours are relatively far apart, so the interaction between adjacent vacancies is already very small and the next-nearest neighbours do not contribute to it. Denser lattices, with vacancies closer to each other, have non-negligible higher-order contributions, as demonstrated in particular by the stripes and checkerboard lattices. Therefore, the point where the lattices exhibit saturation depends on the distance and the hopping term, as demonstrated in Figure 1f.
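The distance dependence discussed here can be illustrated with a toy tight-binding chain. In the sketch below the hopping decays exponentially with separation; the decay length, hopping prefactor, and zero on-site energy are hypothetical values chosen for illustration, not parameters fitted to the experiment:

```python
import numpy as np

def chain_spectrum(n_sites, spacing, t0=1.0, decay=1.0):
    """Eigenvalues of an n-site chain with hoppings t(d) = t0 * exp(-d / decay).

    spacing is the site separation in units of the lattice constant; all
    pairs of sites are coupled, so next-nearest-neighbour terms are included
    automatically (they matter only for dense chains).
    """
    sites = np.arange(n_sites) * spacing
    d = np.abs(sites[:, None] - sites[None, :])
    H = t0 * np.exp(-d / decay)
    np.fill_diagonal(H, 0.0)          # on-site energy set to zero
    return np.linalg.eigvalsh(H)

for n in (2, 4, 6, 10, 20):
    bandwidth = np.ptp(chain_spectrum(n, spacing=3.0))
    print(f"{n:2d} sites: bandwidth = {bandwidth:.4f}")
```

For a sparse chain the bandwidth is essentially set by the nearest-neighbour hopping and saturates after a handful of sites, consistent with the saturation described above; for a dense chain (smaller spacing) the next-nearest-neighbour terms become non-negligible and the saturation behaviour changes.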
3. I am missing a value of k_F, or some reference to a wavelength here? How does this compare to the length of the 1D and 2D structures?
The effective wavelengths of the confined modes can be read from the fits in Fig. 4. All of the observed modes have a k-vector smaller than the Fermi wavevector of bulk copper, k_F = 13.62 nm^-1, which thus does not play a role in these modes, since the confined modes arise from hopping between localized levels. The modes seen in the experiment correspond to confined modes of the emergent lattice formed by the vacancy states. The wavelength of the pattern is determined by the wavelengths of the different modes, which show a smooth crossover as the bias is modified, due to the finite coupling to the bath.
4. On page 2, the authors write “bulk limit.” As this is not a 3D structure, I find the use of the word bulk a bit misleading. I would suggest something like long wavelength limit, or 2D limit.
We agree with the reviewer that the lattices are low dimensional and the term “bulk limit” is not appropriate. In the revised version of the manuscript we use the term “… in the limit of infinite
lattice size …”, instead.
5. I found it very difficult to read the color dots in Fig 1, indicating where the spectra were taken.
We have slightly enlarged the circles in the topography insets to make it easier to read. In view of the large number of spectra presented, the aim of the colour coding is not necessarily to match
every single curve to a specific location, but rather to visualise the trend in spectroscopic evolution as a function of position inside the lattices.
6. In Fig 1f, the authors use “( )” and in the paper “{ }.” I would suggest to keep this consistent, and in the text introduce what this notation means as it is just suddenly used. Can they relate
{x,y} also to the crystallographic axes?
We thank the reviewer for noticing this discrepancy in the nomenclature and his/her suggestion to improve the consistency of the manuscript. We now use curly brackets to denote lattice spacing consistently throughout the manuscript. We also add the following lines to describe the relation of the lattice spacing to the crystallographic axes: “The notation {x,y} used here for 1D lattices describes the spacing between adjacent vacancies in the horizontal and vertical directions, respectively, in multiples of the lattice constant a = 3.55 angstrom.” and “For 2D lattices, the notation {x,y} denotes the lattice spacing in the x and y directions in units of the lattice constant a”
7. I fail to understand why the tight binding parameters contradict the experiment. Can the authors give more insight as to why, and in what manner this could be checked or reconciled for any future
calculations for follow-up work?
An apparent smaller effective mass for the dispersive pattern in stripes than for checkerboard seems to be in conflict with a picture where smaller hopping (in one direction) yields larger effective
mass. Nevertheless, the theoretical simulations of the dispersive pattern give exactly the same trend. The previous dilemma is solved by taking into account that the pattern observed in the stripes
lattice not only reflects the wavelength of the confined modes, but it is also strongly influenced by the geometry of the finite stripes box. Such interplay is properly captured in the numerical
simulation, but cannot be easily disentangled to extract the true effective mass associated with the tight binding parameters. Therefore, the discrepancy between the tight binding mass and the
apparent effective mass in the stripes lattice comes from a geometrical effect, yet the numerical simulation of the dispersive pattern reproduce the experimental findings. We have reworded the text
to reflect this notion.
8. Why is dz/dV more sensitive than dI/dV (bottom page 4)? Can this explain why the authors don’t see standing waves in the 1D structures? I’m not sure I agree with this sentence, especially if the
argument is that this is just a normalized dI/dV curve?
The referee is correct that dz/dV is just a normalized dI/dV curve, but this normalization by the total current makes it much easier to observe spectroscopic features that exist in a voltage range
where the total current changes rapidly, which is the case in our measurements. For example, the dz/dV spectra taken on the stripes lattice show two peaks, whilst the dI/dV measured on the very same
lattice with the same tip does not reveal them.
9. A helpful suggestion: dI/dZ(V) has also been used in the past to measure band onsets. Maybe this can help in future measurements of the larger structures?
We thank the reviewer for this helpful suggestion. We will consider implementing this into our measurement protocol for future experiments of this kind.
In response to Report 132:
1. It is stated that both the checkerboard and stripe lattices can be accurately modeled by the same tight-binding parameters taking into account the first and the second neighbor hopping while
simple first neighbor approximation proves insufficient. Does this conclusion hold universally for the considered 1d and 2d lattice geometries and a collection of a few sites? If so, then one should
make a stronger case and declare that the tight-binding model with the same parameters provides a good universal description of these systems (modulo broadening which can be added by hand). If the
same description is not accurate for all cases (meaning that for same separations one has to use different hopping parameters), then the reason for that should be pondered/identified.
As discussed in response to Report 139, the hopping integrals depend very strongly on separation distance. For this reason, we would expect that in the case of a dense lattice, higher order
neighbours need to be taken into account, whereas for sparse lattices only nearest neighbours will suffice. For this reason, we do not think it is appropriate to extend the findings for the
checkerboard and stripes lattices to a universal statement.
2. Fig. 4 and the paragraph above it explain how to make the connection to the momentum dispersion of the lattice states. Especially, the authors extract the energy as a function of the average <k^2>. This information, in turn, is employed in extracting the effective mass. The lattices are anisotropic so one would expect different masses in different directions. How is this anisotropy averaged for <k^2> and how is it reflected in the FFT of dz/dV maps for the stripe lattice (only checkerboard is plotted)?
The reviewer is correct that for the stripes lattices one would in principle expect different effective masses for the directions parallel and perpendicular to the stripes. However, as the observed
standing wave patterns in Fig. 3 have almost square symmetry, the corresponding weight in the FFT maps is predominantly along the kx and ky axes. These axes are 45 degrees rotated with respect to the
stripes, and are therefore equivalent to each other. As such, effectively the stripes lattice is found to behave as if it has quasiparticles with isotropic dispersion. We have adjusted the text to
clarify this.
3. Regarding the comparison of effective masses for stripe lattice (sl) and checkerboard lattice (cl), I do not think it is necessarily “counterintuitive” that the mass of cl is higher than sl lattice.
The average mass is essentially proportional to average 1/(t*a^2), where t is a hopping element and a is the lattice constant. The masses then depend on the product of t and a^2 and their size is not
a priori clear from the geometry. Furthermore, it should be calculated from the tight-binding model with the extracted fitting parameters for hoppings. I would like to see the data of Fig 4 f
compared to “theoretical” value from the tight-binding model with NN and NNN hopping. It could turn out that the simple comparison to the tight-binding model reproduces the values of averaged
effective masses. If it does not do so, then perhaps there is something counterintuitive in the situation but before doing that there is no way to tell. I would like the authors to complement the
comparison of masses with the tb model and rephrase their findings about the masses if they follow from the above simple argument.
We followed the advice of the reviewer and found that the simple assumption that the effective mass within the tight-binding model is proportional to 1/(t*a^2) cannot be applied here. For smaller t
terms the effective mass rises and the increased distance between lattice sites is not sufficient to overcome this effect. In Figure 4f we added dispersion curves for checkerboard and stripes
lattices extracted from the images simulating the standing wave pattern with NN and NNN hopping terms. The effective masses extracted by fitting the theoretical values are similar to the experimental ones,
i.e. checkerboard effective mass from theory m_eff = 0.98 ± 0.06 m_e and stripes m_eff = 0.22 ± 0.06 m_e. We have emphasized in the manuscript that the calculations do not in fact contradict our experimental findings.
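As a side note on the 1/(t*a^2) argument discussed above, the scaling can be sketched for a one-dimensional nearest-neighbour tight-binding band. The numbers below are hypothetical and only illustrate the algebra of the band-bottom curvature, not the actual lattices or parameters of the manuscript:

```python
import math

HBAR = 1.0  # natural units; purely illustrative

def effective_mass_nn(t, a):
    """Band-bottom effective mass of E(k) = -2 t cos(k a):
    the curvature d2E/dk2 at k = 0 is 2 t a^2, so m* = hbar^2 / (2 t a^2)."""
    return HBAR**2 / (2.0 * t * a**2)

def curvature_at_zero(t, a, dk=1e-4):
    """Finite-difference check of the band curvature at k = 0."""
    E = lambda k: -2.0 * t * math.cos(k * a)
    return (E(dk) - 2.0 * E(0.0) + E(-dk)) / dk**2

t, a = 0.5, 1.0  # hypothetical hopping and spacing
print(effective_mass_nn(t, a))            # analytic: 1 / (2 t a^2)
print(HBAR**2 / curvature_at_zero(t, a))  # numerical check, nearly equal
```

For a pure nearest-neighbour band this gives m* proportional to 1/(t a^2); as the response notes, once finite-box geometry and higher-neighbour hoppings enter, this simple scaling no longer applies.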
In response to Report 124:
1. On page 2, in the second paragraph, first the authors state that a monolayer of chlorine atoms on Cu(100) leads to a shift in the substrate’s work function of 1.35 eV. If I am not mistaken, Figure
3b in the Supporting material of Ref 19 indicates that the shift is 1.25 eV (not 1.35 eV).
We thank the reviewer for noticing this typo. We have corrected it in the revised version of the manuscript.
2. On page 2, last paragraph: It may be beneficial to some readers if the authors clarify what the numbers in brackets mean.
We have adjusted this, as described in the response to the report 139, point 6.
3. Page 2, last paragraph: Please describe how the energetic position of the conductance band minimum was determined.
We added this information in the revised version of the manuscript. It reads as follows: “…a sharp step in the differential conductance at ~3.5 V denotes the conduction band minimum (Fig. 1a, black
curve). The precise onset of the band was determined as the maximum in the normalized differential conductance dI/dV × V/I (see Fig. 5).”
4. Page 5, second paragraph: I found it difficult to follow the arguments here. The clarity of the manuscript would be significantly improved if the authors would also show (some of) the maps
simulated without surface interactions in Figure 3 (currently in Supplementary Figure 2). In addition, I suggest the authors highlight the discrepancies between the experimental maps and the maps
simulated without surface interaction in the figure.
We agree that the argumentation was unclear at this point and we have revised the text to make it more understandable. In order to avoid cluttering, we would prefer not to move additional content to
Fig. 3. However, we point out that the final document will contain the supplementary figures in the appendix, which will follow immediately after the text and references. What is currently
supplementary figure 2 will become Fig. 6 so that the simulated maps of the bare (N,M) modes can be readily compared to the full simulations including the surface interactions.
5. Page 6, last paragraph: please describe the procedure how <k^2> was calculated.
We have added this information to the Methods section of our manuscript.
6. Page 6, last paragraph: Typo: 'Fig. 4e shows .....' I believe it should be 'Fig. 4f shows .....'
We thank the reviewer for noticing this typo.
Published as SciPost Phys. 2, 020 (2017)
LeetCode: Binary Tree Maximum Path Sum - GoHired
Given a non-empty binary tree, find the maximum path sum.
For this problem, a path is defined as any sequence of nodes from some starting node to any node in the tree along the parent-child connections. The path must contain at least one node and does not
need to go through the root.
Example 1:
Input: [1,2,3]

       1
      / \
     2   3

Output: 6

Example 2:

Input: [-10,9,20,null,null,15,7]

      -10
      / \
     9  20
       /  \
      15   7

Output: 42
Question Link: Binary Tree Maximum Path Sum
On the first look, the problem looks like an all-pairs shortest path problem. That could be solved using the Floyd-Warshall algorithm, which finds the shortest distance between every pair of nodes in a graph/tree. But then the complexity would be O(n^3), which is not optimal for this problem.
Further thinking reveals that we can apply Post Order Tree Traversal along with Dynamic Programming to solve this problem.
The main observation for the dp states in this problem is that there can be four ways the current node can be a part of the maximum path:
1. The node alone
2. The node along with the maximum path through its left child
3. The node along with the maximum path through its right child
4. The node along with the maximum paths through both the left child and the right child
We have to check the maximum of the above four and update our answer accordingly.
So now that we know the states we have to look for at the current node, what is left is to ask what we should pass above to the parent?
We obviously can’t pass the 4th one above, as the current node would then not be a terminal node of the maximum path but an intermediate one. So the idea is to pass the maximum of the first three above and proceed in the same manner.
Implementation details can be understood in the code:
Implemented in C++
/**
 * Definition for a binary tree node.
 * struct TreeNode {
 *     int val;
 *     TreeNode *left;
 *     TreeNode *right;
 *     TreeNode(int x) : val(x), left(NULL), right(NULL) {}
 * };
 */
class Solution {
    int maxx; // stores the result

    // Post-order traversal helper function
    int postOrder(TreeNode* root) {
        if (root == NULL) return 0; // base condition
        int left = max(0, postOrder(root->left));   // takes the maximum of the left child's contribution and 0
        int right = max(0, postOrder(root->right)); // takes the maximum of the right child's contribution and 0
        maxx = max(maxx, left + right + root->val); // best path through the current node (covers all four cases)
        return max(left, right) + root->val;        // pass up the better single branch plus the current value
    }

public:
    int maxPathSum(TreeNode* root) {
        maxx = root->val; // initialize with the root's value
        postOrder(root);  // traverse using post-order traversal
        return maxx;      // return the maximum
    }
};
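To make the recursion concrete, here is a minimal Python sketch of the same post-order DP (the names Node and max_path_sum are our own, not from the original post), checked against both examples:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_path_sum(root):
    """Post-order DP: each node returns its best single-branch sum;
    the global best may additionally join both branches at a node."""
    best = float("-inf")
    def post(node):
        nonlocal best
        if node is None:
            return 0
        left = max(0, post(node.left))
        right = max(0, post(node.right))
        best = max(best, left + right + node.val)  # path turning at this node
        return max(left, right) + node.val         # path continuing upward
    post(root)
    return best

t1 = Node(1, Node(2), Node(3))
t2 = Node(-10, Node(9), Node(20, Node(15), Node(7)))
print(max_path_sum(t1), max_path_sum(t2))  # 6 42
```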
PATH path <, path ...> ;

where path represents either a

   var_list left-arrow var_list2 <= parameter-spec>

or a

   var_list2 right-arrow var_list <= parameter-spec>

left-arrow is one of the following:

   <---, <--, <-, or <

right-arrow is one of the following:

   --->, -->, ->, or >
The PATH statement specifies the paths in your structural equation model. You can specify at most one PATH statement in a model within the scope of either the PROC CALIS statement or a MODEL
statement. To complete the PATH model specifications, you might need to add some subsidiary model specification statements such as the PVAR, PCOV, and the MEAN statements.
Paths in structural equation modeling represent the functional relationships among observed and latent variables. You can specify the paths in your model by using the PATH statement. Paths in the
PATH statement are separated by commas. Notice that paths from the errors or disturbances are not necessary in the PATH statement. Essentially, the roles of error or disturbance terms in the PATH
model are represented by the associated error variances of the endogenous variables in the model.
The PVAR statement specifies the parameters for the variances or error (partial) variances. The PCOV statement specifies the parameters for the covariances or error (partial) covariances. The MEAN
statement specifies the parameters for the means or intercepts. For details about these subsidiary model specification statements, see the syntax of the individual statements.
In each path entry of the PATH statement, you specify two lists of variables: var_list and var_list2. Depending on the direction of the arrow specification, one group of variables contains the
outcome variables and the other group contains the predictor variables. Optionally, you can specify the parameter-spec at the end of each path entry. You can specify the following five types of parameters for the path entries:
• unnamed free parameters
• initial values
• fixed values
• free parameters with names provided
• free parameters with names and initial values provided
For example, in the following statement you specify a model with five paths:
V1 <--- F1 ,
V2 <--- F1 = (0.5),
V3 <--- F1 = 1.,
V4 <--- F1 = b1,
V5 <--- F1 = b2 (.4);
The first path entry specifies a path from F1 to V1. The effect of F1 (or the path coefficient) is an unnamed free parameter. For this path effect parameter, PROC CALIS generates a parameter name
with the _Parm prefix and appended with a unique integer (for example, _Parm1). The second path entry specifies a path from F1 to V2. The effect of F1 is also an unnamed free parameter with an
initial estimate of 0.5. PROC CALIS also generates a parameter name for this effect parameter. The third path entry specifies a path from F1 to V3. The effect of F1 is a fixed value of 1.0. This
value stays the same in the model estimation. The fourth path entry specifies a path from F1 to V4. The effect of F1 is a free parameter named b1. The fifth path entry specifies a path from F1 to V5.
The effect of F1 is a free parameter named b2, with an initial value of 0.4.
You can specify multiple variables in the var_list and var_list2 lists. For example, the following statement specifies five paths from F1 to V1-V5:

F1 ---> V1-V5;

All the five effects of F1 on the five variables are unnamed free parameters. If both var_list and var_list2 lists contain multiple variables, you must be careful about the order of the variables
when you also specify parameters at the end of the path entry. For example, the following statement specifies the paths from the predictor variables x1–x2 to the outcome variables y1–y3:
y1-y3 <--- x1-x2 = a1-a6;
The PATH statement specifies six paths in the path entry. These six paths have effect parameters a1–a6. This specification is equivalent to the following specification:
y1 <--- x1 = a1;
y1 <--- x2 = a2;
y2 <--- x1 = a3;
y2 <--- x2 = a4;
y3 <--- x1 = a5;
y3 <--- x2 = a6;
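The expansion rule above (left-hand variable list in the outer loop, right-hand list in the inner loop) can be sketched as a small Python helper; the function name is ours and illustrative, not part of PROC CALIS:

```python
def expand_paths(left_list, right_list, params):
    """Expand a multiple-path entry such as `y1-y3 <--- x1-x2 = a1-a6`
    into elementwise paths: the left-hand variable list varies in the
    outer loop and the right-hand list in the inner loop."""
    pairs = [(l, r) for l in left_list for r in right_list]
    return ["{} <--- {} = {};".format(l, r, p) for (l, r), p in zip(pairs, params)]

for line in expand_paths(["y1", "y2", "y3"], ["x1", "x2"],
                         ["a1", "a2", "a3", "a4", "a5", "a6"]):
    print(line)
```

This reproduces the six elementwise paths listed above, in the same order.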
The following statement shows another example of multiple-path specification:
x1-x2 ---> y1-y3 = b1-b6;
This specification is equivalent to the following specification with separate path specifications:
x1 ---> y1 = b1;
x1 ---> y2 = b2;
x1 ---> y3 = b3;
x2 ---> y1 = b4;
x2 ---> y2 = b5;
x2 ---> y3 = b6;
You can also specify parameter with mixed types in any path entry, as shown in the following specification:
F1 ---> y1-y3 = 1. b1(.5) (.3),
F2 ---> y4-y6 = 1. b2 b3(.7);
This specification is equivalent to the following expanded version:
F1 ---> y1 = 1.,
F1 ---> y2 = b1(.5),
F1 ---> y3 = (.3),
F2 ---> y4 = 1.,
F2 ---> y5 = b2,
F2 ---> y6 = b3(.7);
Notice that in the original specification with multiple-path entries, 0.5 is interpreted as the initial value for the parameter b1, but not as the initial estimate for the path from F1 to y3. In
general, an initial value that follows a parameter name is associated with the free parameter.
If you indeed want to specify that b1 is a free parameter without an initial estimate and 0.5 is the initial estimate for the path from F1 to y3 (while keeping all other specification the same), you
can use a null initial value specification, as shown in the following statement:
F1 ---> y1-y3 = 1. b1() (.5) ,
F2 ---> y4-y6 = 1. b2 b3(.7);
This way 0.5 becomes the initial value for the path from F1 to y3. Because a parameter list with mixed types might be confusing, you can break down the specifications into separate path entries to
remove ambiguities. For example, you can use the following specification equivalently:
F1 ---> y1 = 1.,
F1 ---> y2 = b1,
F1 ---> y3 = (.5) ,
F2 ---> y4-y6 = 1. b2 b3(.7);
The equal signs in the path entries are optional when the parameter lists do not start with a parameter name. For example, the preceding specification is the same as the following specification:
F1 ---> y1 1.,
F1 ---> y2 = b1,
F1 ---> y3 (.5) ,
F2 ---> y4-y6 1. b2 b3(.7);
Notice that in the second path entry, you must retain the equal sign because b1 is a parameter name. Omitting the equal sign makes the specification erroneous because b1 is treated as a variable.
This might cause serious estimation problems. Omitting the equal signs might be cosmetically appealing when specifying fixed values or initial values (for example, the first and the third path
entries). However, the gain from doing so is small compared to the clarity of specification that results from using the equal signs consistently.
Shorter and Longer Parameter Lists
If you provide fewer parameters than the number of paths in a path entry, all the remaining parameters are treated as unnamed free parameters. For example, the following specification specifies the
free parameter beta to the first path and assigns unnamed free parameters to the remaining four paths:
F1 ---> y1 z1 z2 z3 z4 = beta;
This specification is equivalent to the following specification:
F1 ---> y1 = beta,
F1 ---> z1 z2 z3 z4;
If you intend to fill up all values with the last parameter specification in the list, you can use the continuation syntax [...], [..], or [.], as shown in the following example:
F1 ---> y1 z1 z2 z3 z4 = beta gamma [...];
This specification is equivalent to the following specification:
F1 ---> y1 z1 z2 z3 z4 = beta 4*gamma;
The repetition factor 4* means that gamma repeats 4 times.
However, you must be careful not to provide too many parameters. For example, the following specification results in an error:
SES_Factor ---> y1 z1 z2 z3 z4 = beta gamma1-gamma6;
Because there are only five paths in the specification, parameters gamma5 and gamma6 are excessive.
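The padding rules just described (shorter lists filled with unnamed free parameters, the [...] continuation repeating the last parameter, and longer lists rejected) can be sketched as follows; the function is illustrative, not PROC CALIS internals:

```python
def fill_params(n_paths, params, continue_last=False):
    """Pad a parameter list to n_paths entries. Extra slots become
    unnamed free parameters (represented by None here); with the [...]
    continuation syntax the last given parameter is repeated instead;
    supplying too many parameters is an error."""
    if len(params) > n_paths:
        raise ValueError("excessive parameters: " + ", ".join(params[n_paths:]))
    if continue_last and params:
        pad = [params[-1]] * (n_paths - len(params))
    else:
        pad = [None] * (n_paths - len(params))
    return params + pad

print(fill_params(5, ["beta", "gamma"], continue_last=True))
# ['beta', 'gamma', 'gamma', 'gamma', 'gamma']
```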
Default Parameters
It is important to understand the default parameters in the PATH model. First, if you know which parameters are default free parameters, you can make your specification more efficient by omitting the
specifications of those parameters that can be set by default. For example, because all variances and covariances among exogenous variables (excluding error terms) are free parameters by default, you
do not need to specify them with the PCOV and PVAR statements if these variances and covariances are not constrained. Second, if you know which parameters are default fixed zero parameters, you can
specify your model accurately. For example, because all error covariances in the PATH model are fixed zeros by default, you must use the PCOV statement to specify the partial (error) covariances
among the endogenous variables if you want to fit a model with correlated errors. See the section Default Parameters in the PATH Model for details about the default parameters of the PATH model.
Modifying a PATH Model from a Reference Model
If you define a new model by using a reference (old) model in the REFMODEL statement, you might want to modify some path specifications from the PATH statement of the reference model before
transferring the specifications to the new model. To change a particular path specification from the reference model, you can simply respecify the same path with the desired parameter specification
in the PATH statement of the new model. To delete a particular path and its associated parameter from the reference model, you can specify the desired path with a missing value specification in the
PATH statement of the new model.
The new model is formed by integrating with the old model in the following ways:
If you do not specify in the new model a parameter location that exists in the old model, the old parameter specification is duplicated in the new model.
If you specify in the new model a parameter location that does not exist in the old model, the new parameter specification is used in the new model.
If you specify in the new model a parameter location that also exists in the old model and the new parameter is denoted by the missing value '.', the old parameter specification is not copied
into the new model.
If you specify in the new model a parameter location that also exists in the old model and the new parameter is not denoted by the missing value '.', the new parameter specification replaces the
old one in the new model.
For example, consider the following specification of a two-group analysis:
proc calis;
group 1 / data=d1;
group 2 / data=d2;
model 1 / group=1;
path
   V1 <--- F1 = 1.,
   V2 <--- F1 = load1,
   V3 <--- F1 = load2,
   F1 <--- V4 = b1,
   F1 <--- V5 = b2,
   F1 <--- V6 = b3;
pvar
   E1-E3 = ve1-ve3,
   F1 = vd1,
   V5-V6 = phi4-phi6;
pcov
   V1 V2 = cve12;
model 2 / group=2;
refmodel 1;
path
   V3 <--- F1 = load1;
pcov
   V1 V2 = .,
   V2 V3 = cve23;
You specify Model 2 by referring to Model 1 in the REFMODEL statement. Model 2 is the new model that refers to the old model, Model 1. This example illustrates the four types of model integration
rules for the new model:
• Duplication: All parameter specifications, except for the partial covariance between V1 and V2 and the V3 <--- F1 path in the old model, are duplicated in the new model.
• Addition: The parameter cve23 for the partial covariance between V2 and V3 is added in the new model because there is no corresponding specification in the old model.
• Deletion: The specification of partial covariance between V1 and V2 in the old model is not copied into the new model, as indicated by the missing value '.' specified in the new model.
• Replacement: The new path V3 <--- F1 replaces the same path in the old model with parameter load1 for the path coefficient. Thus, in the new model the paths V3 <--- F1 and V2 <--- F1 are now
constrained to have the same path coefficient parameter load1.
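The four integration rules can be summarized as a small merge function. This is a Python sketch over hypothetical parameter-location dictionaries, not CALIS internals:

```python
def integrate(old_model, new_model):
    """REFMODEL merge: start from the old model's parameter locations;
    a '.' in the new model deletes the old specification, any other
    value adds or replaces it, and untouched locations are duplicated."""
    merged = dict(old_model)
    for location, spec in new_model.items():
        if spec == ".":
            merged.pop(location, None)   # deletion
        else:
            merged[location] = spec      # addition or replacement
    return merged

old = {"V3 <--- F1": "load2", "pcov V1 V2": "cve12"}
new = {"V3 <--- F1": "load1", "pcov V1 V2": ".", "pcov V2 V3": "cve23"}
print(integrate(old, new))
# {'V3 <--- F1': 'load1', 'pcov V2 V3': 'cve23'}
```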
Extended Path Modeling Language
The motivation of the extended path modeling language is to express all the features in the path diagram by the paths in the PATH statement. The PATH statement discussed so far specifies only the
single-headed paths in the path diagram. However, the extended path modeling language includes also the double-headed paths that represent the variances or covariances in the path diagram. With the
extended path modeling language, you can specify the variances, covariances, means, and intercepts in the PATH statement, instead of the MEAN, PCOV, and PVAR statements.
Path Syntax for Specifying Covariances
PATH var_list two-head-arrow var_list2 <= parameter-spec> <, ...> ;
where a two-head-arrow represents one of the following:
<-->, <->, or <>
This syntax enables you to specify covariances between the variables in the var_list list and the variables in the var_list2 list. Consider the following example:
v1 <--> v2,
v3 v4 <--> v5 v6 v7 = cv1-cv6;
The first path entry specifies the covariance between v1 and v2 as an unnamed free parameter. PROC CALIS generates a name for this parameter. The second path entry specifies six covariances with
parameters named cv1–cv6. This multiple-covariance specification is equivalent to the following elementwise covariance specification:
v3 <--> v5 = cv1,
v3 <--> v6 = cv2,
v3 <--> v7 = cv3,
v4 <--> v5 = cv4,
v4 <--> v6 = cv5,
v4 <--> v7 = cv6;
Note that the order of variables in the list is important for determining the assignment of the parameters in the parameter-spec list.
If the same variable appears in both of the var_list and var_list2 lists, the "covariance" specification becomes a variance specification for that variable. For example, the following statement
specifies two variances:
v1 <--> v1 = 1.0,
v2 <--> v2 v3 = var2 cv23;
The first path entry specifies the variance of v1 as a fixed value of 1.0. The second path entry specifies the variance of v2 as a free parameter named var2, and then the covariance between v2 and v3
as a free parameter named cv23.
It might result in an error if you attempt to use this syntax to specify the variances and covariances among a set of variables. For example, suppose you intend to specify the variances and
covariances among v1-v3 as unnamed free parameters by the following statement:

v1-v3 <--> v1-v3;

This specification expands to the following elementwise specification:
v1 <--> v1 ,
v1 <--> v2 ,
v1 <--> v3 ,
v2 <--> v1 ,
v2 <--> v2 ,
v2 <--> v3 ,
v3 <--> v1 ,
v3 <--> v2 ,
v3 <--> v3 ;
There are nine variance or covariance specifications, but all of the covariances are specified twice. This is treated as a duplication error. The correct way is to specify only the nonredundant
covariances, as shown in the following elementwise specification:
v1 <--> v1 ,
v2 <--> v1 ,
v2 <--> v2 ,
v3 <--> v1 ,
v3 <--> v2 ,
v3 <--> v3 ;
However, the elementwise specification is quite tedious when the number of variables is large. Fortunately, there is another syntax to deal with this situation. This syntax is discussed in the
section Path Syntax for Specifying Variances and Covariances.
Path Syntax for Specifying Variances
PATH two-head-arrow var_list <= parameter-spec> <, ...> ;
This syntax enables you to specify variances among the variables in the var_list list. Consider the following example:
<--> v1 = (0.8),
<--> v2-v4 ;
The first path entry specifies the variance of v1 as an unnamed free parameter with an initial estimate of 0.8. The second path entry specifies the variances of v2-v4 as unnamed free parameters. No initial values are given for these three variances. PROC CALIS generates names for all these variance parameters. You can specify these variances equivalently by the elementwise covariance specification syntax, as shown in the following, but the former syntax is much more efficient.
v1 <--> v1 = (0.8),
v2 <--> v2 ,
v3 <--> v3 ,
v4 <--> v4 ;
Path Syntax for Specifying Variances and Covariances
PATH two-head-arrow [ var_list ] <= parameter-spec> <, ...> ;
This syntax enables you to specify all the variances and covariances among the variables in the var_list list. For example, the following statement specifies all the variances and covariances among v2-v4:

<--> [v2-v4] = 1.0 cv32 cv33(0.5) cv42 .7 cv44;
This specification is equivalent to the following elementwise specification:
v2 <--> v2 = 1.0,
v3 <--> v2 = cv32 ,
v3 <--> v3 = cv33(0.5),
v4 <--> v2 = cv42,
v4 <--> v3 = .7,
v4 <--> v4 = cv44;
Path Syntax for Specifying Nonredundant Covariances
PATH two-head-arrow ( var_list ) <= parameter-spec> <, ...> ;
This syntax enables you to specify all the nonredundant covariances among the variables in the var_list. For example, the following statement specifies all the nonredundant covariances among v2-v5:
<--> (v2-v5) = cv1-cv6;
This specification is equivalent to the following elementwise specification:
v3 <--> v2 = cv1 ,
v4 <--> v2 = cv2 ,
v4 <--> v3 = cv3 ,
v5 <--> v2 = cv4 ,
v5 <--> v3 = cv5 ,
v5 <--> v4 = cv6 ;
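Both bracket forms expand as a lower triangle over the variable list. This Python sketch (our own helper, not CALIS syntax) reproduces the elementwise orders shown above: the square-bracket form includes the diagonal variances, the parenthesis form excludes it:

```python
def two_headed_entries(variables, include_diagonal):
    """Lower-triangle expansion: [list] yields variances plus
    nonredundant covariances (include_diagonal=True); (list) yields
    nonredundant covariances only (include_diagonal=False)."""
    return [(variables[i], variables[j])
            for i in range(len(variables))
            for j in range(i + 1 if include_diagonal else i)]

print(two_headed_entries(["v2", "v3", "v4"], True))        # [v2-v4] form
print(two_headed_entries(["v2", "v3", "v4", "v5"], False)) # (v2-v5) form
```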
Path Syntax for Specifying Means or Intercepts
PATH 1 right-arrow var_list <= parameter-spec> <, ...> ;
where a right-arrow is one of the following:
--->, -->, ->, or >
This syntax enables you to specify the means or intercepts of the variables in the var_list list as paths from the constant 1. Consider the following example:
1 ---> v1 = alpha,
1 ---> v2-v4 = 3*kappa;
The first path entry specifies the mean or intercept of v1 as a free parameter named alpha. The second path entry specifies the means or intercepts of v2-v4 as constrained parameters. All these
means or intercepts are named kappa so that they have the same estimate.
Whether the mean or the intercept is specified depends on whether the variable is endogenous or exogenous. The intercept is specified if the variable is endogenous in the model; otherwise, the mean of
the variable is specified. Because any variable in the model has either a mean or an intercept (but not both) to specify, the "shared" syntax for the means and intercepts
specification does not cause any conflicts.
Plot Nyquist response of dynamic system
The nyquistplot function plots the Nyquist response of a dynamic system model and returns a NyquistPlot chart object. To customize the plot, modify the properties of the chart object using dot
notation. For more information, see Customize Linear Analysis Plots at Command Line (Control System Toolbox).
To obtain Nyquist response data, use nyquist.
np = nyquistplot(sys) plots the Nyquist response of the dynamic system model sys and returns the corresponding chart object.
If sys is a multi-input, multi-output (MIMO) model, then nyquistplot produces a grid of Nyquist plots with each plot displaying the response of one input-output pair.
np = nyquistplot(sys1,sys2,...,sysN) plots the Nyquist response of multiple dynamic systems sys1,sys2,…,sysN on the same plot. All systems must have the same number of inputs and outputs to use this syntax.
np = nyquistplot(sys1,LineSpec1,...,sysN,LineSpecN) sets the line style, marker type, and color for the Nyquist plot of each system.
np = nyquistplot(___,w) plots responses for frequencies specified in w. You can specify a frequency range or a vector of frequencies. You can use w with any of the previous syntaxes.
np = nyquistplot(___,plotoptions) plots the Nyquist response with the plotting options specified in plotoptions. Settings you specify in plotoptions override the plotting preferences for the current
MATLAB^® session. This syntax is useful when you want to write a script to generate multiple plots that look the same regardless of the local preferences.
np = nyquistplot(parent,___) plots the Nyquist response in the specified parent graphics container, such as a Figure or TiledChartLayout, and sets the Parent property. Use this syntax when you want
to create a plot in a specified open figure or when creating apps in App Designer.
Input Arguments
sys — Dynamic system
dynamic system model | model array
Dynamic system, specified as a SISO or MIMO dynamic system model or array of dynamic system models. Dynamic systems that you can use include:
• Continuous-time or discrete-time numeric LTI models, such as tf (Control System Toolbox), zpk (Control System Toolbox), or ss (Control System Toolbox) models.
• Sparse state-space models, such as sparss (Control System Toolbox) or mechss (Control System Toolbox) models.
• Generalized or uncertain LTI models such as genss (Control System Toolbox) or uss (Robust Control Toolbox) models. Using uncertain models requires Robust Control Toolbox™ software.
□ For tunable control design blocks, the function evaluates the model at its current value to plot the response.
□ For uncertain control design blocks, the function plots the nominal value and random samples of the model.
• Identified LTI models, such as idtf, idss, or idproc models.
If sys is an array of models, the plot shows responses of all models in the array on the same axes.
LineSpec — Line style, marker, and color
string | character vector
Line style, marker, and color, specified as a string or character vector containing symbols. The symbols can appear in any order. You do not need to specify all three characteristics (line style,
marker, and color). For example, if you omit the line style and specify the marker, then the plot shows only the marker and no line.
Example: '--or' is a red dashed line with circle markers
Line Style Description
"-" Solid line
"--" Dashed line
":" Dotted line
"-." Dash-dotted line
Marker Description
"o" Circle
"+" Plus sign
"*" Asterisk
"." Point
"x" Cross
"_" Horizontal line
"|" Vertical line
"s" Square
"d" Diamond
"^" Upward-pointing triangle
"v" Downward-pointing triangle
">" Right-pointing triangle
"<" Left-pointing triangle
"p" Pentagram
"h" Hexagram
Color Description
"r" red
"g" green
"b" blue
"c" cyan
"m" magenta
"y" yellow
"k" black
"w" white
w — Frequencies
{wmin,wmax} | vector | []
Frequencies at which to compute the response, specified as one of the following:
• Cell array of the form {wmin,wmax} — Compute the response at frequencies in the range from wmin to wmax. If wmax is greater than the Nyquist frequency of sys, the response is computed only up to
the Nyquist frequency.
• Vector of frequencies — Compute the response at each specified frequency. For example, use logspace to generate a row vector with logarithmically spaced frequency values. The vector w can contain
both positive and negative frequencies.
• [] — Automatically select frequencies based on system dynamics.
Specify frequencies in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of the model.
plotoptions — Nyquist plot options
nyquistoptions object
Nyquist plot options, specified as a nyquistoptions object. You can use these options to customize the Nyquist plot appearance. Settings you specify in plotoptions override the preference settings
for the current MATLAB session.
parent — Parent container
Figure object (default) | TiledChartLayout object | UIFigure object | UIGridLayout object | UIPanel object | UITab object
Parent container of the chart, specified as one of the following objects:
• Figure
• TiledChartLayout
• UIFigure
• UIGridLayout
• UIPanel
• UITab
The properties listed here are only a subset. For a complete list, see NyquistPlot Properties (Control System Toolbox).
Responses — Model responses
NyquistResponse object | array of NyquistResponse objects
Model responses, specified as a NyquistResponse object or an array of such objects. Use this property to modify the dynamic system model or appearance for each response in the plot. Each
NyquistResponse object has the following fields.
SourceData — Source data
Source data for the response, specified as a structure with the following fields.
Model — Dynamic system
dynamic system model | model array
Dynamic system, specified as a SISO or MIMO dynamic system model or array of dynamic system models.
When you initially create a plot, Model matches the value you specify for sys.
FrequencySpec — Frequencies
{wmin,wmax} | vector | []
Frequencies at which to compute the response, specified as one of the following:
• Cell array of the form {wmin,wmax} — Compute the response at frequencies in the range from wmin to wmax.
• Vector of frequencies — Compute the response at each specified frequency. For example, use logspace to generate a row vector with logarithmically spaced frequency values. The vector w can contain
both positive and negative frequencies.
• [] — Automatically select frequencies based on system dynamics.
Specify frequencies in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of the model.
When you initially create a plot:
• FrequencySpec matches the value you specify for the w argument.
• If you do not specify w, FrequencySpec is empty and frequencies are selected based on the system dynamics.
Name — Response name
string | character vector
Response name, specified as a string or character vector and stored as a string.
Visible — Response visibility
"on" (default) | on/off logical value
Response visibility, specified as one of the following logical on/off values:
• "on", 1, or true — Display the response in the plot.
• "off", 0, or false — Do not display the response in the plot.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
LegendDisplay — Option to list response in legend
"on" (default) | on/off logical value
Option to list response in legend, specified as one of the following logical on/off values:
• "on", 1, or true — List the response in the legend.
• "off", 0, or false — Do not list the response in the legend.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
MarkerStyle — Marker style
"none" | "o" | "+" | "*" | "." | ...
Marker style, specified as one of the following values.
Marker Description
"none" No marker
"o" Circle
"+" Plus sign
"*" Asterisk
"." Point
"x" Cross
"_" Horizontal line
"|" Vertical line
"s" Square
"d" Diamond
"^" Upward-pointing triangle
"v" Downward-pointing triangle
">" Right-pointing triangle
"<" Left-pointing triangle
"p" Pentagram
"h" Hexagram
Color — Plot color
RGB triplet | hexadecimal color code | color name
Plot color, specified as an RGB triplet or a hexadecimal color code and stored as an RGB triplet.
Alternatively, you can specify some common colors by name. The following table lists these colors and their corresponding RGB triplets and hexadecimal color codes.
Color Name RGB Triplet Hexadecimal Color Code
"red" or "r" [1 0 0] #FF0000
"green" or "g" [0 1 0] #00FF00
"blue" or "b" [0 0 1] #0000FF
"cyan" or "c" [0 1 1] #00FFFF
"magenta" or "m" [1 0 1] #FF00FF
"yellow" or "y" [1 1 0] #FFFF00
"black" or "k" [0 0 0] #000000
"white" or "w" [1 1 1] #FFFFFF
LineStyle — Line style
"-" | "--" | ":" | "-."
Line style, specified as one of the following values.
Line Style Description
"-" Solid line
"--" Dashed line
":" Dotted line
"-." Dash-dotted line
MarkerSize — Marker size
positive scalar
Marker size, specified as a positive scalar.
LineWidth — Line width
positive scalar
Line width, specified as a positive scalar.
Characteristics — Response characteristics
CharacteristicsManager object
Response characteristics to display in the plot, specified as a CharacteristicsManager object with the following properties.
FrequencyPeakResponse — Visibility of peak response
CharacteristicOption object
Visibility of peak response in magnitude plot, specified as a CharacteristicOption object with the following property.
Visible — Peak response visibility
"off" (default) | on/off logical value
Peak response visibility, specified as one of the following logical on/off values:
• "on", 1, or true — Display the peak response.
• "off", 0, or false — Do not display the peak response.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
AllStabilityMargins — Visibility of all stability margins
CharacteristicOption object
Visibility of all stability margins, specified as a CharacteristicOption object with the following property.
Visible — Margin visibility
"off" (default) | on/off logical value
Margin visibility, specified as one of the following logical on/off values:
• "on", 1, or true — Display the margins.
• "off", 0, or false — Do not display the margins.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
MinimumStabilityMargins — Visibility of minimum stability margins
CharacteristicOption object
Visibility of minimum stability margins, specified as a CharacteristicOption object with the following property.
Visible — Margin visibility
"off" (default) | on/off logical value
Margin visibility, specified as one of the following logical on/off values:
• "on", 1, or true — Display the margins.
• "off", 0, or false — Do not display the margins.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
ConfidenceRegion — Confidence region
CharacteristicOption object
Confidence region for identified models, specified as a CharacteristicOption object with the following properties.
Visible — Confidence region visibility
"off" (default) | on/off logical value
Confidence region visibility, specified as one of the following logical on/off values:
• "on", 1, or true — Display the confidence region.
• "off", 0, or false — Do not display the confidence region.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
DisplaySampling — Frequency spacing of confidence ellipses
5 (default) | positive integer
Frequency spacing of confidence ellipses used to plot the confidence region, specified as a positive integer. For example, when DisplaySampling is 5 the confidence ellipses are shown at every fifth
frequency sample.
NumberOfStandardDeviations — Number of standard deviations
1 (default) | positive scalar
Number of standard deviations to display for the confidence region, specified as a positive scalar.
ConfidenceRegion is supported only for identified models.
ShowNegativeFrequencies — Option to show response for negative frequencies
"on" (default) | on/off logical value
Option to show response for negative frequencies, specified as one of the following logical on/off values:
• "on", 1, or true — Display the response for negative frequencies.
• "off", 0, or false — Do not display the response for negative frequencies.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
FrequencyUnit — Frequency units
"rad/s" | "Hz" | "rpm" | ...
Frequency units, specified as one of the following values:
• "Hz"
• "rad/s"
• "rpm"
• "kHz"
• "MHz"
• "GHz"
• "rad/nanosecond"
• "rad/microsecond"
• "rad/millisecond"
• "rad/minute"
• "rad/hour"
• "rad/day"
• "rad/week"
• "rad/month"
• "rad/year"
• "cycles/nanosecond"
• "cycles/microsecond"
• "cycles/millisecond"
• "cycles/hour"
• "cycles/day"
• "cycles/week"
• "cycles/month"
• "cycles/year"
By default, the response uses the frequency units of the plotted linear system. You can override the default units by specifying toolbox preferences. For more information, see Specify Toolbox
Preferences for Linear Analysis Plots.
MagnitudeUnit — Magnitude units
"dB" | "abs"
Magnitude units, specified as one of the following:
• "dB" — Decibels
• "abs" — Absolute value
The default magnitude units depend on the toolbox preferences. For more information, see Specify Toolbox Preferences for Linear Analysis Plots.
PhaseUnit — Phase units
"deg" | "rad"
Phase units, specified as one of the following:
• "deg" — Degrees
• "rad" — Radians
The default phase units depend on the toolbox preferences. For more information, see Specify Toolbox Preferences for Linear Analysis Plots.
Visible — Chart visibility
"on" (default) | on/off logical value
Chart visibility, specified as one of the following logical on/off values:
• "on", 1, or true — Display the chart.
• "off", 0, or false — Hide the chart without deleting it. You still can access the properties of chart when it is not visible.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState.
IOGrouping — Grouping of inputs and outputs pairs
"none" (default) | "inputs" | "outputs" | "all"
Grouping of inputs and outputs pairs, specified as one of the following:
• "none" — Do not group inputs or outputs.
• "inputs" — Group only inputs.
• "outputs" — Group only outputs.
• "all" — Group all input-output pairs.
InputVisible — Option to display inputs
on/off logical value | array of on/off logical values
Option to display inputs, specified as one of the following logical on/off values or an array of such values:
• "on", 1, or true — Display the corresponding input.
• "off", 0, or false — Hide the corresponding input.
InputVisible is an array when the plotted system has multiple inputs. By default, all inputs are visible in the plot.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState or an array of such values.
OutputVisible — Option to display outputs
on/off logical value | array of on/off logical values
Option to display outputs, specified as one of the following logical on/off values or an array of such values:
• "on", 1, or true — Display the corresponding output.
• "off", 0, or false — Hide the corresponding output.
OutputVisible is an array when the plotted system has multiple outputs. By default, all outputs are visible in the plot.
The value is stored as an on/off logical value of type matlab.lang.OnOffSwitchState or an array of such values.
Object Functions
addResponse Add dynamic system response to existing response plot
showConfidence Display confidence regions on response plots for identified models
zoomcp Zoom Nyquist plot to region around critical point
Customize Nyquist Plot
For this example, use the plot handle to change the phase units to radians and to turn the grid on.
Generate a random state-space model with 5 states and create the Nyquist diagram with chart object np.
sys = rss(5);
np = nyquistplot(sys);
Change the phase units to radians and turn on the grid. To do so, edit properties of the chart object.
np.PhaseUnit = "rad";
grid on;
The Nyquist plot automatically updates when you modify the chart object.
Alternatively, you can use the nyquistoptions command to specify the required plot options. First, create an options set based on the toolbox preferences.
plotoptions = nyquistoptions("cstprefs");
Change properties of the options set by setting the phase units to radians and enabling the grid.
plotoptions.PhaseUnits = "rad";
plotoptions.Grid = "on";
Depending on your own toolbox preferences, the plot you obtain might look different from this plot. Only the properties that you set explicitly, in this example PhaseUnits and Grid, override the
toolbox preferences.
Customize Nyquist Plot Title
Create a Nyquist plot of a dynamic system model and create the corresponding chart object.
sys = tf(100,[1,2,1]);
np = nyquistplot(sys);
Change the text of the plot title.
np.Title.String = "Nyquist Plot of sys";
Zoom on Critical Point
Plot the Nyquist frequency response of a dynamic system. Assign a variable name to the plot handle so that you can access it for further manipulation.
sys = tf(100,[1,2,1]);
h = nyquistplot(sys);
Zoom in on the critical point, (–1,0). You can do so interactively by right-clicking on the plot and selecting Zoom on (-1,0). Alternatively, use the zoomcp command on the plot handle h.
Nyquist Plot of Identified Models with Confidence Regions at Selected Points
Compare the frequency responses of identified state-space models of order 2 and 6 along with their 1-std confidence regions rendered at every 50th frequency sample.
Load the identified model data and estimate the state-space models using n4sid. Then, plot the Nyquist diagram.
load iddata1
sys1 = n4sid(z1,2);
sys2 = n4sid(z1,6);
w = linspace(10,10*pi,256);
np = nyquistplot(sys1,sys2,w);
Both models produce about 76% fit to data. However, sys2 shows higher uncertainty in its frequency response, especially close to the Nyquist frequency, as shown by the plot. To see this, show the
confidence region at a subset of the points at which the Nyquist response is displayed.
np.ShowNegativeFrequencies = "off";
np.Characteristics.ConfidenceRegion.DisplaySampling = 50;
np.Characteristics.ConfidenceRegion.Visible = "on";
Alternatively, to turn on the confidence region display, right-click the plot and select Characteristics > Confidence Region.
Nyquist Plot with Specific Customization
For this example, consider a MIMO state-space model with 3 inputs, 3 outputs, and 3 states. Create a Nyquist plot and display only the partial contour.
Create the MIMO state-space model sys_mimo.
J = [8 -3 -3; -3 8 -3; -3 -3 8];
F = 0.2*eye(3);
A = -J\F;
B = inv(J);
C = eye(3);
D = 0;
sys_mimo = ss(A,B,C,D);
State-space model with 3 outputs, 3 inputs, and 3 states.
Create a Nyquist plot with chart object np.
np = nyquistplot(sys_mimo);
Suppress the negative frequency data from the plot.
np.ShowNegativeFrequencies = "off";
The Nyquist plot automatically updates when you modify the chart object. For MIMO models, nyquistplot produces an array of Nyquist diagrams, each plot displaying the frequency response of one I/O pair.
• There are two zoom options available from the right-click menu that apply specifically to Nyquist plots:
□ Full View — Clips unbounded branches of the Nyquist plot, but still includes the critical point (–1, 0).
□ Zoom on (-1,0) — Zooms around the critical point (–1,0). To access critical-point zoom programmatically, use the zoomcp command.
Version History
Introduced in R2012a
R2024b: Improved customization workflows and integration with MATLAB plotting tools
Starting in R2024b, nyquistplot returns a NyquistPlot chart object. Previously, the nyquistplot function returned a handle to the resulting plot.
The new chart object allows you to customize your plot using dot notation.
The new chart object also improves integration with MATLAB plotting tools. For example:
• You can now add Nyquist plots to tiled chart layouts.
• Saving and loading a parent figure now maintains full plot interactivity.
• You can add a response to your Nyquist plot using the new addResponse function.
The following functionality changes might require updates to your code.
• The gca function now returns the chart object rather than an axes within the plot.
• You can no longer access the graphics objects within a Nyquist plot using the Children property of its parent figure.
Measures of statistical uncertainty in ONS local authority mid-year population estimates
1. Main temporary changes
• Incorporating the merging of local authorities
• Changes to the international migration component
Back to table of contents
3. Temporary method changes
Incorporating the merging of local authorities
The three main sources of uncertainty associated with the mid-year population estimates (MYEs) are the census base, international migration, and internal migration (moves between local authorities
(LAs)). Uncertainty in the other components of change (births, deaths, asylum seekers, armed forces, and prisoners) is assumed to be zero.
The methodology for producing internal migration uncertainty has remained the same for local authorities whose geographic boundaries have not changed in recent years. Further details on this process
can be found in Methodology for measuring uncertainty in ONS local authority mid-year population estimates: 2012 to 2016. In 2019 and 2020, a number of local authorities merged into new ones. Table 1
outlines which local authorities were affected. Because of certain features intrinsic to the internal migration methodology, it is not possible to process them the same as other local authorities.
Table 1: Boundary changes to local authorities 2019 to 2020
Old local authorities → New local authority (year of boundary change)
• E06000028 Bournemouth, E07000048 Christchurch, E06000029 Poole → E06000058 Bournemouth, Christchurch and Poole (2019)
• E07000053 Weymouth and Portland, E07000052 West Dorset, E07000050 North Dorset, E07000051 Purbeck, E07000049 East Dorset → E06000059 Dorset (2019)
• E07000191 West Somerset, E07000190 Taunton Deane → E07000246 Somerset West and Taunton (2019)
• E07000205 Suffolk Coastal, E07000206 Waveney → E07000244 East Suffolk (2019)
• E07000201 Forest Heath, E07000204 St Edmundsbury → E07000245 West Suffolk (2019)
• E07000004 Aylesbury Vale, E07000005 Chiltern, E07000006 South Bucks, E07000007 Wycombe → E06000060 Buckinghamshire (2020)
Download this table Table 1: Boundary changes to local authorities 2019 to 2020
.xls .csv
We calculated the relative width and position of the uncertainty intervals for each LA in the newly merged LAs. We combined them into a weighted average to produce uncertainty intervals for the 2020
MYE in the following manner:

x_LB = LB[2019] / MYE[2019],  x_UB = UB[2019] / MYE[2019]

Where MYE[2019] is the 2019 mid-year estimate, LB[2019] is the lower bound of the uncertainty interval for the 2019 mid-year estimate, UB[2019] is the corresponding upper bound, and x_LB and x_UB are the
corresponding relative uncertainty bounds.
The relative uncertainty bounds for the merged LAs are calculated as a weighted average of those of their constituent LAs:

x_T = Σ_i w_i x_i    (Equation 3)

Where x_i is the relative uncertainty bound for LA_i, w_i is the proportion of the population in LA_i, and x_T is the relative uncertainty bound for the merged LA. Equation 3 is calculated
separately for the lower and upper bounds of the mid-year estimate.
The relative uncertainty bounds for the 2020 mid-year estimates are then calculated as:
This was calculated for all local authorities in Table 1.
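To make the weighted-average step in Equation 3 concrete, here is a small sketch. All numbers below (relative bounds, population shares, and the 2020 estimate) are made up for illustration only; they are not ONS figures.

```python
# Illustrative only: made-up relative lower/upper bounds and population
# shares for three constituent local authorities merging into one.
x_lower = [0.97, 0.96, 0.98]   # relative lower bounds (LB / MYE) per constituent LA
x_upper = [1.03, 1.05, 1.02]   # relative upper bounds (UB / MYE) per constituent LA
w = [0.5, 0.3, 0.2]            # population shares of each constituent LA, summing to 1

# Equation 3: population-weighted average, applied separately to each bound
x_T_lower = sum(wi * xi for wi, xi in zip(w, x_lower))
x_T_upper = sum(wi * xi for wi, xi in zip(w, x_upper))

# Apply the merged relative bounds to a (made-up) 2020 mid-year estimate
MYE_2020 = 400_000
lower_bound = x_T_lower * MYE_2020
upper_bound = x_T_upper * MYE_2020
print(round(lower_bound), round(upper_bound))
```

With these illustrative shares, the merged interval inherits most of its width from the largest constituent authority, which is the intended behavior of a population-weighted average.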
Changes to the international migration component
We use bootstrapping to simulate uncertainty around the international migration component; however, this was not possible here because of computational issues encountered in implementing the method.
Instead, we use the international migration point estimates and uncertainty intervals from 2011 to 2019 for each LA to identify patterns in the widths of the intervals and the position of the
point estimate within the interval over time. No specific patterns were found. Therefore, we calculate the relative width of the 2019 uncertainty intervals and apply it to the 2020 MYE for the international
migration component.
The estimates for the 2020 international migration component are based on the International Passenger Survey (IPS) data up to March 2020, and modelled migration estimates for the period after March
2020 when the IPS was suspended because of coronavirus (COVID-19). Figure 1 summarises the changes that have been made to produce the 2020 mid-year estimate uncertainty.
Figure 1: 2020 mid-year estimate cohort component method and statistical uncertainty
Source: Office for National Statistics
Download this image Figure 1: 2020 mid-year estimate cohort component method and statistical uncertainty
.png (30.7 kB)
Back to table of contents
4. Location of the MYEs in their uncertainty intervals
We produce uncertainty intervals for all local authorities in England and Wales.
Table 2 shows that for most local authorities, the mid-year population estimate (MYE) no longer sits within its uncertainty interval in 2020.
Over time, a growing number of local authority MYEs fall outside of their empirical 95% uncertainty bounds. By 2020, this is the case for 161 local authorities. This is consistent with our
understanding that estimation of the population becomes progressively more difficult as we move away from the census.
Table 2: Position of local authority mid-year population estimates relative to their empirical 95% uncertainty intervals, 2011 to 2020
Year  Number within  %  Number above  %  Number below  %
2011 348 100.00
2012 347 99.71 1 0.29
2013 316 90.80 28 8.05 4 1.15
2014 271 77.87 66 18.97 11 3.16
2015 237 68.10 95 27.30 16 4.60
2016 218 62.64 108 31.03 22 6.32
2017 195 56.03 120 34.48 33 9.48
2018 187 53.74 123 35.34 38 10.92
2019 177 50.86 130 37.36 41 11.78
2020 161 47.92 125 37.20 50 14.88
Download this table Table 2: Position of local authority mid-year population estimates relative to their empirical 95% uncertainty intervals, 2011 to 2020
.xls .csv
Table 3 outlines how the local authorities are interacting with the uncertainty interval. Figures 2 to 7 provide illustrative examples of local authorities for each position in this table.
Table 3: Position of local authority mid-year population estimates relative to their uncertainty intervals
Position over time: Number of LAs (empirical 95% interval)
MYE sits within the uncertainty interval: 76
MYE drifts to upper bound: 44
MYE drifts to lower bound: 34
MYE crosses upper bound: 125
MYE crosses lower bound: 47
MYE follows none of these trends: 10
Total: 336
Download this table Table 3: Position of local authority mid-year population estimates relative to their uncertainty intervals
.xls .csv
Figure 2: The mid-year population estimate sits within its uncertainty intervals, 2011 to 2020 – Boston
Source: Office for National Statistics – measures of statistical uncertainty
Download this chart Figure 2: The mid-year population estimate sits within its uncertainty intervals, 2011 to 2020 – Boston
Image .csv .xls
Figure 3: The mid-year population estimate drifts to the upper bound of the uncertainty intervals, 2011 to 2020 – Castle Point
Source: Office for National Statistics – measures of statistical uncertainty
Download this chart Figure 3: The mid-year population estimate drifts to the upper bound of the uncertainty intervals, 2011 to 2020 – Castle Point
Image .csv .xls
Figure 4: The mid-year population estimate drifts to the lower bound of the uncertainty intervals, 2011 to 2020 – Cardiff
Source: Office for National Statistics – measures of statistical uncertainty
Download this chart Figure 4: The mid-year population estimate drifts to the lower bound of the uncertainty intervals, 2011 to 2020 – Cardiff
Image .csv .xls
Figure 5: The mid-year population estimate crosses the upper bound of the uncertainty intervals, 2011 to 2020 - Mid Devon
Source: Office for National Statistics – measures of statistical uncertainty
Download this chart Figure 5: The mid-year population estimate crosses the upper bound of the uncertainty intervals, 2011 to 2020 - Mid Devon
Image .csv .xls
Figure 6: The mid-year population estimate crosses the lower bound of the uncertainty intervals, 2011 to 2020 – Cheltenham
Source: Office for National Statistics – measures of statistical uncertainty
Download this chart Figure 6: The mid-year population estimate crosses the lower bound of the uncertainty intervals, 2011 to 2020 – Cheltenham
Image .csv .xls
Figure 7: The mid-year population estimate follows none of the trends seen elsewhere, 2011 to 2020 – Hammersmith and Fulham
Source: Office for National Statistics – measures of statistical uncertainty
Download this chart Figure 7: The mid-year population estimate follows none of the trends seen elsewhere, 2011 to 2020 – Hammersmith and Fulham
Image .csv .xls
Back to table of contents
%0 Journal Article %T Nitrate removal in stream ecosystems measured by 15N addition experiments: Total uptake %A Hall, R. O. %A Tank, J. L. %A Sobota, D. J. %A Mulholland, P. J. %A O'Brien, J. M. %A
Dodds, W. K. %A Webster, J. R. %A Valett, H. M. %A Poole, G. C. %A Peterson, B. J. %A Meyer, J. L. %A McDowell, W. H. %A Johnson, S. L. %A Hamilton, S. K. %A Grimm, N. B. %A Gregory, S. V. %A Dahm,
C. N. %A Cooper, L. W. %A Ashkenas, L. R. %A Thomas, S. M. %A Sheibley, R. W. %A Potter, J. D. %A Niederlehner, B. R. %A Johnson, L. T. %A Helton, A. M. %A Crenshaw, C. M. %A Burgin, A. J. %A Bernot,
M. J. %A Beaulieu, J. J. %A Arango, C. P. %J Limnology and Oceanography %V 54 %P 653-665 %D 2009 %X
We measured uptake length of (NO3)-N-15- in 72 streams in eight regions across the United States and Puerto Rico to develop quantitative predictive models on controls of NO3- uptake length. As part
of the Lotic Intersite Nitrogen eXperiment II project, we chose nine streams in each region corresponding to natural (reference), suburban-urban, and agricultural land uses. Study streams spanned a
range of human land use to maximize variation in NO3- concentration, geomorphology, and metabolism. We tested a causal model predicting controls on NO3- uptake length using structural equation
modeling. The model included concomitant measurements of ecosystem metabolism, hydraulic parameters, and nitrogen concentration. We compared this structural equation model to multiple regression
models which included additional biotic, catchment, and riparian variables. The structural equation model explained 79% of the variation in log uptake length (S-Wtot). Uptake length increased with
specific discharge (Q/w) and increasing NO3- concentrations, showing a loss in removal efficiency in streams with high NO3- concentration. Uptake lengths shortened with increasing gross primary
production, suggesting autotrophic assimilation dominated NO3- removal. The fraction of catchment area as agriculture and suburban urban land use weakly predicted NO3- uptake in bivariate regression,
and did improve prediction in a set of multiple regression models. Adding land use to the structural equation model showed that land use indirectly affected NO3- uptake lengths via directly
increasing both gross primary production and NO3- concentration. Gross primary production shortened SWtot, while increasing NO3- lengthened SWtot resulting in no net effect of land use on NO3-
%R 10.4319/lo.2009.54.3.0653 %M KBS.2204
Cheerio Contest 1 J5 - Arithmetic Sequence
Submit solution
Points: 10 (partial)
Time limit: 1.0s
Memory limit: 512M
You are given a list of integers. Determine whether or not you can modify the list into an arithmetic sequence by rearranging the list and/or changing the value of at most one number.
An arithmetic sequence is a sequence in which each term is obtained by adding a constant number, called the common difference, to the previous term. For example, is an arithmetic sequence with a
common difference of .
For all subtasks:
Points Awarded
6 points
5 points
4 points
Input Specification
The first line contains one integer .
The second line contains integers .
Output Specification
Output YES if the list can be modified to become an arithmetic sequence and NO otherwise.
Sample Input 1
Sample Output 1
Explanation for Sample Output 1
We can change the number in the list to be . The list of numbers becomes , which can be rearranged to form the arithmetic sequence .
Sample Input 2
Sample Output 2
Explanation for Sample Output 2
The list is already an arithmetic sequence, so no changes are necessary.
• Can change element in array to negative number?
Python Semiconductor Simulation Projects
We support Python semiconductor simulation projects tailored to your needs. Python-based semiconductor simulation models the physical and electrical behavior of semiconductor devices, which is a
genuinely hard problem: it involves numerous coupled factors such as electric fields, charge carrier dynamics, and current-voltage characteristics. Below is a basic outline and a few practical
recommendations for starting such a project efficiently:
Project Outline for Semiconductor Simulation
1. Introduction:
• Provide a brief summary of semiconductor physics.
• Describe the simulation's goal, such as modeling a p-n junction, MOSFET, or solar cell.
2. Setting Up the Environment:
• Python installation and arrangement.
• Essential libraries: Matplotlib, NumPy, SciPy, etc.
3. Basic Semiconductor Theory:
• Doping and intrinsic/extrinsic semiconductors
• Drift and diffusion of carriers
• Charge carriers: electrons and holes
• Energy bands and band gaps
4. Mathematical Modeling:
• Drift-diffusion model
• Poisson’s equation for electrostatics
• Continuity equations for charge carriers
5. Numerical Methods:
• Mainly, for resolving differential equations, finite difference method (FDM) is employed which is examined as a numerical technique.
• For extensive linear models, our team focuses on utilizing iterative solvers.
6. Simulation Implementation:
• Generally, the architecture of the semiconductor such as 1D or 2D grid should be described.
• It is significant to set characteristics of resources like mobility, doping profiles, etc.
• For Poisson’s equation, we intend to apply the numerical solver.
• The drift-diffusion system must be executed for carrier transport.
• Consider boundary constraints and contacts.
7. Visualization and Analysis:
• Current-voltage (I-V) features
• Plotting the potential distribution
• Carrier concentration profiles
8. Advanced Topics (optional):
• 3D modeling
• Quantum effects and tunneling
• Temperature dependence
Example: Simple p-n Junction Simulation
The following is a simple example of how you could begin a 1D p-n junction simulation in Python:
Step 1: Import Required Libraries
import numpy as np
import matplotlib.pyplot as plt
from scipy.constants import k, q, epsilon_0
# Constants
T = 300  # Temperature in Kelvin
V_T = k * T / q  # Thermal voltage
epsilon_si = 11.7 * epsilon_0  # Dielectric constant of silicon
# Doping concentrations
N_A = 1e16  # Acceptor concentration (p-type)
N_D = 1e16  # Donor concentration (n-type)
# Define grid
L = 1e-6  # Length of the device
N = 1000  # Number of grid points
x = np.linspace(0, L, N)
dx = x[1] - x[0]
# Initial potential guess
phi = np.zeros(N)
# Poisson solver (simple relaxation method)
def poisson_solver(phi, rho, epsilon, dx, tol=1e-6, max_iter=10000):
    for _ in range(max_iter):
        phi_new = np.copy(phi)
        phi_new[1:-1] = 0.5 * (phi[:-2] + phi[2:] - dx**2 * rho[1:-1] / epsilon)
        if np.linalg.norm(phi_new - phi) < tol:
            return phi_new
        phi = phi_new
    return phi
# Charge density (assuming complete ionization), as an array so it can be
# indexed per grid point: ionized acceptors (-q*N_A) on the p side of a
# junction placed at L/2, ionized donors (+q*N_D) on the n side
rho = np.where(x < L / 2, -q * N_A, q * N_D)
# Solve Poisson's equation
phi = poisson_solver(phi, rho, epsilon_si, dx)
# Plot the results
plt.plot(x, phi)
plt.xlabel('Position (m)')
plt.ylabel('Potential (V)')
plt.title('Electrostatic Potential in a p-n Junction')
plt.show()
Step 2: Extending to Drift-Diffusion Model
For carrier transport, incorporate the drift-diffusion framework by extending the simple Poisson solver. This involves solving coupled differential equations for the electron and hole densities,
taking boundary conditions and recombination-generation mechanisms into account.
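A small, hedged building block of that extension (the constant values and the function name below are illustrative, not from the original snippet): under Boltzmann statistics, the equilibrium electron and hole densities follow directly from the potential.

```python
import numpy as np

n_i = 1.5e10   # assumed intrinsic carrier concentration of Si (cm^-3)
V_T = 0.02585  # thermal voltage k*T/q at 300 K, in volts

def equilibrium_densities(phi):
    """Electron and hole densities from the potential (Boltzmann statistics)."""
    n = n_i * np.exp(phi / V_T)   # electrons pile up where the potential is high
    p = n_i * np.exp(-phi / V_T)  # holes pile up where the potential is low
    return n, p

n, p = equilibrium_densities(np.array([-0.3, 0.0, 0.3]))
# In equilibrium the mass-action law holds: n * p == n_i**2 at every point.
```

These densities would then feed the continuity equations inside the full coupled (Gummel-style) iteration.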
Step 3: Visualization and Analysis
In order to visualize several metrics like carrier concentration, electric field, and I-V features, it is beneficial to employ Matplotlib.
Supplementary Resources
• Books:
• “Device Electronics for Integrated Circuits” by Richard S. Muller and Theodore I. Kamins
• “Semiconductor Device Fundamentals” by Robert F. Pierret
• Online Courses:
• MIT OpenCourseWare: Introduction to Solid State Chemistry
• Coursera: Semiconductor Physics
• Libraries and Tools:
• NanoTCAD ViDES: It is defined as a freely available device simulator.
• SimPy: Generally, SimPy is described as a process-based discrete-event simulation framework.
Python semiconductor simulation projects
Several projects based on semiconductor simulation are progressing continuously in recent years. Concentrating on various factors of the simulation, we suggest a project which could be divided into
numerous phases:
Project Title: Simulation of p-n Junction Diode Characteristics using Python
1. Introduction
• Goal: Through the utilization of Python, we focus on simulating the carrier distribution, electrostatic potential, and current-voltage (I-V) properties of a p-n junction diode.
• Motivation: For the model and improvement of electronic circuits, it is significant to interpret the characteristics of semiconductor devices. Through computational modeling, offering a realistic
interpretation of semiconductor physics is the major goal of this project.
2. Literature Review
• Semiconductor Basics:
• Doping and its impacts on carrier concentration
• Intrinsic and extrinsic semiconductors
• Charge carriers: electrons and holes
• p-n Junction Theory:
• Built-in potential
• Forward and reverse bias activity
• Creation of depletion region
• Charge distribution in equilibrium
3. Methodology
• Poisson's Equation:
\frac{d^2 \phi}{dx^2} = -\frac{\rho}{\epsilon}
where \phi is the electrostatic potential, \rho is the charge density, and \epsilon is the permittivity.
• Continuity Equations for Electrons and Holes:
\frac{dn}{dt} = \frac{1}{q} \left( \frac{dJ_n}{dx} + G - R \right)
\frac{dp}{dt} = -\frac{1}{q} \left( \frac{dJ_p}{dx} + G - R \right)
where n and p are the electron and hole densities, J_n and J_p are the electron and hole current densities, G is the generation rate, and R is the recombination rate.
• Drift-Diffusion Current Density:
J_n = q n \mu_n \frac{d\phi}{dx} + q D_n \frac{dn}{dx}
J_p = q p \mu_p \frac{d\phi}{dx} - q D_p \frac{dp}{dx}
where \mu_n and \mu_p are the electron and hole mobilities, and D_n and D_p are the diffusion coefficients.
• Finite Difference Method (FDM): The Poisson’s and continuity equations could be categorized through the utilization of FDM technique.
• Newton-Raphson Method: Typically, this method is beneficial for resolving the non-linear system of equations.
• Iterative Solvers: The Conjugate Gradient approach is an iterative solver which is employed for addressing extensive sparse linear models.
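As an illustrative sketch of that last point (the matrix size, grid spacing, and right-hand side below are made up for demonstration), the discretized 1D Poisson operator is a sparse, symmetric positive-definite matrix that SciPy's conjugate gradient solver handles directly:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

N = 200
dx = 1.0 / N

# Standard second-order finite-difference Laplacian with Dirichlet
# boundaries; dividing by dx**2 gives the discrete -d^2/dx^2 operator.
A = diags([-1, 2, -1], [-1, 0, 1], shape=(N, N), format="csr") / dx**2
b = np.ones(N)  # placeholder right-hand side standing in for rho / epsilon

phi, info = cg(A, b)  # info == 0 means the iteration converged
```

The same call scales to the much larger systems produced by 2D or 3D device grids, which is where iterative solvers pay off over dense factorizations.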
4. Implementation
• Python Libraries:
• For numerical calculations, NumPy is highly beneficial.
• Generally, SciPy library is used for scientific computing processes.
• For plotting and visualization, it is advisable to employ Matplotlib.
• Supplementary libraries: PySparse is efficient for sparse matrix processes, SymPy is valuable for symbolic mathematics.
• Modules:
• py: For semiconductor characteristics and evaluations, this module encompasses effective functions and classes.
• solver.py: Numerical solvers are implemented by this module.
• py: All plotting and visualization tasks are managed by this module.
• main.py: This is the main script used to execute the simulation.
1. Define the Semiconductor Structure:
• Specifically, for the p and n regions, we plan to initialize the doping profile.
• The grid and material characteristics should be configured appropriately.
2. Initial Guess for Potential:
• For the potential distribution, our team starts with a realistic initial guess.
3. Solve Poisson’s Equation:
• To categorize and determine the potential, it is beneficial to employ the finite difference technique.
4. Carrier Density Calculation:
• On the basis of the potential distribution, we intend to assess hole and electron densities.
5. Current Density Calculation:
• For electrons and holes, our team focuses on calculating drift and diffusion current densities.
6. Iterate to Self-Consistency:
• Until the solution converges, we repeat the procedure in an efficient manner.
7. Visualization:
• Typically, the I-V features, potential distribution, and carrier densities have to be plotted.
5. Results and Analysis
• Potential Distribution: Among the p-n junction, we aim to visualize the electrostatic potential.
• Carrier Distribution: The hole and electron densities ought to be plotted.
• I-V Characteristics: Under various biasing scenarios, our team simulates and plots the current-voltage relationship.
6. Conclusion
• Summary: The major outcomes and perceptions which are obtained from the simulation have to be outlined.
• Future Work: Potential developments like simulating other semiconductor devices such as MOSFETs or solar cells or combining highly innovative frameworks must be recommended.
Through this article, we have offered a simple overview and several recommendations that assist you to initiate a semiconductor simulation project effectively. Also, considering various factors of
the simulation, a project that could be divided into numerous steps is suggested by us in an explicit manner.
We have all the needed tools and resources to get your work done, drop us all your query we will give you best programming and coding guidance. Get a perfect thesis done by us, we assure you that all
your research needs done perfectly. | {"url":"https://matlabsimulation.com/python-semiconductor-simulation/","timestamp":"2024-11-10T09:42:47Z","content_type":"text/html","content_length":"74428","record_id":"<urn:uuid:aec337cf-09b5-401c-9bff-236e10633d17>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00275.warc.gz"} |
FILTER last n valid entries
The goal in this example is to display the last 3 valid entries from the table shown, where "valid" is defined as a temperature of less than 75 in the "Temp" column. At a high level, the FILTER
function is used to filter entries based on a logical test, and the INDEX function is used to extract the last 3 entries from the filtered list. Working from the inside out, we use the SEQUENCE
function to construct a row number value for the INDEX function like this:
SEQUENCE is configured to create an array of 3 rows x 1 column. The step value is -1, and the start number is defined by this snippet:
SUM(--(temp<75)) // returns 7
Here we are counting temp values less than 75. Because the named range temp contains twelve values, the result is an array of 12 TRUE and FALSE values:
The double negative (--) is used to coerce the TRUE and FALSE results to 1s and 0s, and the SUM function returns the total:
SUM({1;1;1;0;0;1;1;0;0;1;0;1}) // returns 7
This number is returned directly to SEQUENCE for the start value. Now we have:
We use SORT to ensure that values are returned in the same order they appear in the source data. This array is handed off to the INDEX function as the row_num argument:
In a similar way, SEQUENCE is also used to generate an array for columns:
SEQUENCE(1,COLUMNS(data)) // returns {1,2,3}
which is given to INDEX for the columns argument. Now we have:
The next step is to construct the array for INDEX to work with. We only want to work with "valid" entries, so we use the FILTER function to retrieve a list of entries where the temp value is less
than 75:
The array argument is data, and the include argument is the expression temp<75. This can be translated literally as "return values from the named range data where values in temp are less than 75".
The result is a 2D array with 3 columns and 7 rows:
Notice that rows associated with temp values greater than or equal to 75 have been removed. This array is returned to the INDEX function for its array argument.
Finally, the INDEX function returns the last 3 entries from the array returned by FILTER.
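The same filter-then-take-last-n idea can be sketched outside Excel; below is a rough Python equivalent that models only the Temp column, using values that reproduce the TRUE/FALSE pattern shown above:

```python
# Stand-in for the worksheet's named range `temp` (made-up values that
# match the {1;1;1;0;0;1;1;0;0;1;0;1} pattern worked out in the article).
temps = [70, 71, 72, 80, 85, 69, 73, 90, 78, 74, 77, 68]

valid = [t for t in temps if t < 75]  # the FILTER step
last3 = valid[-3:]                    # the INDEX/SEQUENCE step

print(len(valid))  # 7, matching SUM(--(temp<75))
print(last3)       # [73, 74, 68]
```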
Note: Both the value for n and the logic used to test for valid entries is arbitrary in this example and can be adjusted as needed to suit your needs. | {"url":"https://exceljet.net/formulas/filter-last-n-valid-entries","timestamp":"2024-11-07T07:07:45Z","content_type":"text/html","content_length":"55191","record_id":"<urn:uuid:aabab58a-ae10-4863-b1d1-61028e8ab92d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00450.warc.gz"} |
What is the present value of a perpetuity that pays 1000, Financial Management
What is the present value of a perpetuity that pays $1,000 per year, beginning one year from now, if the appropriate interest rate is 2%? (Round off to the nearest dollar and ignore the $ sign in
your input.)
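For reference only (this worked check is not part of the original question page): the closed-form present value of a level perpetuity whose first payment of C arrives one year from now, discounted at rate r, is PV = C / r.

```python
C = 1000.0  # annual payment
r = 0.02    # interest rate

pv = C / r
print(round(pv))  # 50000

# Sanity check: the partial sum of discounted payments approaches C / r.
approx = sum(C / (1 + r) ** t for t in range(1, 3001))
```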
Request for Solution File
Ask an Expert for Answer!!
Financial Management: What is the present value of a perpetuity that pays 1000
Reference No:- TGS01084583
Expected delivery within 24 Hours | {"url":"https://www.tutorsglobe.com/question/what-is-the-present-value-of-a-perpetuity-that-pays-1000-51084583.aspx","timestamp":"2024-11-10T10:43:42Z","content_type":"text/html","content_length":"43348","record_id":"<urn:uuid:e93c1cae-5531-4e21-a6f8-77b28371eecc>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00492.warc.gz"} |
Workshop on Generic Programming
Last updated on 31st July 2001
Participants; Programme; Travel; Accomodation
Workshop on Generic Programming
Thursday 26th July 2001
University of Nottingham
The Workshop on Generic Programming is an informal one-day event for the discussion of recent and ongoing developments in the area of generic programming. The workshop is open to all, and there is no
charge for attending. Please contact the organiser, Graham Hutton, if you would like to attend.
The list of participants is as follows:
Thorsten Altenkirch University of Nottingham
Roland Backhouse University of Nottingham
Ian Bailey University of Oxford
Edwin Brady University of Durham
Paul Callaghan University of Durham
Kieran Clenaghan University of York
Roy Crole University of Leicester
Sharon Curtis University of Stirling
Neil Ghani University of Leicester
Jeremy Gibbons University of Oxford
Ralf Hinze University of Utrecht
Yorck Hunke University of Oxford
Graham Hutton University of Nottingham
Johan Jeuring University of Utrecht
Clare Martin Oxford Brookes University
Conor McBride University of Durham
Simon Peyton Jones Microsoft Research Cambridge
Silvija Seres University of Oxford
Joel Wright University of Nottingham
Wang Yanbing University of Nottingham
The workshop will run from 10am to around 5pm, with breaks for coffee and lunch. I've made a reservation for lunch at Cafe Terrazo on the campus, but you'll have to pay for your own lunch (around 3-5
pounds). After the workshop there will be a BBQ at Roland's house starting at around 6pm, for those who can stay a little longer or are staying overnight. Roland's house is around 10-15 minutes by
taxi from the railway station.
An overhead projector and a data projector linked to a PC (with Powerpoint 97 installed) will be available for presentations, and it will also be possible to connect a laptop to the data projector.
10.00 Ralf Hinze, University of Utrecht
- Functorial unparsing (paper, slides)
Danvy has shown how to implement a variant of C's `printf' function in a statically typed language. We present a new solution to this problem based on the notion of a functor. Our solution
enjoys nice algebraic properties and is more direct than the original one. Along the way, we encounter singleton types, type-indexed values, type-indexed types, multiple-parameter type
classes, and functional dependencies.
10.30 Thorsten Altenkirch, University of Nottingham
- Towards a semantics of generic programming
I will put forward some ideas about how to model generic programs categorically. Many (but not all) generic programs can be understood as folds/unfolds in a category of families. Maybe this
gives rise to the view that generic is not the same as adhoc.
- Coffee
11.30 Sharon Curtis, University of Stirling
- Generic greedy algorithms
Looking at how generic programming concepts do and don't help produce greedy algorithms in the world of relational programming.
12.00 Silvija Seres, University of Oxford
- Higher-order transformation of logic programs (paper)
It has earlier been assumed that a compositional approach to algorithm design and program transformation is somehow unique to functional programming. With this talk we hope to demonstrate that
some of the same techniques and results are applicable to logic programming as well.
- Lunch
14.00 Jeremy Gibbons, University of Oxford
- When is a function a fold or an unfold? (paper, slides)
We present necessary and sufficient conditions for when a (partial or total) function can be expressed as a fold or an unfold. An earlier version of this talk was presented at CMCS2001 in
April; since then, the arguments have been simplified considerably. Curiously, the results are much easier to prove in a relational setting, even though they apply only to functions and not to relations.
14.30 Roland Backhouse, University of Nottingham
- Generic termination (paper, slides)
Generic programming is about parameterising programs by datatypes. The introduction of such parameters in the process of program construction opens up a new dimension for reasoning about
programs. In this talk we consider how termination properties of programs are parameterised by datatypes. In particular, we compare several generalisations of the notions of well-foundedness
and inductivity.
- Coffee
15.30 Conor McBride, University of Durham
- Universe constructions as a medium for generic programming
A `universe', in the sense of this talk, is a collection of types given by a type U of `codes' and a `decoding' function T from U to types. In a dependently typed setting, we can capture many
collections of types over which we wish to define generic operations by universes whose U's are datatypes coding for exactly the types we intend. Correspondingly, systematic type-level
behaviour can be implemented by `ordinary' programming over the U's. I will give some motivating examples, and discuss how Haskell might be extended to support this kind of technique.
16.00 Johan Jeuring, University of Utrecht
- Type-indexed data types (paper)
A polytypic function is a function that can be instantiated on many datatypes to obtain datatype specific functionality. Examples of polytypic functions are the functions for digital
searching, pattern matching, unification, rewriting, and structure editing. For each of these problems, we not only have to define polytypic functionality, but also a type-indexed data type: a
data type that is constructed in a generic way from an argument datatype. This talk shows how to define type-indexed data types, discusses some examples of type-indexed data types, and
discusses the specialization of type-indexed data types.
- Closing
The workshop will be held at the following location:
Seminar Room C1
School of Computer Science
The University of Nottingham
Jubilee Campus, Wollaton Road
Nottingham NG8 1BB
United Kingdom
Online maps and directions to the Jubilee Campus are available here.
If you are arriving by train or bus, take a taxi to the University, which will take around 10-15 minutes. Make sure to ask for the Jubilee Campus on Wollaton Road, as the University has more than one
If you are arriving by car, use the Wollaton Road entrance to the Jubilee Campus, and ask the security staff at the entrance to direct you to the free visitors car park nearby. Make sure not to park
elsewhere, as all other spaces require a parking permit and are subject to stickering or clamping.
If you would like accomodation before or after the workshop, below are four suggestions, all of which are around 10-15 minutes by taxi from the train/bus stations and have availability on the 25th
and 26th July. The first two are just a few minutes walk from the Jubilee Campus, while the remaining two are just a few minutes by taxi. If you require accomodation, please make your own
P&J Hotel
227-229 Derby Road
Cost: approx 48 pounds per night
Tel : 0115-978 3998
Lucieville St James Hotel
349 Derby Road
Cost: approx 55 pounds per night
Tel : 0115-978 7389
The Hylands Hotel
307 Queens Road
Cost: approx 40 pounds per night
Tel : 0115-925 5472
Priory Toby Hotel
Derby Road
Wollaton Vale
Cost: approx 52 pounds per night
Tel : 0115-922 1691 | {"url":"https://people.cs.nott.ac.uk/pszgmh/wgp01.html","timestamp":"2024-11-06T00:58:13Z","content_type":"text/html","content_length":"12220","record_id":"<urn:uuid:c3644639-9740-42cb-bb8a-b894501dbe83>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00517.warc.gz"} |
Tough topics in JC H2 Math | Math Academy
JC H2 Math is considered one of the most difficult subjects in Singapore's junior colleges. It is also one of the most important subjects students are advised to take, since it is considered in the selection criteria during university admissions. Most students, therefore, register for JC H2 Math tuition classes to understand the subject better. These classes enhance students' problem-solving skills and their understanding of math concepts.
Some topics are considered difficult and are thought to contribute to poor performance in JC H2 Math for most students. Topics commonly found difficult include calculus, functions, and advanced topics such as vectors and series and sequences from further maths. Tuition in junior college math academies is offered to enhance students' skills and their confidence in tackling such problems. For instance, JC H2 Math tuition classes make the subject relatively easier, since the classes are held by professionally qualified tutors.
Vectors is also among the topics considered hard in JC H2 Math. This topic is least understood by most students due to its advanced calculations in Further Maths, a subject that is advisable for those seeking deep mathematical understanding and capability. JC H2 Math tuition classes help students identify their weak areas and build excellent skills with their tutors, and they go a long way in ensuring that students revisit vectors as a topic and revise several past papers.
Many subtopics in calculus are relatively difficult, including continuity, derivatives, limits and integrals. Professional tutors teach this topic in JC H2 Math tuition classes, where these areas are revisited thoroughly, helping students develop significant math skills and problem-solving knowledge. Differential equations and plane curves also make JC H2 Math difficult, and hence the need for JC H2 Math tuition classes.
In-depth applications of difficult topics such as functions should not put students off selecting the subject, as the topic is thoroughly revisited in JC H2 Math tuition classes. The study of functions offers skills related to the relationship between inputs and outputs and the properties under which that relationship holds. The domain and range of a function are also among the difficult concepts in JC H2 Math. Knowledge of these important mathematical concepts is offered by tutors in the JC H2 Math tuition classes.
JC H2 Math tuition classes in Singapore are therefore important in ensuring that students understand all topics in the JC H2 Math subject better. Good performance in math-related subjects offers students an added advantage in the selection criteria of public universities, and JC H2 Math equips students with broad mathematical skills and problem-solving techniques. | {"url":"https://mathacademy.sg/tough-topics-in-jc-h2-math/","timestamp":"2024-11-12T12:52:43Z","content_type":"text/html","content_length":"74276","record_id":"<urn:uuid:cf3a6c57-f562-404c-b903-bebad74fd0e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00038.warc.gz"}
Kids.Net.Au - Encyclopedia > Hermitian
In mathematics, a square matrix with complex entries is called Hermitian if it is equal to its conjugate transpose - that is, if the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:
<math>a_{i,j} = \overline{a_{j,i}}</math>
The conjugate transpose of a matrix is also called its adjoint, and a synonym for Hermitian is self-adjoint.
Here is an example of a Hermitian matrix:
If all the entries of a matrix are real, then it is Hermitian if and only if it is a symmetric matrix.
Every Hermitian matrix is normal, and the finite-dimensional spectral theorem applies. It says that any Hermitian matrix can be diagonalized by a unitary matrix, and that the resulting diagonal
matrix has only real entries. This means that all eigenvalues of a Hermitian matrix are real, and, moreover, eigenvectors with distinct eigenvalues are orthogonal. It is possible to find an
orthonormal basis of C^n consisting only of eigenvectors.
If the eigenvalues of a Hermitian matrix are all positive, then the matrix is positive definite.
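A quick numerical illustration of these facts (the matrix here is an arbitrary example, not taken from the article):

```python
import numpy as np

# A Hermitian matrix: it equals its own conjugate transpose.
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])

w, v = np.linalg.eig(A)  # general (non-symmetric) eigensolver

# Even the general solver confirms the two facts above: the eigenvalues
# are real (imaginary parts vanish), and eigenvectors belonging to the
# two distinct eigenvalues are orthogonal.
print(np.allclose(w.imag, 0))                  # True
print(abs(np.vdot(v[:, 0], v[:, 1])) < 1e-8)   # True
```

Since both eigenvalues of this particular matrix (4 and 1) are positive, it is also positive definite.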
A continuous linear operator A: H → H on a Hilbert space H is called Hermitian or self-adjoint if
(x,Ay) = (Ax,y)
for all elements x and y of H. Here, the parentheses denote the inner product given on H.
This definition agrees with the one given above if we take as H the Hilbert space C^n with the standard dot product and interpret a square matrix as a linear operator on this Hilbert space. It is
however much more general as there are important infinite-dimensional Hilbert spaces.
The spectrum of any Hermitian operator is real; in particular all its eigenvalues are real. A version of the spectral theorem also applies to Hermitian operators; while the eigenvectors to different
eigenvalues are orthogonal, in general it is not true that the Hilbert space H admits an orthonormal basis consisting only of eigenvectors of the operator. In fact, Hermitian operators need not have
any eigenvalues or eigenvectors at all.
In the mathematical formulation of quantum mechanics, one considers even more general Hermitian operators: they are only defined on a dense subspace of a Hilbert space and don't have to be
For example, consider the complex Hilbert space L^2[0,1] and the differential operator A = d^2 / dx^2, defined on the subspace consisting of all differentiable functions f : [0,1] → C with f(0) = f
(1) = 0. Then integration by parts easily proves that A is Hermitian. Its eigenfunctions are the sinusoids sin(nπx) for n = 1,2,..., with the real eigenvalues -n^2π^2; the well-known orthogonality of
the sine functions follows as a consequence of the Hermitian property.
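That orthogonality is easy to verify numerically; this sketch approximates the L^2[0,1] inner product on a uniform grid:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
f = np.sin(2 * np.pi * x)  # eigenfunction for n = 2
g = np.sin(3 * np.pi * x)  # eigenfunction for n = 3

cross = np.dot(f, g) * dx  # approximates the inner product (f, g)
norm2 = np.dot(f, f) * dx  # approximates (f, f); the exact value is 1/2

print(abs(cross) < 1e-4)        # True: distinct eigenfunctions are orthogonal
print(abs(norm2 - 0.5) < 1e-3)  # True
```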
Another example: the complex Hilbert space L^2(R), and the operator which multiplies a given function by x:
Af(x) = xf(x)
It is defined on the space of all L^2 functions for which the right-hand side is square-integrable. A is a Hermitian operator without any eigenvalues and eigenfunctions.
All Wikipedia text is available under the terms of the GNU Free Documentation License | {"url":"http://encyclopedia.kids.net.au/page/he/Hermitian","timestamp":"2024-11-14T11:14:37Z","content_type":"application/xhtml+xml","content_length":"17071","record_id":"<urn:uuid:8a4e4b2a-ba94-46c3-954d-17fe17191f43>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00427.warc.gz"} |
Lesson 6
Rewriting Quadratic Expressions in Factored Form (Part 1)
• Let’s write expressions in factored form.
6.1: Puzzles of Rectangles
Here are two puzzles that involve side lengths and areas of rectangles. Can you find the missing area in Figure A and the missing length in Figure B? Be prepared to explain your reasoning.
6.2: Using Diagrams to Understand Equivalent Expressions
1. Use a diagram to show that each pair of expressions is equivalent.
\(x(x+3)\) and \(x^2 +3x\)
\(x(x+\text-6)\) and \(x^2-6x\)
\((x+2)(x+4)\) and \(x^2 + 6x + 8\)
\((x+4)(x+10)\) and \(x^2 + 14x + 40\)
\((x+\text-5)(x+\text-1)\) and \(x^2 - 6x +5\)
\((x-1)(x-7)\) and \(x^2 -8x + 7\)
2. Observe the pairs of expressions that involve the product of two sums or two differences. How is each expression in factored form related to the equivalent expression in standard form?
6.3: Let’s Rewrite Some Expressions!
Each row in the table contains a pair of equivalent expressions.
Complete the table with the missing expressions. If you get stuck, consider drawing a diagram.
│ factored form │ standard form │
│\(x(x+7)\) │ │
│ │\(x^2+9x\) │
│ │\(x^2-8x\) │
│\((x+6)(x+2)\) │ │
│ │\(x^2+13x+12\) │
│\((x-6)(x-2)\) │ │
│ │\(x^2-7x+12\) │
│ │\(x^2+6x+9\) │
│ │\(x^2+10x+9\) │
│ │\(x^2-10x+9\) │
│ │\(x^2-6x+9\) │
│ │\(x^2+(m+n)x+mn\) │
A mathematician threw a party. She told her guests, “I have a riddle for you. I have three daughters. The product of their ages is 72. The sum of their ages is the same as my house number. How old
are my daughters?”
The guests went outside to look at the house number. They thought for a few minutes, and then said, “This riddle can’t be solved!”
The mathematician said, “Oh yes, I forgot to tell you the last clue. My youngest daughter prefers strawberry ice cream.”
With this last clue, the guests could solve the riddle. How old are the mathematician’s daughters?
Previously, you learned how to expand a quadratic expression in factored form and write it in standard form by applying the distributive property.
For example, to expand \((x+4)(x+5)\), we apply the distributive property to multiply \(x\) by \((x+5)\) and 4 by \((x+5)\). Then, we apply the property again to multiply \(x\) by \(x\) and \(x\) by
5, and multiply 4 by \(x\) and 4 by 5.
To keep track of all the products, we could make a diagram like this:
Next, we could write the products of each pair inside the spaces:
│ │ \(x\) │ \(4\) │
│ \(x\) │\(x^2\) │\(4x\) │
│ \(5\) │\(5x\) │\(4 \boldcdot 5\) │
The diagram helps us see that \((x+4)(x+5)\) is equivalent to \(x^2 +5x +4x + 4 \boldcdot 5\), or in standard form, \(x^2 +9x + 20\).
• The linear term, \(9x\), has a coefficient of 9, which is the sum of 5 and 4.
• The constant term, 20, is the product of 5 and 4.
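A quick spot-check of this equivalence (the helper names below are made up): a quadratic is determined by its values at any three points, so agreement at many integers is convincing evidence that the two forms are the same polynomial.

```python
def factored(x):
    return (x + 4) * (x + 5)

def standard(x):
    return x**2 + 9*x + 20

# The two forms agree at every integer from -10 to 10.
print(all(factored(x) == standard(x) for x in range(-10, 11)))  # True
```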
We can use these observations to reason in the other direction: to start with an expression in standard form and write it in factored form.
For example, suppose we wish to write \(x^2 - 11x + 24\) in factored form.
Let’s start by creating a diagram and writing in the terms \(x^2\) and 24.
We need to think of two numbers that multiply to make 24 and add up to -11.
│ │ \(x\) │ │
│ \(x\) │\(x^2\)│ │
│ │ │ \(24\) │
After some thinking, we see that -8 and -3 meet these conditions.
The product of -8 and -3 is 24. The sum of -8 and -3 is -11.
│ │ \(x\) │ \(\text-8\) │
│ \(x\) │\(x^2\) │\(\text-8x\) │
│\(\text-3\)│\(\text-3x\) │\(24\) │
So, \(x^2 - 11x + 24\) written in factored form is \((x-8)(x-3)\).
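The "two numbers that multiply to 24 and add up to -11" search can be automated with a small brute-force sketch (the helper name is made up for illustration):

```python
def factor_pair(b, c):
    """Find integers m, n with m * n == c and m + n == b, if any exist."""
    for m in range(-abs(c), abs(c) + 1):
        if m != 0 and c % m == 0:
            n = c // m
            if m + n == b:
                return m, n
    return None

print(factor_pair(-11, 24))  # (-8, -3), so x^2 - 11x + 24 = (x - 8)(x - 3)
```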
• coefficient
In an algebraic expression, the coefficient of a variable is the constant the variable is multiplied by. If the variable appears by itself then it is regarded as being multiplied by 1 and the
coefficient is 1.
The coefficient of \(x\) in the expression \(3x + 2\) is \(3\). The coefficient of \(p\) in the expression \(5 + p\) is 1.
• constant term
In an expression like \(5x + 2\) the number 2 is called the constant term because it doesn't change when \(x\) changes.
In the expression \(5x-8\) the constant term is -8, because we think of the expression as \(5x + (\text-8)\). In the expression \(12x-4\) the constant term is -4.
• linear term
The linear term in a quadratic expression (in standard form) \(ax^2 + bx + c\), where \(a\), \(b\), and \(c\) are constants, is the term \(bx\). (If the expression is not in standard form, it may
need to be rewritten in standard form first.)
• zero product property
The zero product property says that if the product of two numbers is 0, then one of the numbers must be 0. | {"url":"https://curriculum.illustrativemathematics.org/HS/students/1/7/6/index.html","timestamp":"2024-11-04T12:03:37Z","content_type":"text/html","content_length":"90124","record_id":"<urn:uuid:85a7af44-02ab-4ada-a2ba-4ed118f0ffd5>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00395.warc.gz"} |
Create a Math Block
I can't say this enough. An effective math block will help you tremendously! It creates routine and structure that you need to make sure everyone is on task. This is what I include in my math block:
• Engaging mini lessons
• Math centers
• Independent practice
• Visuals throughout the classroom
Engaging Mini […] | {"url":"https://www.mrsjonescreationstation.com/tag/math-vocabulary/","timestamp":"2024-11-06T10:41:17Z","content_type":"text/html","content_length":"148203","record_id":"<urn:uuid:b1a8a4bb-24d7-4e8a-a12f-00fd3135c45a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00044.warc.gz"}
path param as a list with comma separator | SmartBear Community
path param as a list with comma separator
Hi for some reason I can't create a schema with path param list with comma separator and enum that defines the allowed values. what am I doing wrong?
the structure of the url in the schema is:
this is my url example below
When I'm passing that URL I'm getting an "enum not allowed values" error.
this is the path params schema below:
summary: Gets locations tiles by x y z of with specific layers
- name: layers
in: path
description: Types of locations layers(points, polygons, linestrings, clusters)
required: true
type: string
enum: [ 'points', 'polygons', 'linestrings', 'clusters' ]
style: simple
explode: true
- name: z
in: path
description: Z coordinate
required: true
type: number
- name: x
in: path
description: X coordinate
required: true
type: number
- name: y
in: path
description: Y coordinate
required: true
type: number
This is the package version I'm using:
"openapi-validator-middleware": "^3.2.2"
any suggestion what am I doing wrong? thanks! | {"url":"https://community.smartbear.com/discussions/SwaggerOSTools/path-param-as-a-list-with-comma-separator/262814","timestamp":"2024-11-12T17:30:20Z","content_type":"text/html","content_length":"216689","record_id":"<urn:uuid:860fbd80-9ccf-4846-b0c0-b8dc37184fc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00628.warc.gz"} |
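One likely fix (an assumption based on OpenAPI 3 serialization rules, not something confirmed in this thread): an enum placed directly on a `type: string` parameter only matches a single value, so a comma-joined list like `points,polygons` fails validation. For a comma-separated list in a path segment, declare the parameter as an array with the enum on its items, serialized with `style: simple` and `explode: false`:

```yaml
- name: layers
  in: path
  required: true
  description: Types of locations layers
  schema:
    type: array
    items:
      type: string
      enum: [points, polygons, linestrings, clusters]
  style: simple     # path params render comma-separated with simple style
  explode: false    # /points,polygons rather than repeated values
```

With this schema each comma-separated token is validated against the item enum individually.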
Biordered set
A biordered set ("boset") is a mathematical object that occurs in the description of the structure of the set of idempotents in a semigroup. The concept and the terminology were developed by K S S
Nambooripad in the early 1970s.^[1]^[2]^[3] The defining properties of a biordered set are expressed in terms of two quasiorders defined on the set and hence the name biordered set. Patrick Jordan,
while a master's student at University of Sydney, introduced in 2002 the term boset as an abbreviation of biordered set.^[4]
According to Mohan S. Putcha, "The axioms defining a biordered set are quite complicated. However, considering the general nature of semigroups, it is rather surprising that such a finite
axiomatization is even possible."^[5] Since the publication of the original definition of the biordered set by Nambooripad, several variations in the definition have been proposed. David Easdown
simplified the definition and formulated the axioms in a special arrow notation invented by him.^[6]
The set of idempotents in a semigroup is a biordered set and every biordered set is the set of idempotents of some semigroup.^[3]^[7] A regular biordered set is a biordered set with an additional
property. The set of idempotents in a regular semigroup is a regular biordered set, and every regular biordered set is the set of idempotents of some regular semigroup.^[3]
The formal definition of a biordered set given by Nambooripad^[3] requires some preliminaries.
If X and Y are sets and ρ ⊆ X × Y, let ρ ( y ) = { x ∈ X : x ρ y }.
Let E be a set in which a partial binary operation, indicated by juxtaposition, is defined. If D[E] is the domain of the partial binary operation on E then D[E] is a relation on E and (e,f) is in D[E] if and only if the product ef exists in E. The following relations can be defined in E:
${\displaystyle \omega ^{r}=\{(e,f)\,:\,fe=e\}}$
${\displaystyle \omega ^{l}=\{(e,f)\,:\,ef=e\}}$
${\displaystyle R=\omega ^{r}\,\cap \,(\omega ^{r})^{-1}}$
${\displaystyle L=\omega ^{l}\,\cap \,(\omega ^{l})^{-1}}$
${\displaystyle \omega =\omega ^{r}\,\cap \,\omega ^{l}}$
If T is any statement about E involving the partial binary operation and the above relations in E, one can define the left-right dual of T denoted by T*. If D[E] is symmetric then T* is meaningful
whenever T is.
Formal definition
The set E is called a biordered set if the following axioms and their duals hold for arbitrary elements e, f, g, etc. in E.
(B1) ω^r and ω^l are reflexive and transitive relations on E and D[E] = ( ω^r ∪ ω^l ) ∪ ( ω^r ∪ ω^l )^−1.
(B21) If f is in ω^r( e ) then f R fe ω e.
(B22) If g ω^l f and if f and g are in ω^r ( e ) then ge ω^l fe.
(B31) If g ω^r f and f ω^r e then gf = ( ge )f.
(B32) If g ω^l f and if f and g are in ω^r ( e ) then ( fg )e = ( fe )( ge ).
In M ( e, f ) = ω^l ( e ) ∩ ω^r ( f ) (the M-set of e and f in that order), define a relation ${\displaystyle \prec }$ by
${\displaystyle g\prec h\quad \Longleftrightarrow \quad eg\,\,\omega ^{r}\,\,eh\,,\,\,\,gf\,\,\omega ^{l}\,\,hf}$.
Then the set
${\displaystyle S(e,f)=\{h\in M(e,f):g\prec h{\text{ for all }}g\in M(e,f)\}}$
is called the sandwich set of e and f in that order.
(B4) If f and g are in ω^r ( e ) then S( f, g )e = S ( fe, ge ).
M-biordered sets and regular biordered sets
We say that a biordered set E is an M-biordered set if M ( e, f ) ≠ ∅ for all e and f in E. Also, E is called a regular biordered set if S ( e, f ) ≠ ∅ for all e and f in E.
In 2012 Roman S. Gigoń gave a simple proof that M-biordered sets arise from E-inversive semigroups.^[8]
Subobjects and morphisms
Biordered subsets
A subset F of a biordered set E is a biordered subset (subboset) of E if F is a biordered set under the partial binary operation inherited from E.
For any e in E the sets ω^r ( e ), ω^l ( e ) and ω ( e ) are biordered subsets of E.^[3]
A mapping φ : E → F between two biordered sets E and F is a biordered set homomorphism (also called a bimorphism) if for all ( e, f ) in D[E] we have ( eφ ) ( fφ ) = ( ef )φ.
Illustrative examples
Vector space example
Let V be a vector space and
E = { ( A, B ) | V = A ⊕ B }
where V = A ⊕ B means that A and B are subspaces of V and V is the internal direct sum of A and B. The partial binary operation ⋆ on E defined by
( A, B ) ⋆ ( C, D ) = ( A + ( B ∩ C ), ( B + C ) ∩ D )
makes E a biordered set. The quasiorders in E are characterised as follows:
( A, B ) ω^r ( C, D ) ⇔ A ⊇ C
( A, B ) ω^l ( C, D ) ⇔ B ⊆ D
Biordered set of a semigroup
The set E of idempotents in a semigroup S becomes a biordered set if a partial binary operation is defined in E as follows: ef is defined in E if and only if ef = e or ef = f or fe = e or fe = f holds
in S. If S is a regular semigroup then E is a regular biordered set.
As a concrete example, let S be the semigroup of all mappings of X = { 1, 2, 3 } into itself. Let the symbol (abc) denote the map for which 1 → a, 2 → b, and 3 → c. The set E of idempotents in S
contains the following elements:
(111), (222), (333) (constant maps)
(122), (133), (121), (323), (113), (223)
(123) (identity map)
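The idempotents and the partial operation above can be checked mechanically. A small Python sketch (names are illustrative) that enumerates the idempotent self-maps of { 1, 2, 3 } and decides when the biordered product is defined:

```python
from itertools import product

X = (1, 2, 3)

# Represent a map f: X -> X as the tuple (f(1), f(2), f(3)),
# so (1, 2, 2) is the map written (122) in the text.
all_maps = list(product(X, repeat=3))

def compose(f, g):
    """Composition in diagram order: apply f first, then g."""
    return tuple(g[f[x - 1] - 1] for x in X)

# The biordered set E is the set of idempotents of the full
# transformation semigroup on X.
idempotents = [f for f in all_maps if compose(f, f) == f]

def biordered_product(e, f):
    """ef in E: defined iff ef = e, ef = f, fe = e or fe = f in S."""
    ef, fe = compose(e, f), compose(f, e)
    return ef if ef in (e, f) or fe in (e, f) else None
```

For example, `biordered_product((1, 2, 2), (1, 1, 1))` returns `(1, 1, 1)`, matching the (122) row of the table below, while `biordered_product((1, 2, 2), (1, 1, 3))` returns `None`, matching the X entry in that row.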
The following table (taking composition of mappings in the diagram order) describes the partial binary operation in E. An X in a cell indicates that the corresponding multiplication is not defined.
┃ ∗ │(111)│(222)│(333)│(122)│(133)│(121)│(323)│(113)│(223)│(123)┃
┃(122)│(111)│(222)│(333)│(122)│(122)│(121)│X │X │X │(122)┃
┃(133)│(111)│(222)│(333)│(122)│(133)│X │X │(133)│X │(133)┃
┃(121)│(111)│(222)│(333)│(121)│X │(121)│(323)│X │X │(121)┃
┃(323)│(111)│(222)│(333)│X │X │(121)│(323)│X │(323)│(323)┃
┃(113)│(111)│(222)│(333)│X │(113)│X │X │(113)│(223)│(113)┃
┃(223)│(111)│(222)│(333)│X │X │X │(233)│(113)│(223)│(223)┃ | {"url":"https://static.hlt.bme.hu/semantics/external/pages/transzform%C3%A1ci%C3%B3s_f%C3%A9lcsoport/en.wikipedia.org/wiki/Biordered_set.html","timestamp":"2024-11-10T21:28:52Z","content_type":"text/html","content_length":"69483","record_id":"<urn:uuid:8d8dc75a-2ea3-425a-9947-500674b2a4cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00307.warc.gz"} |
Search results for fernandez-jimenez-n_maternal_pre-pregnancy_body_mass_index_inverse_variance-weighted_fixed_effects_meta-analysis_adjusted_by_cellular_heterogeneity
All rows share the same study columns: Author Fernandez-Jimenez N; PMID NA; Outcome DNA methylation; Exposure maternal pre-pregnancy body mass index; Tissue Placenta; Analysis inverse variance-weighted fixed effects meta-analysis adjusted by cellular heterogeneity; N 2631. Per-probe results:
CpG | Location | Gene | Beta | P
cg08219219 | chr19:15337971 | EPHX3 | -0.0011 | 5.1E-11
cg14704941 | chr11:19224659 | CSRP3 | 0.002 | 5.3E-10
cg04724807 | chr14:62396305 | - | -0.0018 | 9.1E-10
cg00423969 | chr2:97359879 | FER1L5 | -0.00092 | 1.1E-09
cg26433445 | chr16:81764289 | - | -0.00086 | 3.9E-09
cg15933729 | chr11:27504612 | - | -0.00079 | 4.8E-09
cg15258080 | chr10:71091204 | HK1 | -0.0013 | 7.9E-09
cg03603866 | chr13:107027625 | - | 0.0013 | 1.2E-08
cg14143441 | chr8:134387493 | - | -0.0017 | 1.5E-08
cg08539067 | chr3:49395985 | GPX1 | -0.00049 | 2.8E-08
cg22673972 | chr3:14415346 | - | -0.00085 | 3.1E-08
cg09126859 | chr12:52244063 | - | -0.0011 | 3.2E-08
cg09167414 | chr1:16076206 | - | -0.0015 | 3.3E-08
cg12613632 | chr1:95385935 | CNN3 | -0.00088 | 3.3E-08
cg14051770 | chr7:76054572 | ZP3 | -0.0016 | 3.6E-08
cg14163484 | chr2:97359926 | FER1L5 | -0.00085 | 4.1E-08
cg16310415 | chr8:25898539 | EBF2 | 0.0017 | 5E-08
cg23696550 | chr14:24732386 | TGM1 | -0.0013 | 5.8E-08
cg20042798 | chr3:13036713 | IQSEC1 | -0.00052 | 8E-08
cg08129759 | chr1:202091944 | GPR37L1 | -0.00077 | 8.1E-08
cg16724070 | chr1:183133736 | - | -0.0016 | 8.2E-08
cg00510149 | chr11:10674966 | MRVI1 | -0.0013 | 9.6E-08
cg24893073 | chr17:7742126 | KDM6B | -0.00061 | 1E-07
cg02286857 | chr2:47297177 | TTC7A | -0.0013 | 1.1E-07
cg14244402 | chr9:130681102 | - | -0.0017 | 1.1E-07
cg05590755 | chr1:23855149 | E2F2 | -0.0008 | 1.2E-07
cg05965490 | chr8:30264627 | RBPMS | -0.0013 | 1.2E-07
cg25178913 | chr11:19301951 | - | -0.0012 | 1.3E-07
cg05521767 | chr11:75230135 | GDPD5 | -0.0012 | 1.4E-07
cg06641607 | chr1:233498953 | KIAA1804 | 0.00024 | 1.4E-07
cg13351161 | chr8:27490921 | SCARA3 | -0.00098 | 1.4E-07
cg24720038 | chr3:128291538 | C3orf27 | -0.001 | 1.4E-07
cg00665106 | chr1:201515370 | - | -0.00075 | 1.6E-07
cg05641778 | chr7:157194536 | DNAJB6 | -0.0013 | 1.7E-07
cg23097499 | chr1:12225569 | TNFRSF1B | -0.0011 | 1.8E-07
cg03822934 | chr4:41505964 | LIMCH1 | -0.00097 | 2E-07
cg20683108 | chr3:149225178 | - | -0.00089 | 2.2E-07
cg06898145 | chr7:2627735 | IQCE | 0.00041 | 2.5E-07
cg20958804 | chr11:130721851 | - | 0.0014 | 2.5E-07
cg14305278 | chr9:971046 | - | 0.0033 | 2.7E-07
cg21171978 | chr22:32029943 | - | -0.00049 | 2.9E-07
cg24718756 | chr5:126308095 | MARCH3 | -0.0014 | 2.9E-07
cg00231049 | chr1:23855119 | E2F2 | -0.00097 | 3.2E-07
cg07078732 | chr2:30505165 | - | -0.001 | 4.1E-07
cg08466982 | chr2:24162619 | UBXN2A | -0.00015 | 4.4E-07
cg04390865 | chr1:117214010 | MIR320B1 | -0.0011 | 4.9E-07
cg16904399 | chr10:22902615 | PIP4K2A | -0.00071 | 4.9E-07
cg07110405 | chr11:70917533 | SHANK2 | -0.0021 | 5.7E-07
cg03078141 | chr4:129721849 | - | -0.0013 | 6.4E-07
cg10129408 | chr6:10421001 | TFAP2A | -0.00026 | 6.6E-07
cg06835822 | chr1:111061247 | KCNA10 | 0.00021 | 6.8E-07
cg00838415 | chr19:47253604 | FKRP | -0.00061 | 7E-07
cg16606561 | chr20:824641 | FAM110A | -0.001 | 7.1E-07
cg23986143 | chr7:76054256 | ZP3 | -0.00086 | 7.5E-07
cg17676607 | chr7:133811837 | LRGUK | 0.0029 | 8.2E-07
cg16398761 | chr14:74220238 | C14orf43 | -0.00088 | 8.3E-07
cg27605300 | chr17:79360456 | - | -0.00017 | 8.3E-07
cg03126561 | chr11:12434050 | PARVA | -0.00077 | 8.5E-07
cg03417473 | chr1:113216382 | MOV10 | -0.0011 | 9.1E-07
cg17696756 | chr20:55101001 | GCNT7;C20orf106 | 0.00056 | 9.4E-07
cg01244006 | chr12:6485916 | SCNN1A | -0.00069 | 9.7E-07
cg04353603 | chr1:95320222 | SLC44A3 | -0.00019 | 9.7E-07
cg23682310 | chr2:235749867 | - | -0.00081 | 9.7E-07
cg01731783 | chr14:74211788 | C14orf43 | -0.00094 | 1E-06
cg14214914 | chr9:131870304 | CRAT | -0.0011 | 1E-06
cg14985076 | chr1:86042149 | DDAH1 | -0.00046 | 1E-06
cg00992055 | chr9:127571598 | OLFML2A | -0.0016 | 1.1E-06
cg05856677 | chr22:19739082 | - | -0.0011 | 1.1E-06
cg19428083 | chr5:141675535 | - | -0.001 | 1.1E-06
cg22663660 | chr15:58732754 | LIPC | -0.0011 | 1.1E-06
cg04084026 | chr16:57832309 | KIFC3 | -0.00081 | 1.2E-06
cg01753198 | chr16:70560099 | SF3B3 | 0.0011 | 1.3E-06
cg04717143 | chr8:37555391 | ZNF703 | -0.001 | 1.3E-06
cg06481122 | chr15:37403088 | - | 0.0015 | 1.3E-06
cg10173586 | chr8:145020593 | PLEC1;MIR661 | -0.00072 | 1.3E-06
cg11746359 | chr4:141012403 | MAML3 | -0.00079 | 1.3E-06
cg20353207 | chr15:61209239 | RORA | -0.00037 | 1.3E-06
cg21143896 | chr7:2802374 | GNA12 | -0.0014 | 1.3E-06
cg21223803 | chr6:168711821 | DACT2 | -0.00088 | 1.3E-06
cg23170439 | chr3:74662982 | - | 0.0021 | 1.3E-06
cg22951989 | chr8:27491676 | SCARA3 | -0.00026 | 1.4E-06
cg01047635 | chr11:16761506 | C11orf58 | 0.0016 | 1.5E-06
cg10038867 | chr1:21982511 | RAP1GAP | -0.0006 | 1.5E-06
cg21091985 | chr6:31706222 | - | -0.00053 | 1.5E-06
cg07739927 | chr17:62778162 | LOC146880 | -0.0012 | 1.6E-06
cg14615559 | chr9:130911577 | LCN2 | -0.0014 | 1.6E-06
cg26672098 | chr2:42326070 | - | -0.00078 | 1.6E-06
cg04330176 | chr10:105619211 | - | -0.00076 | 1.7E-06
cg05342835 | chr1:33160791 | SYNC | -0.0011 | 1.7E-06
cg05655707 | chr11:18477254 | LDHAL6A | 0.0013 | 1.7E-06
cg07596065 | chr22:50984393 | - | -0.0009 | 1.7E-06
cg23786545 | chr11:130580518 | - | 0.0007 | 1.7E-06
cg25220751 | chr6:4655948 | - | -0.00085 | 1.7E-06
cg00972420 | chr20:49204792 | FAM65C | -0.00095 | 1.8E-06
cg01057132 | chr5:169190508 | DOCK2 | 0.00073 | 1.8E-06
cg21988461 | chr4:88008667 | AFF1 | -0.00087 | 1.8E-06
cg21994712 | chr19:21861136 | - | 0.002 | 1.8E-06
cg27104271 | chr11:74862127 | SLCO2B1 | -0.00065 | 1.8E-06
cg11773243 | chr7:136613718 | CHRM2 | 0.00036 | 1.9E-06
cg16196175 | chr7:27289120 | - | 0.0011 | 1.9E-06
cg13627197 | chr22:25160378 | TOP1P2;PIWIL3 | 0.0015 | 2E-06
cg12126344 | chr1:12207564 | - | -0.00083 | 2.2E-06
cg18410110 | chr5:31963583 | PDZD2 | -0.001 | 2.2E-06
cg07381806 | chr19:2094327 | MOBKL2A | -0.0018 | 2.3E-06
cg18402166 | chr17:62778279 | LOC146880 | -0.001 | 2.3E-06
cg27616833 | chr2:106060221 | - | -0.00066 | 2.3E-06
cg12381074 | chr7:959759 | ADAP1 | 0.0012 | 2.5E-06
cg13668129 | chr19:41768075 | HNRNPUL1 | -0.00037 | 2.5E-06
cg10098670 | chr1:200299925 | - | -0.00081 | 2.6E-06
cg26061510 | chr3:44667010 | ZNF197 | 0.0025 | 2.6E-06
cg00835193 | chr19:2291780 | LINGO3 | -0.00083 | 2.7E-06
cg06870728 | chr10:8095363 | FLJ45983;GATA3 | -0.001 | 2.7E-06
cg20289911 | chr20:816362 | FAM110A | -0.00074 | 2.7E-06
cg22424746 | chr1:117753313 | VTCN1 | -0.0007 | 2.7E-06
cg16255334 | chr13:100652448 | - | 0.00061 | 2.8E-06
cg18843326 | chr6:169095875 | - | -0.00088 | 2.8E-06
cg00372940 | chr1:109583681 | WDR47 | -0.00052 | 2.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21182960 chr7:156812171 - 0.0013 3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26802256 chr7:4652765 - -0.00015 3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00101728 chr6:2953027 SERPINB6 -0.00049 3.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13795819 chr2:128405844 GPR17;LIMS2 -0.0012 3.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg17669365 chr2:71163953 ATP6V1B1 -0.0007 3.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13907059 chr9:140002893 MAN1B1 0.00043 3.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06417478 chr19:12876846 HOOK2 -0.0048 3.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22526990 chr19:33788511 - -0.0011 3.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13808641 chr9:96006533 WNK2 -0.00081 3.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22313574 chr8:27468981 CLU -0.0011 3.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14121282 chr9:137268074 RXRA -0.0018 3.6E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18372930 chr5:137939469 - -0.00094 3.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03933676 chr16:10788468 TEKT5 0.00032 4.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21611682 chr11:68138269 LRP5 -0.00077 4.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22454119 chr14:24563794 PCK2 0.0013 4.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01469864 chr5:175299260 CPLX2 0.0035 4.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04520793 chr17:42248056 ASB16 -0.00046 4.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08458733 chr8:37555577 ZNF703 -0.0011 4.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23791626 chr17:65542892 PITPNC1 -0.00094 4.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10920329 chr8:126458568 - -0.00091 4.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18805734 chr12:31453901 FAM60A -0.00091 4.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg19788727 chr17:8927521 NTN1 0.0012 4.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27651452 chr11:64035061 - -0.00077 4.6E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06001734 chr2:54467214 ACYP2 0.00088 4.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18471474 chr22:50860456 SAPS2 -0.00082 4.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23899408 chr19:12877188 HOOK2 -0.0028 4.8E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg16542898 chr11:130416857 - 0.00085 4.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27601574 chr22:31498863 SMTN -0.00083 4.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26651514 chr13:33864734 - -0.0012 5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23588713 chr12:56523270 ESYT1 -0.00064 5.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg07798560 chr2:189734507 - -0.0011 5.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04895854 chr3:116778999 - 0.0012 5.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10473158 chr7:130130122 MESTIT1;MEST -0.00091 5.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03370270 chr7:106065603 - -0.0013 5.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02896318 chr1:1370775 VWA1 -0.00036 5.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12856114 chr7:73242028 - -0.00061 5.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01516372 chr1:87617972 LOC339524 -0.00091 5.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05348897 chr19:49302856 BCAT2 -0.0011 5.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22677268 chr10:45475128 C10orf10;RASSF4 -0.00061 5.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14396892 chr9:96623032 - 0.0012 5.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg09419102 chr11:65550444 - -0.00069 6E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18791205 chr7:19146324 - -0.00049 6.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18836745 chr11:119212247 C1QTNF5;MFRP -0.00054 6.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06858555 chr14:105144735 - -0.00058 6.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02893348 chr14:68745571 RAD51L1 -0.0013 6.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26954327 chr7:70216208 AUTS2 0.00026 6.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg09314149 chr22:43503706 - 0.00015 6.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14737287 chr11:130340832 ADAMTS15 0.00069 6.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12444411 chr7:2802554 GNA12 -0.0016 6.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11421724 chr7:69323175 AUTS2 0.00066 6.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13816999 chr11:12398883 PARVA -0.00031 7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg19859290 chr11:20618399 - 0.0011 7.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27306986 chr2:74735907 PCGF1 -0.00064 7.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06331300 chr7:20830702 - -0.00016 7.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06494421 chr11:130356164 - 0.0012 7.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10008953 chr14:102414004 - -0.00067 7.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21101086 chr2:39836209 - 0.00087 7.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14585415 chr10:102778683 PDZD7 -0.00058 7.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25607177 chr11:128556152 - 0.0015 7.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12964187 chr3:45270932 - 0.001 7.6E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03410436 chr10:1779904 ADARB2 -0.00075 7.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18133957 chr19:1450493 APC2 0.0008 7.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26123920 chr12:49259714 RND1 -4.3e-05 7.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01059704 chr5:34914428 RAD1;BRIX1 0.0011 8E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24598187 chr5:96253798 ERAP2 -0.001 8.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04060571 chr16:75267857 BCAR1 -0.00071 8.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04981492 chr19:15218713 SYDE1 -0.00096 8.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg09025071 chr16:1593152 IFT140;TMEM204 -0.0011 8.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10421247 chr12:120524653 CCDC64 -0.00052 8.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23164681 chr6:30227373 HLA-L 0.0024 8.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg20012601 chr4:141013584 MAML3 -0.00062 8.6E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21697198 chr11:129246394 BARX2 -0.00024 8.6E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14096415 chr2:235891856 SH3BP4 -0.00065 8.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26333564 chr14:105147781 - -0.00049 8.7E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg17512133 chr4:77973813 CCNI -0.0011 8.8E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13749063 chr20:60809561 - -0.00056 9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25725418 chr10:75639879 - -0.00077 9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg15764655 chr22:50914909 SBF1 -0.001 9.1E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01341643 chr17:76472354 DNAH17 -0.0015 9.2E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13715502 chr9:112211847 PTPN3 -0.0014 9.3E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05556535 chr6:114544140 - 0.00073 9.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18248869 chr15:28051072 OCA2 0.0012 9.4E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10123952 chr3:100791384 - 0.00075 9.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12615951 chr14:31677607 HECTD1 -9.6e-05 9.5E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00449021 chr1:234658467 - -0.00072 9.8E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06359931 chr1:25893405 LDLRAP1 -0.00062 9.8E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12647920 chr12:109144744 - -0.0015 9.9E-06
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10400362 chr1:4832549 AJAP1 0.00039 1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11376339 chr2:42570056 - -0.00083 1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01297180 chr11:16625289 - -0.00083 1.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10243939 chr7:96654788 DLX5 -0.00034 1.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13333185 chr15:56035459 PRTG -8.3e-05 1.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13656831 chr4:682184 MFSD7 -0.00088 1.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01015899 chr12:120663812 PXN -0.00065 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02600679 chr16:88832700 FAM38A -0.0016 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04244307 chr7:1847567 - 0.0015 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06094337 chr12:106751663 POLR3B -5.2e-05 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08680528 chr8:98083371 PGCP 0.00075 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11547950 chr5:77652471 - 0.00079 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12623302 chr6:28058802 ZSCAN12L1 0.00064 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg17910478 chr1:228581288 - -0.0013 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg19011826 chr3:193851757 - -0.00073 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22460123 chr12:52638294 KRT7 -0.00069 1.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01192869 chr4:24470951 - 0.0011 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02017323 chr16:30411471 - -0.00018 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03215050 chr11:130668461 - 0.0014 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04389704 chr16:30953496 FBXL19 -0.00046 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05245650 chr2:37900205 CDC42EP3 -0.00068 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06399881 chr4:69111391 TMPRSS11B 0.0013 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08492883 chr5:35946266 - 0.0012 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg09577425 chr6:163731978 PACRG;LOC285796 0.0015 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10201328 chr1:52082165 OSBPL9 0.0003 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11466837 chr11:120009682 TRIM29 -0.0006 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14076417 chr5:43412601 CCL28 -0.0014 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg15319522 chr14:76819459 - -0.00085 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24497361 chr11:3858493 RHOG -0.00071 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25960769 chr6:41169811 TREML2 -0.0013 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26224354 chr7:1096374 C7orf50;GPR146 -0.0016 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26684673 chr11:122543428 UBASH3B -0.00094 1.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00605063 chr7:27289128 - 0.0011 1.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00713567 chr8:143545949 BAI1 0.0011 1.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00812557 chr4:38073835 TBC1D1 -0.00077 1.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Every record below shares the same study fields: Author Fernandez-Jimenez N; PMID NA; Trait DNA methylation; Exposure maternal pre-pregnancy body mass index; Tissue Placenta; Analysis inverse variance-weighted fixed effects meta-analysis adjusted by cellular heterogeneity; N 2631. Per-probe results:

CpG | Location | Gene | Beta | P
cg02643834 | chr11:130060371 | ST14 | -0.00053 | 1.4E-05
cg07181565 | chr2:172953482 | DLX1 | 0.0014 | 1.4E-05
cg11973682 | chr4:5890633 | CRMP1 | -0.00044 | 1.4E-05
cg15228694 | chr11:7692131 | CYB5R2 | -0.00047 | 1.4E-05
cg16360868 | chr1:4832593 | AJAP1 | 0.00078 | 1.4E-05
cg16693012 | chr1:68283821 | GNG12 | -0.0015 | 1.4E-05
cg18051353 | chr8:68251877 | ARFGEF1 | -0.00014 | 1.4E-05
cg20433531 | chr10:13141779 | OPTN | -7.7e-05 | 1.4E-05
cg26396357 | chr1:44324656 | ST3GAL3 | -0.00064 | 1.4E-05
cg00259019 | chr2:236611999 | AGAP1 | -0.0011 | 1.5E-05
cg09559189 | chr8:25898412 | EBF2 | 0.0014 | 1.5E-05
cg19176696 | chr17:79038153 | BAIAP2 | -0.00078 | 1.5E-05
cg20603222 | chr7:1096387 | C7orf50;GPR146 | -0.0018 | 1.5E-05
cg02114084 | chr14:75894209 | JDP2 | -0.00097 | 1.6E-05
cg13448828 | chr2:111826490 | ACOXL | -0.00064 | 1.6E-05
cg14801864 | chr20:17540975 | BFSP1 | -0.00086 | 1.6E-05
cg16656979 | chr15:37403242 | - | 0.0017 | 1.6E-05
cg17412886 | chr5:50678913 | ISL1 | -0.00038 | 1.6E-05
cg17942618 | chr1:32409867 | - | -0.00064 | 1.6E-05
cg22949077 | chr3:107601227 | LOC285205 | 0.00029 | 1.6E-05
cg24546723 | chr17:48048615 | DLX4 | -0.0013 | 1.6E-05
cg27555761 | chr2:223721806 | - | -0.00075 | 1.6E-05
cg02278335 | chr15:89911976 | - | 0.0011 | 1.7E-05
cg04256697 | chr12:120688557 | PXN | -0.0018 | 1.7E-05
cg09045105 | chr1:149871945 | BOLA1 | 0.00047 | 1.7E-05
cg09844976 | chr5:148188454 | - | -0.0011 | 1.7E-05
cg11478273 | chr8:128806682 | PVT1 | -0.00024 | 1.7E-05
cg13398658 | chr3:44981371 | ZDHHC3 | -0.00066 | 1.7E-05
cg20776385 | chr15:40613333 | - | -0.00011 | 1.7E-05
cg15952808 | chr5:1556929 | - | 0.0017 | 1.8E-05
cg26226650 | chr3:50276265 | GNAI2 | -0.0011 | 1.8E-05
cg15933451 | chr2:98329830 | ZAP70 | -0.00059 | 1.9E-05
cg21387604 | chr1:218518468 | TGFB2 | -0.00017 | 1.9E-05
cg22453113 | chr17:42287971 | UBTF | -0.00045 | 1.9E-05
cg23525051 | chr15:38852680 | RASGRP1 | 0.00061 | 1.9E-05
cg25095171 | chr14:93577304 | ITPK1 | -0.00054 | 1.9E-05
cg12659009 | chr5:135268802 | FBXL21 | 0.0018 | 2E-05
cg02273392 | chr7:100770414 | SERPINE1 | -0.00031 | 2.1E-05
cg03611029 | chr3:13115406 | IQSEC1 | 0.00095 | 2.1E-05
cg03748476 | chr16:3701727 | DNASE1 | -0.00062 | 2.1E-05
cg08734918 | chr6:94129481 | EPHA7 | -0.0012 | 2.1E-05
cg22830844 | chr3:44283343 | C3orf77 | 0.001 | 2.1E-05
cg24445507 | chr3:53044901 | SFMBT1 | 0.00015 | 2.1E-05
cg26687670 | chr1:241804939 | OPN3 | -0.00092 | 2.1E-05
cg00151251 | chr2:198173194 | - | -0.00078 | 2.2E-05
cg04191712 | chr11:67399029 | TBX10 | -0.00041 | 2.2E-05
cg07028533 | chr7:145813439 | CNTNAP2 | -0.00035 | 2.2E-05
cg11323113 | chr15:71342202 | LRRC49 | 0.0005 | 2.2E-05
cg12264949 | chr10:104001946 | PITX3 | -0.00017 | 2.2E-05
cg13026729 | chr6:159240774 | EZR | -0.00047 | 2.2E-05
cg17174400 | chr8:97712676 | PGCP | 0.0005 | 2.2E-05
cg17515024 | chr1:228644963 | HIST3H2BB | -0.00072 | 2.2E-05
cg18002814 | chr17:15406377 | FAM18B2 | -6.6e-05 | 2.2E-05
cg21573345 | chr4:15009110 | CPEB2 | 0.00079 | 2.2E-05
cg25974903 | chr1:65211057 | RAVER2 | -8.1e-05 | 2.2E-05
cg03294458 | chr17:40935998 | WNK4 | 0.0013 | 2.3E-05
cg06183338 | chr5:178004204 | COL23A1 | 0.0036 | 2.3E-05
cg14268714 | chr9:5186162 | INSL6 | 0.00097 | 2.3E-05
cg24062310 | chr19:41289994 | RAB4B | -0.0012 | 2.3E-05
cg08135379 | chr12:47474763 | AMIGO2 | -0.00054 | 2.4E-05
cg09667303 | chr2:144695257 | - | -0.00027 | 2.4E-05
cg10089357 | chr6:32086818 | ATF6B | -0.00023 | 2.4E-05
cg18383660 | chr1:32082219 | HCRTR1 | -0.00049 | 2.4E-05
cg20058744 | chr11:61257511 | C11orf66 | -0.00087 | 2.4E-05
cg20491488 | chr10:106075176 | ITPRIP | 0.00017 | 2.4E-05
cg21046160 | chr22:24105147 | C22orf15 | -0.00049 | 2.4E-05
cg22736323 | chr4:55095529 | PDGFRA | 0.00085 | 2.4E-05
cg01101364 | chr1:114544915 | - | 0.001 | 2.5E-05
cg01485548 | chr19:18284321 | IFI30 | -0.0015 | 2.5E-05
cg05963712 | chr12:49373746 | WNT1 | 0.00077 | 2.5E-05
cg11926610 | chr15:37403211 | - | 0.0012 | 2.5E-05
cg26814703 | chr13:31019673 | - | -0.00063 | 2.5E-05
cg00657529 | chr6:31698687 | CLIC1;DDAH2 | -0.0011 | 2.6E-05
cg04098270 | chr15:72489686 | GRAMD2 | -7.1e-05 | 2.6E-05
cg09012001 | chr1:19250279 | IFFO2 | -0.00085 | 2.6E-05
cg13850019 | chr1:2245895 | - | -0.00019 | 2.6E-05
cg14851114 | chr7:27704471 | - | -0.0014 | 2.6E-05
cg03493146 | chr7:69062082 | - | -0.00088 | 2.7E-05
cg19260189 | chr12:106995001 | RFX4 | -0.00097 | 2.7E-05
cg20647118 | chr13:92051786 | GPC5 | 0.00083 | 2.7E-05
cg00571434 | chr6:28550427 | SCAND3 | 0.00021 | 2.8E-05
cg01106419 | chr4:100083121 | PCNAP1 | -0.00071 | 2.8E-05
cg17276021 | chr1:16084445 | FBLIM1 | -0.00086 | 2.8E-05
cg24703339 | chr1:12600744 | - | -0.0006 | 2.8E-05
cg26215727 | chr12:6485537 | SCNN1A | -0.00076 | 2.8E-05
cg26471674 | chr12:115132905 | - | -0.00075 | 2.8E-05
cg26824516 | chr15:101419293 | ALDH1A3 | -0.00036 | 2.8E-05
cg27364319 | chr12:53626837 | RARG | -0.0005 | 2.8E-05
cg27586378 | chr19:49063066 | SULT2B1 | -0.00056 | 2.8E-05
cg04233620 | chr21:46629219 | ADARB1 | 0.0013 | 2.9E-05
cg09221932 | chr1:87597830 | LOC339524 | -0.00048 | 2.9E-05
cg10230591 | chr13:44818496 | - | -0.00076 | 2.9E-05
cg12208612 | chr3:169490789 | MYNN | -4.9e-05 | 2.9E-05
cg18350895 | chr14:30397419 | PRKD1 | -0.00076 | 2.9E-05
cg21544633 | chr11:12494428 | PARVA | -0.00067 | 2.9E-05
cg26554042 | chr6:64572840 | EYS | 0.00034 | 2.9E-05
cg01153342 | chr14:74254319 | C14orf43 | -0.00045 | 3E-05
cg04072910 | chr3:129290078 | PLXND1 | -0.00093 | 3E-05
cg05375680 | chr17:53521504 | - | -0.0013 | 3E-05
cg07590544 | chr13:45992945 | SLC25A30 | 0.00051 | 3E-05
cg09767598 | chr1:220960017 | MOSC1 | -0.00092 | 3E-05
cg12060786 | chr6:31803718 | C6orf48;SNORD52 | -8.5e-05 | 3E-05
cg14834938 | chr5:50678919 | ISL1 | -0.00033 | 3E-05
cg07742396 | chr1:27853060 | - | -0.00074 | 3.1E-05
cg08275454 | chr19:10732729 | SLC44A2 | -0.00077 | 3.1E-05
cg11248896 | chr2:177003747 | - | 0.00097 | 3.1E-05
cg23514211 | chr17:79304188 | TMEM105 | -0.00053 | 3.1E-05
cg13613748 | chr18:61090281 | VPS4B | -0.00032 | 3.2E-05
cg15071166 | chr17:3771325 | CAMKK1 | 0.0021 | 3.2E-05
cg17605847 | chr2:27356922 | PREB | -3.8e-05 | 3.2E-05
cg04417773 | chr10:80909901 | ZMIZ1 | -0.00088 | 3.3E-05
cg15068552 | chr7:130130203 | MESTIT1;MEST | -0.0012 | 3.3E-05
cg22627800 | chr1:204333036 | - | -0.00028 | 3.3E-05
cg26176103 | chr7:30407021 | ZNRF2 | 0.00064 | 3.3E-05
cg06501716 | chr22:19436948 | C22orf39 | -0.00063 | 3.4E-05
cg14746813 | chr11:57427452 | CLP1 | 0.00024 | 3.4E-05
cg17690263 | chr7:150690969 | NOS3 | -0.00061 | 3.4E-05
cg19879491 | chr3:42803396 | CCDC13 | -0.00088 | 3.4E-05
cg11388320 | chr4:81119299 | PRDM8 | -0.0011 | 3.5E-05
cg14485083 | chr11:113349500 | - | 0.0015 | 3.5E-05
cg16519587 | chr6:26614649 | - | 0.0027 | 3.5E-05
cg17003212 | chr10:97054967 | - | -8.7e-05 | 3.5E-05
cg20201971 | chr2:145284259 | - | 0.0017 | 3.5E-05
cg08905114 | chr9:95897015 | NINJ1 | -0.00065 | 3.6E-05
cg11943820 | chr10:106094061 | ITPRIP | -0.00077 | 3.6E-05
cg19144019 | chr17:26875307 | UNC119 | -0.00051 | 3.6E-05
cg20677901 | chr1:3568210 | TP73 | 0.0016 | 3.6E-05
cg21881330 | chr14:92938068 | SLC24A4 | -0.00023 | 3.6E-05
cg25512381 | chr3:195342063 | - | -0.00076 | 3.6E-05
cg20397034 | chr7:130794584 | MKLN1;FLJ43663 | -0.00015 | 3.7E-05
cg20833182 | chr17:7741012 | - | -0.00067 | 3.7E-05
cg01932691 | chr14:25045625 | CTSG | -0.0013 | 3.8E-05
cg02891579 | chr10:102823812 | KAZALD1 | -0.0005 | 3.8E-05
cg05541460 | chr22:39850774 | - | -0.0012 | 3.8E-05
cg07796823 | chr5:45696928 | HCN1 | 0.0017 | 3.8E-05
cg25357830 | chr4:142353093 | - | 0.0014 | 3.8E-05
cg02613685 | chr15:49174654 | SHC4 | 0.00037 | 3.9E-05
cg05487589 | chr8:25896196 | EBF2 | 0.0015 | 3.9E-05
cg05984115 | chr10:54631212 | - | 0.0013 | 3.9E-05
cg07813249 | chr4:46392253 | GABRA2 | -0.0019 | 3.9E-05
cg11663780 | chr19:1001892 | GRIN3B | -6.7e-05 | 3.9E-05
cg17339956 | chr3:44283355 | C3orf77 | 0.001 | 3.9E-05
cg19234171 | chr3:128226071 | LOC90246 | -0.00082 | 3.9E-05
cg20596543 | chr8:103794922 | - | -0.00087 | 3.9E-05
cg26682048 | chr8:97358327 | - | 0.0011 | 3.9E-05
cg03371306 | chr4:153456039 | FBXW7 | -6.9e-05 | 4E-05
cg07851948 | chr11:56085745 | OR8K3 | 0.0024 | 4E-05
cg26363579 | chr8:1107269 | - | 0.00074 | 4E-05
cg00390484 | chr22:20019695 | C22orf25;MIR185 | -0.00089 | 4.1E-05
cg06720017 | chr17:76967629 | LGALS3BP | -0.00076 | 4.1E-05
cg07251128 | chr11:67239107 | - | 0.00064 | 4.1E-05
cg19812283 | chr5:173021074 | - | 0.0012 | 4.1E-05
cg20582388 | chr8:119088760 | EXT1 | -0.0012 | 4.1E-05
cg26891329 | chr6:43597837 | MAD2L1BP;GTPBP2 | -4.6e-05 | 4.1E-05
cg04193065 | chr15:31528995 | - | -0.00078 | 4.2E-05
cg05478628 | chr14:21029431 | RNASE9 | 0.00049 | 4.2E-05
cg16545079 | chr17:8055888 | PER1 | -0.00022 | 4.2E-05
cg23220637 | chr6:32976805 | HLA-DOA | 0.0017 | 4.3E-05
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00084604 chr6:40929496 - 0.0003 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00292662 chr22:38071168 LGALS1 -0.00029 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03518417 chr1:6483657 ESPN -0.00021 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04818570 chr16:18992277 - 0.0007 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06457317 chr11:20185267 - 0.00078 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg07241908 chr8:56284622 XKR4 0.00084 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10951619 chr1:190447232 FAM5C -0.0013 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg20443635 chr20:35200590 TGIF2 -0.00052 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25317315 chr1:144995101 PDE4DIP -0.00052 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26656452 chr10:115313165 HABP2 -0.0006 4.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06727067 chr5:92582155 - 0.0016 4.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10823157 chr18:76829834 ATP9B 5.8e-05 4.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02247863 chr22:50983415 - -0.00072 4.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08363415 chr2:54345792 ACYP2 0.00072 4.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg15262516 chr13:110958418 COL4A1;COL4A2 -0.00073 4.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00222125 chr13:53226144 SUGT1 0.00046 4.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04970352 chr11:44327399 ALX4 0.002 4.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06018699 chr3:133648441 C3orf36 0.0012 4.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01533108 chr6:27101662 HIST1H2BJ -0.00048 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03985136 chr6:150262734 ULBP2 -0.0001 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04520242 chr6:46869727 GPR116 0.00054 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06239355 chr5:32714010 NPR3 0.0012 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg07067982 chr11:57194370 SLC43A3 0.0016 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg09766510 chr16:87174481 - 0.0011 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11645318 chr7:37491346 - 0.00015 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18545991 chr15:45740677 - -0.00056 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21159993 chr5:139554405 C5orf32 -0.00048 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27046335 chr5:35198290 PRLR -0.0011 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27383876 chr11:103833427 PDGFD 0.0018 4.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03327619 chr2:159810515 - -0.00075 4.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05618595 chr7:7874409 - -0.00067 4.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11696165 chr12:120663824 PXN -0.00068 4.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg15856454 chr3:50298001 - -0.00068 4.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18302606 chr19:39904402 PLEKHG2 -0.00016 4.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23402144 chr17:9099485 NTN1 -0.00055 4.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04304450 chr22:43525431 BIK -0.00049 5.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04496824 chr1:38276835 MTF1 -0.00064 5.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08027745 chr6:36561726 SFRS3 -3.8e-05 5.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg15120754 chr2:145275322 ZEB2 -0.00096 5.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24563570 chr10:60936801 PHYHIPL -0.00012 5.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00403483 chr8:79470784 PKIA 0.00051 5.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03612413 chr21:38066918 - 0.00098 5.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08402058 chr20:36148961 BLCAP;NNAT -0.00076 5.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg09853822 chr17:4712456 PLD2 -0.00078 5.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10307152 chr1:12640678 DHRS3 -0.0011 5.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00903308 chr7:27177068 - 0.002 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01663008 chr2:134357740 - 0.0012 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03055021 chr1:15664770 FHAD1 -0.00059 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08200293 chr11:569357 MIR210 -0.00024 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13702846 chr3:42888883 CCBP2 -0.00068 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg16357582 chr1:201476557 CSRP1 -0.00055 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18343437 chr8:142528415 - 0.0015 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg20764887 chr5:178004012 COL23A1 0.0034 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22119665 chr22:23745131 ZDHHC8P -0.0007 5.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01838728 chr15:36131223 - 0.0014 5.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg07847452 chr16:10695161 - -0.0007 5.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21107889 chr18:34802732 KIAA1328 0.00022 5.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14444287 chr13:112755863 - -0.0018 5.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg17496659 chr1:3568245 TP73 0.0014 5.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02658668 chr5:121525513 - 0.00051 5.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg03400060 chr5:78365801 BHMT2;DMGDH 0.00095 5.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg07984358 chr10:24704829 KIAA1217 -0.00048 5.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13166171 chr22:45133951 PRR5-ARHGAP8 -0.00096 5.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13677149 chr7:27284789 EVX1 0.001 5.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg16988958 chr6:7987739 MGC26597 0.001 5.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04723723 chr1:67966270 - -4.7e-05 5.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12413918 chr17:37855819 ERBB2 -0.00062 5.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg18446916 chr5:150038198 SYNPO -0.00068 5.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26458452 chr7:140341135 - -0.00071 5.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26476925 chr19:45245446 - -0.00066 5.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg07125274 chr5:1299528 - 0.00061 5.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11025793 chr19:13262015 IER2;STX10 -0.00087 5.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23087306 chr10:119294055 EMX2OS 0.0011 5.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24189904 chr19:9473781 ZNF177 0.0011 5.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg02915974 chr1:231453756 - 0.00022 6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11693364 chr19:11998457 ZNF69 -0.00039 6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg12599641 chr7:150807470 AGAP3 -0.00039 6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg20152891 chr19:49944506 SLC17A7 -0.00064 6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26184501 chr17:48139723 ITGA3 -0.00048 6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27127645 chr14:55220956 SAMD4A 9.5e-05 6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00740389 chr12:53590448 ITGB7 0.00077 6.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08490768 chr2:74730047 LBX2;LOC151534 -0.00019 6.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg20321270 chr4:23155692 - 0.0028 6.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24735129 chr11:70957520 - -0.00085 6.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04631202 chr1:25942460 MAN1C1 -0.00019 6.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06392241 chr12:93771346 NUDT4;NUDT4P1 -0.00063 6.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24858417 chr7:2057703 MAD1L1 -5.8e-05 6.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25446076 chr21:38083149 SIM2 0.0028 6.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25778027 chr10:74856981 P4HA1 -5e-05 6.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05276066 chr6:22569604 HDGFL1 0.0014 6.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05897465 chr6:106583161 - -0.00066 6.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14021321 chr9:137594239 COL5A1 0.00089 6.3E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14511946 chr9:23850969 - -0.0018 6.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21229979 chr6:163746175 LOC285796 -0.00011 6.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg21386545 chr1:4832865 AJAP1 0.00014 6.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22175303 chr3:138327450 FAIM -0.00028 6.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg24270542 chr13:24844846 SPATA13 -0.00044 6.4E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg04586237 chr3:44283470 C3orf77 0.0009 6.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg11941060 chr3:133502564 - 0.00068 6.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg19768356 chr6:36547772 - -0.00063 6.5E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00751072 chr19:18529121 SSBP4 -0.00048 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg06803850 chr17:26926738 SPAG5 -0.00013 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08502360 chr6:30076813 TRIM31 0.001 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg19554235 chr20:43159912 PKIG -0.0012 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg22138597 chr11:82396872 - 0.00076 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg25652454 chr17:414740 VPS53 0.0016 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg27209983 chr11:124824125 CCDC15 -0.0014 6.6E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg01987702 chr2:74264707 - -0.0015 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05139761 chr1:4842529 AJAP1 0.0015 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg08268274 chr1:226792795 C1orf95 -0.00071 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg10594245 chr2:109558966 EDAR 0.0005 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg13630493 chr9:36190154 CLTA -0.00077 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg16710042 chr2:165760725 SLC38A11 -0.0012 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg17240987 chr20:44421576 WFDC3;DNTTIP1 0.001 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg23324953 chr8:145013728 PLEC1 -0.00031 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg26338030 chr14:105932930 MTA1 -8.3e-05 6.7E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg05166473 chr16:88103629 BANP 0.00014 6.8E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00240517 chr2:183457634 - 0.00022 6.9E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg14834477 chr3:51813178 IQCF6 0.00018 7.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg16535352 chr7:129945684 CPA4 -0.00036 7.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg17349632 chr7:131830544 PLXNA4 0.00076 7.1E-05
N methylation body mass index adjusted by cellular heterogeneity
Fernandez-Jimenez NA DNA maternal pre-pregnancy Placenta inverse variance-weighted fixed effects meta-analysis 2631 cg00484122 chr2:20579065 - -0.00058 7.2E-05
N methylation body mass index adjusted by cellular heterogeneity
Shared fields for all rows: Fernandez-Jimenez N, NA, DNA methylation, maternal pre-pregnancy body mass index, Placenta, inverse variance-weighted fixed effects meta-analysis adjusted by cellular heterogeneity, N = 2631.

CpG  Location  Gene(s)  Beta  P
cg13277040  chr20:62716332  OPRL1;C20orf201  0.00064  7.2E-05
cg10587741  chr22:38071170  LGALS1  -0.00044  7.3E-05
cg13755795  chr16:51185772  SALL1  -0.00022  7.3E-05
cg17231494  chr7:134633404  CALD1  -0.00099  7.3E-05
cg26532627  chr17:43339744  LOC100133991;C17orf46  -0.0003  7.3E-05
cg00503917  chr6:170041920  WDR27  0.00016  7.4E-05
cg01015008  chr6:31095845  PSORS1C1  0.0015  7.4E-05
cg08303370  chr2:99758219  C2orf15;TSGA10  -7.2e-05  7.4E-05
cg12800566  chr5:179765497  GFPT2  0.0008  7.4E-05
cg17214699  chr11:75173887  GDPD5  -0.00074  7.4E-05
cg20672044  chr16:72096498  HPR  0.00021  7.4E-05
cg24972080  chr11:65154083  FRMD8  -5.1e-05  7.4E-05
cg26054167  chr20:61951347  COL20A1  0.0014  7.4E-05
cg21310731  chr8:145618932  CPSF1  -4.7e-05  7.5E-05
cg01385708  chr16:4373570  -  -0.001  7.6E-05
cg02709068  chr7:42016844  GLI3  -0.00074  7.6E-05
cg04602060  chr11:69632655  FGF3  -6e-05  7.6E-05
cg06765314  chr7:117852592  -  0.0006  7.6E-05
cg14095761  chr9:35713238  TLN1  -0.00069  7.6E-05
cg16638920  chr5:87974012  LOC645323  0.001  7.6E-05
cg03326188  chr5:43412465  CCL28  -0.00063  7.7E-05
cg10815420  chr8:105599835  LRP12  0.0013  7.7E-05
cg15319126  chr8:94457064  -  0.0015  7.7E-05
cg17959448  chr11:33891799  LMO2  -0.00013  7.7E-05
cg20801476  chr7:27281465  EVX1  0.00088  7.7E-05
cg05308317  chr14:102053958  -  0.0013  7.8E-05
cg09915299  chr4:38666663  FLJ13197;KLF3  -8.3e-05  7.9E-05
cg12760319  chr2:172541622  -  0.00091  7.9E-05
cg15879316  chr22:46934089  CELSR1  -5.6e-05  7.9E-05
cg19256731  chr20:1874527  SIRPA  -0.0021  7.9E-05
cg19967800  chr7:133811828  LRGUK  0.0021  7.9E-05
cg22863118  chr7:136701166  CHRM2  0.00026  7.9E-05
cg05468346  chr11:124823826  CCDC15  -0.0017  8E-05
cg07226484  chr1:1972776  -  0.001  8E-05
cg10389032  chr2:135157316  MGAT5  0.00026  8E-05
cg12570716  chr10:3822379  KLF6  -0.00084  8E-05
cg15274870  chr11:71900970  FOLR1  -0.00056  8E-05
cg23968213  chr11:2481965  KCNQ1  0.0011  8E-05
cg20307896  chr12:114883063  -  0.0024  8.1E-05
cg22783308  chr14:75894315  JDP2  -0.00022  8.1E-05
cg05151360  chr15:37402723  -  0.0011  8.2E-05
cg15823502  chr6:41650768  -  -0.00072  8.2E-05
cg16047663  chr1:201975041  RNPEP  -0.0008  8.2E-05
cg20157095  chr11:67780637  ALDH3B1  -0.00089  8.2E-05
cg27420889  chr4:187422035  -  0.00074  8.2E-05
cg27479162  chr10:98450737  PIK3AP1  -0.0011  8.2E-05
cg00592411  chr22:19787329  GNB1L  -0.0013  8.3E-05
cg03987199  chr1:29189655  OPRD1  0.001  8.3E-05
cg13576290  chr2:29256737  FAM179A  -0.00082  8.3E-05
cg22307444  chr8:672057  ERICH1  -0.0012  8.3E-05
cg24118773  chr5:172287924  ERGIC1  -0.00058  8.3E-05
cg06362313  chr12:6645287  GAPDH  -0.00087  8.4E-05
cg10629682  chr2:27486061  SLC30A3  0.001  8.4E-05
cg15509286  chr3:19186531  -  0.001  8.4E-05
cg26485159  chr5:4511755  -  0.0016  8.4E-05
cg26797372  chr17:78955115  -  -0.00024  8.4E-05
cg00615473  chr21:32930423  TIAM1  -0.00024  8.5E-05
cg08641867  chr1:84395855  TTLL7  0.00067  8.5E-05
cg09153448  chr11:67347724  -  0.00075  8.5E-05
cg18443571  chr15:90547692  ZNF710  -0.00067  8.5E-05
cg19468290  chr11:19380814  NAV2  0.00063  8.5E-05
cg03110787  chr19:6217641  MLLT1  -0.00071  8.6E-05
cg04212979  chr17:63096635  -  -3.5e-05  8.6E-05
cg05038436  chr8:98036431  PGCP  0.00081  8.6E-05
cg05878887  chr3:99620389  C3orf26;FILIP1L;MIR548G  0.0026  8.6E-05
cg08244750  chr14:94494115  OTUB2  -0.00056  8.6E-05
cg12166662  chr5:92906851  FLJ42709  0.00094  8.6E-05
cg10030250  chr16:55540152  MMP2  0.00037  8.7E-05
cg14365070  chr17:48041570  -  -0.00058  8.7E-05
cg18234296  chr1:210407896  C1orf133;SERTAD4  -0.00069  8.7E-05
cg19734830  chr22:49139600  FAM19A5  -0.0015  8.7E-05
cg21814178  chr12:51720755  -  -0.00064  8.7E-05
cg25251635  chr7:100303170  POP7  -7.2e-05  8.7E-05
cg25826546  chr7:100770060  SERPINE1  -0.00016  8.7E-05
cg03260781  chr1:16346299  HSPB7  -0.00093  8.8E-05
cg14617302  chr9:95435864  -  0.00032  8.8E-05
cg04476427  chr5:60582883  -  0.00034  8.9E-05
cg09137382  chr11:130731461  -  0.0011  8.9E-05
cg24983248  chr18:42644213  SETBP1  0.0016  8.9E-05
cg26576047  chr7:131874478  PLXNA4  0.0013  8.9E-05
cg01385327  chr10:123356336  FGFR2  -7.5e-05  9.1E-05
cg02404507  chr6:42712100  -  0.0011  9.1E-05
cg08354351  chr6:96461709  -  0.0017  9.1E-05
cg21840976  chr13:79181285  -  0.0016  9.1E-05
cg24399028  chr1:1338100  MRPL20  0.00011  9.1E-05
cg00748589  chr12:11653486  -  0.0014  9.2E-05
cg10726868  chr11:12743980  TEAD1  -0.00061  9.2E-05
cg11628021  chr19:17491382  -  -0.0007  9.2E-05
cg15028756  chr12:7343000  PEX5  0.00091  9.2E-05
cg16561266  chr17:62777777  LOC146880  -0.00015  9.2E-05
cg27518047  chr12:68647590  IL22  0.00076  9.2E-05
cg07530599  chr17:74275137  QRICH2  -0.00081  9.3E-05
cg20076842  chr3:28617743  -  -0.00075  9.3E-05
cg05068866  chr3:66633374  -  0.001  9.4E-05
cg17388689  chr13:107140685  -  0.00021  9.5E-05
cg17486946  chr10:103533783  FGF8  -0.00095  9.5E-05
cg22873167  chr4:1019143  FGFRL1  0.00093  9.6E-05
cg00992145  chr20:36891623  -  0.00037  9.7E-05
cg13977235  chr19:33172072  -  -0.0013  9.7E-05
cg14813037  chr20:6193833  -  0.0018  9.7E-05
cg17574799  chr3:12525544  TSEN2  -0.00038  9.7E-05
cg19284658  chr13:43876277  ENOX1  0.00029  9.7E-05
cg22513237  chr1:229730411  TAF5L  0.00043  9.7E-05
cg22867063  chr11:7041549  ZNF214;NLRP14  -0.00012  9.7E-05
cg11012584  chr2:242549943  THAP4  -0.0017  9.8E-05
cg20679517  chr6:110966949  CDK19  0.00056  9.8E-05
ch.10.1700276F  chr10:80418378  -  -0.00012  9.8E-05
cg10375016  chr15:50282326  ATP8B4  0.0001  9.9E-05
cg21923525  chr18:9474143  RALBP1  -0.00074  9.9E-05
*This tab-delimited TSV file contains the full set of associations and variables, i.e. those in the downloadable catalog. | {"url":"https://ewascatalog.org/?study=fernandez-jimenez-n_maternal_pre-pregnancy_body_mass_index_inverse_variance-weighted_fixed_effects_meta-analysis_adjusted_by_cellular_heterogeneity","timestamp":"2024-11-11T00:01:03Z","content_type":"text/html","content_length":"652399","record_id":"<urn:uuid:6df98051-614b-462f-bd5b-74f87ff7e826>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00559.warc.gz"}
Give an example which is continuous everywhere but not differentiable at 3 points? | Socratic
$f(x) = |x - 1| \cdot |x - 2| \cdot |x - 3|$
Here is what a graph of this function would look like:
[Graph of f(x) = |x - 1|·|x - 2|·|x - 3| on roughly -0.1 < x < 4.7, showing corners at x = 1, 2, and 3]
Although the function is defined for all $x \in \mathbb{R}$, it is not differentiable at $x \in \{1, 2, 3\}$.
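A quick numerical sketch (my own illustration, not part of the original answer) confirms the corner at x = 1: the one-sided difference quotients approach different limits, so no derivative exists there.

```python
def f(x):
    return abs(x - 1) * abs(x - 2) * abs(x - 3)

h = 1e-6
right = (f(1 + h) - f(1)) / h  # one-sided slope from the right, close to +2
left = (f(1) - f(1 - h)) / h   # one-sided slope from the left, close to -2
```

Repeating the check at x = 2 and x = 3 shows the same mismatch of one-sided slopes.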
| {"url":"https://socratic.org/questions/give-an-example-which-is-continous-everywhere-but-not-differentiable-at-3-points","timestamp":"2024-11-06T14:02:52Z","content_type":"text/html","content_length":"33103","record_id":"<urn:uuid:27e1918f-f0c7-4aaf-89a2-aff9f91e2561>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00011.warc.gz"}
How to convert amperes to kilowatts

Amperes measure the strength of an electric current; watts measure power (electric, thermal, or mechanical). In electrical equipment the two are linked by well-defined formulas, but since they are units of different physical quantities, there is no direct conversion from amperes to kilowatts. You can, however, express one through the other if you also know the voltage and, for AC circuits, the power factor. Let's look at how current and power are related in networks of various kinds.

You will need

• a multimeter (tester);
• clamp meters (current clamps);
• an electrical-engineering reference book;
• a calculator.

1. With the multimeter, measure the voltage of the network to which the electrical device is connected.

2. Measure the current with the clamp meters.

3. If the network voltage is DC: multiply the current (amperes) by the network voltage (volts). The product is the power in watts; to convert to kilowatts, divide this number by 1000.

4. If the network is single-phase AC: multiply the network voltage by the current and by the cosine of the angle φ (the power factor). The product is the consumed active power in watts; divide it by 1000 to get kilowatts.

5. In the power triangle, the cosine of the angle between apparent and active power equals the ratio of active power to apparent power. The angle φ is the phase shift between voltage and current, which appears when the circuit contains inductance. Cos φ equals one for a purely resistive load (electric heaters, incandescent lamps) and is about 0.85 for a mixed load. The smaller the reactive component of the apparent power, the smaller the losses, which is why power factor is raised by every available means.

6. If the network is three-phase AC: multiply the voltage and current of one phase, then multiply the result by the power factor. The power of each of the other two phases is calculated the same way, and all three phase powers are added together. The sum is the power of the installation connected to the three-phase network. With a symmetric load on all three phases, the active power equals the product of phase current, phase voltage, and power factor, multiplied by three. | {"url":"https://mirrorinfo.online/interesting/how-to-transfer-amperes-to-kw","timestamp":"2024-11-09T03:21:31Z","content_type":"text/html","content_length":"28613","record_id":"<urn:uuid:ce0505dc-6a50-40a6-843f-eef78a3f7217>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00441.warc.gz"}
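As an illustrative sketch of the formulas above (the function name, argument names, and defaults are my own invention, not a standard library API), the DC, single-phase, and symmetric three-phase cases can be wrapped in a small helper. For a symmetric three-phase load, 3 · U_phase · I · cos φ is equivalent to √3 · U_line · I · cos φ.

```python
import math

def amps_to_kw(current_a, voltage_v, power_factor=1.0, phases=1):
    """Active power in kilowatts.

    phases=1 covers DC (use power_factor=1.0) and single-phase AC;
    phases=3 assumes a symmetric three-phase load, with voltage_v
    given as the line (phase-to-phase) voltage.
    """
    if phases == 1:
        watts = voltage_v * current_a * power_factor
    elif phases == 3:
        # P = sqrt(3) * U_line * I * cos(phi), equivalent to
        # 3 * U_phase * I * cos(phi) for a symmetric load
        watts = math.sqrt(3) * voltage_v * current_a * power_factor
    else:
        raise ValueError("phases must be 1 or 3")
    return watts / 1000.0
```

For example, amps_to_kw(10, 230) gives 2.3 kW for a 10 A resistive load on a 230 V network.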
Array Archives - Page 2 of 48 - TutorialCup
Jump Game IV LeetCode Solution
Problem Statement: Jump Game IV LeetCode Solution says – Given an array of integers arr, you are initially positioned at the first index of the array. In one step you can jump from index i to index i + 1, where i + 1 < arr.length, or to index i – 1, where i – 1 >= …
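The excerpt above is cut off before the third jump rule; assuming the standard LeetCode formulation (you may also jump to any index j with arr[j] == arr[i]), a breadth-first search sketch could look like this (function name is mine):

```python
from collections import defaultdict, deque

def min_jumps(arr):
    # Group indices by value so equal-value jumps can be expanded in O(1)
    # amortized; clearing each group after its first use keeps BFS linear.
    n = len(arr)
    same = defaultdict(list)
    for i, v in enumerate(arr):
        same[v].append(i)
    seen = {0}
    q = deque([(0, 0)])  # (index, steps)
    while q:
        i, steps = q.popleft()
        if i == n - 1:
            return steps
        nbrs = same[arr[i]] + [i - 1, i + 1]
        same[arr[i]] = []  # visit each value group only once
        for j in nbrs:
            if 0 <= j < n and j not in seen:
                seen.add(j)
                q.append((j, steps + 1))
    return -1
```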
Divide Chocolate LeetCode Solution
Problem Statement The Divide Chocolate LeetCode solution says the chocolate bar is represented by a list of non-zero integers. The sum of a contiguous subarray stands for the sweetness of the
chocolate piece represented by this subarray. Here the task is to find the maximum possible minimum sum of all …
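The statement above is truncated; assuming the standard formulation (cut the bar into k + 1 contiguous pieces, you keep the piece with the minimum total sweetness, and you want to maximize that minimum), a binary-search-on-the-answer sketch could look like this (function name is my own):

```python
def max_min_sweetness(sweetness, k):
    # Binary search for the largest value v such that the bar can be cut
    # into at least k + 1 contiguous pieces, each with sum >= v.
    lo, hi = min(sweetness), sum(sweetness)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        pieces, run = 0, 0
        for s in sweetness:
            run += s
            if run >= mid:  # greedily close a piece as soon as it reaches mid
                pieces += 1
                run = 0
        if pieces >= k + 1:
            lo = mid
        else:
            hi = mid - 1
    return lo
```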
Shifting Letters LeetCode Solution
Problem Statement Shifting Letters says that we have given a string s and an array shifts. Now for each shifts[i] = x, we want to shift the first i + 1 letters of s, x times. We have to return the
final string after all shifts are applied. Example 1: Input: s = “abc”, shifts …
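Because letter i receives shifts[i] + shifts[i+1] + … shifts in total, a single right-to-left suffix sum avoids shifting repeatedly. A sketch (the example's shifts array is cut off above; [3, 5, 9] is assumed here, and the function name is mine):

```python
def shifting_letters(s, shifts):
    # Accumulate a suffix sum of shifts, wrapping letters modulo 26.
    total = 0
    out = list(s)
    for i in range(len(s) - 1, -1, -1):
        total = (total + shifts[i]) % 26
        out[i] = chr((ord(s[i]) - ord('a') + total) % 26 + ord('a'))
    return ''.join(out)
```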
Top K Frequent Elements LeetCode Solution
Problem Statement Top K Frequent Elements LeetCode Solution Says that – Given an integer array nums and an integer k, return the k most frequent elements. You may return the answer in any order.
Example 1: Input: nums = [1,1,1,2,2,3], k = 2 Output: [1,2] Example 2: Input: nums = [1], k = 1 Output: [1] …
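A minimal sketch using the standard library (not necessarily the site's intended solution; a bucket sort would bring this to O(n)):

```python
from collections import Counter

def top_k_frequent(nums, k):
    # Counter.most_common returns (value, count) pairs sorted by frequency.
    return [value for value, _ in Counter(nums).most_common(k)]
```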
Find the Winner of the Circular Game LeetCode Solution
Problem Statement Find the Winner of the Circular Game LeetCode Solution – There are n friends that are playing a game. The friends are sitting in a circle and are numbered from 1 to n in clockwise
order. More formally, moving clockwise from the ith friend brings you to the …
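The game rules are truncated in the excerpt; assuming the standard version (count k friends clockwise, the friend you land on leaves the circle, repeat until one remains), the winner follows the Josephus recurrence. A sketch (function name is mine):

```python
def find_the_winner(n, k):
    # Josephus recurrence on 0-based positions:
    # f(1) = 0, f(m) = (f(m-1) + k) % m; convert back to 1-based at the end.
    pos = 0
    for m in range(2, n + 1):
        pos = (pos + k) % m
    return pos + 1
```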
Problem Statement
Maximum Population Year LeetCode Solution says that – You are given a 2D integer array logs where each logs[i] = [birth[i], death[i]] indicates the birth and death years of the ith person.
The population of some year x is the number of people alive during that year. The ith person is counted in the year x's population if x is in the inclusive range [birth[i], death[i] - 1]. Note that the person is not counted in the year that they die.
Return the maximum population year.
Example 1:
Input: logs = [[1993,1999],[2000,2010]]
Output: 1993
Explanation: The maximum population is 1, and 1993 is the earliest year with this population.
Example 2:
Input: logs = [[1950,1961],[1960,1971],[1970,1981]]
Output: 1960
Explanation: The maximum population is 2, reached in both 1960 and 1970. So the maximum population year is 1960.
• 1 <= logs.length <= 100
• 1950 <= birth[i] < death[i] <= 2050
• To find the maximum population year, we count the total population alive in each year by sweeping over every interval of the given array, find the year with the maximum count, and return it. If several years tie for the maximum, we simply return the earliest one.
Approach for Maximum Population Year LeetCode Solution
– First, we will create an array of size 101, because the years in the constraints lie in the range 1950 to 2050.

– After that, we will run a loop from 0 to the length of logs, increasing the count of the array at index (logs[i][0] - 1950) by 1 and decreasing the count at index (logs[i][1] - 1950) by 1.

– Then we will run a loop from 1 to the length of the array, keeping a variable prev: we update each element by arr[i] += prev and then set prev = arr[i], which turns the array into a running prefix sum of populations.

– At last, we will run a loop to find the maximum value in the array and return that index plus 1950. This gives the maximum population year.
Maximum Population Year Python Leetcode Solution:
from typing import List

class Solution:
    def maximumPopulation(self, logs: List[List[int]]) -> int:
        arr = [0] * 101
        for i in range(len(logs)):
            arr[logs[i][0] - 1950] += 1
            arr[logs[i][1] - 1950] -= 1
        previous = arr[0]
        for i in range(1, 101):
            arr[i] += previous
            previous = arr[i]
        maxi = 0
        ind = 0
        for i in range(len(arr)):
            if arr[i] > maxi:
                maxi = arr[i]
                ind = i + 1950
        return ind
Maximum Population Year Java Leetcode Solution:
class Solution {
    public int maximumPopulation(int[][] logs) {
        int[] arr = new int[101];
        for (int i = 0; i < logs.length; i++) {
            arr[logs[i][0] - 1950] += 1;
            arr[logs[i][1] - 1950] -= 1;
        }
        int prev = arr[0];
        for (int i = 1; i < arr.length; i++) {
            arr[i] += prev;
            prev = arr[i];
        }
        int ind = 0;
        int maxi = 0;
        for (int i = 0; i < arr.length; i++) {
            if (maxi < arr[i]) {
                maxi = arr[i];
                ind = i + 1950;
            }
        }
        return ind;
    }
}
Complexity Analysis of Maximum Population Year Leetcode Solution:
Time Complexity
The Time Complexity of the above solution is O(n).
Space Complexity
The Space Complexity of the above solution is O(1).
Since the helper array always has a fixed length of 101, its space can be treated as constant.
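The same difference-array-plus-prefix-sum idea can be exercised as a standalone sketch (names are my own; the class-based solutions above expect LeetCode's driver):

```python
def maximum_population(logs):
    # Difference array over the years 1950..2050, then a prefix-sum sweep.
    diff = [0] * 101
    for birth, death in logs:
        diff[birth - 1950] += 1
        diff[death - 1950] -= 1
    best_year, best_count, running = 1950, 0, 0
    for i, d in enumerate(diff):
        running += d
        if running > best_count:  # strict '>' keeps the earliest tied year
            best_count = running
            best_year = 1950 + i
    return best_year
```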
Minimum Swaps to Group All 1’s Together Leetcode Solution
Problem Statement Minimum Swaps to Group All 1’s Together Leetcode Solution – says that Given a binary array data, return the minimum number of swaps required to group all 1’s present in the array
together in any place in the array. Input: data = [1,0,1,0,1] Output: 1 Explanation: There are 3 ways to group all …
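Assuming the non-circular version shown in the excerpt, the usual approach is a sliding window whose width equals the total number of 1s: the window that already holds the most 1s needs the fewest swaps. A sketch (function name is mine):

```python
def min_swaps(data):
    # Some window of width = (number of 1s) must end up all-1s; pick the
    # window already containing the most 1s and swap the rest in.
    ones = sum(data)
    window = sum(data[:ones])
    best = window
    for i in range(ones, len(data)):
        window += data[i] - data[i - ones]
        best = max(best, window)
    return ones - best
```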
Maximum Population Year LeetCode Solution
Problem Statement: Maximum Population Year Leetcode Solution says that – You are given a 2D integer array logs where each logs[i] = [birthi, deathi] indicates the birth and death years of
the ith person. The population of some year x is the number of people alive during that year? The ith person is counted in the year x‘s population if x is …
Best Meeting Point LeetCode Solution
Problem Statement: Best Meeting Point Leetcode Solution says – Given a m x n binary grid grid where each 1 marks the home of one friend, return the minimal total travel distance. The total travel
distance is the sum of the distances between the houses of the friends and the meeting point. The distance is calculated using Manhattan Distance, …
Minimum Path Sum Leetcode Solution
Problem Statement The Minimum Path Sum LeetCode Solution – “Minimum Path Sum” says that given a n x m grid consisting of non-negative integers and we need to find a path from top-left to bottom
right, which minimizes the sum of all numbers along the path. We can only move … | {"url":"https://tutorialcup.com/tag/array/page/2","timestamp":"2024-11-08T04:49:04Z","content_type":"text/html","content_length":"112845","record_id":"<urn:uuid:597ce20a-8fae-4db4-a2e2-172b1be6d123>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00045.warc.gz"} |
How Does the Binary System Work?
Today’s Wonder of the Day was inspired by Nikhil. Nikhil Wonders, “How does a binary sequence translate to a character?” Thanks for WONDERing with us, Nikhil!
How often do you use a computer? If you think about all the different gadgets you use every day, you'll probably realize that you use more computers than you think. Beyond the laptop or desktop
computers you use at school or home, you might also use calculators, smartphones, tablets, music players, electronic readers, digital video recorders, video games, and all sorts of other devices.
In today's technology-filled world, it's hard to avoid using computers. In fact, we bet many of our Wonder Friends will one day work in jobs that require you to use computers all the time. Some of
you may even build computers or write code to create software, video games, and smartphone apps!
When you study basic computer programming, you learn early on that basically everything that goes into (input) or comes out of (output) a computer is comprised of a series of 0s and 1s. That's the
essence of digital data, and it's based upon the binary system.
When you learn math at school, you use a base-10 number system. That means your number system consists of 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When you add one to nine, you move the 1 one
spot to the left into the tens place and put a 0 in the ones place: 10.
The binary system, on the other hand, is a base-2 number system. That means it only uses two numbers: 0 and 1. When you add one to one, you move the 1 one spot to the left into the twos place and put
a 0 in the ones place: 10. So, in a base-10 system, 10 equals ten. In a base-2 system, 10 equals two.
In the base-10 system you're familiar with, the place values start with ones and move to tens, hundreds, and thousands as you move to the left. That's because the system is based upon powers of 10.
Likewise, in a base-2 system, the place values start with ones and move to twos, fours, and eights as you move to the left. That's because the base-2 system is based upon powers of two. Each binary
digit is known as a bit.
Don't worry if the binary system seems confusing right now. It's fairly easy to pick up once you work with it a while. It just seems confusing at first because all numbers are made up of only 0s and
1s. The familiar base-10 system is as easy as 1-2-3, while the base-2 binary system is as easy as 1-10-11.
You may WONDER why computers use the binary system. Computers and other electronic systems work faster and more efficiently using the binary system, because the system's use of only two numbers is
easy to duplicate with an on/off system.
Electricity is either on or off, so devices can use an on/off switch within electric circuits to process binary information easily. For example, off can equal 0 and on can equal 1.
Every letter, number, and symbol on a keyboard is represented by an eight-bit binary number. For example, the letter A is actually 01000001 as far as your computer is concerned!
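As a quick illustration in Python (assuming the standard ASCII/Unicode encoding, where 'A' is code point 65):

```python
# The letter 'A' is code point 65, which is 01000001
# when written as an eight-bit binary number.
code = ord('A')
print(code)                 # 65
print(format(code, '08b'))  # 01000001
```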
To help you develop a better understanding of the binary system and how it relates to the decimal system you're familiar with, here's how the decimal numbers 1-10 look in binary:
1 = 1
2 = 10
3 = 11
4 = 100
5 = 101
6 = 110
7 = 111
8 = 1000
9 = 1001
10 = 1010
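If you'd like to check this table yourself, a small Python sketch can print the binary form of each number and convert a binary string back to decimal:

```python
# Print the decimal numbers 1-10 alongside their binary forms.
for n in range(1, 11):
    print(n, '=', format(n, 'b'))

# Converting back: int() with base 2 reads a binary string as a number.
print(int('1010', 2))  # 10
```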
Standards: CCRA.L.3, CCRA.L.6, CCRA.R.1, CCRA.R.2, CCRA.R.4, CCRA.R.10, CCRA.SL.1
rand − Pseudo random number generation.
This module provides a random number generator. The module contains a number of algorithms. The uniform distribution algorithms use the scrambled Xorshift algorithms by Sebastiano Vigna. The normal
distribution algorithm uses the Ziggurat Method by Marsaglia and Tsang.
The following algorithms are provided:
exsplus: Xorshift116+, 58 bits precision and a period of 2^116-1
exs64: Xorshift64*, 64 bits precision and a period of 2^64-1
exs1024: Xorshift1024*, 64 bits precision and a period of 2^1024-1
The default algorithm is exsplus. If a specific algorithm is required, ensure to always use seed/1 to initialize the state.
Every time a random number is requested, a state is used to calculate it and a new state is produced. The state can either be implicit or be an explicit argument and return value.
The functions with implicit state use the process dictionary variable rand_seed to remember the current state.
If a process calls uniform/0 or uniform/1 without setting a seed first, seed/1 is called automatically with the default algorithm and creates a non-constant seed.
The functions with explicit state never use the process dictionary.
Simple use; creates and seeds the default algorithm with a non-constant seed if not already done:
R0 = rand:uniform(),
R1 = rand:uniform(),
Use a specified algorithm:
_ = rand:seed(exs1024),
R2 = rand:uniform(),
Use a specified algorithm with a constant seed:
_ = rand:seed(exs1024, {123, 123534, 345345}),
R3 = rand:uniform(),
Use the functional API with a non-constant seed:
S0 = rand:seed_s(exsplus),
{R4, S1} = rand:uniform_s(S0),
Create a standard normal deviate:
{SND0, S2} = rand:normal_s(S1),
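For readers more at home in Python, the implicit-versus-explicit-state distinction has a loose analogue in the standard library (this is an analogy, not the rand API): the module-level random functions keep hidden global state, much like rand's process-dictionary state, while a random.Random instance carries its own state, like the functional uniform_s/normal_s API.

```python
import random

# Implicit state: module-level functions share one hidden generator,
# roughly analogous to rand's process-dictionary state.
random.seed(12345)
a = random.uniform(0.0, 1.0)

# Explicit state: a dedicated Random instance carries its own state,
# analogous to threading state() values through uniform_s.
rng = random.Random(12345)
b = rng.uniform(0.0, 1.0)

# Same seed and same algorithm, so both produce the same first value.
print(a == b)  # True
```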
This random number generator is not cryptographically strong. If a strong cryptographic random number generator is needed, use one of the functions in the crypto module.
alg() = exs64 | exsplus | exs1024
state()
Algorithm-dependent state.
export_state()
Algorithm-dependent state that can be printed or saved to file.
export_seed() -> undefined | export_state()
Returns the random number state in an external format. To be used with seed/1.
export_seed_s(X1 :: state()) -> export_state()
Returns the random number generator state in an external format. To be used with seed/1.
normal() -> float()
Returns a standard normal deviate float (that is, the mean is 0 and the standard deviation is 1) and updates the state in the process dictionary.
normal_s(State0 :: state()) -> {float(), NewS :: state()}
Returns, for a specified state, a standard normal deviate float (that is, the mean is 0 and the standard deviation is 1) and a new state.
seed(AlgOrExpState :: alg() | export_state()) -> state()
Seeds random number generation with the specified algorithm and time-dependent data if AlgOrExpState is an algorithm.
Otherwise recreates the exported seed in the process dictionary, and returns the state. See also export_seed/0.
seed(Alg :: alg(), S0 :: {integer(), integer(), integer()}) -> state()
Seeds random number generation with the specified algorithm and integers in the process dictionary and returns the state.
seed_s(AlgOrExpState :: alg() | export_state()) -> state()
Seeds random number generation with the specified algorithm and time-dependent data if AlgOrExpState is an algorithm.
Otherwise recreates the exported seed and returns the state. See also export_seed/0.
seed_s(Alg :: alg(), S0 :: {integer(), integer(), integer()}) -> state()
Seeds random number generation with the specified algorithm and integers and returns the state.
uniform() -> X :: float()
Returns a random float uniformly distributed in the value range 0.0 < X < 1.0 and updates the state in the process dictionary.
uniform(N :: integer() >= 1) -> X :: integer() >= 1
Returns, for a specified integer N >= 1, a random integer uniformly distributed in the value range 1 <= X <= N and updates the state in the process dictionary.
uniform_s(State :: state()) -> {X :: float(), NewS :: state()}
Returns, for a specified state, random float uniformly distributed in the value range 0.0 < X < 1.0 and a new state.
uniform_s(N :: integer() >= 1, State :: state()) ->
{X :: integer() >= 1, NewS :: state()}
Returns, for a specified integer N >= 1 and a state, a random integer uniformly distributed in the value range 1 <= X <= N and a new state.
What is Data Science and Does it Matter?
It starts like a typical data science job interview - I summarise my resume and they describe their core products. They describe what their data looks like and we have a very interesting chat about
the structure of their data. But the interviewer keeps dancing around the topic of what my job will consist of, and the whole thing becomes vaguer and vaguer until I'm forced to ask 'this sounds great
but could you please explain what the job actually is?'
The answer is often some version of: 'Well, we're pretty sure we need a data scientist but could you explain what exactly a data scientist does?'
Nobody seems to know exactly what 'data science' is, let alone its almost synonym 'big data'. Maybe it's that most people are not accustomed to statisticians being interesting and it comes as a shock
that statistics is useful to their business. Perhaps a new term just makes it easier to deal with.
Statistics has certainly become more valuable as the amount of data collected by businesses has exploded. We now collect so much data that we can create valuable products from the data itself,
previously considered a mere by-product.
Is this really new? Reader's Digest was doing data analysis on millions of households in the 70s (but they needed a mainframe to do it)^[1]. They employed programmers and statisticians but no one we
would recognize as a data scientist. Instead they had 300 people who together did what a data scientist can achieve today on a single MacBook Pro.
But the statistical element of data science and big data is often not new. I read a story in the New York Times about the problems in New York caused by restaurants pouring used cooking oil into
sewers. Apparently the culprits were identified by 'big data'^[2]:
They dug up data from the Business Integrity Commission, an obscure city agency that among other tasks certifies that all local restaurants have a carting service to haul away their grease. With
a few quick calculations, comparing restaurants that did not have a carter with geo-spatial data on the sewers, the team was able to hand inspectors a list of statistically likely suspects.
Leaving aside the question of whether this is truly 'big data' - how different is this analysis from John Snow's analysis of Cholera cases in London in 1854? ^[3] ^[4]
The statistical analysis is not different. The difference today is that the data was just sitting around in a database waiting to be discovered. This requires very different skills to traipsing
around Victorian London looking for corpses.
It's now so easy to manipulate vast quantities of data that you don't need to employ a separate statistician, database guy and programmer. You hire someone who can code and do statistics too. This is
the data scientist.
But this has lead to some confusion about what constitutes a data scientist. Very often data scientists are recruited by non-scientists, who aren't clear on what skills they should be hiring for.
Hence the rise of the Hadoop engineer or SQL analyst hired as data scientist. Nothing wrong with these skills, but they don't make you a scientist.
Statistical know-how is essential. Large messy data sets with weird dependency structures and missing data will cause any data scientist pain and suffering. But at least those with statistical
training know they are in trouble; and know what to do about it.
It's now time for data science as a profession to take stock and ask itself 'what is the core skill set?'. Or further than this, 'is data science a new statistical speciality or even a nascent field
in its own right?'.
Come and join the debate at the Royal Statistical Society on the 11th of May.