What does a flat interest rate mean?

A flat interest rate is an interest rate calculated on the full original loan amount throughout the loan's tenure, without taking into account that the periodic repayments (which include both interest and principal) gradually reduce the amount owed. It is commonly described as a "pre-determined" credit charge. Flat interest rate does not mean fixed interest rate: a fixed rate simply stays the same over the term, while a flat rate is applied to the initial balance at every payment. More generally, "flat rate" can also mean a rate or charge that does not vary, being the same in all situations.

Lenders often quote different numbers that mean different things; some advertise interest rates without including additional fees. Understanding the terms used to describe interest rates, and how interest is calculated and compounded, is therefore essential when comparing loans.

How flat interest is calculated

When the quoted rate is a flat rate, the interest due is calculated as simple interest on the original amount of the loan, using a front-end add-on method:

Interest = loan principal × flat interest rate × loan tenor

Each payment then includes interest based on the initial loan balance, so the interest portion stays constant over the term. Two examples:

· An individual takes a $10,000 loan at 10% flat, payable in 5 equal installments. The interest charge is $10,000 × 10% × 5 = $5,000 for the entire term.
· A 4% flat monthly rate on N600,000 means an interest payment of N24,000 per month (4% of N600,000), or N288,000 in interest over a year.

Flat vs. declining (reducing) balance rates

With a declining (reducing) balance rate, interest for each period is charged only on the outstanding principal, so the interest due shrinks as the loan is paid down. A flat rate takes no account of this, which is why a loan at a flat rate costs considerably more than a loan at the same nominal rate on a reducing balance.

Flat rate loans will generally quote lower annual rates than APRs, but be careful: that does not mean they are better value. The most accurate way to compare loans is the effective interest rate (EIR), the rate that makes the present value of the repayments equal to the principal; it takes into account all costs over the term of the loan, including any set-up charges. Flat rates are often used in illustrations precisely because they appear low; converting a flat rate to its effective equivalent gives a substantially higher figure, often close to double.
Fundamental Units of SI System
1. Length
Length is defined as the distance between two points. The SI unit of length is the metre. One metre is the distance travelled by light through vacuum in 1/299,792,458 second.
In order to measure very large distances (the distances of astronomical objects), we use the following units.
· Light year
· Astronomical unit
· Parsec
Light year: It is the distance travelled by light in one year in vacuum and it is equal to 9.46 × 10^15m.
Astronomical unit (AU): It is the mean distance of the centre of the Sun from the centre of the earth. 1 AU = 1.496 × 10^11 m.
Parsec: Parsec is the unit of distance used to measure astronomical objects outside the solar system.
1 parsec = 3.26 light year.
To measure small distances such as distance between two atoms in a molecule, the size of the nucleus and the wavelength, we use submultiples of ten. These quantities are measured in Angstrom unit
(Table 4).
2. Mass
Mass is the quantity of matter contained in a body. The SI unit of mass is the kilogram. One kilogram is the mass of a particular international prototype cylinder made of platinum-iridium alloy, kept at the International Bureau of Weights and Measures at Sèvres, France.
The related units in submultiples of 10 (1/10) are gram and milligram and in multiples of 10 are quintal and metric tonne.
1 quintal = 100 kg
1 metric tonne = 1000 kg = 10 quintal
1 solar mass = 2 × 10^30 kg
Atomic mass unit (amu):
The masses of protons, neutrons and electrons can be expressed using the atomic mass unit.
1 amu = 1/12 of the mass of a carbon-12 atom.
3. Time
Time is a measure of the duration of events and the intervals between them. The SI unit of time is the second. One second is the time required for light to propagate 299,792,458 metres through vacuum.
It is also defined as the 1/86,400th part of a mean solar day. A larger unit for measuring time is the millennium: 1 millennium = 3.16 × 10^10 s.
4. Temperature
Temperature is the measure of hotness. The SI unit of temperature is the kelvin (K). One kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water (the temperature at which saturated water vapour, pure water and melting ice are in equilibrium). Zero kelvin (0 K) is commonly known as absolute zero. The other units for measuring temperature are degree Celsius and degree Fahrenheit (Table 5). To convert temperature from one scale to another we use

C/100 = (F − 32)/180 = (K − 273)/100
Example: Convert (a) 300 K into the Celsius scale, (b) 104 °F into the Celsius scale.
(a) Celsius = K − 273 = 300 − 273 = 27 °C
(b) Celsius = (F − 32) × 5/9 = (104 − 32) × 5/9 = 72 × 5/9 = 40 °C
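As a quick check of these conversions, here is an illustrative Python snippet (not part of the original text; note the text uses the rounded offset 273 rather than the exact 273.15):

```python
def f_to_c(f):
    """Fahrenheit to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

def k_to_c(k, offset=273):
    """Kelvin to Celsius; the text rounds 273.15 down to 273."""
    return k - offset

print(k_to_c(300))   # (a) 300 K -> 27 degrees Celsius
print(f_to_c(104))   # (b) 104 F -> 40.0 degrees Celsius
```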
Evaluation of Various Ejector Profiles on CO[2] Transcritical Refrigeration System Performance
Department of Applied Mechanics, Faculty of Mechanical Engineering, Technical University of Liberec, Studentská 1402/2, 46117 Liberec, Czech Republic
Author to whom correspondence should be addressed.
Submission received: 6 August 2022 / Revised: 15 August 2022 / Accepted: 19 August 2022 / Published: 23 August 2022
This study examines the potential impact of different ejector profiles on the CO[2] transcritical cooling system to highlight the contribution of the multi-ejector to the system performance improvement. The research compares the implementation of an ejector-boosted CO[2] refrigeration system against the second-generation layout at a motive flow temperature of 35 °C and a discharge pressure of 90 bar to account for the transcritical operation mode. The results revealed a significant energy saving, with the input power reduced by up to 8.77% when the ejector was activated. Furthermore, the multi-ejector block could recover up to 25.4% of the expansion work losses when both ejector cartridges VEJ1 + 2 were active. In addition, the behavior of the multi-ejector geometries and operating conditions greatly influences the system exergy destruction. The analysis shows a remarkable reduction in exergy destruction during the expansion process when the ejector is deployed in parallel with the HPV.
1. Introduction
In recent times, there has been a major global call to replace deleterious refrigerants with more eco-friendly alternatives. The use of carbon dioxide (R744) as a refrigerant goes back to Alexander Twining, who obtained a British patent recognizing it as fit for use as a refrigerant in 1850. Over time, it has become one of the most popular refrigerants; the Coca-Cola Company, for instance, is planning to discontinue the use of HFCs by adopting CO[2] technology as a leading solution. When R744 became known as a refrigerant, it was greeted by stiff skepticism and criticism in the relevant scientific communities. The reason for this was the characteristically high heat-rejection operating pressure and low system efficiency, which call for the integration of additional mechanisms that foster technical advancement and address the resulting technological challenges. Recently, these challenges and the quest for system advancement have been addressed through better process design, which has shed more light on the merits of using CO[2] in cooling systems.
It has been established that CO[2] has superior thermal properties compared with other refrigerants. Natural refrigerants represent the class with the highest thermal conductivity and specific heat capacity, and they possess the high latent heat of vaporization needed for more efficient heat transfer within the evaporator. Moreover, CO[2] features a high volumetric refrigeration capacity, which is well known to significantly influence the heat transfer coefficient. In addition, CO[2] has a low viscosity, which mainly reduces the initial investment cost owing to the cost-effective geometrical properties of the valves, pipelines, and other ancillary components.
In contrast, CO[2] has a saturation pressure 4–12 times higher than that of other refrigerants, which requires special technical considerations during manufacturing. Nevertheless, owing to their excellent properties, natural refrigerants are a favorable choice of working fluid, and the market has witnessed a surge in applications using this refrigerant in cooling systems worldwide. Ejectors, in particular, improve the performance of transcritical refrigeration cycles when integrated with parallel compressors to attain the required pressure lift. The advantages of using the ejector have motivated much ground-breaking research on significant energy reduction, which depends strongly on the ejector geometry, the refrigerant properties, and the core purpose of the application.
A large number of publications have presented experiments on the R744 ejector, illustrating the impact of including an ejector in transcritical systems. Elbel and Hrnjak reported improvements of up to 8% in the total system COP by using a prototype ejector in their experimental work. In the same regard, a 15% higher COP was reported experimentally when the ejector was operated, in comparison with the conventional base system. Furthermore, Nakagawa et al. conducted research on an ejector-boosted system and experimentally demonstrated up to 27% higher COP. In addition, further published research proved the possibility of enhancing the total system COP by 20% to 30% by using the ejector to recover the expansion work.
Hafner et al. introduced a multi-ejector block containing different ejector cartridge geometries. This concept supports the activation of any profile combination in parallel to suit any requested capacity, keeps the work recovery at the optimum level, and accurately maintains the gas cooler pressure values. The multi-ejector strategy significantly improves several aspects of the refrigeration system. For example, four different ejector cartridges were tested by Banasiak et al. to map the performance of each profile separately and detect the greatest work recovery provided. For the sake of optimization and to study the irreversibilities of the ejector, several computational and numerical works have been performed to predict the influence of the ejector efficiency on the refrigeration cycle.
The impact of ejector geometries has been studied in many experimental works. For example, an adjustable ejector motive nozzle throat was tested by Xu et al., who stated a 20–30% distribution of the ejector efficiency, which led to maximizing the system COP. In addition, fixed and adjustable parallel ejector arrangements were evaluated by Smolka et al. to deliver a flexible mass flow. The results showed that the controllable-geometry design does not exceed 35% ejector efficiency, while the fixed-geometry configurations can produce higher efficiency depending on the operating conditions.
Elbarghthi et al. implemented an extensive experimental study that used a small ejector throat to analyze the ejector performance in the subcritical and supercritical regions of operation. The results revealed a high ejector efficiency that could allow 36.9% of the available work rate to be recovered and reach 23% exergy efficiency at a high exit gas cooler temperature. Gullo et al. proved that a CO[2] multi-ejector outperformed other fluorinated working fluids in conventional-based solutions, especially in northern and central Europe. The results showed 26.9% higher energy savings in average-sized supermarkets utilizing CO[2] as a refrigerant. The ejector has also contributed to air conditioning applications, where it can reduce the total system power consumption by 8.3–8.6% in different system configurations.
Multi-ejector blocks use different cartridge combinations to recover the maximum available work in the system. One of the challenges in this field is to determine the best receiver working conditions when different ejector cartridges are running, because each cartridge has a limited pressure lift; otherwise, a malfunction mode will arise in operation at high receiver pressure, degrading the overall system performance. In this regard, this paper aims to examine the impact of using various ejector profiles on the general system performance. The study compares the implementation of an ejector-boosted system against the second generation of the CO[2] transcritical refrigeration system. The performance of the ejectors and the overall system operational characteristics are the main objectives of the study. The analysis covers the system COP, the exergy destruction, and the contribution of the ejector to the reduction of the total input power.
2. System Configuration
Figure 1 represents a simple ejector schematic diagram, and the R744 transcritical refrigeration system with an ejector used in the analysis is shown in Figure 2. The cooling cycle consists of a base-load compressor used to compress the expanded vapor from the low-temperature, low-pressure region (evaporator pressure range) to the gas cooler pressure region. The system adopts a supplementary compressor, indicated as a parallel compressor, to lift the vapor from the liquid separator pressure level to the gas cooler pressure. In the calculations, Dorin semi-hermetic compressors, types CD1400H and CD380H, were used, based on the polynomial functions defining the mass flow rate and power consumption provided by the supplier. This layout supports the unloading of the base-load compressor: extra power is consumed by the parallel compressor, but the total input power from all the compressors in the system is reduced and the system COP is improved. The refrigerant rejects heat at the gas cooler to the glycol cycle, which serves as the heat sink, and then leaves (state 4) for the liquid separator after expanding through the HPV (state 5). From the receiver, the vapor portion is supplied to the parallel compressor suction line (state 6–3) while the liquid (state 7) is fed to the evaporator through the expansion valve device (state 8). This circulation mechanism has been proved to enhance the distribution of R744 in the cycle.

The exit gas cooler temperature is sensitive to the environment and increases in a hot climate, producing a massive amount of flash gas, which can reach 50% of the entire mass flow in the system. This configuration represents the booster parallel refrigeration system, which is compared with the ejector-supported system, where the ejector is connected in parallel with the HPV, to evaluate the overall system performance improvement.
In this study, two different ejector profiles, VEJ1 and VEJ2, are utilized. These ejector cartridges have been studied comprehensively in the literature, and their performance has been represented with reasonable accuracy by approximation functions. The main geometries of both ejector profiles are listed in Table 1. When the ejector cartridges are on, a portion of the working fluid leaving the gas cooler passes through the primary ejector nozzle, where it expands and generates a local pressure drop that allows the entrainment of the secondary flow stream from the evaporator exit (state 1). The mixed stream then passes to the liquid separator at a pressure higher than the suction flow pressure level (state 9). The study was undertaken for a −6 °C evaporation temperature and a gas cooler pressure and temperature of 90 bar and 35 °C, respectively, to account for transcritical operation. The proposed system was analyzed for a cooling capacity of 10 kW, because the cartridges used are quite small and the goal was to indicate the significant benefit of these two ejector profiles on the system performance.
3. System Performance Calculations
Many parameters are used to evaluate two-phase expansion ejector performance: the pressure lift, the entrainment ratio, and the ejector efficiency, the last defined as the ratio of the expansion work recovered to the maximum potential work. The entrainment ratio is calculated as the ratio of the suction nozzle mass flow rate to the motive nozzle mass flow rate, as represented in Equation (1). The best ejector performance is achieved when a large pressure lift is obtained together with a high suction mass flow rate. The pressure lift is defined as the pressure difference between the ejector outlet mixed stream and the suction nozzle pressure, as described in Equation (2). The ejector efficiency is calculated based on the derivation provided by Elbel et al., given in Equation (3). This formula expresses the ejector's total irreversibility and has been used in many studies in the literature as a simple model for measuring efficiency, because it relies only on the operating boundary conditions of the ejector and no information is needed about the internal flow.
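The display equations themselves did not survive the page extraction; from the definitions just given, Equations (1)–(3) can be reconstructed (our reconstruction, consistent with the variable list below) as:

$\mathrm{ER} = \dot{m}_{SN} / \dot{m}_{MN}$  (1)

$P_{lift} = P_{rec} - P_{SN}$  (2)

$\eta_{ej} = \dot{W}_r / \dot{W}_{r,max}$  (3)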
In Equations (1)–(3), ER represents the entrainment ratio; $\dot{m}_{SN}$ and $\dot{m}_{MN}$ are the suction mass flow rate and the motive nozzle mass flow rate, respectively, in kg/s; and $P_{rec}$ is the liquid separator receiver pressure in bar, which characterizes the ejector outlet backpressure. $P_{SN}$ represents the suction nozzle inlet pressure in bar. $\eta_{ej}$ is the ejector efficiency. $\dot{W}_r$ and $\dot{W}_{r,max}$ represent the actual recovered work of the ejector and the overall available work recovery potential, respectively, in kW. The calculations were developed using the first and second laws of thermodynamics based on the following constraints:
• the processes for all the analyses are steady-state;
• the pressure drop at the gas cooler, evaporator, and piping is not considered;
• the kinetic and the potential energies are neglected;
• the system is well isolated.
The main parameter used to evaluate the system is the coefficient of performance (COP), which relates the cooling provided to the energy input required and is calculated as in Equation (4). The influence of the ejector, when integrated with the system, is evaluated by determining the COP improvement.
$\mathrm{COP} = \dot{Q}_{evap} / \dot{W}_{comp}$  (4)

$\dot{Q}_{evap} = \dot{m}_{CO_2} \, (h_{evap,out} - h_{evap,in})$  (5)

$\dot{W}_{comp} = \dot{W}_{comp1} + \dot{W}_{comp2}$  (6)

$\mathrm{COP}_{improv} = \dfrac{\mathrm{COP}_{ej} - \mathrm{COP}}{\mathrm{COP}} \cdot 100\%$  (7)

Here $\dot{W}_{comp}$ is the compressors' input power in kW and $\dot{Q}_{evap}$ is the cooling capacity in kW. Based on the second law of thermodynamics, the exergy destruction of the high-pressure valve in the cycle, $\dot{D}_{HPV}$, can be calculated from the specific exergy difference between its inlet and outlet states, as given in Equations (8) and (9). For the exergy calculation, the environmental dead-state properties $T_o$ and $P_o$ were selected to be 20 °C and one atmospheric pressure.

$e_i = (h_i - h_o) - T_o \, (s_i - s_o)$  (8)

$\dot{D}_{HPV} = \dot{m}_{HPV} \, (e_{in} - e_{out})$  (9)
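Purely as an illustration (not part of the original article), Equations (1)–(7) translate into a few lines of Python; all names are our own, and the numbers in the example are round values chosen to match the paper's reported optimum:

```python
def ejector_metrics(m_sn, m_mn, p_rec, p_sn, w_r, w_r_max):
    """Ejector indicators, Eqs. (1)-(3): mass flows in kg/s,
    pressures in bar, work rates in kW."""
    er = m_sn / m_mn               # entrainment ratio, Eq. (1)
    p_lift = p_rec - p_sn          # pressure lift, Eq. (2)
    eta_ej = w_r / w_r_max         # ejector efficiency, Eq. (3)
    return er, p_lift, eta_ej

def cop_improvement(q_evap, w_comp1, w_comp2, cop_baseline):
    """System COP and relative improvement, Eqs. (4), (6), (7)."""
    w_comp = w_comp1 + w_comp2     # total compressor power, Eq. (6)
    cop = q_evap / w_comp          # coefficient of performance, Eq. (4)
    improv = (cop - cop_baseline) / cop_baseline * 100.0  # Eq. (7), in %
    return cop, improv

# 10 kW cooling capacity; 4.18 kW total compressor power against a
# baseline COP of 2.30 gives COP ~2.39 and ~4% improvement.
print(cop_improvement(10.0, 2.5, 1.68, 2.30))
```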
4. Results and Discussion
4.1. Ejector Characteristic Functions
The results provide an in-depth comparison of the two ejector profiles against the parallel compressor system, known in the literature as the second-generation layout. To test the influence of the ejectors on the system, the performance of these two ejectors was first characterized to evaluate their efficiency and main driving characteristics. Figure 3 represents the characteristics of both ejector profiles and of the combination (VEJ1 + VEJ2) as a function of the pressure lift, ranging from 2 to 12 bar in steps of 2 bar. The research was carried out for an inlet motive nozzle temperature of 35 °C and an exit gas cooler pressure of 90 bar, considering the transcritical operation mode. The results revealed a similar decline in the ejector mass entrainment ratio as the pressure lift increases. Increasing the liquid separator receiver pressure for a high pressure lift creates shock waves inside the ejector, moving them closer to the motive exit position, where they disturb the flow and reduce the entrained suction flow; subsequently, the entrainment ratio drops. It should be noted that the small ejector cartridge provides a higher entrainment ratio than VEJ2 when the system operates at a low pressure lift; this is associated with the higher motive mass flow rate of the larger cartridge. Furthermore, the efficiency of VEJ1 recorded an optimum value of 31% under the test conditions, whereas this profile experimentally registered a higher efficiency of up to 37% for other operational parameters.
The multi-ejector concept introduced by Hafner et al., which has the flexibility of using various cartridges connected in parallel to reach maximum capacity while maintaining a more efficient work recovery, has proven to be more viable than a single fixed-geometry ejector. The smallest vapor ejector cartridge presented by Banasiak et al., with a 1 mm throat diameter (VEJ2), was combined with the current cartridge (VEJ1) to evaluate the system performance. The VEJ2 performance was tested at 400 investigation points to produce a qualitative resolution; an approximation function was then introduced for the inlet mass flow rate with reasonable accuracy. As shown in Figure 3, the behavior of the ejector combination (VEJ1 + 2) relies significantly on the pressure lift and the inlet motive nozzle temperature. The results revealed the same trends for the ejector mass entrainment ratio. For instance, when both cartridges are activated at P_lift = 2 bar, ER reaches 0.83, which is lower than when using VEJ1 alone. The multi-ejector allows a 50% higher suction mass flow rate to be entrained with the ejector combination, but the motive mass flow rate also experiences a surge. When the pressure lift is increased, the ER drops gradually, vanishing for VEJ1 from P_lift = 8 bar, beyond which this profile acts as a normal expansion valve. In contrast, VEJ2 continues to produce a pressure lift of up to 12 bar with an ER of 0.091. The ejector efficiency for VEJ1 + 2 acquired an optimum value of 25.4%, lower than that of the VEJ1 profile alone, but extended to cover a wide range of operating conditions. In other words, the combination of the ejector cartridges greatly influenced the system performance by improving the work rate recovered. The results demonstrated an increase in the recovered work rate to a maximum of 0.198 kW, overall 2.2 times higher than that of the single VEJ1 used under the same operating conditions, based on Equation (3). Generally, when the system runs in the transcritical state, the amount of flash gas increases, which increases the maximum work recovery potential and thereby reduces the ejector efficiency.
The most significant parameters that can be used to evaluate how the ejector benefits CO[2] transcritical refrigeration systems are the expansion work rate recovery (Ẇ_r) and the overall available work recovery potential (Ẇ_r,max). These parameters indicate the power available to perform isentropic compression of the suction flow through the ejector to the separator, and the maximum theoretical work recovery potential, which captures the total irreversibility of the ejector.
Figure 4 illustrates the work rate and maximum potential work recovery characteristics across different pressure lifts. The analysis was performed with the parallel compressor system layout as the baseline, compared with varying ejector configurations, for a 10 kW cooling capacity. The results show that the maximum work recovery rate of the expansion in the high-pressure valve for the parallel system ranges from 1 kW at P_lift = 2 bar to 0.7 kW when the pressure lift increases to 12 bar. This indicates the significant throttling loss of CO[2] as a refrigerant compared with other low-pressure working fluids, especially at ambient temperatures that force the cycle to operate in transcritical mode. It can be seen that the smaller ejector cartridge VEJ1 could only recover up to 0.09 kW of the expansion work out of the overall available work recovery potential of 0.3 kW. This ejector profile can only be used over a short range of liquid separator pressures, with a pressure lift lower than 8 bar.

In contrast, the second ejector cartridge allows recovery of up to 0.13 kW, representing 27% of the overall available work recovery potential that this cartridge could provide. The reason for this is closely connected with the increase in the motive mass flow rate when the motive nozzle throat diameter becomes larger. Under similar operating conditions, VEJ2 provided 53% of the parallel system's available work recovery potential. When both ejector cartridges are activated, the maximum available work recovery expands from 0.6 to 0.85 kW depending on the pressure lift, representing up to 86% of the baseline system's Ẇ_r,max. Moreover, the multi-ejector block allows recovery of up to 0.2 kW of the expansion work, which represents 25.4% of the throttling losses according to the efficiency metrics. This analysis is therefore essential to map out each ejector's performance and indicate the best range of operating conditions.
4.2. Ejector System Performance Improvement
The impact of VEJ1 on the system COP was experimentally tested over a wide range of operating conditions. Here, the effect of implementing the different ejector profiles in the transcritical CO[2] cycle was determined for the system operational characteristics, including the COP, as shown in Figure 5. The results were obtained for the parallel compressor system and compared with different ejector configurations and pressure lifts. The outcome reveals that the COP is proportional to the pressure lift: increasing the separator pressure for a higher pressure lift provides a higher system COP through the reduction of the compression ratio, decreasing the required input power and improving the performance. When VEJ1 is activated, the system COP increases by up to 1.2% compared with the baseline layout. It should be noted that the operating range of this cartridge is relatively short, and it cannot benefit the system when the pressure lift exceeds 8 bar. By comparison, COP degradation was recorded at pressure lifts of less than 3.1 bar, even when operating with both ejector cartridges.
The influence of both ejector cartridges VEJ1 + 2 on the system performance is presented in Figure 5. The results indicate an appreciable improvement in the system COP obtained by running the multi-ejector, reaching an optimum value of 2.39 at P_lift = 6 bar, a COP 4% higher than that of the booster system under the same working conditions. It can be noted that VEJ2 could support the system with a higher COP even for a pressure lift higher than 12 bar. It is also noteworthy that the system showed worse performance than the booster baseline when operated at a low pressure lift; the highest COP degradation, up to −2.9%, was obtained at 35 °C and P_lift = 2 bar. For this ejector configuration, the region of COP improvement in transcritical mode started at P_lift higher than 3.15 bar. In general, a multi-ejector block supported with more than one ejector profile can enhance the performance of the cooling system and meet any requested capacity by switching the electric solenoid valve of the required ejector.
The influence of the ejector combination VEJ1 + 2 on the compressor power recovery for various pressure lifts is captured in Figure 6. The result depicts significant energy recovery achieved by reducing the compressor power through the introduction of the ejector profiles. For example, VEJ1 contributed a reduction of up to 4% of the input power at the exit gas cooler temperature of 35 °C. By comparison, implementing the ejector combination VEJ1 + 2 leads to the most significant power reduction: the maximum compressor power saving was 8.77% compared with operating the system without an ejector. It is also noteworthy that the total compressor power saving improved substantially, by a minimum of 2.34% compared with the parallel layout, depending on the ejector efficiency trend that provided the optimum performance. In total, when VEJ2 is running, the input power reduction is two to three times larger than when running with the single ejector VEJ1. This strategy indicates the advantages of operating with multiple ejector profiles, where any requested capacity can be reached. In addition, a multi-ejector block is also able to control the discharge pressure and simultaneously maintain an efficient work recovery to a greater extent than other types of ejectors, such as needle-based ones.
Exergy is defined as the maximum useful work that can be obtained at any thermodynamic state as it is brought to equilibrium with the surroundings. CO[2] transcritical refrigeration cycles exhibit a remarkable throttling loss, which occurs during the expansion process owing to the significant pressure difference between the heat rejection and evaporation levels. The expansion process takes place at the high-pressure valve. The ejector has proved to be a reliable solution that can be connected in parallel with the HPV to recover the work in question and improve the system performance.
Figure 7 illustrates the HPV exergy destruction rate for the baseline parallel system compared with the different ejector cartridge combinations at varying levels of pressure lift. The results revealed massive exergy destruction for the baseline system, exceeding 1 kW when operating at low pressure lift. Increasing the pressure lift in operation yields a lower amount of irreversibility in the expansion process due to the reduction in the pressure ratio of the parallel compressor, which decreases the input power needed. When the small ejector profile VEJ1 runs, the expansion process losses decrease by 31% over the whole operating range compared with running without an ejector, bringing the maximum exergy destruction to 0.74 kW. When the second ejector cartridge VEJ2 runs alone alongside the HPV, the exergy destruction is reduced by 53%. The results indicate a significant improvement in the exergy destruction when both cartridges are used together with the HPV: in total, more than 84% of the exergy losses during the expansion can be avoided by both ejectors. These results imply crucial energy savings for CO[2] refrigeration systems operating at high ambient temperature and facing a large amount of flash gas in the second-generation layout of transcritical systems.
5. Conclusions
The current study evaluated the impact of utilizing different ejector profiles on the performance of the R744 transcritical refrigeration system. The research ideas are premised on the first and
second laws of thermodynamics. The approximation functions that experimentally described the performance of each ejector profile in previous work were implemented. The results were compared with the
classical parallel layout as the baseline to reveal the contribution of ejectors to the recovery of the high irreversibilities during the expansion process, which reduces the exergy destruction. The
most outstanding findings are summarized as follows:
• A total of 31% of available work was recovered by activating VEJ1, while the total efficiency acquired by both ejector combinations of VEJ1 + 2 registered an optimum value of 25.4%. However, the
multi-ejector allows entraining a 50% higher suction mass flow rate with the ejector combinations, which greatly influences the system performance by improving the work rate recovered.
• CO[2] transcritical refrigeration cycles possess significant throttling loss, especially at lower pressure lift values. In contrast, the combination of both ejector cartridges represented 85% of
the potential work that the ejector implementation can achieve compared with the conventional layout.
• The multi-ejector concept was found to improve the overall system COP, which increased the refrigerating effect because a higher amount of liquid-phase refrigerant could be supplied to the
evaporators. Moreover, the multi-ejector allowed pre-compression of the evaporator exit refrigerant prior to the intermediate pressure region and reduced the compressor input power needed to
achieve this.
• In ejector technology, especially for those ejectors operating as supersonic ejectors in transcritical mode, the speed of sound and shock waves play a fundamental role and stand out as two
crucial physical phenomena. They are responsible for choking flow and the increase in pressure inside the ejector. To consider the effects and dynamics of these parameters, an optimization CFD
study should be performed to analyze these critical parameters.
Author Contributions
A.F.A.E.: Conceptualization, Methodology, Investigation, Writing—Original Draft, Writing—Review and Editing. V.D.: Supervision, Visualization, Writing—Review and Editing. All authors have read and
agreed to the published version of the manuscript.
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
All data are available from the authors on request.
The work was supported by the Student Grant Competition of the Technical University of Liberec under the project No. SGS-2021-5063.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 2. The schematic and P-h diagram for ejector-boosted R744 transcritical refrigeration system.
Figure 4. The work rate and maximum potential work recovery characteristics based on different ejector configurations and pressure lifts.
Figure 5. System COP characteristics vs. pressure lift for the booster system layout at different ejector configurations.
Figure 6. The impact of the ejector system on the compressor power recovery as a function of different pressure lifts.
Figure 7. The HPV exergy destruction rate of the baseline parallel system and in the case of implementing different ejector profiles via different pressure lifts.
Table 1. The main geometries of both ejector profiles.

Parameter name                 | Unit   | VEJ1 [22] | VEJ2 [10]
-------------------------------|--------|-----------|----------
Motive nozzle inlet diameter   | mm     | 3.8       | 3.8
Motive nozzle diverging angle  | degree | 2         | 2
Motive nozzle converging angle | degree | 30        | 30
Diffuser diameter              | mm     | 7.3       | 7.3
Diffuser angle                 | degree | 5         | 5
Motive nozzle throat diameter  | mm     | 0.71      | 1.00
Motive nozzle outlet diameter  | mm     | 0.78      | 1.12
How To Draw A Chocolate Bar

Want to find out how to draw a chocolate bar? This lesson teaches the fundamentals of drawing a chocolate bar, takes about 20 minutes, and comes with a printable PDF near the bottom of the original lesson that you can download and draw along with.

Drawing a wrapped chocolate bar

Step 1: Draw a block of squares to form the chocolate bar itself.

Step 2: Draw square and rectangular shapes to build the packaging.

Step 3: Add some lines to give depth to the packaging.

Step 4: Add four squares below the wrapper so part of the bar shows and the drawing looks realistic.

Step 5: Add tiny lines to give the drawing its final look.

Drawing an unwrapped bar

Start by outlining the basic shape of the chocolate bar; in this case it will be a rectangle, so use straight lines for most of the edges. Next, begin outlining the edges of the candy, then outline the individual pieces of chocolate and add a few of the little sections the bar is divided into.

Finishing touches

Sketch out a decorative element, such as the outline of a heart, on the wrapper of the chocolate bar. When coloring, use orange, yellow, red and brown.
15th Math Colloquium BCAM-UPV/EHU
Date: Wednesday, November 22, 2023
Time: 11:45
Location: Sala Aketxe at the Sede Building on the UPV/EHU campus in Leioa
Speakers: Xabier Cabre (ICREA and UPC (Barcelona)) and Volker Diekert
The 15th Math Colloquium BCAM-UPV/EHU will take place on Wednesday, November 22, at 11:45 (CET) in "Sala Aketxe" at the Sede Building on the UPV/EHU campus in Leioa. Lunch will be offered at the end.
11:45-12:45 | Xabier Cabre: Hilbert’s 19th problem on the regularity of minimizers to elliptic functionals: minimal surfaces and reaction-diffusion equations
Hilbert’s 19th problem asked whether minimizers of elliptic functionals are always analytic. In this lecture I will describe progress made on the problem since the late fifties. After explaining the
celebrated result of De Giorgi and Nash, we will focus on minimal surfaces, from the developments in the late sixties to the recent important work of Chodosh and Li. In the last part of the lecture,
I will concentrate on a recent result (joint with Figalli, Ros-Oton, and Serra) for reaction-diffusion equations. As in minimal surfaces theory, smoothness of stable solutions (and of minimizers)
only holds up to a certain critical space dimension.
13:00-14:00 | Volker Diekert: Matrices everywhere!
Computing with matrices is a basic tool for many areas in mathematics. Every undergraduate student learns this for solving systems of linear equations, but matrices also appear in quantum physics,
representation theory, computing shortest paths in a network, or solving many other (discrete) optimization problems. However, fundamental problems about matrices are still open. For example, given a
finite set of invertible n × n matrices over rationals, decide whether the first matrix can be written as a product over the other matrices. Another example is the so-called mortality problem: given
finitely many matrices over rationals, decide whether the zero matrix can be expressed as a product of these matrices. For n = 4 the first problem, and for n = 3 the mortality problem, are both
undecidable. However decidability of these problems is open even for 2 × 2 integer-matrices. In my talk, I will speak about joint work with Igor Potapov and Pavel Semukhin from Liverpool (UK). We
proved various new (un-)decidability results for 2 × 2-matrices over the rational numbers by combining ideas from formal language theory and geometric group theory. All our results come with concrete
complexity bounds.
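To make the mortality problem concrete, here is a small illustrative Python sketch (our own, not from the speaker's work): a bounded brute-force search over products, which can find a witness but can never decide the unbounded problem, since that problem is undecidable already for 3 × 3 integer matrices.

```python
from itertools import product
import numpy as np

def mortal_word(mats, max_len):
    """Search all products of length <= max_len of the given integer
    matrices for one equal to the zero matrix. Exponential in max_len,
    and purely illustrative: no bound suffices in general."""
    n = mats[0].shape[0]
    zero = np.zeros((n, n), dtype=int)
    for length in range(1, max_len + 1):
        for word in product(range(len(mats)), repeat=length):
            prod = np.eye(n, dtype=int)
            for k in word:
                prod = prod @ mats[k]
            if np.array_equal(prod, zero):
                return word  # indices of a "mortal" product
    return None

# A trivial mortal pair of 2x2 integer matrices: B @ A is the zero matrix.
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]])
print(mortal_word([A, B], max_len=3))  # -> (1, 0)
```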
Confirmed speakers:
Xabier Cabre (ICREA and UPC (Barcelona)) received his PhD in Mathematics, Courant Institute, advisor Louis Nirenberg, 1994. Kurt Friedrichs Prize, 1995. Member of the Institute for Advanced Study,
Princeton, 1994-95. Habilitation à diriger des recherches, Université Paris VI, 1998. Harrington Faculty Fellow and Tenure Associate Professor, The University of Texas at Austin, 2001-03. ICREA
Research Professor at the Universitat Politècnica de Catalunya, since 2003. Fellow of the American Mathematical Society, inaugural class, 2013. Plenary speaker at the 8th European Congress of
Mathematics, 2021. Frontiers of Science Award, The first International Congress of Basic Science, Beijing 2023.
Volker Diekert was born in 1955 in Hamburg, Germany. He graduated in Mathematics in 1980 from the University of Hamburg, Germany. He spent the academic year 1977-78 at the Université des Sciences et Techniques du Languedoc in Montpellier, France, where he studied with Prof. Alexander Grothendieck and obtained a Diplôme des Études Supérieures. In 1983 he earned his PhD in mathematics at the
University of Regensburg, Germany, under the direction of Prof. Jürgen Neukirch. He received his Habilitation in 1989 for Computer Science at the Technical University of Munich. Since 1991 he holds
the chair for Theoretical Computer Science at the University of Stuttgart. He has been a visiting professor for extended periods in France (Paris 7, Bordeaux, and ENS Cachan), USA (Stevens Institute
of Technology), Japan (Toho University), and Australia (University of Newcastle and University of Technology Sydney). His research interests include algorithmic and geometric group theory as well as
algebraic foundations of computer science, ranging from formal language theory to algebraic models for concurrency. He has published more than 130 refereed journal and conference papers. His
Habilitationsschrift appeared as a monograph in the Springer LNCS series. Together with Professor Grzegorz Rozenberg from Leiden University he edited “The Book of Traces“ which has become the
standard reference in the theory of partial commutation. He is the coauthor of two textbooks in discrete mathematics and discrete algebraic methods. He was member of the Gödel-prize committee from
2005 to 2008 and chair of the committee in 2008.
Generating Chromosome Geometries in a Minimal Cell From Cryo-Electron Tomograms and Chromosome Conformation Capture Maps
• ^1Department of Chemistry, University of Illinois at Urbana-Champaign, Urbana, IL, United States
• ^2Division of Biological Sciences, University of California San Diego, San Diego, CA, United States
• ^3Leiden Institute of Chemistry, Leiden University, Leiden, Netherlands
• ^4Center for Microbial Cell Biology, Leiden University, Leiden, Netherlands
• ^5Synthetic Biology Group, J. Craig Venter Institute, La Jolla, CA, United States
JCVI-syn3A is a genetically minimal bacterial cell, consisting of 493 genes and only a single 543 kbp circular chromosome. Syn3A’s genome and physical size are approximately one-tenth those of the
model bacterial organism Escherichia coli’s, and the corresponding reduction in complexity and scale provides a unique opportunity for whole-cell modeling. Previous work established genome-scale gene
essentiality and proteomics data along with its essential metabolic network and a kinetic model of genetic information processing. In addition to that information, whole-cell, spatially-resolved
kinetic models require cellular architecture, including spatial distributions of ribosomes and the circular chromosome’s configuration. We reconstruct cellular architectures of Syn3A cells at the
single-cell level directly from cryo-electron tomograms, including the ribosome distributions. We present a method of generating self-avoiding circular chromosome configurations in a lattice model
with a resolution of 11.8 bp per monomer on a 4 nm cubic lattice. Realizations of the chromosome configurations are constrained by the ribosomes and geometry reconstructed from the tomograms and
include DNA loops suggested by experimental chromosome conformation capture (3C) maps. Using ensembles of simulated chromosome configurations we predict chromosome contact maps for Syn3A cells at
resolutions of 250 bp and greater and compare them to the experimental maps. Additionally, the spatial distributions of ribosomes and the DNA-crowding resulting from the individual chromosome
configurations can be used to identify macromolecular structures formed from ribosomes and DNA, such as polysomes and expressomes.
1 Introduction
JCVI-syn3A is a genetically minimal bacterial cell, consisting of 493 genes and a single short 543 kbp circular chromosome derived from a Gram-positive bacterium, Mycoplasma mycoides. Previous work
established genome-scale gene essentiality and proteomics data along with its essential metabolic network (Hutchison et al., 2016; Breuer et al., 2019) and a kinetic model of genetic information
processing (Thornburg et al., 2019). The kinetic model of genetic information processes of DNA replication, transcription, and translation requires positions and length of genes along the chromosome.
Spatially resolving and solving these kinetic models require cellular architecture, including spatial distributions of ribosomes and the circular chromosome. The first iteration of a bacterial cell
with a synthetic genome was JCVI-syn1.0, whose synthesized genome is 1,079 kbp (Gibson et al., 2010). Syn1.0 has a doubling time of approximately 60 min and shows a spherical morphology with a radius
of $∼$200 nm (Hutchison et al., 2016). Two additional cycles of targeted genomic reduction resulted in JCVI-syn3.0, a cell whose synthetic genome is only 531 kbp but still autonomously replicates (
Hutchison et al., 2016). Syn3.0 has a slower growth rate than Syn1.0, with a doubling time of approximately 180 min (Hutchison et al., 2016), and based on optical and scanning electron microscopy
(SEM), Syn3.0 exhibits a pleomorphic morphology with significant variations (Hutchison et al., 2016; Pelletier et al., 2021). The organism that is the subject of this study, Syn3A, was created from
Syn3.0 through the addition of 19 genes present in Syn1.0. While this addition made a less minimal genome, it resulted in cells with a robust spherical morphology and an average doubling time of
approximately 110 min (Breuer et al., 2019). Super-resolution fluorescence microscopy (STORM) imaging (private communication, Taekjip Ha) reveals that it recovers the spherical morphology of Syn1.0,
with a radius of 200–250 nm.
To create chromosome geometries for our spatial models and subsequent simulations of gene expression and translation, we develop a method of generating self-avoiding circular chromosome
configurations with a resolution of 11.8 bp per monomer on a 4 nm cubic lattice. To place the chromosome inside the cell volume, we use cryo-electron tomography (cryo-ET) to define the cell
boundaries and ribosome distribution, which define the regions available to the chromosome. Cryo-ET data shows that the ribosomes appear to be nearly randomly distributed throughout the cell. In
cryo-ET of bacteria, the position of the chromosome is typically determined by the absence of ribosomes. For example, the cryo-ET studies of slow-growing Escherichia coli show ribosomes primarily
localized at the poles and along the sides of the nucleoid so that the DNA can be inferred to be confined within the enclosed nucleoid region (Roberts et al., 2011). Based on the tomograms for Syn3A
presented in this study, we assume the chromosome is randomly distributed among the ribosomes. We also present experimental chromosome conformation capture (3C) maps, a technique that shows the
frequency at which different regions of the chromosome are in contact with each other (Dekker et al., 2002). This map shows only a single main diagonal with some small (<4 kbp) features along it
and has no other significant features.
Using our knowledge of Syn3A’s proteome data and genome, along with the experimental cryo-ET and 3C-Seq maps, we have created a physics-based model of the chromosome to generate chromosome
configurations and predict contact maps. A diagram of the workflow is presented in Figure 1 that shows the process of annotating ribosome locations and membrane from the tomograms and using the
ribosome locations as constraints on chromosome configurations. The configurations are also influenced by the features present in the experimental contact map. Hi-C analysis, a variant of the
chromosome conformation capture (3C) method, has been used extensively to describe the structure of eukaryotic chromatin (Lieberman-Aiden et al., 2009; van Berkum et al., 2010; Belton et al., 2012).
Those chromosome contact maps are used to generate chromatin structures based on topologically associated domains (TADs) observed in the contact maps (Dekker et al., 2013; Rao et al., 2014; Fudenberg
et al., 2016; Di Pierro et al., 2017). While considerations of the energy functions used in the chromatin models for eukaryotic studies are helpful in designing a bacterial study, there are
bacteria-specific proteins and related effects that need to be considered when constructing a bacterial chromosome model and how the effects would appear in the resulting contact map (Le et al., 2013
; Marbouty et al., 2015; Verma et al., 2019). The 3D structure of the circular bacterial chromosome at both the global and local levels is determined by effects of various nucleoid associated
proteins (NAPs) and is also influenced by the crowding of ribosomes. With its reduced genome, Syn3A lacks many of the NAPs that cause significant features in the chromosome structure which leads to
considerable variation from the structures and Hi-C/3C maps observed in other bacteria such as Mycoplasma pneumoniae, Bacillus subtilis, Caulobacter crescentus, and Pseudomonas aeruginosa.
FIGURE 1
FIGURE 1. Workflow Diagram: Cryo-ET data is used to reconstruct spherical Syn3A cells, constrained chromosome configurations are generated within the reconstructed cells, and the resulting in silico chromosome contact maps are compared to the experimental 3C-Seq map. (A): The left side is a single z-slice of the tomographic reconstruction. The right side shows the ribosomes (yellow) identified
using template-matching and the membrane segmentation (orange) superimposed on the z-slice. (B): The shape of the blotted cells is approximated by an ellipsoid that is manually compared to the
tomographic membrane segmentation (orange). In an iterative procedure, a series of minimal surface area enclosing ellipsoids (MSAEE) are fit around the ribosome coordinates while extraneous ribosomes
assessed to be outside of the true membrane are removed within each iteration. The final fitted ellipsoid is shown in blue and the extraneous ribosomes in red. (C): As Syn3A cells are known to have a
spherical morphology, the ribosome coordinates and ellipsoidal membrane approximation are then transformed to a sphere with equivalent surface area. (D): The continuum representation is then
converted to an 8 nm cubic lattice representation used for whole-cell simulations with LM. (E): Circular and self-avoiding chromosome configurations are generated as a lattice polymer on a 4 nm cubic
lattice. The 4 nm lattice is coincident with the 8 nm cubic lattice and the chromosomes are constrained to avoid the ribosomes and remain within the membrane. In the representative DNA configuration,
monomers are colored red and blue on opposite arms of the chromosome. (F): In silico contact maps from ensembles of generated DNA configurations are compared to the experimental 3C-Seq map.
The global structure of the chromosome, also known as the cellular disposition of the chromosome (Lioy et al., 2018, 2020), describes how regions of the chromosome are organized within the confines
of the cell. Factors affecting the global structure include possible attachment of the chromosome to the membrane, as in M. pneumoniae, and loading of SMC proteins near the origin by a complete parABS system, which causes alignment of the two arms of the circular chromosome (Wang et al., 2014; Wang and Rudner, 2014; Lioy et al., 2020). The parABS system includes two proteins, parA and parB,
which site-specifically load SMC onto parS sites on the DNA (Livny et al., 2007). Both of these effects result in a secondary diagonal, orthogonal to the main diagonal, in chromosome contact maps as
observed in maps for M. pneumoniae, B. subtilis, C. crescentus, and P. aeruginosa (Le et al., 2013; Marbouty et al., 2015; Tran et al., 2017; Trussart et al., 2017; Lioy et al., 2020). However, the
experimental contact map presented in the results section reveals that Syn3A does not have a secondary diagonal. Syn3A does not have a complete parABS system because it lacks the parB protein (Breuer
et al., 2019) and the complete signature parS sites, i.e., no sequences greater than a 10/16 match to the consensus sequence (Livny et al., 2007) were identified in a BLAST search of the genome. For comparison, when performing the same search on B. subtilis, we found matches of 14/16 and higher. Livny et al. also identified Mesoplasma, Ureaplasma, and Mycoplasma species as members of the Firmicutes that lack
complete parABS systems (Livny et al., 2007). Therefore, we do not expect to see alignment of the two arms of the chromosome via the parABS system and would not expect to see a secondary diagonal due
to this effect. Additionally, Syn3A does not have any annotated proteins that attach the DNA to the membrane (Breuer et al., 2019), so we would not expect to see a secondary diagonal due to
attachment of the DNA, unlike M. pneumoniae which has an attachment organelle (Trussart et al., 2017).
Factors affecting local structure include supercoiling, plectonemic loops resulting from supercoiling, small loops formed by SMC bridging distant chromosome segments, and bending and stiffening by
proteins such as histone-like protein (HU), heat-stable nucleoid structuring protein (H-NS), and integration host factor (IHF) (Dame, 2005; Ohniwa et al., 2011; Dame and Tark-Dame, 2016; Dame et al.,
2019; Verma et al., 2019; Birnie and Dekker, 2021). These micro level effects can strongly affect gene expression as localized crowding affects the access of the RNA polymerase (RNAP) to genes and
supercoiling and plectonemes affect the RNAP’s ability to transcribe a gene (Kim et al., 2019). Of the proteins HU, IHF, and H-NS, Syn3A only has one gene, JCVISYN3A_0350, which is annotated as a
putative histone-like protein with a proteomics count of 28. This count is significantly lower than the counts seen in other bacteria. For example, fast-growing E. coli contains
more than 12,000 HU (Wang et al., 2015), B. subtilis contains almost 9,000 HU (Wang et al., 2015), and Mesoplasma florum contains 9,500 HU (Matteau et al., 2020). Due to its small count, we do not
expect any significant contributions to the stabilizing of chromosome loops by the protein encoded by gene JCVISYN3A_0350 and do not include it in our model.
Supercoiling forms during transcription: the translocating RNAP induces positive supercoiling ahead of itself and negative supercoiling behind (Chong et al., 2014; Verma et al., 2019). Supercoiling can be eliminated by topoisomerases, gyrases, and positive/negative supercoiling annihilating each other along free DNA (Chong et al., 2014; Dorman, 2019; Verma et al., 2019). The
experimental contact map presented in the Results is too sparse to distinguish between short (<10 kbp) supercoiled domains and loops, and we do not see any larger interaction domains. Because of this, and given the low proteomics count of HU, we do not include supercoiling and infer that the DNA is in a relaxed state. As discussed above, Syn3A does not have genes that code for proteins that would
attach the DNA to the cell membrane. Since the chromosome is therefore most likely not fixed at any location, we assume that, in general, DNA is mostly free, allowing positive and negative
supercoiling to annihilate each other more easily. Additionally, the genome-wide proteomics counts show that we have a total of 187 RNAP and roughly 250 DNA gyrases that can alleviate positive
supercoiling, 150 type IV DNA topoisomerases that can alleviate negative supercoiling, and 175 type I topoisomerases that can alleviate either (Chong et al., 2014; Breuer et al., 2019). Other
bacteria are observed to have fewer topoisomerases and gyrases than RNAP, for example, fast-growing E. coli has roughly 3,800 topoisomerase I, 1,200 topoisomerase IV, and 6,000 to 8,000 gyrases while
having over 10,000 RNAP (Bremer and Dennis, 2008; Wang et al., 2015). Another Gram-positive bacterium, B. subtilis, has 3,000 RNAP while only having 1,200 gyrases, 900 topoisomerase I, and 200
topoisomerase IV (Wang et al., 2015). The more closely related M. pneumoniae has 5,000 topoisomerase I, 200 topoisomerase IV, and 1,800 gyrases while having around 6,000 RNAP (Kühner et al., 2009).
It is then more likely in these systems where larger domains have been observed in their chromosome contact maps that the proteins removing supercoiling cannot keep up with the supercoiling induced
by RNAP due to their lower relative counts. Therefore, it is our assumption that as supercoiling is formed by RNAP in Syn3A, there are sufficient gyrases, topoisomerases, and negative/positive
supercoiling pair annihilations to keep the DNA in a more relaxed configuration with no significant supercoiled domains.
While Syn3A does not have a complete parABS system, it does contain 202 structural maintenance of chromosomes (SMC) proteins, which can bridge distant loci via loop extrusion powered by
ATP-hydrolysis (Ganji et al., 2018; van Ruiten and Rowland, 2018). The SMC protein is a long coiled-coil protein that dimerizes and has head and hinge domains separated by approximately 50 nm (
Diebold-Durand et al., 2017). The number of SMC in Syn3A is smaller than the 448 observed in B. subtilis (Wang et al., 2015) and 900 observed in M. pneumoniae (Kühner et al., 2009), but Syn3A also
has a smaller volume and shorter chromosome, which could result in a higher density of loops. M. florum is not much larger than Syn3A and only has 85 SMCs (Matteau et al., 2020). With a higher density of SMC both per volume and per unit chromosome length, we assume the effects of SMC looping can be significant in the chromosome structure of Syn3A. We manually annotate any of the observed regions of
contact along the main diagonal in the experimental 3C-Seq map as possible loops (<4 kbp) and implement them as looping restraints in our chromosome model.
Finally, based on the cryo-ET images of Syn3A cells presented in the results, the chromosome in Syn3A is more constrained by ribosomes than in other bacteria. From the cryo-ET of Syn3A we infer that
the ribosomes are uniformly distributed throughout the cells and that there is no clearly-defined condensed nucleoid region. The lack of a condensed nucleoid region is in contrast to the rod-shaped
E. coli where the ribosomes are primarily located at the poles and along the sides of the nucleoid region (Nevo-Dinur et al., 2011; Roberts et al., 2011; Bakshi et al., 2012). We saw this
distribution in cryo-ET data of slow-growing E. coli that was part of a previous Lattice Microbes (LM) simulation of the lac genetic switch (Roberts et al., 2011). We observe a ribosome number
density of 12,920–19,370 ribosomes/μm^3 in Syn3A cells, which is higher than the density of 4,200 ribosomes/μm^3 in M. pneumoniae (Trussart et al., 2017; O’Reilly et al., 2020). The density of
ribosomes in E. coli was previously found to be 27,000 ribosomes/μm^3 (Bakshi et al., 2012), which is greater than the density in Syn3A, but the inferred ribosome density within the E. coli nucleoid region is only 2,000–8,000 ribosomes/μm^3. Given this, relative to other bacterial cells, the crowding of the ribosomes in Syn3A more strongly constrains the possible chromosome configurations.
In this paper, we first explain how the cellular architecture and ribosome distributions are obtained from three-dimensional cryo-electron tomograms. Using ensembles of constrained DNA configurations
from our circular chromosome model on a lattice, we predict contact maps for individual cells at resolutions of 250 bp and greater and compare them to our experimental 3C-Seq map at 1,000 bp
resolution. The DNA configurations in this study are generated with the intent of incorporating them into stochastic whole-cell models of Syn3A simulated using the reaction-diffusion master equation
(RDME) as implemented in LM (Roberts et al., 2013; Hallock et al., 2014; Earnest et al., 2018). In the whole-cell simulations, the cellular space is divided into cubic subvolumes, so we chose to
model the DNA as a lattice polymer. The DNA configurations, cell sizes, and ribosome locations presented here will later be directly incorporated into cell geometry in the kinetic simulations and
will influence both diffusion and the locations at which genetic information reactions take place. We also identify potential complexes formed from ribosomes and DNA in our spatial model, such as
polysomes and expressomes (O’Reilly et al., 2020), that would affect the reactions within a kinetic model.
2 Methods
2.1 Reconstructing Cell Geometries From Cryo-Electron Tomograms
2.1.1 Tomogram Collection and Processing
One of the primary challenges for cellular cryo-ET is to prepare a specimen such that it is thin enough to be transparent to electrons and to vitrify thoroughly. Due to the small size of Syn3A, this
can be accomplished by placing cells on a Quantifoil EM grid and blotting the majority of the liquid away followed by plunge freezing, leaving a thin layer of ice with cells embedded within it.
Initially, samples were frozen on Quantifoil Cu 200 mesh R 1/4 grids (Electron Microscopy Sciences), which have patterned holes 1 μm in diameter, with 4 μm spacing between holes (i.e., 5 μm
periodicity). Our rationale was that using smaller holes would retain more Syn3A cells on the grid after blotting. However, we found that because the synthetic cells lack a cell wall, they are supple
and get distorted in the direction of the flow of the medium as it is blotted away from the grid. This issue was resolved by switching to Quantifoil Cu 200 mesh R 2/1 grids, which have larger holes.
Syn3A cells were grown to mid-log phase at 37°C in SP4 medium (Williamson and Whitcomb, 1975) using KnockOut™ serum replacement (Invitrogen), to a density of ∼10^8 cells/ml. 4 μl of sample was deposited on a glow-discharged grid, blotted on the backside of the grid for 6 s using Whatman No. 1 filter paper, and plunged into a 50/50 mixture of ethane and propane (Airgas) cooled to liquid nitrogen temperature using a manual plunger (Max Planck Institute of Biochemistry).
These Mycoplasma cells with synthetic genomes were more radiation-sensitive than other bacteria we have imaged. Imaging conditions were chosen to keep the cumulative electron dose under 120 e/Å^2. All data were acquired using a Titan Krios (ThermoFisher Scientific, TFS) at 300 kV and a Gatan K2 camera with a GIF energy filter, using SerialEM v3.7.4 automated protocols (Mastronarde, 2005; Schorb et al., 2019). The microscope parameters were: 1) pixel size: 0.53 nm (FOV: 2 μm) or 0.43 nm (FOV: 1.6 μm); 2) target defocus: 6 μm; 3) total accumulated dose: 90–120 e/Å^2; 4) tilt scheme: dose-symmetric from 0° to ±60° every 2°; 5) 70 μm objective aperture. Individual tilt-series frames were aligned using MotionCor2 (Zheng et al., 2017). Tomograms were reconstructed using IMOD
v4.10.29 (Kremer et al., 1996; Mastronarde, 1997; Mastronarde and Held, 2017) and binned by four for downstream template matching. Additionally, non-linear anisotropic diffusion (NAD) filtering was
applied in IMOD to enhance contrast for visualization.
At the pixel size and target defocus used for acquisition, the ribosome distributions are easily discerned and can be seen for the small and large cells in Supplementary Figures S1,S2, respectively.
The small cell’s dimensions and ribosome count were in good agreement with those reported previously (Hutchison et al., 2016; Breuer et al., 2019). However, the cells were flattened into ellipsoids,
and sometimes further elongated. This well-known effect from blotting seems amplified in these cells due to the absence of a cell wall. The frozen cells were flattened to a thickness of ∼160 nm.
To determine the ribosome distribution inside cells, we used two different approaches based on template matching, one of which continues on to 3-D classification. The first step in either approach is to identify all ribosomes within the tomogram. Template matching is performed by creating a 3-D template of the target structure and comparing it to each voxel in the tomogram using a 6-D search (three spatial and
three rotational degrees of freedom) to identify regions that correlate highly with the template. It is noteworthy that the contrast difference between ribosomes and their surroundings in Syn3A was
greatly reduced compared to other bacteria, e.g. E. coli, suggesting that the mass density (molecular crowding) of Syn3A is higher.
In our first approach, we used Dynamo v1.1.509 (Castaño-Díez et al., 2012) with a bacterial ribosome structure (PDB:5MDZ) as the initial template filtered to 20 Å resolution in UCSF Chimera (
Pettersen et al., 2004), resampled to match the pixel size, and contrast scaled to match that of the target tomograms. A threshold cross correlation was selected so that it contained most ribosomes
that were clearly inside the cell boundary. Final particle positions were inspected visually, and removed if they were membrane segments. Membranes were segmented using TomoSegMemTV (Martinez-Sanchez
et al., 2014), and ribosomes outside of this segmented membrane were excluded. Starting with a high-correlation threshold, the first approach initially identified 547 ribosomes in the small cell and
849 ribosomes in the large cell. Fitting approximate cell boundaries in section 2.1.2 reduced these ribosome counts to 503 and 820 for the small cell and large cell, respectively.
In our second approach, tilt-series were preprocessed using Warp v1.0.9 (Tegunov and Cramer, 2019) for sub-frame motion correction and 3D-CTF estimation. Tilt-series were aligned using IMOD and the
final reconstructions were created in Warp for subsequent processing. Template matching of ribosomes within tomograms was performed in Warp, using an initial ribosome template generated from about
200 manually picked particles from Syn3A tomograms using IMOD to avoid template bias. Extracellular particles were initially discarded based on cell boundaries defined in Dynamo. Obvious false
positives (e.g., membrane segments, ice particles) were manually removed. The remaining particles were used for 3D alignment and classification in Relion v3.1 (Scheres, 2012). For each cell of
interest, particles were subjected to successive rounds of binary classification with a large (500 Å or 83 binned pixels) mask; the class containing particles that did not appear to be ribosomes was removed from subsequent rounds. This was done until the two classes reached about equal population. A schematic of the overall process is presented in Supplementary Figure S3. Coordinates and
orientations of the remaining particles were imported into Amira for visualization. Although it starts with a lower correlation threshold, this second approach yielded 718 ribosomes in the small cell and 1,136 ribosomes in the large cell, as the quality of the fit increased. An additional round of binary classification, deemed too restrictive, gave counts similar to the first approach. Fitting
approximate cell boundaries in section 2.1.2 reduced these ribosome counts to 684 and 1,095 for the small cell and large cell, respectively.
The second approach, which starts with a lower correlation threshold and includes subsequent iterative 3-D classification, identifies the final true-positive ribosomes and ribosome distributions more accurately (Lasker et al., 2021). However, it requires considerably more resources and expertise; thus, we present both approaches. Even though they give slightly different distributions, both
are in agreement with estimates from other experimental and computational data (Breuer et al., 2019), and notably do not significantly affect the outcome of the chromosome geometries generated, as
shown in Figure 2. A summary of the ribosome counts for both approaches at each stage of our workflow is presented in Table 1.
FIGURE 2
FIGURE 2. (A)—Z-slices from the cryo-ET data of the small and large Syn3A cells. The ribosomes are the objects with higher density than the surrounding cytoplasm that are distributed throughout the
cells. (B)—Cumulative ribosome distributions with 8 nm bins in the reconstructed spherical geometries of a small cell of radius 201.26 nm with 503 ribosomes and large cell of radius 247.42 nm with
820 ribosomes when the first approach was used, and a small cell of radius 203.52 nm with 684 ribosomes and large cell of radius 241.20 nm with 1,095 ribosomes when the second approach was used. Also
shown are the reconstructed spherical geometries resulting from the first approach to template matching.
TABLE 1
TABLE 1. Summary of the ribosome distributions and cell geometries resulting from the two template matching methods for both the small and the large cell.
2.1.2 Determining the Spherical Cell Size and Ribosome Distribution
Given a set of ribosome coordinates, the bounding membrane and shape of the deformed cell can be approximated using an ellipsoid. This was done by calculating an ellipsoid with minimal surface area
that encloses the centers of all the ribosomes. The solution for the minimal surface area enclosing ellipsoid (MSAEE) was found using the minimize routine in the SciPy package with the sequential
least-squares programming (SLSQP) method. To optimize the calculation, only the convex polytope of the ribosome coordinates was used to constrain the enclosing ellipsoid. The optimal enclosing
ellipsoid is always constrained by four ribosomes that form a tetrahedral shape bounding the ellipsoid.
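To make the fitting procedure concrete, the sketch below sets up a comparable optimization with SciPy’s SLSQP method. It is a simplified illustration rather than the implementation used here: it assumes an axis-aligned ellipsoid parameterized by its center and semi-axes (the full fit also allows the orientation to vary), approximates the ellipsoid surface area with the Knud Thomsen formula, and the function name fit_msaee is hypothetical.

```python
# Sketch of a minimal surface area enclosing ellipsoid (MSAEE) fit, assuming
# an axis-aligned ellipsoid and the Knud Thomsen area approximation (~1% error).
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import ConvexHull

def fit_msaee(ribosome_xyz):
    # Only the convex-hull vertices can constrain the enclosing ellipsoid.
    hull_pts = ribosome_xyz[ConvexHull(ribosome_xyz).vertices]

    def surface_area(params):
        a, b, c = params[3:]
        p = 1.6075  # Knud Thomsen exponent
        return 4.0 * np.pi * (((a*b)**p + (a*c)**p + (b*c)**p) / 3.0)**(1.0 / p)

    def inside(params):
        center, axes = params[:3], params[3:]
        # Non-negative exactly when every hull point lies inside the ellipsoid.
        return 1.0 - np.sum(((hull_pts - center) / axes)**2, axis=1)

    x0 = np.concatenate([hull_pts.mean(axis=0), 0.75 * np.ptp(hull_pts, axis=0)])
    bounds = [(None, None)] * 3 + [(1e-3, None)] * 3  # keep semi-axes positive
    res = minimize(surface_area, x0, method="SLSQP", bounds=bounds,
                   constraints=[{"type": "ineq", "fun": inside}])
    return res.x[:3], res.x[3:]  # fitted center and semi-axes
```

Restricting the inequality constraints to the hull vertices, as described above, keeps the optimization small without changing the optimum.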
Some ribosomes identified by template matching are extraneous ribosomes, e.g., ribosomes that are present in the cell periphery but correspond to a nearby lysed cell. The extraneous ribosomes were
iteratively removed from the set of coordinates and a series of ellipsoids were iteratively fit after each extraneous ribosome removal until the relative change in the ellipsoid surface area between
iterations fell below 0.001%. At the end of each iteration, we choose the ribosome among the four bounding ribosomes having the greatest projection along the major axis of the enclosing ellipsoid as
the extraneous ribosome and remove it. The number of extraneous ribosomes removed are summarized in Table 1 for all cases. Figure 2A shows z-slices of the cryo-ET data for both the small and large
Syn3A cells. Notably, the spatial distribution of ribosomes in these cells is largely homogeneous, but small regions of about 150 nm appear to have fewer ribosomes than the surrounding cytoplasm.
After an ellipsoid approximating the membrane surface was calculated, both the ellipsoid and the enclosed ribosome coordinates were transformed to a spherical cell with equivalent surface area. A
surface-area preserving transformation was chosen as previous measurements on bilayer vesicles indicated that the membrane area can only strain by approximately 5% before lysing (Needham and Nunn,
1990); thus we assume that there is only a small change in membrane area, and that any change in volume during the blotting procedure is due to mass transport of water across the membrane. The equation of an ellipsoid centered at $c$ is $\{x \in \mathbb{R}^3 \mid (x - c)^T A^T A (x - c) \le 1\}$, where $A$ is the matrix describing the shape of the ellipsoid, and the equation of a sphere centered at the origin is $\{x \in \mathbb{R}^3 \mid x^T \mathbf{R}^{-2} x \le 1\}$, where $\mathbf{R} = R\,\mathbb{I}$ and $R$ is the radius of the sphere. For ribosome coordinates, $\{r_i\}$, the transformed coordinates, $\{\rho_i\}$, are obtained by first mapping all of the coordinates to a unit sphere centered at the origin, translating them by the vector $c$ and transforming them with the matrix $A$, and then scaling the unit-sphere coordinates by the matrix $\mathbf{R}$ to a sphere with surface area equivalent to the MSAEE. The overall transformation is given by

$\rho_i = \mathbf{R}\, A\, (r_i - c).$
The transformation preserves the relative distances amongst the ribosomes and the shapes of the voids between the ribosomes. In a final step, the ribosome coordinates are expanded anisotropically
along the semiaxes of the ellipsoid to ensure the ribosomes at the extremes reach the membrane. Two representative spherical geometries resulting from this transformation and the radial distribution
of ribosomes are shown in Figure 2B.
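A minimal numerical sketch of this mapping is given below, assuming the shape matrix A, center c, and equal-surface-area radius R from the fit above; the final anisotropic expansion toward the membrane is omitted for brevity.

```python
# Sketch of the surface-area-preserving ellipsoid-to-sphere mapping.
import numpy as np

def ellipsoid_to_sphere(coords, center, A, R):
    unit = (A @ (coords - center).T).T  # ellipsoid interior -> unit ball
    return R * unit                     # rescale to the equal-area sphere
```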
After the transformed ribosome coordinates are determined for the spherical cell, the coordinates are projected onto the 8 nm cubic lattice used for Lattice Microbes simulations and converted to a
star shape comprised of seven 8 nm cubic sites to approximate the ribosome diameter of 20 nm. The set of ribosome coordinates on the 8 nm lattice and the boundary imposed by the cell membrane then
serve as constraints when generating the ensemble of chromosome configurations.
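For illustration, a sketch of this projection is given below. The 7-site star (a center voxel plus its six face neighbors) follows the description above; the function name and interface are hypothetical.

```python
# Sketch: place each ribosome center on the 8 nm lattice and occupy a 7-site
# "star" (center plus six face neighbors), approximating the 20 nm diameter.
import numpy as np

STAR_OFFSETS = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0],
                         [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def ribosome_lattice_sites(coords_nm, lattice_nm=8.0):
    centers = np.rint(coords_nm / lattice_nm).astype(int)    # nearest lattice site
    sites = centers[:, None, :] + STAR_OFFSETS[None, :, :]   # 7 sites per ribosome
    return sites.reshape(-1, 3)
```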
2.2 Modeling Bacterial Chromosome Configurations
The three primary objectives of creating a chromosome model for Syn3A are creating realistic spatial heterogeneities due to DNA crowding that are discernable at the 8 nm resolution used in
spatially-resolved kinetic models of Syn3A, matching the cell architecture dictated by the cell boundary and ribosome distribution, and reproducing the intra-chromosomal interactions in chromosome
conformation capture experiments through DNA-looping.
Computational models for chromosomes can be broadly classified into two groups, direct models and inverse models (Rosa and Zimmer, 2014); the distinction is not entirely clear-cut, as discussed in the following paragraphs. Direct models use a minimal set of assumptions about the underlying physics of DNA or chromatin to create a polymer model, and the results of simulating the
model can then be compared to experimental data (Rosa and Zimmer, 2014). These models range on length-scale from 1 bp per monomer models of the E. coli chromosome (Hacker et al., 2017) to
500–50,000 bp per monomer models of human chromosomes (Di Pierro et al., 2016). The models at the smallest length scales often use a Kratky-Porod model (Kratky and Porod, 1949) or a worm-like chain
model for the polymer, where the persistence length of the DNA is explicitly incorporated. In contrast, the models at the largest length scales often use a Rouse model (Rouse, 1953) for the polymer,
in which the monomers are assumed to be uncorrelated equilibrium globules of DNA. These models based on Rouse dynamics are well-suited for eukaryotic chromosomes on the order of 10^7–10^8 bp, where
the DNA is organized in nucleosomes comprised of histone octamers and other higher-order structures. A comprehensive discussion of possible interactions in the direct models of DNA polymers can be
found in the review by Haddad et al. (2017) and the Minimal Chromatin Model of Di Pierro et al. (2016). The complexity of interactions in polymer models of DNA can
range from those in homopolymer models to block copolymer models, and finally heteropolymer models (Haddad et al., 2017). Additionally, direct chromosome models can include the influence of NAPs,
SMC, or bridging proteins in strings and binders models (Annunziatella et al., 2018; Ryu et al., 2021), where other particles diffuse amongst the chromosome and cause multi-point intrachromosomal
interactions. After a polymer model has been specified and the chromosome of interest has been mapped to the model, molecular dynamics or Monte Carlo methods are used to sample configurations of the
direct models.
Inverse models are data-driven and use large sets of experimental data to create a compatible model (Rosa and Zimmer, 2014; Oluwadare et al., 2019). The most common form of experimental data used in
inverse models are chromosome contact maps resulting from 3C methods. The interaction frequencies in the contact maps are inverted to produce distance-based restraints for the chromosome models (Rosa
and Zimmer, 2014). In addition to these distance-based restraints, constraints that are based on the known properties of the chromosome, such as the topology and excluded-volume effects, can be
incorporated into the inverse models. A single ideal chromosome configuration that simultaneously satisfies all restraints and constraints can then be determined using iterative methods (Duan et al.,
2010; Lesne et al., 2014; Hua and Ma, 2019). However, in reality, no single chromosome configuration will capture all of the interactions present in the contact map, as the contact map is an average
over a population of cells. Instead, methods such as simulated annealing are used to find families of optimal chromosome configurations (Rosa and Zimmer, 2014; Junier et al., 2015). The chromosome of
M. pneumoniae (Trussart et al., 2017) and that of C. crescentus (Umbarger et al., 2011) were modeled in this fashion using the Integrative Modeling Platform (Russel et al., 2012). Inverse models have
also been built using maximum entropy techniques (Di Pierro et al., 2017; Messelink et al., 2021).
At the start of this study there was no experimental chromosome contact data for Syn3A, so we chose to create a direct model of the chromosome. Because we intend to incorporate the chromosome configurations in simulations of whole-cell models using a lattice-based methodology (Roberts et al., 2013), we decided to use a lattice polymer model. There is a rich history of proteins and other
polymers being modeled using discrete lattice models (Verdier and Stockmayer, 1962; Heilmann and Rotne, 1982; Lau and Dill, 1989; Madras et al., 1990; Dill et al., 1995). Bacterial chromosome
configurations have previously been directly modeled using lattice models (Buenemann and Lenz, 2010; Messelink et al., 2021) and continuous models have been constructed by interpolating between
lattice models and relaxing the system (Goodsell et al., 2018). However, none of the models satisfied all three of our requirements of 1) being at the spatial resolution needed to introduce spatial
heterogeneities on the 8 nm lattice, 2) self-avoidance, and 3) the ability to be constrained by the cell boundary and ribosomes. We investigated modifying an existing model, such as that of Goodsell et al. (2018), but found that none were easily extensible.
2.2.1 Growing a Self-Avoiding Polygon Model of Syn3A’s Chromosome
We model the circular chromosome of Syn3A as a circular lattice polymer. To account for the volume-exclusion effects, the circular lattice polymer is required to be strictly self-avoiding. These
circular and self-avoiding configurations of monomers on a lattice are known as self-avoiding polygons (SAPs) and have been previously used to model E. coli and C. crescentus chromosomes (Buenemann
and Lenz, 2010). The SAP model of Syn3A’s circular chromosome is defined on a 4 nm cubic lattice and each monomer is represented by a 4 nm × 4 nm × 4 nm cube. These monomers contain cylindrical
segments of DNA 4 nm in length, which corresponds to approximately 11.8 bp per monomer. The 543 kbp chromosome of Syn3A is represented by 46,188 of these monomers. The total volume excluded by
monomers in the chromosome is 2,956,032 nm^3. At the two extremes, in the small cell with a radius of 201.26 nm, a single chromosome occupies nearly 9% of the cytoplasmic volume, and in the large
cell with a radius of 247.42 nm, a single chromosome occupies just below 4% of the cytoplasmic volume.
Mathematically, the SAP configurations within the reconstructed cell geometries are described by the sets of monomer coordinates, $\{r_i\}$, on the cubic lattice that satisfy four different constraints:
two SAP constraints, a circularity constraint (g^circ) and a self-avoidance constraint (g^SA), and two cell geometry constraints, a membrane constraint (h^mem) and a ribosome constraint (h^ribo). The
circularity constraint requires that consecutive monomers are adjacent in the lattice, the self-avoidance constraint requires that no monomers share coordinates, the membrane constraint requires that
the monomers remain within the cell, and the ribosome constraints require that the monomers do not intersect any ribosomes. The ribosomes in the 8 nm lattice representation are converted to a 4 nm
lattice representation, where they are the same star shape, but now formed from fifty-six 4 nm cubes. These constraints are formulated mathematically using constraint functions that are equal to 1
when the constraints are satisfied and 0 when the constraints are not satisfied. All four of these constraints must be satisfied while growing and moving the SAP. While satisfying the constraints,
the configurations are sampled from the canonical ensemble with a Hamiltonian that specifies intrachromosomal interactions, including looping, which will be referred to as restraints. The Hamiltonian
is described in section 2.2.3.
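The sketch below shows how the four 0/1 constraint functions can be checked for a candidate configuration. It is illustrative only: inside_cell and occupied_ribo stand in for lookups against the reconstructed membrane and the 4 nm ribosome sites, and the production implementation (in Fortran, see section 2.2.4) differs.

```python
# Sketch of the four SAP constraint checks on the 4 nm cubic lattice.
import numpy as np

def circularity_ok(coords):
    # Cyclically consecutive monomers must occupy face-adjacent lattice sites.
    diffs = np.abs(coords - np.roll(coords, -1, axis=0))
    return bool(np.all(diffs.sum(axis=1) == 1))

def self_avoidance_ok(coords):
    # No two monomers may share a lattice site.
    return len({tuple(c) for c in coords}) == len(coords)

def geometry_ok(coords, inside_cell, occupied_ribo):
    # Monomers must stay inside the membrane and avoid all ribosome sites.
    return all(inside_cell(c) and not occupied_ribo(c) for c in map(tuple, coords))
```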
A SAP with a greater number of monomers can be grown from an existing SAP by severing the bond between a pair of consecutive monomers and adding a closed branch orthogonal to the vector between that
pair of monomers (Buenemann and Lenz, 2010; Goodsell et al., 2018). This is done in an unbiased fashion by randomly selecting consecutive pairs of monomers to serve as a branch-point and then
randomly proposing growths in the orthogonal directions; an example of proposed growths is depicted in Figure 3A. Each proposed growth is only accepted if the resulting SAP satisfies all of the constraints. For example, growth #1 in Figure 3B may have been accepted because all of the other proposed growths violated the ribosome constraints. If a satisfactory growth cannot be found, then the SAP is moved before searching for growths again. Pseudocode for the SAP growth algorithm is presented in Algorithm 1.
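A sketch of a single growth attempt in the spirit of Algorithm 1 is shown below, reusing the constraint checks sketched above; propose_growth and its fixed branch depth are hypothetical simplifications.

```python
# Sketch of one growth attempt: sever the bond between a random pair of
# consecutive monomers and insert a closed orthogonal branch of 2*depth monomers.
import random
import numpy as np

def propose_growth(coords, inside_cell, occupied_ribo, depth=2):
    n = len(coords)
    i = random.randrange(n)               # bond between monomers i and i+1 (cyclic)
    a, b = coords[i], coords[(i + 1) % n]
    bond = b - a
    # The four lattice directions orthogonal to the severed bond.
    dirs = [d for d in np.vstack([np.eye(3, dtype=int), -np.eye(3, dtype=int)])
            if np.dot(d, bond) == 0]
    random.shuffle(dirs)
    for d in dirs:
        branch = [a + k * d for k in range(1, depth + 1)] \
               + [b + k * d for k in range(depth, 0, -1)]
        grown = np.insert(coords, (i + 1) % n, branch, axis=0)
        if self_avoidance_ok(grown) and geometry_ok(grown, inside_cell, occupied_ribo):
            return grown                  # SAP grew by 2 * depth monomers
    return None                           # caller moves the SAP and retries
```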
FIGURE 3
FIGURE 3. (A)—SAP with the set of proposed growths orthogonal to branch-point at monomers 4 and 5 shown in red. (B)—SAP after growth #1 with a size of 4 was accepted and incorporated into the SAP,
increasing the SAP size from 16 monomers to 20 monomers.
2.2.2 Circularity-Preserving Moves and Proof of Ergodicity
If we start with a valid SAP configuration and then only change the configuration using moves that result in a polymer configuration still satisfying the circularity constraint, then, provided that
the moves are ergodic and the new configurations are self-avoiding, we can sample SAP configurations using random sequences of the circularity-preserving moves. The proof of ergodicity follows the
proof outlined in (Messelink et al., 2021).
A SAP on a lattice may be represented as a series of displacements along the cubic lattice from a starting location (Messelink et al., 2021). Displacements in the positive and negative Cartesian directions are denoted by $X^+$, $Y^+$, and $Z^+$ and $X^-$, $Y^-$, and $Z^-$, respectively. To ensure circularity, the number of positive and negative displacements must be equal for every direction on the lattice, or symbolically, $N_{A^+} = N_{A^-}$, where $A = X, Y,$ or $Z$ (Messelink et al., 2021). Traveling counter-clockwise from the origin, the SAP in Figure 4 can be written as such a sequence of displacements.
FIGURE 4
FIGURE 4. Circularity-preserving moves on a cubic lattice—An example kink move is shown in green. Two example crankshaft moves are shown in red and blue. Following a single enumeration of the set of
possible crankshaft moves, multiple crankshaft moves can be made, provided that they are compatible with the crankshaft moves previously sampled from that set of possible crankshaft moves. An example
of this is shown by the composition of crankshaft moves 1 and 2 in the purple.
There are a variety of circularity-preserving moves that can transform the sequence while maintaining the circularity. For our program, we chose an extension of the Verdier-Stockmayer moveset (Verdier and Stockmayer, 1962; Sokal, 1995) with kink moves and 2 to $N/2$ monomer crankshaft moves. A kink move is the interchange of two symbols in a subsequence, $AB \to BA$ (Messelink et al., 2021). The move labeled kink move in Figure 4 is equivalent to $X^- Y^- \to Y^- X^-$. A crankshaft move alters a motif of a specific type: a subsequence where the monomers at the start and end of the subsequence share two Cartesian coordinates (Messelink et al., 2021). Symbolically, within such a subsequence, $N_{A^+} \neq N_{A^-}$, while $N_{B^+} = N_{B^-}$ and $N_{C^+} = N_{C^-}$. The crankshaft move is then a rotation of magnitude $\pi/2$, $\pi$, or $3\pi/2$ about the vector separating the monomers at the start and end of the subsequence, applied to all of the monomers between those two. Generally, the transformation of symbols within the subsequence undergoing a crankshaft move will be $A^\pm \to A^\pm$, while $B^\pm \to (C^\pm, B^\mp, C^\mp)$ and $C^\pm \to (B^\pm, C^\mp, B^\mp)$. The move labeled crankshaft move 1 in Figure 4 is an example of such a rotation.
Starting from a sequence of at least two symbol types satisfying the condition $N_{A^+} = N_{A^-}$, where $A = X, Y,$ or $Z$, combining the kink and crankshaft moves can produce any sequence of symbols that also satisfies the condition (Messelink et al., 2021). This result allows for ergodic sampling of sequences, which is equivalent to ergodic sampling of polymer configurations satisfying the circularity constraint. However, the Verdier-Stockmayer moveset is known to be non-ergodic for self-avoiding walks (SAWs) and SAPs due to the presence of knotted configurations (Madras and Sokal, 1987; Madras et
al., 1990) and there is the additional challenge of confinement imposed by the ribosome and the cell boundary constraints. We attempted to mitigate these issues by incorporating the extended
crankshaft moves and growing the SAPs to sample configurations that would otherwise be inaccessible by a single SAP being dynamically sampled using a Markov chain Monte Carlo method.
The relative frequencies of the kink moves and crankshaft moves have significant impact on the overall speed of the algorithm and are linked to the ergodicity (Sokal, 1995). The speed of the
algorithm can be improved by performing multiple kink or crankshaft moves from a single enumeration of all possible kink or crankshaft moves in the current configuration, respectively. However,
following the single enumeration, in addition to satisfying the SAP and spatial constraints, all kink or crankshaft moves performed must be compatible.
The list of possible kink moves are stored as an array of three element vectors of monomer indices, $(i−1,i,i+1)$, where the i-th monomer in the middle will be moved by interchanging two of its
coordinates that match the coordinates of the $(i-1)$-th and $(i+1)$-th monomers. After at least one kink move is proposed and accepted, no following kink move may have its $(i-1)$-th or $(i+1)$-th monomer be one of the middle monomers that was moved in a previously accepted kink move. Proposed kink moves are then rejected based on this condition. The list of possible crankshaft moves
are stored as an array of two element vectors $(i,j)$ of monomer indices, where $i<j$ and i and j are the monomers defining the ends of the subset of the SAP which will be transformed by the
crankshaft moves, and an array of two element vectors $(d,ω)$, describing the length of the SAP subset, d, and the direction around the SAP in which the SAP subset is defined, ω. After at least one
crankshaft move has been accepted, all following crankshaft moves must have their $(i′,j′)$ either both belonging to the SAP subset that was moved by the crankshaft move or both not belonging to the
SAP subset that was moved. Proposed crankshaft moves are then rejected based on this condition.
Crankshaft moves are the most computationally expensive to both enumerate and sample; however, they cause the fastest change in the configuration. The naive solution to this problem was to assign a frequency at which crankshaft moves were performed, $\eta_{\mathrm{crankshaft}}$, and multiplicities for the number of kink and crankshaft moves that were performed after a single enumeration of kink or crankshaft moves, $g_{\mathrm{kink}}$ and $g_{\mathrm{crankshaft}}$, respectively. These parameters describing the sampling were then manually adjusted. Using this methodology prevents ergodic sampling from ever occurring. This can be illustrated by considering the fact that as long as crankshaft moves are sampled in batches of $g_{\mathrm{crankshaft}}$ every $\eta_{\mathrm{crankshaft}}$ iterations, then unless only a single crankshaft move is possible, there will never be an instance in which a kink move is sampled immediately after a crankshaft move. The inverse case is also true. An alternative is to randomly select the iterations at which crankshaft moves will be enumerated and performed, with probability $p_{\mathrm{crankshaft}} = \eta_{\mathrm{crankshaft}}^{-1}$. Once the move type is determined using this criterion, the number of moves to be performed is randomly sampled from a distribution whose mean is equal to the multiplicity of the respective move type, for example, the discrete uniform distributions $n_{\mathrm{kink}} = u(0, 2g_{\mathrm{kink}})$ and $n_{\mathrm{crankshaft}} = u(0, 2g_{\mathrm{crankshaft}})$. There then exists the possibility that any sequence of kink and crankshaft moves may be sampled. Pseudocode for the SAP movement algorithm is presented in Algorithm 2.
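The randomized scheduling can be sketched as follows; the enumeration and compatibility-filtering helpers are passed in as callables and stand in for the bookkeeping described above.

```python
# Sketch of the randomized move scheduling: both the move type and the batch
# size are random, so any ordering of kink and crankshaft moves is possible.
import random

def sampling_step(sap, p_crankshaft, g_kink, g_crankshaft,
                  enumerate_kink, enumerate_crankshaft, apply_compatible):
    if random.random() < p_crankshaft:            # p = 1 / eta_crankshaft
        moves = enumerate_crankshaft(sap)
        n = random.randint(0, 2 * g_crankshaft)   # discrete uniform, mean g
    else:
        moves = enumerate_kink(sap)
        n = random.randint(0, 2 * g_kink)
    apply_compatible(sap, moves, n)  # rejects incompatible or constraint-violating moves
```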
2.2.3 Energy Functions and Metropolis-Hastings Sampling
The Hamiltonian for the SAP model of the chromosome has three contributions, a bending energy related to the stiffness of DNA, a nearest-neighbor interaction, and a harmonic interaction acting as a
restraint to recreate the effect of DNA looping.
The contribution to the Hamiltonian due to the bending stiffness of linear DNA is

$\mathcal{H}_{\mathrm{bend}} = \kappa l^2 \sum_{i=1}^{N} \left(1 - \hat{u}_i \cdot \hat{u}_{i+1}\right),$

where $\hat{u}_i$ is the unit vector along the bond between monomers $i$ and $i+1$, and is parameterized by the bending energy per unit length squared, κ. This Hamiltonian incurs an energy penalty of $\kappa l^2$ for every bend in the lattice polymer and can be used to model the stiffness of a
polymer, a quantity often characterized by the persistence length. One interpretation of the persistence length, $l_p$, is as the constant describing the exponential rate at which the polymer orientations become decorrelated (Brinkers et al., 2009; Hsu and Binder, 2012; Zhang et al., 2019),

$\langle \hat{u}_i \cdot \hat{u}_{i+s} \rangle_{\mathrm{mono}} = e^{-s l / l_p},$

where $\langle \cdot \rangle_{\mathrm{mono}}$ is the average over the N monomers in the configuration and $l$ is the lattice size. Consider the case of a SAW on a cubic lattice, in which the lattice polymer can become immediately decorrelated; taking $s = 1$ gives

$\langle \hat{u}_i \cdot \hat{u}_{i+1} \rangle_{\mathrm{mono}} = e^{-l / l_p},$

leading to an equation involving the bending Hamiltonian parameterized by κ. Assuming the lattice polymer is in thermal equilibrium at inverse temperature $\beta = 1/k_B T$, we can take a thermal average of this expression, and κ can be calculated by solving the resulting root-finding problem through Monte Carlo sampling of SAW configurations using Wang-Landau sampling (Wang and Landau, 2001). In this study, the value of $\kappa l^2$ ($3.872\,k_B T$) was estimated using the exact solution for a non-reversal random walk and the consensus persistence length for DNA of 50 nm (Vologodskii et al., 1992; Manning, 2006; Brinkers et al., 2009; Geggier et al., 2010; Mantelli et al., 2011).
The contribution to the Hamiltonian due to pairwise nearest-neighbor interactions is

$\mathcal{H}_{\mathrm{NN}} = \epsilon \sum_{i < j} \Delta(r_i, r_j),$

where $\Delta(r_i, r_j) = 1$ if non-bonded monomers $i$ and $j$ occupy adjacent lattice sites and 0 otherwise, and was used to tune the excluded-volume effects of DNA ($\epsilon = k_B T$). Lastly, the contribution to the Hamiltonian when looping restraints are imposed is

$\mathcal{H}_{\mathrm{loop}} = \sum_{(i,j)} \frac{k_{ij}}{2} \left| r_i - r_j \right|^2.$

These pairwise harmonic interactions were used to create looping between portions of the chromosome bound by SMC proteins ($k_{ij} l^2 = 10{,}000\,k_B T$).
A Markov chain Monte Carlo algorithm (Metropolis et al., 1953; Hastings, 1970) was used to sample configurations governed by this Hamiltonian from the canonical ensemble. We use the Metropolis criterion, $A(\{r_i'\}, \{r_i\}) = \min\left(1, P(\{r_i'\}) / P(\{r_i\})\right)$ (Metropolis et al., 1953), for the acceptance probability of moving from the current configuration, $\{r_i\}$, to the proposed configuration, $\{r_i'\}$. The probability of a configuration satisfying the SAP constraints (g^circ and g^SA) and geometric constraints (h^mem and h^ribo) is

$P(\{r_i\}) = \frac{1}{Z}\, e^{-\beta \mathcal{H}(\{r_i\})}\, g^{\mathrm{circ}}\, g^{\mathrm{SA}}\, h^{\mathrm{mem}}\, h^{\mathrm{ribo}},$

where $Z$ is the canonical partition function of the system found by summing over all possible configurations of N monomers on a cubic lattice. Assuming the current configuration, $\{r_i\}$, and the proposed configuration, $\{r_i'\}$, always satisfy the circularity constraint because they are generated from sequences of circularity-preserving moves, the ratio of probabilities is

$\frac{P(\{r_i'\})}{P(\{r_i\})} = e^{-\beta\left(\mathcal{H}(\{r_i'\}) - \mathcal{H}(\{r_i\})\right)}\; g^{\mathrm{SA}}(\{r_i'\})\, h^{\mathrm{mem}}(\{r_i'\})\, h^{\mathrm{ribo}}(\{r_i'\}).$

Additionally, if the proposed configuration satisfies the self-avoidance and geometric constraints, which can be determined without evaluating energy changes, then the acceptance probability given by the Metropolis criterion, $A(\{r_i'\}, \{r_i\}) = \min(1, e^{-\beta \Delta E})$, is simply a function of the energy difference, $\Delta E = \mathcal{H}(\{r_i'\}) - \mathcal{H}(\{r_i\})$, and the sampling favors low-energy configurations that better agree with the stiffness of DNA, the excluded-volume effects, and the DNA-looping restraints.
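Because the hard constraints are checked first, the Metropolis step itself reduces to a comparison on the energy difference, as in this sketch:

```python
# Sketch of the Metropolis acceptance test for a constraint-satisfying
# proposal; delta_E = H(proposed) - H(current), beta = 1/(kB*T).
import math
import random

def metropolis_accept(delta_E, beta=1.0):
    return delta_E <= 0.0 or random.random() < math.exp(-beta * delta_E)
```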
2.2.4 Summary of Complete Algorithm for Generating Chromosome Configurations
The final algorithm generated chromosome configurations by alternating cycles of growing and moving the SAP configurations to relax the newly grown portion. Pseudocode for the final algorithm is
presented in Algorithm 3. In an early implementation, a single relaxation occurred after the growth was completed, but it was found that the alternating cycles of growth and relaxation were required
because the combined effects of confinement and the exponentially increasing attrition rate due to violations of the self-avoidance constraint became overwhelming as the SAP grew larger. The relative
frequencies and durations of these alternating growth and relaxation cycles were chosen empirically to maximize the speed of generating relaxed configurations of the complete chromosome. An example
of how the alternating growth and relaxation cycles affect the total energy over the course of a simulation for a test case with 5,000 monomers is presented in Supplementary Figure S4. While this
procedure more rapidly relaxes the system, the exponentially increasing attrition rate of rejected moves prevents us from definitively stating that we reach equilibrium in the system with 46,188
monomers. The relative frequencies and durations are described by functions, τ and σ, both are dependent on the current number of monomers, N, and separately depend on empirical parameter vectors,
$α$ and $γ$, respectively. The algorithm was implemented in Fortran 90 and a single chromosome configuration of 46,188 monomers can be generated in approximately 12–14 h on a single CPU core at
3.5 GHz. The algorithm is embarrassingly parallel and the program uses OpenMP to generate multiple configurations simultaneously. An example reconstructed cell architecture is shown in Figure 5A and
the resulting constrained chromosome configuration is shown in Figure 5B.
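Combining the pieces sketched above, the outer loop of Algorithm 3 alternates growth and relaxation. In the hypothetical sketch below, tau and sigma stand in for the empirical schedule functions τ and σ, grow for a growth attempt such as propose_growth, and relax for one batch of circularity-preserving moves.

```python
# Sketch of the alternating growth/relaxation loop (cf. Algorithm 3).
def generate_configuration(sap, n_target, tau, sigma, grow, relax):
    # grow(sap) -> enlarged SAP or None; relax(sap) applies one batch of moves.
    while len(sap) < n_target:
        added = 0
        while added < tau(len(sap)):      # growth cycle of size tau(N)
            grown = grow(sap)
            if grown is None:
                relax(sap)                # unstick by moving the SAP, then retry
            else:
                added += len(grown) - len(sap)
                sap = grown
        for _ in range(sigma(len(sap))):  # relaxation cycle of length sigma(N)
            relax(sap)
    return sap
```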
FIGURE 5
FIGURE 5. (A)—Reconstructed ribosome distribution in the small cell. The 100 monomers on either side of the origin are shown in red and blue. Ribosomes are depicted as yellow stars in the 8 nm
lattice representation. (B)—Complete chromosome configuration generated on the 4 nm lattice within the reconstructed architecture of the small cell. The circular chromosome is colored starting at the
origin as red to grey to blue, before returning to the origin where blue and red meet.
Starting with fixed ribosome positions and cell orientation from the cryo-ET, we initialize the configurations by randomly placing a circular fragment of the chromosome and then independently
generate hundreds of chromosomes within an otherwise identical cell. To test if the monomers along the chromosome are identically distributed within the cell, we calculate the centroid of the
ensemble of chromosome configurations. The monomer coordinates of the centroid are the ensemble averages of the monomer coordinates in the chromosome configurations. The center of mass of a uniform sphere is at its center; thus we expect the centroid of the ensemble of chromosome configurations to be approximately located at the center of the spherical cell. We find the centroid of 30 configurations to
be located in the center of the cell, as shown in Figure 6A. Furthermore, if the number of identically distributed chromosome configurations is increased, we expect the centroid to collapse to the
center, which we quantify with its radius of gyration. The centroid of 30 configurations in Figure 6B has a radius of gyration of 24.93 nm and it is reduced to 11.99 nm when the centroid is
calculated from 90 configurations, as shown in Figure 6C.
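The centroid and its radius of gyration can be computed directly from a monomer-aligned ensemble, as in this short sketch; the array layout and function name are illustrative.

```python
# Sketch: centroid of an ensemble of configurations and its radius of gyration.
import numpy as np

def centroid_radius_of_gyration(configs):
    # configs: (n_configs, n_monomers, 3) array of monomer coordinates in nm.
    centroid = configs.mean(axis=0)          # per-monomer ensemble average
    dev = centroid - centroid.mean(axis=0)   # deviations from its center of mass
    return centroid, float(np.sqrt((dev**2).sum(axis=1).mean()))
```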
FIGURE 6
FIGURE 6. (A)—Centroid of 30 chromosome configurations shown within the ribosome distribution. The same color scheme is used for the centroid as for the chromosome in Figure 5. (B)—Magnified view of the centroid in 6A calculated from 30 configurations; its radius of gyration is 24.93 nm. (C)—Magnified view of the centroid calculated from 90 configurations; its radius of gyration is 11.99 nm.
Other bacteria that are not genetically-minimal have additional regulatory systems used to control their chromosome organization, such as attachment organelles and parABS systems. Due to these
regulatory systems, their chromosomes show consistent configurations that correlate the genomic position with the internal structure of the cell (Umbarger et al., 2011; Marbouty et al., 2015;
Trussart et al., 2017) and this is reflected in their centroids. For example, in a model of M. pneumoniae’s chromosome, Trussart et al. saw a consistent alignment and interweaving of the two
chromosome arms of the centroid (Trussart et al., 2017). As a comparison, we tested fixing the origin of our chromosome at the membrane and found that the centroid had a consistent alignment of the
two chromosome arms at the fixed origin, and monomers near the origin were found near the membrane (data not shown). Since our model has no interactions correlating the genomic position and the internal structure of the cell, we instead compared the average radius of gyration for chromosome configurations generated in the small cell with and without ribosomes present to test the excluded-volume effect of
ribosomes. The average radius of gyration without ribosomes was 145.40 nm and was 133.59 nm when ribosomes were present. We also tested the effect of further increasing the number of ribosomes by
randomly placing 497 ribosomes in addition to the 503 from the tomogram in the small cell and found that the average radius of gyration further decreased to 124.29 nm. We attribute this reduction in
the average radius of gyration to the additional confinement caused by the volume exclusion of the ribosomes.
2.3 3C-Seq Library Preparation
JCVI-syn3A chromosome contact maps were prepared with 3C-Seq (Lioy and Boccard, 2018), a chromosome conformation capture technique reminiscent of Hi-C (Crémazy et al., 2018). The protocols differ in
that following the restriction digestion of the fixed chromosome, restriction fragment ends are not filled-in with biotin-labelled nucleotides in 3C-Seq (Crémazy et al., 2018; Lioy and Boccard, 2018
). The modification reduces the cost of chromosome conformation capture in prokaryotes since the requirement for biotin-labelled nucleotides is eliminated. 3C-Seq also broadens the choice of restriction enzymes available for library preparation from only those that generate 5′-overhangs that can be filled in by the Klenow fragment to also include enzymes that generate 3′-overhangs or blunt ends. Furthermore, sticky ends generated by restriction digestion are not “blunted” in 3C-Seq, increasing ligation efficiency since sticky-end ligation occurs more efficiently than blunt-end ligation. In addition, the absence of biotin at restriction fragment ends eliminates the requirements of removing biotin labels from unligated ends, of DNA purification following biotin removal, and of enrichment of biotin-labelled ligation junctions, effectively reducing the library preparation time by at least 30%.
Syn3A was cultured to stationary phase in 25 ml of SP4-KO medium in a 50 ml conical tube at 37°C. The cells were fixed with a final concentration of 1% formaldehyde (Sigma-Aldrich) at 25°C for 30 min
and 4°C for a further 30 min. The reaction was quenched with 0.125 M glycine (Sigma-Aldrich) for 15 min at 4°C. The fixed cells were collected by centrifugation and washed twice with 1X HE pH 8.0
[10 mM HEPES (Sigma-Aldrich), 1 mM EDTA (Sigma-Aldrich)]. The cell pellet was flash-frozen with liquid nitrogen in a 1.5 ml low-binding microfuge tube and stored at −80^°C until use. Fixed Syn3A
cells were resuspended in 100 μl of 1X HE pH 8.0 and mechanically sheared with 0.5 mm glass beads (Sigma-Aldrich) using a vortex mixer. Membranous structures in the lysate were solubilised with 0.5%
SDS (Sigma-Aldrich) for 15 min at 37°C in a Thermomixer® (Eppendorf) with shaking at 1,000 rpm. SDS was quenched with 1% Triton X-100 (Sigma-Aldrich) in 1X CutSmart buffer (NEB) for 15 min at 37°C in a Thermomixer® (Eppendorf) with shaking at 1,000 rpm. The extracted chromatin was digested with 100 U of NlaIII (NEB) for 3 h at 37°C. The reaction was terminated with 0.5% SDS (Sigma-Aldrich) for 20 min at 37°C. The digested chromatin was centrifuged at 20,000 × g for 1 h at 4°C. The supernatant was removed and the gel-like pellet was dissolved in 200 μl of nuclease-free water
(ThermoFisher Scientific). The DNA concentration of the dissolved chromatin was determined using the Qubit® HS dsDNA assay kit (ThermoFisher Scientific) and the Qubit® fluorometer (ThermoFisher
Scientific). 3 μg of DNA was used for ligation in 1X T4 DNA ligase buffer (NEB) supplemented with 100 μg/ml BSA (NEB) in a final volume of 1,000 μl. The reaction was carried out with 4000 CEU of T4
DNA ligase (NEB) at 16°C for 16 h and 25°C for 1 h. Ligation was terminated with 10 mM EDTA pH 8.0 (usb Corporation). Ligated DNA (the 3C library) was extracted twice with 25:24:1 phenol:chloroform:isoamyl alcohol (Sigma-Aldrich) and once with chloroform (Sigma-Aldrich). The library was precipitated with 0.1 × 1.0 M NaOAc (Sigma-Aldrich) pH 8.0, 0.025 × 5 mg/ml glycogen (Invitrogen), and 2.5 × 100% ethanol (Sigma-Aldrich) at −20°C overnight. Precipitated DNA was pelleted by centrifugation and the pellet washed twice with 70% ethanol (Sigma-Aldrich). The pellet was air-dried and dissolved in 50.0 μl of 10 mM Tris (Sigma-Aldrich) pH 8.0. The 3C library was purified with 3X KAPA HyperPure beads (KAPA Biosystems) and eluted in 20.0 μl of 10 mM Tris (Sigma-Aldrich) pH 8.0. 3C-Seq libraries for next-generation sequencing were prepared using the KAPA HyperPlus Kit (KAPA Biosystems) according to the manufacturer’s protocol. 3C-Seq libraries were sequenced on an
Illumina® platform.
3 Results
3.1 3C-Seq and in Silico Contact Maps
The 3C-Seq library prepared using the restriction enzyme NlaIII had a total of 1,819,715 reads that were mapped at a resolution of 1,000 bp. A histogram of restriction digestion fragment sizes and the distribution of NlaIII cut sites in Syn3A’s chromosome are presented in Supplementary Figures S5,S6, respectively. The contact map was normalized to be a doubly-stochastic matrix using the
matrix-balancing procedure of Knight and Ruiz (Knight and Ruiz, 2012; Rao et al., 2014) and is shown in Figure 7A. The chromosome contact map shows a primary diagonal of high interaction frequency
that reflects the physical proximity of loci that lie close to each other along the primary sequence of the DNA polymer. A secondary diagonal cannot be detected implying the absence of inter-arm
interactions along the chromosome. The absence of a secondary diagonal is in contrast to the chromosome contact maps of M. pneumoniae (Trussart et al., 2017), B. subtilis (Marbouty et al., 2015), and
C. crescentus (Le et al., 2013; Tran et al., 2017). Notably, there are two regions of the chromosome that are devoid of interactions, these regions correspond to the two identical ribosomal RNA
operons in Syn3A and can be seen in Figure 7B. No interactions were assigned to these regions as sequencing reads arising from either copy could not be distinguished. There are smaller secondary
features along the diagonal that we interpret to be regions of high interaction due to looping. However, as this is a preliminary map with a low read depth and signal-to-noise ratio, chromosome
architecture cannot be reliably interpreted and standard loop and chromosome interaction domain (CID) annotation software (Durand et al., 2016b) was unable to reliably process the map. Upon visual
inspection at a resolution of 250 bp, the map shows four interactions, with distinct signatures reminiscent of loops (Fudenberg et al., 2016). Snapshots of the four interactions at 250 bp resolution
in Juicebox (Durand et al., 2016a) are shown in Supplementary Figure S7. We infer the end points of these loops to be such that they fully encompass genes in the corresponding regions of the
chromosome. The positions of these manually annotated loops, the genes they encompass, and the corresponding proteomics are presented in Table 2, and the loop locations within the contact map can be
seen in Figure 7A.
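To make the normalization step concrete, the following Python sketch balances a raw contact matrix into doubly-stochastic form. It uses the simpler symmetric Sinkhorn-Knopp iteration as a stand-in for the Knight-Ruiz algorithm cited above (both converge to the same balanced matrix; Knight-Ruiz converges faster); the function name, tolerance, and handling of the unmappable rRNA rows are illustrative assumptions, not the authors' implementation.

import numpy as np

def balance_contact_map(raw, tol=1e-6, max_iter=2000):
    # Symmetrically rescale a raw contact matrix M as D*M*D so that every
    # mappable row/column sums to 1 (doubly stochastic). This is the
    # Sinkhorn-Knopp fixed-point iteration.
    m = np.asarray(raw, dtype=float)
    mappable = m.sum(axis=0) > 0  # e.g., the two rRNA operon regions are all-zero
    x = np.ones(m.shape[0])
    for _ in range(max_iter):
        row_sums = (m * np.outer(x, x)).sum(axis=1)
        if np.abs(row_sums[mappable] - 1.0).max() < tol:
            break
        x[mappable] /= np.sqrt(row_sums[mappable])
    return m * np.outer(x, x)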
FIGURE 7
FIGURE 7. (A)—3C-Seq contact map at 1,000 bp resolution, with the color scale adjusted to make weak secondary features along the diagonal more apparent. The four manually annotated loops listed in Table 2 are indicated with cyan boxes. (B)—Circular chromosome of Syn3A with features shown as arrows and arcs around the perimeter, constructed using CGView (Petkau et al., 2010). The proteomics of a 400 nm Syn3A cell (Breuer et al., 2019) are plotted in red around the middle ring, and the innermost ring contains the annotated loops in green. (C)—In silico contact map resulting from 150 configurations with looping interactions added at the positions of the manually annotated loops in Table 2. A locus size of 1,000 bp was used to match the 3C-Seq map. The interactions at the ribosomal RNA operons have been removed from the map to enhance visual clarity. Cyan squares again indicate the regions containing the loops. (D)—Magnified view of the region within the cyan square containing the fourth loop in the in silico contact map. The map was recalculated at a resolution of 250 bp and the maximum of the color scale was increased to better resolve the characteristic signature of a loop.
TABLE 2
TABLE 2. Loops inferred from 3C-Seq library of Syn3A. The gene annotations and locus tags are those in the NCBI entry for Syn3A’s genome (https://www.ncbi.nlm.nih.gov/nuccore/CP016816.2) and the
locus tags are abbreviated to only the 4-digit number.
By comparing the annotated loops to the proteomics (Breuer et al., 2019), as shown in Figure 7B, we can investigate correlations between the relative expression levels and the locations of the loops. For reference, the average proteomics count in Syn3A is approximately 180 (Breuer et al., 2019). We refer to the loops according to their order along the genome. The first loop encompasses the genes pdhC and lpdA, which code for the E2 and E3 subunits of the pyruvate dehydrogenase (PDH) complex, respectively. These genes have identical proteomics counts and lower expression levels than the genes surrounding them. Upstream are genes coding for enzymes in the main pathway of the central metabolism in Syn3A (Breuer et al., 2019), and downstream are genes coding for components of the phosphotransferase system (PTS), another essential part of the central metabolism. The second loop encompasses the genes ywjA (0371) and ywjA (0372), which code for the two subunits of the flippase. This is the longest loop, and the two genes within it have the greatest disparity in expression levels. The third loop encompasses the genes lgt and trx, which code for lipoprotein diacylglyceryl transferase and thioredoxin reductase, respectively. The proteomics counts of both proteins coded by these genes are lower than average, as are those of the genes immediately downstream. However, less than 10 kbp upstream is an operon for ribosomal proteins, which contains some of the most highly-expressed genes in Syn3A's genome (Breuer et al., 2019). The fourth loop encompasses genes that code for two uncharacterized proteins, JCVISYN3A_0877 and JCVISYN3A_0878, both of which have very low proteomics counts. The expression levels of the nearby genes are similarly low. Our most consistent findings are twofold. First, the loops are all between 2 and 4 kbp in length. Second, the loops often contain genes with similar expression levels.
The chromosome configurations on the 4 nm lattice, with 11.8 bp monomers, enable the calculation of contact matrices at any resolution greater than 11.8 bp per locus. Equally-sized contiguous regions of the chromosome are classified as loci, and the pairwise interactions between loci are counted according to the relative pairwise distances between monomers belonging to the loci. In the foreground of Figure 7C is a representative example of the interaction counting. When counting the total number of interactions between the red and blue loci using an arbitrary threshold distance, indicated by the dashed line, the black monomer in the red locus contributes three interactions to the total interaction count between the loci. Owing to a relative scarcity of chromosome models at a similar resolution in terms of bp per monomer, and to uncertainty about which proteins are involved in protein-DNA formaldehyde cross-linking (Dekker et al., 2002; van Berkum et al., 2010), the distance for assessing interactions can be chosen from a minimum of 4 nm, corresponding to the lattice spacing, to a maximum of 50 nm. The maximum distance corresponds to the length of SMC proteins, which is the maximal distance spanned by a nucleoid-associated protein in Syn3A (Diebold-Durand et al., 2017; Marko et al., 2019; Ryu et al., 2021). We selected a contact radius of 8 nm because it is an integer multiple of our lattice spacing and the resulting maps show the best agreement with the 3C-Seq map. This distance metric can be used alongside any locus size, converted to units of monomers, to generate contact matrices for ensembles of computationally generated chromosome configurations.
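A minimal sketch of this interaction-counting procedure is given below, assuming each configuration is supplied as an (n_monomers, 3) array of coordinates in nm; the function and variable names are illustrative, not the authors' code.

import numpy as np
from scipy.spatial import cKDTree

BP_PER_MONOMER = 11.8  # one lattice monomer spans 11.8 bp

def contact_map(configs, locus_bp=1000, radius_nm=8.0):
    # configs: iterable of (n_monomers, 3) coordinate arrays in nm,
    # one per chromosome configuration in the ensemble.
    per_locus = int(round(locus_bp / BP_PER_MONOMER))
    n_mono = len(configs[0])
    locus = np.arange(n_mono) // per_locus  # monomer index -> locus index
    n_loci = locus[-1] + 1
    counts = np.zeros((n_loci, n_loci))
    for xyz in configs:
        # all monomer pairs closer than the contact radius
        pairs = np.array(list(cKDTree(xyz).query_pairs(radius_nm)))
        if len(pairs):
            np.add.at(counts, (locus[pairs[:, 0]], locus[pairs[:, 1]]), 1)
    return counts + counts.T  # symmetrize; balancing/normalization follows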
Contact maps were calculated for chromosome configurations within the small cell. A locus size of 1,000 bp was chosen to match the resolution of the 3C-Seq chromosome contact map. As was the case for the experimental contact map, the in silico contact map was normalized to be a doubly-stochastic matrix using a matrix-balancing procedure. Unfortunately, the precision of the in silico contact maps is limited by the number of chromosome configurations used to calculate the ensemble-averaged interaction frequencies, a number many orders of magnitude lower than the number of cells in typical 3C-Seq experiments.
In the absence of a sequence-specific system, such as the parABS system, dictating the global structure of the chromosome and promoting inter-arm interactions, we explored a test case in which looping interactions were introduced at the positions of the manually annotated loops to test the efficacy of our model. We consider a loop to be successfully formed if the monomers at its endpoints are separated by less than 16 nm; the percentage of configurations with successful loop formation is shown for each loop in Table 2. One or more loops were formed in 75 of the 150 configurations. As expected, decreasing the length of a loop increased the probability that it was successfully formed. The in silico contact map generated from 150 configurations in the small cell is presented in Figure 7C. The contact map shows a single diagonal, which is consistent with the 3C-Seq contact map and indicates that the majority of interactions are self-interactions within loci or interactions between neighboring loci. The strongest signal characteristic of a loop was observed for the fourth loop, which is the shortest, and Figure 7D shows a magnified view of the surrounding region in the contact map.
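The loop-formation criterion can be evaluated per configuration as in the following sketch; the endpoint indices and names are illustrative.

import numpy as np

def loop_success_fraction(configs, end_i, end_j, cutoff_nm=16.0):
    # Fraction of configurations in which the two loop-endpoint monomers
    # (indices end_i, end_j) lie within cutoff_nm of each other.
    d = np.array([np.linalg.norm(xyz[end_i] - xyz[end_j]) for xyz in configs])
    return (d < cutoff_nm).mean()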
Using the 3C-Seq map, we plotted the interaction frequency as a function of genomic distance and observed a plateau after the initial decrease in interaction frequency. We fit a power law of the form $P(x) \propto x^{s}$ to two regimes within the strictly-decreasing region before the plateau, i.e., the region extending from self-interactions along the diagonal to interactions with loci at distances less than or equal to 10 kbp away, and found a range of exponents ($s = -0.519$ to $s = -2.210$). We repeated this calculation for the in silico map and found a narrower range of exponents than in the 3C-Seq case ($s = -0.720$ to $s = -1.132$).
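The exponents can be obtained by linear regression in log-log space, as in this sketch; the regime boundaries shown correspond to the 1-10 kbp window discussed above, and the names are illustrative.

import numpy as np

def fit_contact_exponent(p_of_x, bin_bp=1000, lo_bp=1000, hi_bp=10000):
    # p_of_x[k] = mean interaction frequency at genomic distance k*bin_bp.
    # A straight-line fit of log P against log x gives the exponent s
    # in P(x) ~ x**s over the chosen regime [lo_bp, hi_bp].
    p_of_x = np.asarray(p_of_x, dtype=float)
    x = np.arange(len(p_of_x)) * float(bin_bp)
    keep = (x >= lo_bp) & (x <= hi_bp) & (p_of_x > 0)
    s, _ = np.polyfit(np.log(x[keep]), np.log(p_of_x[keep]), 1)
    return s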
Plots of the two datasets and their contact laws are presented in Supplementary Figure S8. In both cases, the steepest rate of change and the largest exponent were found in the region whose lower limit corresponded to interactions of loci separated by 1 kbp. For the in silico case, the calculated values of s are in closer agreement with the value expected when confined homopolymers are organized as fractal globules ($s = -1$), with clearly defined territories caused by topological constraints, rather than as equilibrium globules ($s = -1.5$) (Lua et al., 2004; Lieberman-Aiden et al., 2009; Mirny, 2011; Rosa and Zimmer, 2014; Sanborn et al., 2015). The organization of the chromosome into territories can be observed for the in silico case in Figure 5B as the separation into distinct colored regions.
The plateau is more pronounced in the 3C-Seq case than in the in silico case, and all interactions in the 3C-Seq dataset are nearly equally probable at genomic distances greater than 100 kbp. While the plateau in the 3C-Seq dataset is a characteristic of equilibrium globules (Lieberman-Aiden et al., 2009; Mirny, 2011) and of some mathematically-predicted fractal globules, such as the Sierpinski triangle and inside-out Hilbert curves (Sanborn et al., 2015), we are unable to infer a topological state of the Syn3A chromosomes sampled using 3C-Seq because of the significant variation in the exponents of the power law and their sensitivity to the regime chosen for fitting. It is possible that these variations and the steep drop-off are a consequence of the low coverage in the preliminary 3C-Seq map, or that they reflect biologically relevant levels of organization.
3.2 Spatial Model of JCVI-syn3A
Computational modeling of spatially-resolved kinetics in Syn3A is performed by simulating the reaction-diffusion master equation (RDME) in Lattice Microbes (LM) (Roberts et al., 2013; Hallock et al., 2014; Earnest et al., 2017, 2018; Bianchi et al., 2018) using a stochastic simulation algorithm. When using the RDME, physical space is discretized into a cubic lattice. The spacing of the cubic lattice dictates both the resolution of the spatial model and the maximum allowable timestep when modeling the kinetics; smaller lattice spacings reduce the maximum allowable timestep. A lattice spacing of 8 nm was chosen as an acceptable compromise between creating a high-resolution spatial model of Syn3A and permitting simulations over biologically-relevant time scales. Each of these 8 nm lattice sites can contain a maximum of sixteen particles. Previous work combining LM simulations and tomogram data directly reconstructed cell architectures in LM (Earnest et al., 2017). Unfortunately, as discussed earlier, the tomograms do not show well-defined DNA strands. Instead, chromosome configurations consistent with the ribosome distributions observed in the tomograms are generated using our method, and those are used for the spatial models.
We have also used the spherical cell architecture reconstructed from the tomograms to predict the number of ribosomes involved in polysomes, the number of ribosomes at or near the membrane, and the number of ribosomes close enough to DNA to form an expressome. To predict the number of ribosomes involved in possible polysomes, we calculated the pairwise distances between all ribosome pairs in the spherical cell. Annotating any pair within a center-to-center distance of 22 nm as a possible polysome, as was experimentally measured in E. coli (Brandt et al., 2009), we find that 194 (approximately 39%) of the 503 ribosomes from the first template matching method (approach 1) are involved in possible polysomes. In the second template matching method with 3D classification (approach 2), this number increases to 373 (approximately 55%) of the 684 ribosomes. If we instead use a center-to-center distance of 18 nm, such that the ribosomes are almost in contact, we find that 125 (approximately 25%) of the 503 ribosomes from approach 1 are involved in possible polysomes; in approach 2, 274 (approximately 40%) of the 684 ribosomes are in possible polysomes at the 18 nm distance. In cryo-ET of the closely related organism M. pneumoniae, sub-tomogram averaging over many cells predicted that an average of 16.4% of ribosomes are involved in polysomes (O'Reilly et al., 2020). The discrepancies arise from several factors. First, M. pneumoniae has only 300 ribosomes per cell (Seybert et al., 2006; Yus et al., 2009; O'Reilly et al., 2020) and a larger volume than Syn3A (Kühner et al., 2009), so it is not unreasonable that the higher ribosome density in Syn3A results in a larger fraction of ribosomes involved in polysomes. Second, the method used to define polysomes in O'Reilly et al. (2020) considers only ribosomes that are in very close proximity and oriented so that the mRNA exit channel of one ribosome aligns with the mRNA entry channel of the next. In cells, polysomes are likely to be more relaxed than this configuration. Our predicted fractions of ribosomes involved in polysomes are all lower than the 70% observed in fast-growing E. coli using absorption spectroscopy (Phillips et al., 1969; Forchhammer and Lindahl, 1971).
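A sketch of the pair-distance criterion behind the polysome estimates is shown below, assuming the ribosome centers are given as an (n, 3) array in nm; the function name is illustrative.

import numpy as np
from scipy.spatial import cKDTree

def polysome_fraction(ribosome_centers, cutoff_nm=22.0):
    # A ribosome counts as being in a possible polysome if any other
    # ribosome center lies within cutoff_nm (22 nm per Brandt et al.,
    # 2009; 18 nm for the near-contact criterion).
    pairs = np.array(list(cKDTree(ribosome_centers).query_pairs(cutoff_nm)))
    n_in_polysomes = len(np.unique(pairs)) if len(pairs) else 0
    return n_in_polysomes / len(ribosome_centers)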
To estimate the number of ribosomes on or near the membrane, we calculated the number of ribosomes within a cytoplasmic shell directly inside the membrane, annotating a ribosome as being within the shell if its center lies inside it. Using a 10 nm thick shell, the approximate radius of a ribosome, we find that 53 (approximately 10%) of the 503 ribosomes from approach 1 are near the membrane. We find a similar number in approach 2: 60 (approximately 9%) of the 684 ribosomes. If we extend the shell to 20 nm, we find that 122 (approximately 24%) of the 503 ribosomes in approach 1 are near the membrane; in approach 2, 136 (approximately 20%) of the 684 ribosomes are within 20 nm of the membrane. The range of our calculated fractions agrees with the observed 15% of membrane-bound ribosomes in cryo-ET of S. melliferum (Ortiz et al., 2006).
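With the cell idealized as a sphere centered at the origin, the shell count reduces to a radial threshold, as in this illustrative sketch; coordinates relative to the cell center are an assumption.

import numpy as np

def fraction_near_membrane(ribosome_centers, cell_radius_nm, shell_nm=10.0):
    # Counts ribosomes whose centers fall inside a cytoplasmic shell of
    # thickness shell_nm directly inside the spherical membrane.
    r = np.linalg.norm(ribosome_centers, axis=1)
    return np.mean(r >= cell_radius_nm - shell_nm)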
Expressomes are macromolecular complexes of RNA polymerases (RNAPs) and ribosomes that couple transcription and translation; they were first identified in E. coli (Kohler et al., 2017). In M. pneumoniae, a maximum of 19% of ribosomes have been identified as being in an expressome complex, in which NusA and NusG help to connect or direct the mRNA from production by RNAP to the ribosome (O'Reilly et al., 2020). Given the proteomics counts of RNAP, NusA, and NusG within a 400 nm cell (187, 238, and 464, respectively; Breuer et al., 2019), expressome complexes could certainly emerge in the whole-cell model. Using the 4 nm lattice representation, we searched for possible expressomes by counting chromosome monomers directly adjacent to the star-shaped ribosomes. We find that, on average, 106 of the 503 ribosomes in approach 1 have a DNA monomer directly adjacent, a fraction of approximately 21%. For the ribosomes identified in approach 2, we find that, on average, 127 of the 684 ribosomes (roughly 19%) are directly adjacent to the DNA. This is in good agreement with the fraction of 2.8–19% of ribosomes found to be in expressomes in M. pneumoniae (O'Reilly et al., 2020).
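The adjacency search on the 4 nm lattice can be sketched as below; representing each star-shaped ribosome as a list of occupied lattice sites is an assumption for illustration, not the authors' data structure.

import numpy as np

FACE_NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def ribosomes_adjacent_to_dna(ribosome_sites, dna_sites):
    # Integer (x, y, z) coordinates on the 4 nm lattice. A ribosome counts
    # as a possible expressome partner if any of its occupied sites shares
    # a face with a DNA-monomer site.
    dna = {tuple(s) for s in dna_sites}
    count = 0
    for sites in ribosome_sites:  # one list of occupied sites per ribosome
        if any((x + dx, y + dy, z + dz) in dna
               for (x, y, z) in map(tuple, sites)
               for (dx, dy, dz) in FACE_NEIGHBORS):
            count += 1
    return count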
The computationally-generated chromosome configurations on the 4 nm lattice are converted to the 8 nm lattice before being used in RDME simulations. The conversion is done using a coarse-graining procedure in which the 4 nm effective monomers are localized within the 8 nm lattice site containing them. These 8 nm lattice sites are then identified as chromosome sites. Due to the self-avoiding nature of the chromosome model, each 8 nm chromosome site can contain a maximum of eight monomers, where each monomer contains 11.8 bp of DNA. The number of monomers within an 8 nm lattice site is directly converted to up to eight of the maximum of sixteen particles within the site. This coarse-graining procedure preserves the overall volume exclusion of the chromosome and the spatial heterogeneities caused by varying chromosome densities throughout the cell, and it allows genomically distant pieces of the chromosome to be spatially localized within the same 8 nm chromosome site. Figure 8 shows the coarse-graining of a chromosome configuration in the small cell. The kinetic model of genetic information processing in Syn3A (Thornburg et al., 2019) can then be extended to include the effects of RNA polymerases diffusing between the spatial locations of genes within the chromosome (Weng and Xiao, 2014).
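On integer lattice coordinates, the 4 nm to 8 nm conversion reduces to an integer division, as in this illustrative sketch.

import numpy as np

def coarse_grain_chromosome(monomer_ijk_4nm):
    # monomer_ijk_4nm: (n, 3) integer coordinates on the 4 nm lattice.
    # Integer-dividing by 2 maps each monomer into its parent 8 nm LM
    # lattice site; the per-site counts (at most eight by self-avoidance)
    # become the particle counts placed in the RDME lattice.
    sites, counts = np.unique(monomer_ijk_4nm // 2, axis=0, return_counts=True)
    return sites, counts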
FIGURE 8
FIGURE 8. Coarse-graining, before and after—The coarse-graining procedure localizes up to eight effective monomers from the 4 nm chromosome configurations (same color scheme as Figure 5) within the 8 nm chromosome lattice sites (green). Circled in orange is an example of genomically-distant regions being localized within the same chromosome site.
Two metrics were used to quantify how permissive the coarse-grained chromosome configurations are to diffusion. First, the average monomer occupancy of the coarse-grained chromosome sites was calculated, with the target average occupancy being under 3 monomers per coarse-grained chromosome site, as shown in Figure 9A. Second, connected-component labeling was used to identify contiguous regions of high monomer occupancy, where diffusing particles may become trapped for extended periods of simulation time, as shown in Figure 9B. Maintaining an acceptable number of particles per lattice site has a significant impact on the efficiency of the multi-particle diffusion used in the GPU-accelerated LM (Hallock et al., 2014). For the small cell with the second template-matching approach, i.e. with the high ribosome packing density, all 150 computationally-generated chromosome configurations were diffuse enough to be used for RDME simulations of Syn3A: the configurations satisfy the first criterion, and there were no instances in which connected components with occupancies greater than 8 monomers per coarse-grained chromosome site could form closed shapes on the 8 nm lattice. The large cell is presumed to be near the end of the cell cycle, after DNA replication has been completed, and a second chromosome can be placed within the cell architecture, as shown in Figure 10. We assume the small cell and large cell are representative examples of cells at the start and end of the cell cycle, respectively; the combination of the two architectures enables whole-cell simulations of Syn3A at both points in the cell cycle.
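Both checks can be sketched with standard tools (scipy.ndimage for the connected-component labeling); the grid construction and threshold handling below are illustrative assumptions built on the coarse-graining sketch above.

import numpy as np
from scipy import ndimage

def diffusivity_checks(sites, counts, grid_shape, occupancy_threshold=8):
    # Builds the occupancy grid from coarse_grain_chromosome() output,
    # reports the mean occupancy of occupied chromosome sites (target:
    # under ~3 monomers/site) and the sizes of face-connected regions at
    # or above the threshold, where diffusing particles could be trapped.
    grid = np.zeros(grid_shape, dtype=int)
    grid[tuple(sites.T)] = counts
    dense = grid >= occupancy_threshold
    labels, n_regions = ndimage.label(dense)        # 6-connectivity by default
    region_sizes = np.bincount(labels.ravel())[1:]  # drop the background label
    return counts.mean(), region_sizes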
FIGURE 9
FIGURE 9. (A)—Relative frequencies of chromosome site monomer occupancies in the small cell from 150 configurations with no looping interactions. (B)—Distribution of connected-component sizes for
different monomer occupancy thresholds for the small cell from 150 configurations with no looping interactions.
FIGURE 10
FIGURE 10. (A)—First chromosome (green) generated within the large cell architecture of radius 247.42 nm and containing 820 ribosomes (yellow). (B)—An additional second chromosome (magenta) generated within the large cell architecture while avoiding the first chromosome (green). (C)—The isolated second chromosome (magenta) in the large cell architecture after the removal of the first chromosome.
4 Discussion
We developed a procedure to reconstruct single-cell geometries of Syn3A cells from cryo-electron tomograms. The procedure has two parts: the determination of the cell size and subsequent transformation of the ribosome distribution to a cell with spherical geometry, and the generation of circular chromosome configurations constrained by the spherical cell boundary and the ribosome distribution, and restrained by a small number of DNA loops observed in the experimental 3C-Seq map. Cell geometries were reconstructed for the small cell, assumed to be at the start of the cell cycle, and for a large cell considered to be near the end of the cycle, when two chromosomes would be present.
The 3C-Seq chromosome contact map at a resolution of 1,000 bp has no secondary diagonal, confirming our assumption that Syn3A has no factors affecting the global structure of the chromosome. We based this assumption on our knowledge of the genome-scale gene essentiality and proteomics data, which indicated that Syn3A lacks a parABS system or attachment organelle. Our computational model of the chromosome reproduced this behavior while constrained by the reconstructed cell geometry. Furthermore, we generated the DNA configurations under the assumption that the DNA was in a relaxed state with limited supercoiling. This was justified by the high abundance of proteins that modify the supercoiling state (topoisomerases and gyrases) relative to the number of RNAPs, and by the relatively low abundance of proteins that form topological constraints and stabilize supercoiled loops, such as HU. We were able to model local structures whose signatures were observed in the 3C-Seq map at a resolution of 250 bp. Currently, SMC is the only annotated protein in Syn3A that can form unsupercoiled loops, so it is possible that the observed loops are formed by SMC. Recent studies show that SMC functions through active loop extrusion rather than static loop stabilization (van Ruiten and Rowland, 2018), so there are potentially additional unannotated effects and/or proteins causing the experimentally observed loops, or causing localization of the actively-extruding loops through preferential SMC binding at the annotated locations. Our chromosome model does not include active loop extrusion and is only capable of reproducing the results of active loop extrusion in an ensemble-averaged sense. At this time, we wish to avoid making further definitive statements about the nature of the local structure of Syn3A's chromosome until deeper sequencing is completed. Future experiments with additional restriction enzymes that cut the DNA at complementary positions, together with a greater read depth, will help to improve our analysis.
We can speculate that the significant differences in global chromosome organization between bacterial cells with natural genomes, such as B. subtilis, C. crescentus, and M. pneumoniae, and Syn3A with its synthetic genome are a result of genome minimization, both natural and targeted. The parent organism from which all variants (Syn1.0, Syn3.0, and Syn3A) are descended is M. mycoides, a choice that was made because Mycoplasma cells have small genomes that have been naturally reduced over evolutionarily-long time scales. This reduction likely occurred because they are parasitic organisms that can rely on a stable environment provided by their host. Mycoplasmas have dispensed with the genes that code for complex regulatory systems, such as the parABS system, and the remaining genes largely code for environment-independent functions essential to all life (Hutchison et al., 2016). Chromosome organization at the local level is dictated by NAPs and the supercoiling state of the DNA. Notably, while there is a significant disparity in the relative proteomics counts of NAPs between Syn3A and naturally-occurring bacteria, with the majority of NAPs being wholly absent from Syn3A's genome, there is no such disparity in the counts of proteins that modify the supercoiling state of the DNA. These proteins are essential to the function of Syn3A, which is not surprising given the relationship between supercoiling and the universal process of transcription (Chong et al., 2014; Dorman, 2019).
From the reconstructed cell geometries, we estimated the fractions of ribosomes that could be attached to the membrane or complexed in possible polysomes and expressomes. We simply used the distances between ribosomes and the membrane, other ribosomes, and the DNA, respectively, to predict these numbers. To confirm the polysome estimates, we could additionally use the orientations of the ribosomes' mRNA entry and exit channels to require that neighboring ribosomes can pass mRNA between each other (O'Reilly et al., 2020). The membrane-bound ribosomes can be further characterized by determining which of them have their 50S subunit facing the membrane (Ortiz et al., 2006). Further analysis of expressomes would require a template involving the RNAP and the essential transcription factor NusA, which was found to attach the RNAP to the ribosome in cryo-ET of M. pneumoniae (O'Reilly et al., 2020). In the same M. pneumoniae study, subtomogram averaging was used to more confidently assign expressome structures, along with the orientation of the mRNA entry site, to help identify ribosomes complexed with RNAP.
The effects of ribosomes attached to the membrane or complexed in polysomes and expressomes can all be included in future whole-cell, spatially-resolved kinetic models. The configurations resulting from the SAP model of the bacterial chromosome are directly transferable, through a coarse-graining procedure, to the 8 nm lattice representation used for LM simulations of whole Syn3A cells. The coarse-grained chromosome configurations specify the spatial heterogeneities caused by DNA crowding in whole-cell kinetic models of Syn3A and define the spatial locations of genes for investigating spatial and temporal correlations in gene expression (Weng and Xiao, 2014; Thornburg et al., 2019). Future work will focus on assigning chromosomal interactions based on improved experimental 3C-Seq libraries, improving the model to include the dynamic formation and relaxation of supercoiling and plectonemic loops, and incorporating dynamic representations of the chromosome (Miermans and Broedersz, 2020) within the LM simulations, which will include DNA diffusion and chromosome replication. The compactness and degree of local structure of the DNA determine the accessibility of its genes to RNAP, which is an important consideration in the whole-cell simulations of all the cellular networks being developed for the minimal cell JCVI-syn3A.
Data Availability Statement
The tilt-series in this study have been deposited in the Electron Microscopy Public Image Archive (https://www.ebi.ac.uk/pdbe/emdb/empiar/) under EMPIAR entry numbers EMPIAR-10685 (large cell) and
EMPIAR-10686 (small cell). The reconstructed tomograms in this study have been deposited in the Electron Microscopy Data Bank (https://www.ebi.ac.uk/pdbe/emdb/index.html/) under EMDB entry numbers
EMD-23660 (large cell) and EMD-23661 (small cell). 3C-Seq libraries of Syn3A are available from the 4TU repository (https://data.4tu.nl/) under DOI: https://doi.org/10.4121/14333618. The software
used in this study can be found at https://github.com/brg4/SAP_chromosome.
Author Contributions
BG: development of method for reconstructing cell architecture from tomograms, development of chromosome model, development of coarse-graining procedure, contact map analysis, cell architecture
analysis, and writing original draft.
ZT: development of method for reconstructing cell architecture from tomograms, development of chromosome model, development of coarse-graining procedure, cell architecture analysis, assisted in
programming of Jupyter notebooks, data-curation, and writing original draft.
ZL-S: development of method for reconstructing cell architecture from tomograms, development of chromosome model, development of coarse-graining procedure, contact map analysis, cell architecture
analysis, and writing original draft.
VL: tomogram collection and processing, development of method for reconstructing cell architecture from tomograms, data-curation, and writing original draft.
EV: development of method for reconstructing cell architecture from tomograms, data-curation, and writing original draft.
F-ZR: 3C-Seq library, experimental 3C map, and editing manuscript.
RD: 3C-Seq library, experimental 3C map, and editing manuscript.
JG: establishing the network of collaborators, development of the minimal cell, and editing manuscript.
Funding
BG, ZT, and ZL-S: Partial support from NSF MCB 1818344 and 1840320, The Center for the Physics of Living Cells NSF PHY 1430124, and The Physics of Living Systems Student Research Network NSF PHY 1505008. The cell figures in the workflow diagram and all lattice representations of ribosomes and DNA were prepared using Visual Molecular Dynamics (VMD), developed by the NIH Center for Macromolecular Modeling and Bioinformatics in the Beckman Institute at UIUC, with support from NIH P41-GM104601-28.
VL and EV: This work was supported by an NIH Director’s New Innovator Award 1DP2GM123494-01 (to EV) and NIH 5T32GM7240-40 (to VL). VL is also supported in part by NIH R35GM118290 awarded to Susan S.
Golden. This work on Syn3A by VL and EV is also supported in part by NSF MCB 1818344. This work was supported by the National Science Foundation MRI grant (NSF DBI 1920374). We acknowledge the use of
the UCSD Cryo-Electron Microscopy Facility which is supported by NIH grants to Dr Timothy S. Baker and a gift from the Agouron Institute to UCSD. Molecular graphics and analyses performed with UCSF
Chimera, developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, with support from NIH P41-GM103311.
F-ZR and RD: This research was supported by a VICI grant (VICI 016.160.613) and an ENW Groot grant (OCENW.GROOT. 2019.012) from the Netherlands Organization for Scientific Research (RD).
JG: Partial support from NSF MCB 1818344, 1840301 and 1840320.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Acknowledgments
BG, ZT, and ZL-S: We thank John Stone at the Beckman Institute at UIUC for his assistance in preparing scientific visualizations using VMD. F-ZR and RD: We thank the Utrecht Sequencing Facility for providing sequencing service and data. The Utrecht Sequencing Facility is subsidized by the University Medical Center Utrecht, Hubrecht Institute, Utrecht University and The Netherlands X-omics Initiative (NWO project 184.034.019). We thank Wouter de Laat and Amin Allahyar (Hubrecht Institute, The Netherlands) for discussions and assistance with data analysis. We thank Kim Wise at the J. Craig Venter Institute (JCVI) for providing JCVI-Syn3A cells and growth medium to the laboratories of EV and RD.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmolb.2021.644133/full#supplementary-material
Annunziatella, C., Chiariello, A. M., Esposito, A., Bianco, S., Fiorillo, L., and Nicodemi, M. (2018). Molecular Dynamics Simulations of the Strings and Binders Switch Model of Chromatin. Methods.
142, 81–88. doi:10.1016/j.ymeth.2018.02.024
Bakshi, S., Siryaporn, A., Goulian, M., and Weisshaar, J. C. (2012). Superresolution Imaging of Ribosomes and Rna Polymerase in Live escherichia Coli Cells. Mol. Microbiol. 85, 21–38. doi:10.1111/
Belton, J.-M., McCord, R. P., Gibcus, J. H., Naumova, N., Zhan, Y., and Dekker, J. (2012). Hi-C: A Comprehensive Technique to Capture the Conformation of Genomes. Methods. 58, 268–276. doi:10.1016/j.ymeth.2012.05.001
Bianchi, D. M., Peterson, J. R., Earnest, T. M., Hallock, M. J., and Luthey‐Schulten, Z. (2018). Hybrid CME-ODE Method for Efficient Simulation of the Galactose Switch in Yeast. IET Syst. Biol. 12,
170–176. doi:10.1049/iet-syb.2017.0070
Birnie, A., and Dekker, C. (2021). Genome-in-a-box: Building a Chromosome from the Bottom up. ACS Nano. 15, 111–124. doi:10.1021/acsnano.0c07397
Brandt, F., Etchells, S. A., Ortiz, J. O., Elcock, A. H., Hartl, F. U., and Baumeister, W. (2009). The Native 3d Organization of Bacterial Polysomes. Cell. 136, 261–271. doi:10.1016/
Bremer, H., and Dennis, P. P. (2008). Modulation of Chemical Composition and Other Parameters of the Cell at Different Exponential Growth Rates. EcoSal Plus. 3, 8. doi:10.1128/ecosal.5.2.3
Breuer, M., Earnest, T. M., Merryman, C., Wise, K. S., Sun, L., Lynott, M. R., et al. (2019). Essential Metabolism for a Minimal Cell. eLife. 8, e36842. doi:10.7554/eLife.36842
Brinkers, S., Dietrich, H. R. C., de Groote, F. H., Young, I. T., and Rieger, B. (2009). The Persistence Length of Double Stranded Dna Determined Using Dark Field Tethered Particle Motion. J. Chem.
Phys. 130, 215105. doi:10.1063/1.3142699
Buenemann, M., and Lenz, P. (2010). A Geometrical Model for Dna Organization in Bacteria. PLOS ONE. 5, e13806–13. doi:10.1371/journal.pone.0013806
Castaño-Díez, D., Kudryashev, M., Arheit, M., and Stahlberg, H. (2012). Dynamo: A Flexible, User-Friendly Development Tool for Subtomogram Averaging of Cryo-EM Data in High-Performance Computing
Environments. J. Struct. Biol. 178, 139–151. doi:10.1016/j.jsb.2011.12.017
Chong, S., Chen, C., Ge, H., and Xie, X. S. (2014). Mechanism of Transcriptional Bursting in Bacteria. Cell. 158, 314–326. doi:10.1016/j.cell.2014.05.038
Crémazy, F. G., Rashid, F. M., Haycocks, J. R., Lamberte, L. E., Grainger, D. C., and Dame, R. T. (2018). Determination of the 3D Genome Organization of Bacteria Using Hi-C. Methods Mol Biol. 3,
3–18. doi:10.1007/978-1-4939-8675-0_1
Dame, R. T., Rashid, F.-Z. M., and Grainger, D. C. (2019). Chromosome Organization in Bacteria: Mechanistic Insights into Genome Structure and Function. Nat. Rev. Genet. 21, 227–242. doi:10.1038/
Dame, R. T., and Tark-Dame, M. (2016). Bacterial Chromatin: Converging Views at Different Scales. Curr. Opin. Cel Biol. 40, 60–65. doi:10.1016/j.ceb.2016.02.015
Dame, R. T. (2005). The Role of Nucleoid-Associated Proteins in the Organization and Compaction of Bacterial Chromatin. Mol. Microbiol. 56, 858–870. doi:10.1111/j.1365-2958.2005.04598.x
Dekker, J., Marti-Renom, M. A., and Mirny, L. A. (2013). Exploring the Three-Dimensional Organization of Genomes: Interpreting Chromatin Interaction Data. Nat. Rev. Genet. 14, 390–403. doi:10.1038/
Dekker, J., Rippe, K., Dekker, M., and Kleckner, N. (2002). Capturing Chromosome Conformation. Science. 295, 1306–1311. doi:10.1126/science.1067799
Di Pierro, M., Cheng, R. R., Lieberman Aiden, E., Wolynes, P. G., and Onuchic, J. N. (2017). De Novo prediction of Human Chromosome Structures: Epigenetic Marking Patterns Encode Genome Architecture.
Proc. Natl. Acad. Sci. USA. 114, 12126–12131. doi:10.1073/pnas.1714980114
Di Pierro, M., Zhang, B., Aiden, E. L., Wolynes, P. G., and Onuchic, J. N. (2016). Transferable Model for Chromosome Architecture. Proc. Natl. Acad. Sci. USA. 113, 12168–12173. doi:10.1073/
Diebold-Durand, M.-L., Lee, H., Ruiz Avila, L. B., Noh, H., Shin, H.-C., Im, H., et al. (2017). Structure of Full-Length Smc and Rearrangements Required for Chromosome Organization. Mol. Cel. 67,
334–347.e5. doi:10.1016/j.molcel.2017.06.010
Dill, K. A., Bromberg, S., Yue, K., Fiebig, K. M., Yee, D. P., Thomas, P. D., et al. (1995). Principles of Protein Folding - A Perspective from Simple Exact Models. Protein Sci. 4, 561–602.
Dorman, C. J. (2019). Dna Supercoiling and Transcription in Bacteria: a Two-Way Street. BMC Mol. Cel Biol. 20, 26. doi:10.1186/s12860-019-0211-6
Duan, Z., Andronescu, M., Schutz, K., McIlwain, S., Kim, Y. J., Lee, C., et al. (2010). A Three-Dimensional Model of the Yeast Genome. Nature. 465, 363–367. doi:10.1038/nature08973
Durand, N. C., Robinson, J. T., Shamim, M. S., Machol, I., Mesirov, J. P., Lander, E. S., et al. (2016a). Juicebox Provides a Visualization System for Hi-C Contact Maps with Unlimited Zoom. Cel Syst.
3, 99–101. doi:10.1016/j.cels.2015.07.012
Durand, N. C., Shamim, M. S., Machol, I., Rao, S. S. P., Huntley, M. H., Lander, E. S., et al. (2016b). Juicer Provides a One-Click System for Analyzing Loop-Resolution Hi-C Experiments. Cel Syst. 3,
95–98. doi:10.1016/j.cels.2016.07.002
Earnest, T. M., Cole, J. A., and Luthey-Schulten, Z. (2018). Simulating Biological Processes: Stochastic Physics from Whole Cells to Colonies. Rep. Prog. Phys. 81, 052601. doi:10.1088/1361-6633/
Earnest, T. M., Watanabe, R., Stone, J. E., Mahamid, J., Baumeister, W., Villa, E., et al. (2017). Challenges of Integrating Stochastic Dynamics and Cryo-Electron Tomograms in Whole-Cell Simulations.
J. Phys. Chem. B. 121, 3871–3881. doi:10.1021/acs.jpcb.7b00672
Forchhammer, J., and Lindahl, L. (1971). Growth Rate of Polypeptide Chains as a Function of the Cell Growth Rate in a Mutant of escherichia Coli 15. J. Mol. Biol. 55, 563–568. doi:10.1016/0022-2836
Fudenberg, G., Imakaev, M., Lu, C., Goloborodko, A., Abdennur, N., and Mirny, L. A. (2016). Formation of Chromosomal Domains by Loop Extrusion. Cel Rep. 15, 2038–2049. doi:10.1016/
Ganji, M., Shaltiel, I. A., Bisht, S., Kim, E., Kalichava, A., Haering, C. H., et al. (2018). Real-time Imaging of Dna Loop Extrusion by Condensin. Science. 360, 102–105. doi:10.1126/science.aar7831
Geggier, S., Kotlyar, A., and Vologodskii, A. (2010). Temperature Dependence of DNA Persistence Length. Nucleic Acids Res. 39, 1419–1426. doi:10.1093/nar/gkq932
Gibson, D. G., Glass, J. I., Lartigue, C., Noskov, V. N., Chuang, R.-Y., Algire, M. A., et al. (2010). Creation of a Bacterial Cell Controlled by a Chemically Synthesized Genome. Science. 329, 52–56.
Goodsell, D. S., Autin, L., and Olson, A. J. (2018). Lattice Models of Bacterial Nucleoids. J. Phys. Chem. B. 122, 5441–5447. doi:10.1021/acs.jpcb.7b11770
Hacker, W. C., Li, S., and Elcock, A. H. (2017). Features of Genomic Organization in a Nucleotide-Resolution Molecular Model of the escherichia Coli Chromosome. Nucleic Acids Res. 45, 7541–7554.
Haddad, N., Jost, D., and Vaillant, C. (2017). Perspectives: Using Polymer Modeling to Understand the Formation and Function of Nuclear Compartments. Chromosome Res. 25, 35–50. doi:10.1007/
Hallock, M. J., Stone, J. E., Roberts, E., Fry, C., and Luthey-Schulten, Z. (2014). Simulation of Reaction Diffusion Processes over Biologically Relevant Size and Time Scales Using Multi-Gpu
Workstations. Parallel Comput. 40, 86–99. doi:10.1016/j.parco.2014.03.009
Hastings, W. K. (1970). Monte Carlo Sampling Methods Using Markov Chains and Their Applications. Biometrika. 57, 97–109. doi:10.1093/biomet/57.1.97
Heilmann, O. J., and Rotne, J. (1982). Exact and Monte Carlo Computations on a Lattice Model for Change of Conformation of a Polymer. J. Stat. Phys. 27, 19–35. doi:10.1007/BF01011737
Hsu, H.-P., and Binder, K. (2012). Stretching Semiflexible Polymer Chains: Evidence for the Importance of Excluded Volume Effects from Monte Carlo Simulation. J. Chem. Phys. 136, 024901. doi:10.1063/
Hua, K.-J., and Ma, B.-G. (2019). Evr: Reconstruction of Bacterial Chromosome 3d Structure Models Using Error-Vector Resultant Algorithm. BMC Genomics. 20, 738. doi:10.1186/s12864-019-6096-0
Hutchison, C. A., Chuang, R.-Y., Noskov, V. N., Assad-Garcia, N., Deerinck, T. J., Ellisman, M. H., et al. (2016). Design and Synthesis of a Minimal Bacterial Genome. Science. 351, aad6253.
Junier, I., Spill, Y. G., Marti-Renom, M. A., Beato, M., and le Dily, F. (2015). On the Demultiplexing of Chromosome Capture Conformation Data. FEBS Lett. 589, 3005–3013. doi:10.1016/
Kim, S., Beltran, B., Irnov, I., and Jacobs-Wagner, C. (2019). Long-distance Cooperative and Antagonistic Rna Polymerase Dynamics via Dna Supercoiling. Cell. 179, 106–119. doi:10.1016/
Knight, P. A., and Ruiz, D. (2012). A Fast Algorithm for Matrix Balancing. IMA J. Numer. Anal. 33, 1029–1047. doi:10.1093/imanum/drs019
Kohler, R., Mooney, R. A., Mills, D. J., Landick, R., and Cramer, P. (2017). Architecture of a Transcribing-Translating Expressome. Science. 356, 194–197. doi:10.1126/science.aal3059
Kratky, O., and Porod, G. (1949). Röntgenuntersuchung Gelöster Fadenmoleküle. Recl. Trav. Chim. Pays-bas. 68, 1106–1122. doi:10.1002/recl.19490681203
Kremer, J. R., Mastronarde, D. N., and McIntosh, J. R. (1996). Computer Visualization of Three-Dimensional Image Data Using IMOD. J. Struct. Biol. 116, 71–76. doi:10.1006/jsbi.1996.0013
Kühner, S., van Noort, V., Betts, M. J., Leo-Macias, A., Batisse, C., Rode, M., et al. (2009). Proteome Organization in a Genome-Reduced Bacterium. Science. 326, 1235–1240. doi:10.1126/
Lasker, K., Boeynaems, S., Lam, V., Stainton, E., Jacquemyn, M., Daelemans, D., et al. (2021). A Modular Platform for Engineering Function of Natural and Synthetic Biomolecular Condensates. bioRxiv. doi:10.1101/2021.02.03.429226
Lau, K. F., and Dill, K. A. (1989). A Lattice Statistical Mechanics Model of the Conformational and Sequence Spaces of Proteins. Macromolecules. 22, 3986–3997. doi:10.1021/ma00200a030
Le, T. B. K., Imakaev, M. V., Mirny, L. A., and Laub, M. T. (2013). High-resolution Mapping of the Spatial Organization of a Bacterial Chromosome. Science. 342, 731–734. doi:10.1126/science.1242059
Lesne, A., Riposo, J., Roger, P., Cournac, A., and Mozziconacci, J. (2014). 3d Genome Reconstruction from Chromosomal Contacts. Nat. Methods. 11, 1141–1143. doi:10.1038/nmeth.3104
Lieberman-Aiden, E., van Berkum, N. L., Williams, L., Imakaev, M., Ragoczy, T., Telling, A., et al. (2009). Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human
Genome. Science. 326, 289–293. doi:10.1126/science.1181369
Lioy, V. S., and Boccard, F. (2018). Conformational Studies of Bacterial Chromosomes by High-Throughput Sequencing Methods. In High-Density Sequencing Applications in Microbial Molecular Genetics,
ed. A. J. Carpousis (Cambridge, MA: Academic Press), Vol. 612 of Methods in Enzymology. 25–45. doi:10.1016/bs.mie.2018.07.007
Lioy, V. S., Cournac, A., Marbouty, M., Duigou, S., Mozziconacci, J., Espéli, O., et al. (2018). Multiscale Structuring of the E. coli Chromosome by Nucleoid-Associated and Condensin Proteins. Cell.
172, 771–783.e18. doi:10.1016/j.cell.2017.12.027
Lioy, V. S., Junier, I., Lagage, V., Vallet, I., and Boccard, F. (2020). Distinct Activities of Bacterial Condensins for Chromosome Management in pseudomonas Aeruginosa. Cel Rep. 33, 108344.
Livny, J., Yamaichi, Y., and Waldor, M. K. (2007). Distribution of Centromere-like Pars Sites in Bacteria: Insights from Comparative Genomics. J Bacteriol. 189, 8693–8703. doi:10.1128/JB.01239-07
Lua, R., Borovinskiy, A. L., and Grosberg, A. Y. (2004). Fractal and Statistical Properties of Large Compact Polymers: a Computational Study. Polymer 45, 717–731. doi:10.1016/j.polymer.2003.10.073
Madras, N., Orlitsky, A., and Shepp, L. A. (1990). Monte Carlo Generation of Self-Avoiding Walks with Fixed Endpoints and Fixed Length. J. Stat. Phys. 58, 159–183. doi:10.1007/BF01020290
Madras, N., and Sokal, A. D. (1987). Nonergodicity of Local, Length-Conserving Monte Carlo Algorithms for the Self-Avoiding Walk. J. Stat. Phys. 47, 573–595. doi:10.1007/BF01007527
Manning, G. S. (2006). The Persistence Length of Dna Is Reached from the Persistence Length of its Null Isomer through an Internal Electrostatic Stretching Force. Biophysical J. 91, 3607–3616.
Mantelli, S., Muller, P., Harlepp, S., and Maaloum, M. (2011). Conformational Analysis and Estimation of the Persistence Length of Dna Using Atomic Force Microscopy in Solution. Soft Matter. 7,
3412–3416. doi:10.1039/C0SM01160F
Marbouty, M., LeGall, A., Cattoni, D. I., Cournac, A., Koh, A., Fiche, J.-B., et al. (2015). Condensin- and Replication-Mediated Bacterial Chromosome Folding and Origin Condensation Revealed by Hi-C
and Super-resolution Imaging. Mol. Cel. 59, 588–602. doi:10.1016/j.molcel.2015.07.020
Marko, J. F., De Los Rios, P., Barducci, A., and Gruber, S. (2019). DNA-segment-capture Model for Loop Extrusion by Structural Maintenance of Chromosome (SMC) Protein Complexes. Nucleic Acids Res.
47, 6956–6972. doi:10.1093/nar/gkz497
Martinez-Sanchez, A., Garcia, I., Asano, S., Lucic, V., and Fernandez, J.-J. (2014). Robust Membrane Detection Based on Tensor Voting for Electron Tomography. J. Struct. Biol. 186, 49–61. doi:10.1016
Mastronarde, D. N. (1997). Dual-Axis Tomography: An Approach with Alignment Methods that Preserve Resolution. J. Struct. Biol. 120, 343–352. doi:10.1006/jsbi.1997.3919
Mastronarde, D. N. (2005). Automated Electron Microscope Tomography Using Robust Prediction of Specimen Movements. J. Struct. Biol. 152, 36–51. doi:10.1016/j.jsb.2005.07.007
Mastronarde, D. N., and Held, S. R. (2017). Automated Tilt Series Alignment and Tomographic Reconstruction in IMOD. J. Struct. Biol. 197, 102–113. doi:10.1016/j.jsb.2016.07.011
Matteau, D., Lachance, J. C., Grenier, F., Gauthier, S., Daubenspeck, J. M., Dybvig, K., et al. (2020). Integrative Characterization of the Near‐minimal Bacterium Mesoplasma Florum. Mol. Syst. Biol.
16, e9844. doi:10.15252/msb.20209844
Messelink, J. J. B., van Teeseling, M. C. F., Janssen, J., Thanbichler, M., and Broedersz, C. P. (2021). Learning the distribution of single-cell chromosome conformations in bacteria reveals emergent
order across genomic scales. Nat. Commun. 12, 1963. doi:10.1038/s41467-021-22189-x
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E. (1953). Equation of State Calculations by Fast Computing Machines. J. Chem. Phys. 21, 1087–1092. doi:10.1063/
Miermans, C. A., and Broedersz, C. P. (2020). A Lattice Kinetic Monte-Carlo Method for Simulating Chromosomal Dynamics and Other (Non-)equilibrium Bio-Assemblies. Soft Matter. 16, 544–556.
Mirny, L. A. (2011). The Fractal Globule as a Model of Chromatin Architecture in the Cell. Chromosome Res. 19, 37–51. doi:10.1007/s10577-010-9177-0
Needham, D., and Nunn, R. S. (1990). Elastic Deformation and Failure of Lipid Bilayer Membranes Containing Cholesterol. Biophysical J. 58, 997–1009. doi:10.1016/S0006-3495(90)82444-9
Nevo-Dinur, K., Nussbaum-Shochat, A., Ben-Yehuda, S., and Amster-Choder, O. (2011). Translation-independent localization of mrna in e. coli. Science. 331, 1081–1084. doi:10.1126/science.1195691
Ohniwa, R. L., Ushijima, Y., Saito, S., and Morikawa, K. (2011). Proteomic Analyses of Nucleoid-Associated Proteins in escherichia Coli, pseudomonas Aeruginosa, bacillus Subtilis, and staphylococcus
Aureus. PLOS ONE. 6, e19172–18. doi:10.1371/journal.pone.0019172
Oluwadare, O., Highsmith, M., and Cheng, J. (2019). An Overview of Methods for Reconstructing 3-d Chromosome and Genome Structures from Hi-C Data. Biol. Proced. Online 21, 7. doi:10.1186/
O’Reilly, F. J., Xue, L., Graziadei, A., Sinn, L., Lenz, S., Tegunov, D., et al. (2020). In-cell Architecture of an Actively Transcribing-Translating Expressome. Science. 369, 554–557. doi:10.1126/
Ortiz, J. O., Förster, F., Kürner, J., Linaroudis, A. A., and Baumeister, W. (2006). Mapping 70s Ribosomes in Intact Cells by Cryoelectron Tomography and Pattern Recognition. J. Struct. Biol. 156,
334–341. doi:10.1016/j.jsb.2006.04.014
Pelletier, J. F., Sun, L., Wise, K. S., Assad-Garcia, N., Karas, B. J., Deerinck, T. J., et al. (2021). Genetic Requirements for Cell Division in a Genomically Minimal Cell. Cell. 183, 1–11.
Petkau, A., Stuart-Edwards, M., Stothard, P., and Van Domselaar, G. (2010). Interactive Microbial Genome Visualization with GView. Bioinformatics. 26, 3125–3126. doi:10.1093/bioinformatics/btq588
Pettersen, E. F., Goddard, T. D., Huang, C. C., Couch, G. S., Greenblatt, D. M., Meng, E. C., et al. (2004). UCSF Chimera - A Visualization System for Exploratory Research and Analysis. J. Comput. Chem. 25, 1605–1612. doi:10.1002/jcc.20084
Phillips, L. A., Hotham-Iglewski, B., and Franklin, R. M. (1969). Polyribosomes of Escherichia coli. J. Mol. Biol. 40, 279–288. doi:10.1016/0022-2836(69)90475-6
Rao, S. S. P., Huntley, M. H., Durand, N. C., Stamenova, E. K., Bochkov, I. D., Robinson, J. T., et al. (2014). A 3d Map of the Human Genome at Kilobase Resolution Reveals Principles of Chromatin
Looping. Cell. 159, 1665–1680. doi:10.1016/j.cell.2014.11.021
Roberts, E., Magis, A., Ortiz, J. O., Baumeister, W., and Luthey-Schulten, Z. (2011). Noise Contributions in an Inducible Genetic Switch: A Whole-Cell Simulation Study. Plos Comput. Biol. 7,
e1002010–21. doi:10.1371/journal.pcbi.1002010
Roberts, E., Stone, J. E., and Luthey-Schulten, Z. (2013). Lattice Microbes: High-Performance Stochastic Simulation Method for the Reaction-Diffusion Master Equation. J. Comput. Chem. 34, 245–255.
Rosa, A., and Zimmer, C. (2014). Computational Models of Large-Scale Genome Architecture. Int. Rev. Cel Mol Biol. 307, 275–349. doi:10.1016/B978-0-12-800046-5.00009-6
Rouse, P. E. (1953). A Theory of the Linear Viscoelastic Properties of Dilute Solutions of Coiling Polymers. J. Chem. Phys. 21, 1272–1280. doi:10.1063/1.1699180
Russel, D., Lasker, K., Webb, B., Velázquez-Muriel, J., Tjioe, E., Schneidman-Duhovny, D., et al. (2012). Putting the Pieces Together: Integrative Modeling Platform Software for Structure
Determination of Macromolecular Assemblies. Plos Biol. 10, e1001244–5. doi:10.1371/journal.pbio.1001244
Ryu, J.-K., Bouchoux, C., Liu, H. W., Kim, E., Minamino, M., de Groot, R., et al. (2021). Bridging-induced phase separation induced by cohesin SMC protein complexes. Sci. Adv. 7, eabe5905.
Sanborn, A. L., Rao, S. S. P., Huang, S.-C., Durand, N. C., Huntley, M. H., Jewett, A. I., et al. (2015). Chromatin Extrusion Explains Key Features of Loop and Domain Formation in Wild-type and
Engineered Genomes. Proc. Natl. Acad. Sci. USA. 112, E6456–E6465. doi:10.1073/pnas.1518552112
Scheres, S. H. W. (2012). Relion: Implementation of a Bayesian Approach to Cryo-Em Structure Determination. J. Struct. Biol. 180, 519–530. doi:10.1016/j.jsb.2012.09.006
Schorb, M., Haberbosch, I., Hagen, W. J. H., Schwab, Y., and Mastronarde, D. N. (2019). Software Tools for Automated Transmission Electron Microscopy. Nat. Methods. 16, 471–477. doi:10.1038/
Seybert, A., Herrmann, R., and Frangakis, A. S. (2006). Structural Analysis of Mycoplasma Pneumoniae by Cryo-Electron Tomography. J. Struct. Biol. 156, 342–354. doi:10.1016/j.jsb.2006.04.010
Sokal, A. D. (1995). "Monte Carlo Methods for the Self-Avoiding Walk," in Monte Carlo and Molecular Dynamics Simulations in Polymer Science, ed. K. Binder (USA: Oxford University Press, Inc.), chap. 2, 45–124.
Tegunov, D., and Cramer, P. (2019). Real-time Cryo-Electron Microscopy Data Preprocessing with Warp. Nat. Methods. 16, 1146–1152. doi:10.1038/s41592-019-0580-y
Thornburg, Z. R., Melo, M. C. R., Bianchi, D., Brier, T. A., Crotty, C., Breuer, M., et al. (2019). Kinetic Modeling of the Genetic Information Processes in a Minimal Cell. Front. Mol. Biosci. 6,
130. doi:10.3389/fmolb.2019.00130
Tran, N. T., Laub, M. T., and Le, T. B. K. (2017). Smc Progressively Aligns Chromosomal Arms in caulobacter Crescentus but Is Antagonized by Convergent Transcription. Cel Rep. 20, 2057–2071.
Trussart, M., Yus, E., Martinez, S., Baù, D., Tahara, Y. O., Pengo, T., et al. (2017). Defined Chromosome Structure in the Genome-Reduced Bacterium Mycoplasma Pneumoniae. Nat. Commun. 8, 14665.
Umbarger, M. A., Toro, E., Wright, M. A., Porreca, G. J., Baù, D., Hong, S.-H., et al. (2011). The Three-Dimensional Architecture of a Bacterial Genome and its Alteration by Genetic Perturbation.
Mol. Cel. 44, 252–264. doi:10.1016/j.molcel.2011.09.010
van Berkum, N. L., Lieberman-Aiden, E., Williams, L., Imakaev, M., Gnirke, A., Mirny, L. A., et al. (2010). Hi-c: A Method to Study the Three-Dimensional Architecture of Genomes. JoVE. 6(39): e1869.
van Ruiten, M. S., and Rowland, B. D. (2018). Smc Complexes: Universal Dna Looping Machines with Distinct Regulators. Trends Genet. 34, 477–487. doi:10.1016/j.tig.2018.03.003
Verdier, P. H., and Stockmayer, W. H. (1962). Monte Carlo Calculations on the Dynamics of Polymers in Dilute Solution. J. Chem. Phys. 36, 227–235. doi:10.1063/1.1732301
Verma, S. C., Qian, Z., and Adhya, S. L. (2019). Architecture of the escherichia Coli Nucleoid. Plos Genet. 15, e1008456–35. doi:10.1371/journal.pgen.1008456
Vologodskii, A. V., Levene, S. D., Klenin, K. V., Frank-Kamenetskii, M., and Cozzarelli, N. R. (1992). Conformational and Thermodynamic Properties of Supercoiled Dna. J. Mol. Biol. 227, 1224–1243.
Wang, F., and Landau, D. P. (2001). Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States. Phys. Rev. Lett. 86, 2050–2053. doi:10.1103/PhysRevLett.86.2050
Wang, M., Herrmann, C. J., Simonovic, M., Szklarczyk, D., and Mering, C. (2015). Version 4.0 of PaxDb: Protein Abundance Data, Integrated across Model Organisms, Tissues, and Cell‐lines. PROTEOMICS.
15, 3163–3168. doi:10.1002/pmic.201400441
Wang, X., and Rudner, D. Z. (2014). Spatial Organization of Bacterial Chromosomes. Curr Opin Microbiol. 22, 66–72. doi:10.1016/j.mib.2014.09.016
Wang, X., Tang, O. W., Riley, E. P., and Rudner, D. Z. (2014). The Smc Condensin Complex Is Required for Origin Segregation in bacillus Subtilis. Curr. Biol. 24, 287–292. doi:10.1016/
Weng, X., and Xiao, J. (2014). Spatial Organization of Transcription in Bacterial Cells. Trends Genet. 30, 287–297. doi:10.1016/j.tig.2014.04.008
Williamson, D. L., and Whitcomb, R. F. (1975). Plant Mycoplasmas: A Cultivable Spiroplasma Causes Corn Stunt Disease. Science. 188, 1018–1020. doi:10.1126/science.188.4192.1018
Yus, E., Maier, T., Michalodimitrakis, K., van Noort, V., Yamada, T., Chen, W.-H., et al. (2009). Impact of Genome Reduction on Bacterial Metabolism and its Regulation. science. 326, 1263–1268.
Zhang, J.-Z., Peng, X.-Y., Liu, S., Jiang, B.-P., Ji, S.-C., and Shen, X.-C. (2019). The Persistence Length of Semiflexible Polymers in Lattice Monte Carlo Simulations. Polymers. 11, 295. doi:10.3390
Zheng, S. Q., Palovcak, E., Armache, J.-P., Verba, K. A., Cheng, Y., and Agard, D. A. (2017). MotionCor2: Anisotropic Correction of Beam-Induced Motion for Improved Cryo-Electron Microscopy. Nat.
Methods. 14, 331–332. doi:10.1038/nmeth.4193
Keywords: cryo-electron tomography, chromosome conformation capture (3C) maps, computational modeling, whole-cell models, chromosome modeling, ribosome distribution, bacterial minimal cell,
Citation: Gilbert BR, Thornburg ZR, Lam V, Rashid F-ZM, Glass JI, Villa E, Dame RT and Luthey-Schulten Z (2021) Generating Chromosome Geometries in a Minimal Cell From Cryo-Electron Tomograms and
Chromosome Conformation Capture Maps. Front. Mol. Biosci. 8:644133. doi: 10.3389/fmolb.2021.644133
Received: 20 December 2020; Accepted: 14 May 2021;
Published: 22 July 2021.
Reviewed by:
Slavica Jonic, UMR7590 Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie (IMPMC), France
Jean-Charles Walter, UMR5221 Laboratoire Charles Coulomb (L2C), France
Ivan Junier, UMR5525 Techniques de l'Ingénierie Médicale et de la Complexité Informatique, Mathématiques et Applications, Grenoble (TIMC-IMAG), France
Copyright © 2021 Gilbert, Thornburg, Lam, Rashid, Glass, Villa, Dame and Luthey-Schulten. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC
BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is
cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Zaida Luthey-Schulten, zan@illinois.edu
A CFD study of flow quantities and heat transfer by changing a vertical to diameter ratio and horizontal to diameter ratio in inline tube banks using URANS turbulence models
This paper reports the effect of changing the aspect ratio on the heat transfer and flow quantities over in-line tube banks. Two types of in-line arrangement were employed: square and non-square configurations. The turbulence models examined were the standard k-ε model, the SST k-ω model, the v2-f model, the EB k-ε model and the EB-RSM model; the results closest to the experimental data and to LES were obtained with the EB k-ε and v2-f models. For the square pitch ratios, the solution changed gradually from strongly asymmetric to mildly asymmetric and finally to perfectly symmetric. The strongly asymmetric solution was found at the narrowest aspect ratio of 1.2, while the asymmetry in the 1.5 and 1.6 cases was weaker than that predicted for 1.2. At the larger aspect ratio of 1.75, the flow behaviour was fully symmetric for all variables under consideration except the Nusselt number. For the very large pitch ratio of 5, all parameters reached their maximum distributions on the windward side of the central tube, with a perfectly symmetric solution about the 180° angle, while the vortex-shedding frequency reached its minimum and the Strouhal number therefore took its smallest value. For the non-square pitch ratios at constant transverse distance, the solution remained asymmetric for all parameters, with a single stagnation point at an angle of 52° in the 1.5 × 1.75 case; increasing the longitudinal distance to 2 and then to 5 produced a fully symmetric solution for all variables, with two fully developed, mirror-image vortices on the leeward side of the central tube. By contrast, for the non-square pitch ratios at constant longitudinal distance, the 1.75 × 1.5 case produced two stagnation locations at around 52° and 308°, with a solution very similar to that of the square ratio of 1.75 for all variables, whereas increasing the transverse distance to 2 and 5 gave solutions that were not perfectly symmetric, with two different vortices and, in the 5 × 1.5 case, a single stagnation position at the leading edge of the cylinder. In terms of the vortex-shedding effect, the reduction in the Strouhal number at a constant transverse pitch is less steep than that at a constant longitudinal pitch.
• Flow quantities
• Heat transfer
• Inline tube bundles
• URANS turbulence models
Summarize Numeric by Factor
We wish to get a summary of a numeric column (e.g. the mean and standard deviation) for each group where the groups are defined by the values of a categorical column.
While we can explicitly compute all the common summary statistics for a numeric column over groups (see below), it is more efficient during data inspection to use a single function that, given a numeric column and one or more grouping columns, computes the common summary statistics over groups.
df %>% group_by(col_1) %>% skim(col_2)
Here is how this works:
• We use group_by() to “partition” the data frame into groups according to the values of one or more grouping columns passed to group_by() which in this case is col_1.
• We then pass the grouped data frame to the function skim() as well as pass the numerical column whose value is to be summarized, in this case col_2.
• skim() is a great convenience. With one command, we get a consolidated report that has the most common summary statistics like row count, mean, standard deviation, minimum value, maximum value,
and percentiles.
• skim(), from the skimr package, is a more powerful alternative to R’s built in summary() function.
We wish to compute the mean of a numerical column over groups defined by one categorical column.
In this example, we wish to compute the mean of the numeric column col_2 for each group where the groups are defined by the values of the categorical column col_1.
df %>%
group_by(col_1) %>%
summarize(col_2 = mean(col_2, na.rm = TRUE))
Here is how this works:
• We first apply group_by() to the data frame df specifying the grouping column col_1.
• We then pass the grouped data frame to summarize() to apply an aggregation function (here mean()) to each group
• We set the argument na.rm = TRUE so mean() would ignore NA values and return the mean of the rest.
• See Summary Statistics for how to compute all the common summary statistics in R.
We wish to compute the sum of values of a numerical column over groups defined by one categorical column.
In this example, we wish to compute the sum of the values of the numeric column col_2 for each group where the groups are defined by the values of the categorical column col_1.
df %>%
group_by(col_1) %>%
summarize(col_2 = sum(col_2, na.rm = TRUE))
Here is how this works:
This works similarly to the above, but we use sum() instead of mean().
We wish to obtain the ratio between the sum of values of a numeric variable for each group to the total sum of values of the numeric variable where the groups are defined by a grouping variable.
In this example, we compute the ratio of the sum of values of a numeric column col_2 for each group defined by col_1 to the total sum of values of col_2.
df %>%
group_by(col_1) %>%
summarize(col_2 = sum(col_2, na.rm = TRUE)) %>%
mutate(col_2 = col_2 / sum(col_2))
Here is how this works:
• This works similarly to the above. We use group_by() and summarize() to apply sum() to the values of col_2 over groups defined by col_1.
• We then apply mutate() to the resulting summary to compute the ratio of the sum of values of col_2 for each group (which in the summary is in the col_2 column) to the total value of col_2 (which
we compute via sum(col_2)).
Continuous Random Variable
A Continuous Random Variable ($X$) is a random variable that represents a continuous random experiment and whose range is an uncountable interval.
• Context:
• Example(s):
□ $X(3,4) \Rightarrow \pi$.
• Counter-Example(s):
• (Wikipedia, 2013) ⇒ http://en.wikipedia.org/wiki/Random_variable#Real-valued_random_variables
□ In this case the observation space is the real numbers. Recall, $(\Omega, \mathcal{F}, P)$ is the probability space. For a real observation space, the function $X\colon \Omega \rightarrow \mathbb{R}$ is a real-valued random variable if $\{ \omega : X(\omega) \le r \} \in \mathcal{F} \quad \forall r \in \mathbb{R}$. This definition is a special case of the above because the set $\{(-\infty, r] : r \in \mathbb{R}\}$ generates the Borel σ-algebra on the real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that $\{ \omega : X(\omega) \le r \} = X^{-1}((-\infty, r])$.
• http://en.wikipedia.org/wiki/Event_%28probability_theory%29#A_note_on_notation
□ Even though events are subsets of some sample space $\Omega$, they are often written as propositional formulas involving random variables. For example, if $X$ is a real-valued random variable defined on the sample space $\Omega$, the event $\{\omega \in \Omega \mid u < X(\omega) \leq v\}$ can be written more conveniently as, simply, $u < X \leq v$. This is especially common in formulas for a probability, such as $P(u < X \leq v) = F(v) - F(u)$. The set $u < X \leq v$ is an example of an inverse image under the mapping $X$, because $\omega \in X^{-1}((u, v])$ if and only if $u < X(\omega) \leq v$.
• (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Random_variable#Formal_definition
□ Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(Y, \Sigma)$ be a measurable space. Then a random variable $X$ is formally defined as a measurable function $X\colon \Omega \rightarrow Y$. An interpretation of this is that the preimages of the "well-behaved" subsets of $Y$ (the elements of $\Sigma$) are events (elements of $\mathcal{F}$), and hence are assigned a probability by $P$.
• (Dubnicka, 2006c) ⇒ Suzanne R. Dubnicka. (2006). “Random Variables - STAT 510: Handout 3." Kansas State University, Introduction to Probability and Statistics I, STAT 510 - Fall 2006.
□ TERMINOLOGY: A random variable is said to be continuous if its support set is uncountable (i.e., the random variable can assume an uncountably infinite number of values).
□ ALTERNATE DEFINITION: A random variable is said to be continuous if its cdf $F_X(x)$ is a continuous function of $x$.
□ TERMINOLOGY: Let $X$ be a continuous random variable with cdf $F_X(x)$. The probability density function (pdf) for $X$, denoted by $f_X(x)$, is given by $f_X(x) = \frac{d}{dx} F_X(x)$.
• (Larsen & Marx, 1986) ⇒ Richard J. Larsen, and Morris L. Marx. (1986). “An Introduction to Mathematical Statistics and Its Applications, 2nd edition." Prentice Hall
□ Definition 3.2.1. A real-valued function whose domain is the sample space $S$ is called a random variable. We denote random variables by uppercase letters, often $X$, $Y$, or $Z$.
□ If the range of the mapping contains either a finite or countably infinite number of values, the random variable is said to be discrete; if the range includes an interval of real numbers,
bounded or unbounded, the random variable is said to be continuous.
□ …
□ Associated with each continuous random variable $Y$ is also a probability density function, $f_Y(y)$, but $f_Y(y)$ in this case is not the probability that the random variable $Y$ takes on the value $y$. Rather, $f_Y(y)$ is a continuous curve having the property that for all $a$ and $b$,
☆ $P(a \le Y \le b) = P(\{s \in S \mid a \le Y(s) \le b\}) = \int_a^b f_Y(y)\,dy$
GreeneMath.com | Ace your next Math Test!
About Age Word Problems:
When solving a word problem, the most important step is to understand the main objective. This gives the student a laser focus. We can then filter through all of the information, leading to an
equation that can be solved. We then provide an answer and check to make sure it is reasonable.
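To make the setup step concrete, here is one way to translate the first practice problem below into an equation (the result matches its written solution): let x be Jamie's age, so Lynn's age is 2x. The two ages sum to 45, so x + 2x = 45, which gives 3x = 45 and x = 15. Jamie is 15, Lynn is 2(15) = 30, and checking, 15 + 30 = 45, so the answer is reasonable.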
Test Objectives
• Demonstrate the ability to read a word problem and understand the main objective
• Understand how to set up an equation based on the information given in a word problem
• Demonstrate the ability to check the solution to a word problem
Age Word Problems Practice Test:
Instructions: solve each word problem
a) Jamie and Lynn are sisters. Lynn’s age is currently double that of Jamie’s age. Additionally, the sum of Jamie’s age and Lynn’s age is 45. How old is each sibling?
Instructions: solve each word problem
a) Three people in a room, Jason, Larry, and Jennifer have a combined age of 97 years. One year ago, Larry’s age was four times the current age of Jason. Additionally, Jason is currently 1/3 of the
age of Jennifer. How old is each person?
Instructions: solve each word problem
a) At the school fair, a teacher is trying to determine if Charles, Beth, and Holly can each ride on the Squirrel Cages. In order to ride the Squirrel Cages, the minimum age is 15 years old. The
teacher knows that 5 years ago, Beth was the same age as Charles is today. Additionally, she also knows that Holly is 3 years older than Beth. If the sum of Charles’ age and Beth’s age is the same as
Holly’s age 7 years from now, who can ride in the Squirrel Cages?
Instructions: solve each word problem
a) Katlyn and Sarah are best friends. Sarah is 36 years less than double Katlyn's age. In 6 years, Sarah's age will be 12 years more than half of Katlyn's age at that time. How old is each girl?
Instructions: solve each word problem
a) Three colleagues, Jessica, Jen, and Aya, are trying to guess each other's ages. They find out that in 9 years, Jessica will be as old as Jen is today. They also find out that 11 years ago, Aya's age was half of Jen's current age. Finally, they know that the sum of Jessica's age and Jen's age is 1 less than double Aya's current age. How old is each girl?
Written Solutions:
a) Jamie is 15 years old, Lynn is 30 years old
a) Jason is 12 years old, Larry is 49 years old, and Jennifer is 36 years old
a) Charles 10 - Can’t ride, Beth 15 - Can ride, Holly 18 - Can ride
a) Katlyn is 30 years old, Sarah is 24 years old
a) Jessica is 21 years old, Jen is 30 years old, and Aya is 26 years old
Complex Number Calculator
What is the Complex Number Calculator?
The Complex Number Calculator is a powerful tool designed to perform mathematical operations involving complex numbers. Complex numbers consist of a real part and an imaginary part. This calculator
allows users to input the real and imaginary parts of two complex numbers and perform addition, subtraction, multiplication, and division on them.
Applications of the Complex Number Calculator
This calculator is widely used in various fields such as engineering, physics, and applied mathematics. Complex numbers are essential when working with electrical circuits, signal processing, control
theory, and quantum mechanics. The calculator simplifies the process of performing operations on complex numbers, enhancing accuracy and saving time.
Benefits of Using the Complex Number Calculator
Using this calculator offers several advantages:
• Efficiency: It provides quick and accurate results for complex number operations.
• User-Friendly: The intuitive interface with tooltips helps users understand and input values correctly.
• Error Reduction: The calculator ensures input validation to prevent errors and invalid calculations.
How the Calculator Works
The calculator takes four inputs: the real and imaginary parts of two complex numbers. It also requires the user to select the desired operation: addition, subtraction, multiplication, or division.
Here’s how each operation is performed:
To add two complex numbers, the real parts are added together, and the imaginary parts are added together. For instance, if the numbers are (a + bi) and (c + di), the result is ((a + c) + (b + d)i).
To subtract two complex numbers, the real part of the second number is subtracted from the real part of the first number, and similarly, the imaginary part of the second number is subtracted from the
imaginary part of the first number. For instance, if the numbers are (a + bi) and (c + di), the result is ((a – c) + (b – d)i).
To multiply two complex numbers, you use the distributive property. If the numbers are (a + bi) and (c + di), the result is ((ac – bd) + (ad + bc)i).
To divide one complex number by another, multiply the numerator and the denominator by the conjugate of the denominator. If the numbers are (a + bi) and (c + di), the conjugate of the denominator (c
– di) is used, and the result is simplified as ((ac + bd)/(c² + d²) + (bc – ad)/(c² + d²)i).
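These four formulas are easy to verify in a few lines of code. Below is a minimal Python sketch of the operations (the function names are illustrative and this is not the calculator's actual source); it works directly from the real and imaginary parts, guards against a zero denominator in division, and checks one product against Python's built-in complex type:

def add(a, b, c, d):
    # (a + bi) + (c + di) = (a + c) + (b + d)i
    return (a + c, b + d)

def subtract(a, b, c, d):
    # (a + bi) - (c + di) = (a - c) + (b - d)i
    return (a - c, b - d)

def multiply(a, b, c, d):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    return (a * c - b * d, a * d + b * c)

def divide(a, b, c, d):
    # Multiply top and bottom by the conjugate (c - di) of the denominator.
    denom = c * c + d * d
    if denom == 0:
        raise ValueError("cannot divide by zero")
    return ((a * c + b * d) / denom, (b * c - a * d) / denom)

print(multiply(1, 2, 3, 4))  # (-5, 10)
print((1 + 2j) * (3 + 4j))   # (-5+10j), the same result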
Enhancing User Experience
The Complex Number Calculator is designed to provide a seamless experience. It features an attractive, responsive design that works well on any device. Tooltips offer guidance for each input field,
ensuring that users understand what is required. Additionally, the buttons and input fields are easy to interact with, providing a smooth user journey from input to result.
Why This Calculator is Useful
Complex number operations can be time-consuming and prone to errors when done manually. This calculator automates the process, reducing the likelihood of mistakes and speeding up computations.
Whether you’re a student, engineer, or researcher, this tool can significantly aid in calculations involving complex numbers, making it an invaluable asset in both education and professional
1. What is a complex number?
A complex number is a number that consists of a real part and an imaginary part, expressed in the form a + bi, where ‘a’ and ‘b’ are real numbers, and ‘i’ is the imaginary unit with the property i²
= -1.
2. How does the calculator handle invalid inputs?
The calculator includes input validation to ensure that users enter valid real and imaginary numbers. If invalid inputs are detected, the calculator prompts the user to correct them before proceeding
with the calculation.
3. Can this calculator handle large complex numbers?
Yes, the calculator is designed to handle large complex numbers efficiently. However, the performance may vary depending on the size of the numbers and the computational power of the device being
4. How does the calculator perform complex number division?
The calculator multiplies the numerator and the denominator by the conjugate of the denominator to perform division. This method helps eliminate the imaginary part from the denominator, simplifying
the calculation.
5. Are the results shown in standard form?
Yes, the results are presented in the standard form a + bi, where ‘a’ is the real part and ‘b’ is the imaginary part of the complex number.
6. Can I use this calculator for educational purposes?
Absolutely! This calculator is an excellent tool for students and educators alike, helping to illustrate and solve problems involving complex numbers with ease.
7. Does the calculator support operations other than basic arithmetic?
Currently, the calculator supports addition, subtraction, multiplication, and division. More advanced operations may be added in future updates.
8. How accurate are the calculations?
The calculator ensures high accuracy in its computations. Nonetheless, extremely large or small numbers may lead to minor precision errors due to the limits of floating-point arithmetic.
9. Is there an option to reset the inputs?
Yes, the calculator includes a reset button that clears all input fields, allowing users to start fresh with a new calculation.
10. Can the calculator store results for further calculations?
The current version does not support storing results for subsequent calculations. Each operation needs to be performed individually.
11. Is the calculator mobile-friendly?
Yes, the calculator has a responsive design, ensuring it works well on various devices, including smartphones and tablets.
12. Who can benefit from this calculator?
This calculator is useful for students, educators, engineers, physicists, and anyone needing to perform operations involving complex numbers quickly and accurately.
General Principles and Conditions for Seismic Analysis
The rules for earthquake calculation in the linear and nonlinear performance analysis of existing structures are defined in TBDY 15.4. When defining the earthquake effects in the performance analysis, the structural system behavior coefficient, the overstrength coefficient and the building importance factor are not applied (R = D = I = 1). Component capacities are determined from the existing material strengths and the building knowledge level coefficients. The building performance evaluation is made, within the framework of the rules specified in TBDY 15.4, according to the results of an earthquake calculation carried out with rod and shell finite elements using the existing material strengths, taking the knowledge level coefficient into account.
15.4. GENERAL PRINCIPLES AND RULES ON EARTHQUAKE CALCULATION
15.4.1 - According to this section of the Regulation, the purpose of the earthquake calculation is to determine the earthquake performance of existing or strengthened buildings. For this purpose, the linear calculation methods defined in 15.5 or the nonlinear calculation methods defined in 15.6 are used. However, it should not be expected that performance evaluations made with these methods, which rest on theoretically different approaches, will give exactly the same result. The general principles and rules described below apply to both types of method.
15.4.2 - In the definition of the earthquake effect, the horizontal elastic design spectrum given in 2.3.4 or 2.4.1 shall be used for the earthquake ground motion levels determined according to 2.2. The Building Importance Factor defined in 3.1.2 will not be applied in the earthquake calculation (I = 1.0).
15.4.3 - Earthquake performance of the buildings will be evaluated under the combined effects of vertical loads and earthquake effects on the building. In the earthquake calculation, masses will be
defined according to 4.5.9.
15.4.4 - Earthquake forces will be applied to the building separately in each of the two directions and in both senses of each direction.
15.4.5 - The structural system model of the building will be prepared with sufficient accuracy to calculate the internal forces, displacements and deformations that will occur in the structural
elements under the common effects of earthquake effects and vertical loads.
15.4.6 - In buildings where the floors act as rigid diaphragms in the horizontal plane, two horizontal displacements and the rotation about the vertical axis will be taken into account as degrees of freedom on each floor. The storey degrees of freedom will be defined at the center of mass of each floor, and no additional eccentricity will be applied.
15.4.7 - Uncertainties in the load-bearing systems of existing buildings will be reflected in the calculation methods through the knowledge level coefficients defined in 15.2, according to the scope of the data collected from the building.
15.4.8 - Columns defined as short columns according to 7.3.8 shall be defined with their real free lengths in the structural system model.
15.4.9 - Conditions for defining the interaction diagrams of reinforced concrete sections under one or biaxial bending and axial force are given below:
(a) In the earthquake calculation, the current strengths of concrete and reinforcing steel, determined according to the knowledge level defined in 15.2, shall be taken as the basis.
(b) The maximum compressive strain of concrete can be taken as 0.0035, and the maximum strain of reinforcing steel can be taken as 0.01.
(c) Interaction diagrams can be appropriately linearized and modeled as polyline or multiplanar diagrams.
15.4.10 - In the definition of element sizes of reinforced concrete systems, the joint zones can be considered as rigid end zones.
15.4.11 - Effective section stiffnesses of the cracked section shall be used in reinforced concrete elements under the effect of bending. Effective cross-section stiffnesses will be calculated
according to 4.5.8 .
15.4.12 - In the calculation of the positive and negative plastic moments of beams with a reinforced concrete flange, the flange concrete and the reinforcement within it can be taken into account.
15.4.13 - In the case of insufficient anchorage or lap length in reinforced concrete elements, the yield stress of the relevant reinforcement used in calculating the section moment capacity will be reduced in proportion to the deficiency in anchorage or lap length.
15.4.14 - In cases where deformations in the ground may affect the behavior of the structure, the soil properties will be reflected in the analysis model.
15.4.15 - Other principles given in Chapter 3, Chapter 4 and Chapter 5 regarding modeling apply.
Java for Kids
As you can see, I haven't updated this blog in quite some time! Schools in South Africa have, for some reason, switched to using mostly Delphi and the Java space grew quiet. As for my business, I
moved from Java to Scala and Kotlin, especially for mobile development on Android.
Of late, I've been making use of Dart and Flutter, with a bit of Kotlin and Swift, to develop mobile applications. You can find some more information and programming tips at
. As always, I'm happy to try and assist where I can!
I use http://codeformatter.blogspot.com/ to format the code snippets. It's brilliant, do give it a try if you are posting code to your blog.
I recently ran into a question around how to determine the next occurrence of a particular day of the week in Java. For example, if today is Tuesday, how do I figure out when the next Saturday is?
Or, if it's Saturday, when is the next Wednesday?
Sounds simple enough, right? Turns out, it's not that easy in Java! Here's my solution:
public static Date getNextOccurenceOfDay(Date today, int dayOfWeek) {
    Calendar cal = Calendar.getInstance();
    cal.setTime(today); // start from the given date rather than "now"
    int dow = cal.get(Calendar.DAY_OF_WEEK);
    // The double-mod keeps the difference non-negative; when dow == dayOfWeek
    // it yields 0, so numDays becomes 7 and we get the *next* occurrence.
    int numDays = 7 - ((dow - dayOfWeek) % 7 + 7) % 7;
    cal.add(Calendar.DAY_OF_YEAR, numDays);
    return cal.getTime();
}
Turns out that the % (mod) in Java doesn't deal with negative numbers that well, but by using the trick of doing a double-mod, you get the right answer!
You simply call this method with the starting date, and then tell it to target, for example, Calendar.SATURDAY, and it will give you the date of the next Saturday!
I went for an interview once, and was asked the following question:
Given a sequence of numbers, from 1 to 1000, where only one number is duplicated, how would I proceed to find the duplicate number?
After I solved the problem using a basic loop that just checks if it's seen this number before using a hash table, the interviewer asked if I could improve my answer using XOR math. I didn't quite
get it right; this is the solution they showed me:
int numbers[] = {4,2,3,4,5,6,7,8,1,10,11,12,13,14,9};
for (int pos = 1; pos < numbers.length; pos++)
numbers[pos] = numbers[pos] ^ numbers[pos-1] ^ pos;
System.out.println("Duplicate is : " + numbers[numbers.length-1]);
This bit of Java code loops through the array and finds the duplicate number. It works because XOR-ing all the array values together with the indexes 1 through n cancels every number that appears exactly once, leaving only the duplicate. Of course, this only works for positive integers; I've not tested it with negative numbers and I know it doesn't work with floats.
So, there you go, a good use for binary math in Java!
It's time for some more Python fun! Umonya is busy gearing up for another basic Python course aimed at high school pupils.
I quote from their website :
"Umonya will be having a course on 12-14 October 2012 where we will teach 100 High School children how to program in Python. It will be taking place during Cape Town's first ever Software week."
If you are interested, please visit their website at http://www.umonya.org/ to learn more.
I had a question this morning from a student, asking when using a pattern matcher, if he could count the number of "hi" words in a string that didn't start with an "x".
Of course you can!
import java.util.regex.Matcher;
import java.util.regex.Pattern;
public class CountOccurences {

    // Our test string has 5 occurrences.
    private static String input_string = "hi lo and xhi xhi hihi loxhihixhihi";

    public static void main(String[] args) {
        int count = 0;
        Pattern pattern = Pattern.compile("[^x]*hi");
        Matcher matcher = pattern.matcher(input_string);
        while (matcher.find()) {
            count++; // one match per run of non-'x' characters ending in "hi"
        }
        System.out.println("The final count is " + count);
    }
}
Now I'll leave it as an exercise for you to go read up on Regular Expressions...
JavaK now has a GitHub repository. We will be uploading all of our source code there, as well as the various tutorials that we have released.
Find the GitHub entry for JavaK here - https://github.com/ewaldhorn/javak
Estimates of Going and Drift use: Going = 12.9*(Temp-16) - 8.616*(Pressure-1014.2) - 40*dTemp/dt
Best fit to data (blue line). Reference gradient (red lines).
31 Dec 2010 [11:30]
It is hard to tell how well the barometric and temperature compensators are working because temperature and pressure change at the same time. This is where the monitor page comes in [link], because an equation for working out drift from temperature and pressure data can be tested. The equation I am testing is: Going = -120*(Temp-6) - 8*(Pressure-1013) - 40*dTemp/dt - 60, and it fits pretty well.
Here is a direct link to the drift estimate and the actual drift - a good fit [link] and the XY plot [link] shows a straight line. If this equation is right then it means that the variation of
going with temperature is -120 ms/day per deg C [-520], with pressure is -8 ms/day per mbar [-8] and with rate-of-change of temperature is -40 ms/day
31 Dec 2010 [11:29]
per degC/day [0]. The values in [ ] are for an uncompensated pendulum.
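The fitted equation above is simple enough to evaluate directly. Here is a minimal Python sketch (the variable names are illustrative; this is not part of the clock page itself) that returns the estimated going in ms/day:

def going_estimate(temp, pressure, dtemp_dt):
    # Fitted drift equation from the 31 Dec 2010 entries above.
    # temp in deg C, pressure in mbar, dtemp_dt in deg C per day.
    return -120 * (temp - 6) - 8 * (pressure - 1013) - 40 * dtemp_dt - 60

print(going_estimate(6.0, 1013.0, 0.0))  # -60 ms/day at the reference conditions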
Download data
The Source of EEG
© 2012-2024 Kevan Hashemi, Open Source Instruments Inc.
Electrode Impedance
Skull Electrodes
Neuron Impedance
Membrane Capacitance
Activation Current
Electrostatic Force
Diffusion Field
Extracellular Circulation
Dipole Potential
Excitory Current
Activation Frequency
When we place two screws through the skull of a rat, or two conducting pads upon the skull of a human being, we observe the electrical signal known as electroencephalograph (EEG) or local field
potential (LFP). Here we investigate the source of EEG in an effort to answer a host of questions that have arisen in the course of our experiments. How does the type of electrode affect the signal
we record? Should the reference potential be an electrode on the skull surface, or a screw into the brain? If we use a 50-μm wire electrode, will it be able to see the activation potential of
individual neurons? What is the highest frequency fluctuation we can observe in EEG? Is it possible for the EEG signal to include 300-Hz oscillations?
When we began recording with our subcutaneous transmitters, we used bare wires secured in skull holes with a screw to detect EEG. We observed frequent jumps in the signal, sometimes as large as a
hundred millivolts. When we clapped our hands, the live animal would shudder in reaction to the sound, and we would see a pulse on the EEG of order tens of millivolts. We later switched to screw
electrodes, which we soldered to the wires with acid flux and secured in holes in the skull. And we placed the animals in Faraday enclosures. The jumps in EEG became rare. We are now able to record
hours of EEG without seeing any such events, although they do occur at times. We concluded that these large, transient voltages were not generated by the brain of the animal, but rather by poor
solder joints, or intermittent contact between a bare wire and a fastening screw, or by the movement of the wires through an ambient electrostatic field.
Once we had eliminated the transient steps and pulses, we were left with a 1-160 Hz signal whose usual amplitude was of order 30 μV, with exceptional artifacts as large as 1 mV. This signal we
believe to be present in the extracellular fluid of the brain, and we believe it to be generated by the activity of cortical neurons. In the following sections, we consider how neural activity might
generate a signal of such magnitude. We know that the activation of a neuron produces a 100-mV jump in its membrane potential. But it turns out that the movement of charge from one side of the neural
membrane to the other does not, in itself, generate a measurable extracellular potential. Even if every neuron in the cortex activated at the same time, the net current flowing into the neurons
through their membranes would produce no measurable EEG signal.
Instead, we will find that it is the circulation of current through the apical dendrites of pyramidal neurons, and through the extracellular fluid outside these neurons, that generates the
extracellular potential detected by our electrodes. We can find no source of EEG other than such circulating currents. We present our search for the source of the EEG signal, including several
detailed explanations for why certain activity does not contribute to the signal, and so arrive at the tentative conclusion that EEG is generated by two circulating currents in pyramidal neurons that
we call the excitory extracellular current and the activation extracellular current.
We begin by considering the properties of the electrodes we use to measure EEG, and by deriving the basic relations that govern the flow of current through a medium like the cortical extracellular fluid.
Electrode Impedance
Suppose we have two electrodes. They could be screws, wires, pads, or saline-filled micropipettes. When we detect EEG with a pair of electrodes, it is the brain that asserts the potential between the
electrodes, not our external circuits. But we will study electrode impedance by considering what will happen when we apply our own voltage to the electrodes. Later, we can use the results of our study
to find out what will happen when the brain applies a voltage to the same electrodes.
We apply a voltage V across our electrodes and measure a current I flowing between them. We define the electrode impedance, Z, to be V/I. Current flows through the material between the electrodes.
For EEG electrodes, this material will be brain and bone. Some current flows into the amplifier we use to measure V, but we will ignore this current. We will assume that the electrode impedance is
determined only by the material between the electrodes. When we know how to estimate the impedance of this material, we will know how large we must make our amplifier impedance to satisfy our
assumption that its input current is negligible.
Figure: Electrode Impedance. We apply a voltage V to the electrodes and a current I flows between them. The electrode impedance is Z. Top: A small electrode combined with a large bulk ground
electrode. Bottom: Two identical electrodes, one of which we define as the ground electrode.
The figure above shows two arrangements of electrodes. In the first arrangement, we have a small sensing electrode and a large ground electrode that makes contact with the entire outer surface of the
brain. In the second arrangement, we have two identical screw electrodes. In both cases, we choose one electrode as the ground (or reference) electrode, and the other as the sensing electrode. The
voltage V is the electrical potential between the sensing electrode and the ground electrode. The current I flows from the sensing electrode to the grounding electrode through the brain.
Let us begin by imagining that the material between our electrodes behaves like a resistive medium with bulk resistivity τ. Brain tissue does not act like a resistive medium, but our calculation for
a resistive medium will be enlightening. Consider the bulk grounding arrangement in which our sensing electrode is a small sphere at the center of an animal brain and our grounding electrode makes
contact with the entire outer surface of the brain. We assume the brain tissue is homogenous, with resistivity τ. We assume that the brain is large compared to the sensing electrode. We apply a
voltage V to the electrodes and a current I flows through the brain. The ratio V/I is the electrode impedance.
Figure: Electrode Impedance for a Spherical Conductor in an Infinite Resistive Medium.
The resistive component of the external impedance is inversely proportional to the electrode radius and proportional to the resistivity of the medium. The following table gives approximate values for
the resistivity of various body tissues.
│ Material │Approximate Resistivity (Ωm) │
│Copper │2×10^−8 │
│Seawater │0.20 │
│Cerebro-Spinal Fluid │0.64 │
│Blood │1.5 │
│Spinal Cord (Longitudinal)│1.8 │
│Cortex (5 kHz) │2.3 │
│Cortex (5 Hz) │3.5 │
│White Matter │6.5 │
│Spinal Cord (Transverse) │12 │
│Bone │120 │
│Pure Water │2×10^5 │
│Active Membrane │2×10^5 │
│Passive Membrane │1×10^7 │
Table: Approximate Resistivity of Skull and Brain Tissue. Resistivity must be measured with an alternating current. Data based upon Table 4-1 of Electric Fields of the Brain: The Neurophysics of EEG,
Nunez et al., Oxford University Press, 2006.
Suppose, for the moment, that the resistivity of brain tissue is 3 Ωm = 3 kΩmm. A 1-mm cube will have impedance 3 kΩ between opposite faces. A 1-μm cube will have impedance 3 MΩ between opposite
faces. If our sensing electrode is an isolated sphere of radius 0.6 mm combined with a ground electrode at radius infinity, the impedance of the pair of electrodes will be 400 Ω. If the radius of the
sensing electrode is 1 μm, the impedance will be 250 kΩ. In either case, it is only the radius, a, of the sensing electrode that matters. The ground electrode is large. Nine tenths of the voltage V
is used up driving the current I through the medium between radius a and 10a. By the time we get to the ground electrode at radius infinity, the voltage in the medium is almost zero. The impedance of the pair of electrodes is entirely dominated by that of the sensing electrode. Thus we can speak of the impedance of a single electrode: the impedance of a small spherical electrode is τ/4πa.
Our calculation assumes the boundary of the brain is infinitely far away. Suppose the boundary is better approximated as another sphere of radius b. The impedance of the electrode becomes,
Z = τ (1/a−1/b) / 4π.
If b = 6 mm, a = 0.6 mm, and τ = 3 kΩmm, then Z = 360 Ω. Our assumption that the brain is infinite is accurate to 10% provided that a/b < 0.1, which is always the case for implanted EEG electrodes.
Consider an electrode protruding from the bottom surface of the skull and into the brain. The skull itself is made of bone, which has resistivity forty times higher than that of brain tissue. The
bone acts as an insulator around the shaft of the electrode. If we assume that the other end of the electrode, where it emerges from the top side of the skull, is insulated from body fluid, we see
that the tip of our electrode is like a conducting hemisphere in a conducting medium divided by an insulating plane. By symmetry, the impedance of this hemispherical electrode will be twice that of
the spherical electrode:
Z = τ / 2πa for a << b.
Suppose we have two identical electrodes protruding into the brain and separated by a distance c. Instead of having our ground electrode at infinity, we move it closer and give it a specific point of
contact within the infinite medium. This arrangement is not radially symmetric, so an exact calculation of the impedance of the two electrodes is arduous, but we can arrive at an approximate solution
easily. The impedance of a hemispherical electrode of radius a with a ground at radius b is τ/2πa for a << b. Now suppose that the radius of our electrodes is much smaller than the distance between
them, or a << c. Current flows from the first electrode into the large volume of the resistive medium, and then out of the resistive medium into the second electrode. The impedance between the two
will be twice the impedance between one acting with a large ground electrode. So we have:
Z = τ / πa for a << c.
Suppose we have two screws of radius 0.6 mm penetrating the skull and entering the brain 10 mm apart. If τ = 3.0 kΩmm, the impedance between them will be close to 1.6 kΩ. If we increase their
separation to 20 mm, their impedance will change hardly at all. If we have two bare wires of radius 100 μm penetrating 100 μm into the brain and separated by 1 mm, the impedance between them will be
10 kΩ. If we increase their separation to 20 mm, their impedance will change hardly at all.
We can calculate the impedance of mixed electrodes using the same considerations as above. Their combined impedance is approximately equal to the sum of their individual impedances were they each
combined with a ground electrode at infinity. Suppose we have a 1.2-mm diameter screw acting as our ground electrode and a 50-μm diameter wire tip acting as our sensing electrode. Using τ = 3 kΩmm,
the impedance between the two will be roughly 20 kΩ, this being dominated by the impedance of the wire tip. The impedance of the screw tip alone is only 800 Ω.
Suppose we insert a saline-filled glass tube with inner radius 1 μm and outer radius 10 μm into the brain, 50 mm from a skull screw of radius 0.5 mm. The tube is our sensing electrode and the screw
is our grounding electrode. The saline in the glass tube we can treat as a spherical electrode in the brain. Assuming τ = 3 kΩmm, its impedance is 250 kΩ. The impedance of the skull screw is 800 Ω.
The impedance of the two electrodes is dominated by the glass tube.
Even with the 1-μm radius tube, the impedance is only 250 kΩ, much less than the 10 MΩ input impedance of our subcutaneous transmitters. At 1 kHz, the 100-pF lead capacitance of the A3019F presents
an impedance of 1.6 MΩ, which is several times greater than the glass tube probe impedance. For frequencies below 1 kHz, the current flowing into the impedance of our electrode leads and amplifier is
negligible compared to the current flowing through the electrode impedance.
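The numbers above all follow from two small formulas, so they are easy to reproduce. Here is a minimal Python sketch, assuming the 3 kΩmm brain resistivity used in this section:

from math import pi

TAU = 3000.0  # assumed brain resistivity in ohm-mm (3 kOhm-mm)

def z_sphere(a_mm, tau=TAU):
    # Spherical electrode with ground at infinity: Z = tau / (4 pi a).
    return tau / (4 * pi * a_mm)

def z_hemisphere(a_mm, tau=TAU):
    # Hemisphere against an insulating plane: twice the sphere impedance.
    return tau / (2 * pi * a_mm)

def z_pair(a_mm, tau=TAU):
    # Two identical, widely separated hemispherical electrodes: Z = tau / (pi a).
    return tau / (pi * a_mm)

print(z_sphere(0.6))        # ~400 Ohm: the 0.6-mm radius sphere
print(z_sphere(0.001))      # ~240 kOhm: the 1-um radius saline-filled tube
print(z_pair(0.6))          # ~1.6 kOhm: two 0.6-mm radius screws
print(z_hemisphere(0.025))  # ~19 kOhm: the 50-um diameter wire tip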
As we mentioned earlier, however, brain tissue is not a simple resistive medium, nor is any ionic solution in water. Body fluid is similar to a 0.9% saline solution, which we can set up easily in the
laboratory, so let us study the impedance of saline solution to inform our understanding of the fluid in the brain. We dissolve one teaspoon of granulated table salt in half a liter of water, which
produces a 1.2% saline solution. We place a 1.6-mm diameter screw electrode just below the surface of water. Our ground electrode is a long, thick wire down the side of the beaker. We apply a 1 Vpp
square wave to the two electrodes. In series with the ground electrode we place a 100-Ω resistor. The voltage across the resistor is < 200 mV, but this voltage allows us to measure the current
flowing through the water. The current spikes up to almost 2 mA and drops thereafter with a time constant of around 2 ms. When we reduce the frequency of the square wave, we find that the current
drops below 20 μA within 50 ms.
Figure: Current Spikes Through Electrodes in 1.2% Saline Solution. We apply a 100-Hz, 1-Vpp square wave to a 1.6-mm diameter hemisphere electrode and a large ground electrode. The time scale is 2 ms/
div and the current scale is 500 μA/div (50 mV/div divided by 100 Ω).
We construct the following apparatus to measure the impedance of an electrode versus frequency. We apply a sinusoid X through a resistor R1 to an electrode in 1.2% saline solution. The saline and the
known resistor form a voltage divider. The magnitude of the voltage Y on the electrode tells us the magnitude of the electrode impedance. The delay between the peaks of Y and the peaks of X tell us
the phase of Y with respect to X.
Figure: Apparatus for Measuring Saltwater Impedance. The ground wire is solid copper coated with tin. With a stainless steel electrode, and the voltage source X disconnected, we see a galvanic
potential of 220 mV at Y with respect to the ground wire, which is consistent with a galvanic cell made of steel and tin.
We use the above apparatus to measure the magnitude of electrode impedance for a 1.6-mm diameter screw electrode at the surface of the water.
Figure: Impedance of Saltwater Electrodes. Mag1.6: Magnitude of 1.6-mm diameter screw impedance. Mag.13: Magnitude of 130-μm diameter wire tip impedance. Dly1.6 phase delay from X to Y for 1.6-mm
electrode and R1 = 10 kΩ.
The phase delay of the electrode voltage tells us the electrode impedance is not purely resistive. The drop in impedance magnitude with frequency suggests a capacitor in parallel with a resistor.
Perhaps this capacitance arises from the permittivity of the saline solution. Let us calculate the capacitance, C, between a spherical electrode and a ground electrode at infinity.
Figure: Capacitive Component of Impedance for Spherical Conductor in an Infinite Resistive Medium.
The permittivity of water or saline solution is roughly 700 pF/m. For a spherical electrode with ground at infinity, the electrode capacitance due to permittivity is 4πεa. For radius 0.6 mm, this
capacitance will be 5 pF. At 1 kHz, a 5-pF capacitor presents an impedance of 30 MΩ, which is negligible compared to the resistive impedance. The same is true for a 1-μm radius sphere. Its
capacitance due to permittivity is 9 fF, which presents an impedance of 10 GΩ at 1 kHz.
The permittivity of saltwater cannot account for the current spikes we observe when we drive two saltwater electrodes with a square wave. Our best guess is that a chemical reaction is taking place
within the saltwater to slow down the initial current until it almost comes to a stop. We could attempt to model the current spike behavior with a resistor in series with a capacitor, but even then,
we would be unable to explain the following observation. At 50 Hz, the electrode voltage is visibly asymmetric, which can occur only if the electrode impedance is non-linear. No capacitor or resistor
can model such non-linear behavior. We would need a diode inside our electrode impedance in order to produce such a distortion.
Figure: Asymmetry of Saltwater Electrode Impedance. Larger waveform is applied voltage to R1 and electrodes. Smaller waveform is voltage across the electrodes. Note how the negative peaks are more
rounded than the positive.
Furthermore, our electrodes in saltwater generate a constant voltage, like the terminals of a battery. The 1.6-mm electrode generates a 200 mV potential with a source resistance of 300 kΩ (if we attach 300 kΩ across the electrodes, the voltage halves). When we amplify EEG, we must block these electrochemical potentials with a high-pass filter, or else they will saturate the dynamic range of
the EEG input.
We will not attempt to model the impedance of saltwater electrodes with equivalent circuits of capacitors and resistors. We will instead consider the magnitude of the impedance, and note that it
drops with frequency. At any particular frequency, we assume current is flowing through our electrode and the surrounding brain tissue in the same way it does for our calculation of electrode
resistance. The peaks in the sinusoidal current between the electrodes may occur before the peaks in the sinusoidal voltage across the electrodes, but the magnitude of the impedance of a
hemispherical electrode of radius a will be Z = τ/2πa, where τ now is the bulk impedance of the medium. For a 0.8-mm radius hemispherical electrode in 1.2% saline, we measure Z = 40 kΩ at 1 Hz, 3 kΩ
at 10 Hz, and 600 Ω at 100 Hz. From this we deduce that τ = 200 kΩmm at 1 Hz, 15 kΩmm at 10 Hz, and 3 kΩmm at 100 Hz.
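Inverting the hemisphere formula turns each measured impedance into a bulk impedance. Here is a minimal Python check of those deductions:

from math import pi

a_mm = 0.8  # radius of the hemispherical electrode
for freq_hz, z_ohm in [(1, 40e3), (10, 3e3), (100, 600)]:
    tau = 2 * pi * a_mm * z_ohm  # invert Z = tau / (2 pi a)
    print(freq_hz, "Hz:", round(tau / 1000), "kOhm-mm")
# prints roughly 200, 15, and 3 kOhm-mm, matching the values above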
The measurements of Nunez et al., which we present in the table above, suggest that the resistivity of actual cortex tissue does not vary dramatically with frequency. But the absolute value of τ we
use turns out not to affect our conclusions. The potential we detect with EEG electrodes arises from voltage dividers within the brain: a neuron impedance and an electrode impedance might act as the
divider. If the value of τ is the same throughout the brain, geometry dictates the ratio of these impedances, not the absolute value of τ. In our discussions below, we will use 3 kΩmm for brain
Skull Electrodes
In the previous section we considered the impedance of a metal electrode, such as a wire tip or a screw. Here we consider using the skull itself as an electrode. We might do this with patch
connections to the skin above the skull, or with a wire running between the skin and the skull.
Suppose we shave the fur or hair off our subject's head and place upon the skin a conducting pad. According to Ngawhirunpat et al., the skin of a 90-day old rat is 0.8 mm thick. The authors
measure the resistance of rat skin to be 20 kΩmm^2. But their measurement is far lower than most others we can find, so we will use the measurements of Davies et al., whereby the skin of a rat
presents resistance 100 kΩmm^2. According to Mao et al., the average thickness of rat skull is 0.6 mm. If we use 120 Ωm for the resistivity of bone, we see that the skull presents a resistance of 200 kΩmm^2.
Suppose our conducting pad is 3 mm in diameter, for surface area 7 mm^2. To the first approximation, this pad will be connected to 7 mm^2 of the cortex through combined skin and bone resistance of 50
kΩ. The input resistance of our subcutaneous transmitters is 10 MΩ, compared to which 50 kΩ is negligible. Thus our electrode will detect the average potential of roughly 7 mm^2 cortex surface. The
resistance of the skull and skin may be negligible compared to the input resistance of our amplifier, but it is large compared to the resistance of the brain. Between two points upon cortex surface,
electrical current generated by neurons can pass through brain with resistivity 3 Ωm or skull with resistivity 120 Ωm. The internal resistance of our skull electrode is much greater than its
electrode impedance. The electrode measures the average potential of a region of the cortex without disturbing the flow of currents that generate this potential.
Another form of skull electrode, intended to obtain a ground potential for EEG measurement, is a wire running along the top surface of the skull. We might fasten one end of this wire in place with a
screw. Suppose the wire is 10 mm long, 0.5 mm in diameter, and surrounded by a 1-mm diameter sheath of tissue. We would like to know the impedance between the wire and the top surface of the brain.
The following calculation applies to a cylindrical conductor in a medium of finite radius. We assume that the medium and the conductor are the same length in the direction of the cylinder axis, so
that the calculation applies equally to an infinitely-long conductor.
Figure: Infinite Cylindrical Conductor in a Finite Resistive Medium. We calculate the impedance of a one-meter length of such conductor, radius a, in a medium that extends to radius d. We see that in
the case of the cylindrical conductor, the impedance per unit length is infinite when the medium has infinite radius.
For our sheath of tissue around a wire, let us use 6.5 Ωm for the resistivity of the tissue, this being the resistivity of white matter. The impedance of the sheath is 70 Ω. The bottom half of the
sheath makes contact with a patch of skull roughly 10 mm by 1 mm. By symmetry, the resistance of the bottom half of the sheath alone will be 140 Ω. Using 200 kΩmm^2 for the specific resistance of the skull, the resistance between the top and bottom sides of a 10 mm^2 strip of skull will be 20 kΩ, which is much greater than the resistance of the sheath. The bottom side of this strip of skull now acts as
our electrode.
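The sheath and skull figures follow from the cylindrical-shell formula in the figure above and the specific skull resistance. A minimal Python sketch of the arithmetic:

from math import pi, log

def z_shell(tau_ohm_m, a_m, d_m, length_m):
    # Cylindrical shell of resistivity tau from inner radius a to outer radius d.
    return tau_ohm_m * log(d_m / a_m) / (2 * pi * length_m)

# 10-mm sheath of white matter (6.5 Ohm-m) around a 0.5-mm diameter wire:
full = z_shell(6.5, 0.25e-3, 0.5e-3, 10e-3)
print(full)      # ~70 Ohm for the full sheath
print(2 * full)  # ~140 Ohm for the bottom half alone

# 10 mm^2 strip of skull at 200 kOhm-mm^2 specific resistance:
print(200e3 / 10)  # 20,000 Ohm from top to bottom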
The advantage of skull electrodes is that we can make them large, without disrupting the flow of current in the brain. Thus they are a good choice for ground electrodes. The disadvantage of the skull
electrode is that we cannot make it small. Even if we make contact with the skull at a single point, the voltage at this point will be the average of an area of the brain with diameter roughly equal
to the thickness of the skull. If we make contact with the skin, we must add the thickness of the skin also. Another disadvantage of the skull electrode is that the conductors are on the outside of
the bone, in excellent electrical contact with the skin and the muscles that move the skin. Thus these conductors are sensitive to potentials generated by movements such as chewing and scratching.
Neuron Impedance
Suppose we have a current flowing out of the soma of a neuron. This current might be the capacitive current that flows away from the soma membrane in the milliseconds leading up to activation. If we
want to calculate the electrical potential required to drive this current away from the soma, we need to know the impedance to such current flow presented by the brain tissue around the neuron. We
define the external impedance of the soma in the same way we define the impedance of a small sensing electrode combined with a bulk grounding electrode. We imagine the neuron at the center of a large
volume of brain tissue, and we assume that the far reaches of the brain are connected to our electrical ground.
We obtain our estimate of external impedance by assuming the soma is a conducting sphere. This assumption allows us to use the same calculation we presented above for small spherical electrodes. The
external impedance of the soma will be :
Z = τ / 4πa.
Here a is the radius of the soma and τ is the bulk impedance of the medium, which is of order 3 kΩmm at 100 Hz. The soma of a rat cortical neuron is roughly 20 μm in diameter, so its external
impedance will be roughly 25 kΩ.
Current flows in and out of the dendrites and axon of neurons as well as the soma, but the dendrites and axon are not spherical. They are better approximated as cylinders. According to one detailed
study of cortical mouse neurons, the average dendrite is 100 μm long, the total dendrite length per neuron is around 2000 μm, and judging from their photographs, the dendrite radius is around 1.5 μm.
From far away, the current flowing in and out of these dendrites will be no different from the current flowing in and out of a sphere. But close to the dendrites, the current will be dominated by the
cylindrical structure of the tube. We calculated the resistance of a sheath of tissue around a wire above. Let us use that same calculation to obtain the impedance of the extracellular fluid out to a
radius many times that of the dendrite, and then use our expression for the impedance of a sphere, to arrive at an estimate of the external impedance of a dendrite. The impedance of a one-meter sheath
of radius d and resistivity τ around a dendrite of radius a will be:
Z = τ Ln(d/a)/2π.
Using a = 1.5 μm, d = 150 μm, and τ = 3 Ωm we obtain 2.2 Ωm; dividing by the dendrite length gives the sheath impedance, so for a 100-μm dendrite the sheath will present impedance 22 kΩ. We add to this the impedance of a sphere of radius 150 μm out to infinity,
which is 1.6 kΩ, arriving at a total of roughly 25 kΩ for a 100-μm dendrite, which is the same as our estimate of the soma impedance.
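The same two formulas give the neuron figures. A quick Python check using the dimensions quoted above:

from math import pi, log

tau = 3.0  # assumed brain resistivity in Ohm-m

# Soma as a 10-um radius sphere: Z = tau / (4 pi a).
print(tau / (4 * pi * 10e-6))  # ~24 kOhm

# 100-um dendrite: cylindrical shell from 1.5 um out to 150 um,
# plus a 150-um radius sphere out to infinity.
shell = tau * log(150 / 1.5) / (2 * pi * 100e-6)
sphere = tau / (4 * pi * 150e-6)
print(shell, sphere, shell + sphere)  # ~22 kOhm + ~1.6 kOhm = ~24 kOhm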
Membrane Capacitance
The membrane of a neuron acts as the insulating dielectric in a capacitor between the intracellular and extracellular fluid. To estimate the value of this capacitance, we need to know the surface
area of the membrane, its thickness, and its permittivity. We do not need to know the permittivity of the intracellular or extracellular fluid, because these are both good conductors compared to the
membrane, and they act as the conducting plates of the capacitor. Because the membrane is thin compared to the curvature of the cell surface, we can model the capacitor as a pair of infinite,
parallel plates.
Figure: Capacitance of Neuron Membrane, As Parallel Plates.
According to Wikipedia, Scholarpedia, and the University of Washington, the membrane is roughly 5 nm thick. We have been unable to find a measurement of the membrane permittivity, but let us assume it is similar to that of impregnated paper, which is roughly 40 pF/m. The specific capacitance of neural membrane will then be close to 0.008 F/m^2. According to Gentet et al and Scholarpedia, the specific capacitance of neural membrane is around 0.01 F/m^2. Indeed, Matthew Walker tells us this is the value he uses for his own estimates. We will use 0.01 F/m^2 ourselves.
According to Massacrier et al, the total surface area of seven-day-old rat foetal neurons is 20,000 μm^2, increasing at 500 μm^2 per day. In Luhmann et al, the authors measure the effective surface area of cortical rat neurons by measuring their effective capacitance during activation of the soma alone. They do not present values of effective capacitance (which is a pity, because that's what we want to know), but they do present the effective surface area of the neuron during soma activation. They obtained values between 1,000 μm^2 and 6,000 μm^2 depending upon the neuron type.
In a detailed study of cortical mouse neurons, Piccione et al found the area of the body of cortical cells to be only 150 μm^2 (Figure 4I). But each neuron is equipped with an average of 20 dendrite
endings, extending up to 200 μm from the body. The average total dendrite length per cell was roughly 2000 μm (Figure 3G). A close examination of their photographs (Figure 1J) suggests to us that the
average dendrite diameter is around 3 μm. With circumference 10 μm and total length 2000 μm the dendrites present a surface area of 20,000 μm^2. Matthew Walker tells us that he uses 20,000 μm^2 for
the total surface area involved in the activation of rat cortical neurons, so we will use this value ourselves.
With membrane area 2×10^−8 m^2 and specific membrane capacitance 0.01 F/m^2, we see that the total capacitance of a rat or mouse cortical neuron during activation is around 200 pF. The specific
capacitance of the membrane is 0.01 F/m^2, which we will find more convenient when expressed as 10 fF/μm^2 (1 fF = 10^−15 F = 1 femtofarad).
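Both the parallel-plate estimate and the total capacitance are one-line calculations, which we sketch here with the thickness, permittivity, and area assumed above.

eps = 40e-12    # assumed membrane permittivity, F/m, like impregnated paper
t = 5e-9        # membrane thickness, m
print("specific capacitance %.3f F/m^2" % (eps / t))   # 0.008 F/m^2
c_m = 0.01      # adopted specific capacitance, F/m^2 (10 fF/um^2)
A = 20000e-12   # membrane area engaged in activation, m^2
print("total capacitance %.0f pF" % (c_m * A * 1e12))  # 200 pF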
Activation Current
When a neuron fires, its membrane potential jumps up from −70 mV to +40 mV in 1 ms, and drops down again in the next few ms. The process that produces the initial jump in internal potential is called
depolarization of the cell. Electrical charge must flow into the cell for the jump and out of the cell for the drop. We know that the activation of a neuron has an effect upon the electrical
potential in the extracellular fluid. In Buzsaki et al (Figure 2 B), we see 2-ms negative-going spikes of −100 μV immediately outside the soma membrane, −10 μV at a range of 10 μm, and −1 μV at a range of 100 μm. Next to the dendrites, we see positive-going spikes of order 10 μV.
We want to estimate from first principles the size and shape of the extracellular activation potential outside the cell membrane, and also the manner in which the extracellular activation potential
diminishes with range from the cell. We will give detailed consideration to several potential sources of extracellular activation potential that attenuate weakly with range (as 1/r) before we
conclude that the potential in fact attenuates strongly with range (as 1/r^2).
Electrostatic Force
Let us first consider if the activation potential within the neuron generates an electrical field directly in the surrounding fluid. The interior of the cell jumps up by 100 mV upon activation.
Perhaps this change acts at a distance upon the neuron's surroundings.
The cell membrane is roughly 5 nm thick, while the diameter of a dendrite is a few microns and the diameter of the soma is ten or twenty microns. Thus the radius of curvature of the membrane is
thousands of times greater than its thickness. The membrane capacitance is closely-approximated by the infinite parallel plate calculation we performed above. The charges in such a capacitor collect
in two layers on either side of the dielectric. The force exerted by an infinite wall of charge is a function only of the charge density in the wall, not of the range to the wall, so the equal and opposite forces exerted by the two charge layers cancel at every point outside the dielectric. Thus the electric field outside the capacitor is zero. Only within the dielectric do the two layers produce a net force, so it is only in the dielectric that we will find an electric field. Even if we try to account for the slight curvature of the membrane, we find that the charge density on the inner surface will be greater by exactly the amount required to maintain equality of total charge on the two sides of the membrane, and so maintain zero field beyond the membrane.
Thus we expect no field outside the cell as a result of the presence of the charge layers on either side of its membrane.
Diffusion Field
Perhaps the flow of ions towards and away from the membrane creates an electrical field within the extracellular fluid. As Dennis Kaetzel explains, the current flowing into a neuron on the rising
edge of activation is made up of sodium cations. These ions move through the membrane, leaving chlorine anions behind. The chlorine ions arrange themselves upon the outer surface of the membrane
while the sodium ions arrange themselves upon the inner surface. Both ion populations must come from the extracellular fluid. Any disparity in the speed with which these ions move will generate an
electric field.
Extracellular fluid contains roughly 1% NaCl by weight. The molecular weight of NaCl is 58 g/mole, while that of water is 18 g/mole. Each mole of water contains 0.003 moles of NaCl. Thus extracellular fluid contains roughly 10^8 sodium ions and 10^8 chlorine ions per cubic micron. The charge on one such ion is 1.6×10^−19 C, so the sodium ion charge density in extracellular fluid is 16 pC/μm^3, while that of chlorine ions is −16 pC/μm^3. The capacitance of the cell membrane is 0.01 pF/μm^2. To create a 100-mV membrane potential we need charge 0.001 pC/μm^2 = 1 fC/μm^2. We will call this the activation charge, and denote it Q[A]. This charge flows into the cell in roughly 1 ms, so the activation current is roughly 1 pA/μm^2.
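The activation charge and current follow from the specific membrane capacitance, as this sketch shows.

c_m = 0.01        # specific membrane capacitance, pF/um^2
dV = 0.1          # jump in membrane potential, V
Q_A = c_m * dV    # activation charge: 0.001 pC/um^2 = 1 fC/um^2
I_A = Q_A / 1e-3  # delivered over roughly 1 ms: 1 pA/um^2
print("Q_A = %.0f fC/um^2, I_A = %.0f pA/um^2" % (Q_A * 1e3, I_A))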
A layer of extracellular fluid 0.1 nm thick contains more than enough sodium and chlorine ions to supply the activation charge. If we distribute 1 fC/μm^2 of sodium ions on the inner surface of the
membrane, they form a layer one atom thick and containing only one sodium ion for every three hundred water molecules. On the outside, we will have a similar layer of chlorine ions.
When the sodium channels of a cell membrane open, the concentration of sodium ions outside the cell will be depleted. So will the concentration of chlorine ions, because those chlorine ions that are
bound by the electric field of the membrane are not free to diffuse through the extracellular fluid. Thus sodium and chlorine ions will diffuse toward the membrane from the surrounding extracellular
fluid during the rising edge of activation.
Upon the falling edge of activation, current flows out of the cell. As Dennis Kaetzel explains, the falling edge is caused by the closure of the sodium ion channels in the membrane and the opening of
potassium ion channels. The potassium ion channels allow potassium to flow out of the neuron. When the potassium ions arrive outside the membrane, they leave behind an equal number of chlorine ions
in the cell. The charge on the membrane capacitance is neutralized. The chlorine ions that were bound to the outside of the membrane are now free to diffuse away. The potassium ions are likewise free
to diffuse away.
Let us consider these diffusion currents to see if they can contribute to the extracellular field. Our first step is to obtain a solution to the diffusion equation. We present such a solution below.
Instead of solving the equation in two or three dimensions, we do so in one dimension, which will be a good approximation when we are within a few microns of the soma of a neuron. We assume that the
activation charge is removed instantly from the extracellular fluid at position zero and time zero. This assumption is imperfect, because the charge enters the cell over the course of a millisecond.
But our assumption has the advantage of yielding an elementary solution to the one-dimensional diffusion equation. We further assume that there is no electrical field to encourage or oppose diffusion, which would be correct for infinitely diluted ions, but which we will soon find to be untrue for the ion concentrations around a neuron.
Figure: Solution to the Diffusion Equation. Position x = 0 is the outer surface of the boundary layer. We assume that charge Q[A] = 1 fC/μm^2 leaves the extracellular fluid at location x = 0 at time
t = 0 s, and so obtain an elementary solution. The Laplace Transform of the delta function, δ(t), is unity. We express ion concentration in fC/μm^3.
To apply our solution, we need values for the diffusion coefficients of the ions involved in activation. In Goodman et al, the authors measured the diffusion coefficient of sodium ions in the
extracellular fluid of living rat cortex, and found it to be 1.1 μm^2/ms. In Li et al, the authors present the diffusion coefficients of various infinitely-diluted ions in water at 25°C. They give 1.3
μm^2/ms for sodium cations, 2.0 μm^2/ms for chlorine anions, and 1.9 μm^2/ms for potassium cations. We will use the Li et al. coefficients so that we can have a self-consistent set of values for all
three ions involved in activation.
The following graph plots the excess sodium ion concentration with distance for various times after the instantaneous removal of Q[A] = 1 fC/μm^2 at time and distance zero. We express the excess
concentration in terms of the excess sodium ionic charge per cubic micron. We see that the excess charge is negative close to the membrane, which we expect when we have removed positive ions.
Figure: Graphs of Excess Sodium Concentration. We remove 1 fC/μm^2 at time and distance zero. Each graph corresponds to a different time in milliseconds. Note that the area above each graph is equal,
but does not appear so because of the logarithmic distance scale.
We can model the diffusion of ions outside an activating cell as a summation of four ion transfers. First, we have the simultaneous removal of Q[A] sodium cations and −Q[A] chlorine anions. One
millisecond later, we have the simultaneous addition of Q[A] potassium cations and −Q[A] chlorine anions. The following graph shows the charge density that would result in the extracellular fluid if
we could ignore the effect of the electrostatic force upon the diffusing ions.
Figure: Charge Density For Independent Diffusion of Activation Ions Ignoring Current Flow Due to Electrical Fields. Each graph corresponds to a different time in milliseconds. The area under each
graph is zero, but does not appear so because of the logarithmic distance scale.
At time 2 ms, there is −0.09 fC/μm^2 of charge in the region 0 μm ≤ x < 2.5 μm, and a matching +0.09 fC/μm^2 of charge in the region 2.5 μm < x < 10 μm. Let us define our zero potential to be that of
the extracellular fluid far from the membrane. A simple consideration of this charge distribution yields an estimate of −0.4 V for the potential at x = 2.5 μm. If we assume resistivity 3 Ωm, which is
the same as 3 MΩμm, this potential difference across 7.5 μm of extracellular fluid would give rise to a current of 18 nA/μm^2, which is seventeen thousand times the activation current of 1 pA/μm^2.
This current will flow so as to reduce the difference in electrical potential between the two locations.
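A sketch of this current estimate, treating the 7.5 μm between the two charge centers as a uniform resistive path:

tau = 3.0    # resistivity of extracellular fluid, Ohm-m (3 MOhm-um)
V = 0.4      # potential difference due to the separated charge, V
d = 7.5e-6   # distance between the charge centers, m
J = V / (tau * d)                        # current density, A/m^2
print("J = %.1f nA/um^2" % (J * 1e-3))   # roughly 18 nA/um^2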
Thus we see that the electrostatic force will prevent the separation of cations from anions by diffusion. Indeed, Li et al. state, "In a system which shows a real concentration gradient of ions, the
rate of diffusion of a given cation has to be the same as that of a co-diffusing anion or that of a counter-diffusion cation in order to maintain the electro-neutrality condition everywhere in the
system." Our calculations show the basis for their assumption. We see that the diffusion rates of sodium and chlorine will be equal because electrostatic forces will slow down the chlorine diffusion
and speed up the sodium diffusion until they both diffuse at the same speed. But there must be some electric field for this matching to take place.
We propose that the diffusion coefficient of the combined population will be the average of their infinitely-diluted diffusion coefficients. An electrical field causes the slower ions to flow forward
at the same rate as it causes the faster ions to flow backwards. The following calculation applies these assumptions to obtain an expression for the electrical potential in terms of the excess ion concentration.
Figure: The Electrical Potential Arising From Diffusion of Cations and Anions Together.
Our result is simple: the electrical potential generated by the diffusion of sodium and chlorine is proportional to the excess sodium concentration at any point, and to the difference between the
infinite-dilution diffusion coefficient of chlorine and sodium ions separately. If we re-calculate the sodium ion concentrations outside the membrane using a diffusion coefficient 1.65 μm^2/ms, we
find that at time 2 ms the concentration for x < 1 μm is roughly 0.3 fC/μm^3. The difference in the independent diffusion coefficients is 0.7 μm^2/ms, so the diffusion-induced voltage close to the
membrane will be only 0.6 μV.
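The derivation appears in the figure above, but its quoted result is consistent with the relation V = τ(D[Cl] − D[Na])c, where c is the excess sodium charge concentration. Assuming that relation, the following sketch reproduces our 0.6-μV estimate.

tau = 3.0                     # resistivity, Ohm-m
D_Na, D_Cl = 1.3e-9, 2.0e-9   # infinite-dilution diffusion coefficients, m^2/s
c = 0.3e-15 / 1e-18           # excess charge: 0.3 fC/um^3 expressed in C/m^3
V = tau * (D_Cl - D_Na) * c   # diffusion-induced potential
print("V = %.2f uV" % (V * 1e6))   # roughly 0.6 uV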
Thus we expect no significant field outside the cell due to the diffusion of ions upon activation.
Extracellular Circulation
Let us now consider the flow of current between different regions of a neuron during the development of the activation potential. When the soma membrane activates, charge moves across the soma
membrane, raising the electrical potential of the soma body. The soma membrane potential jumps up by 100 mV, which means the voltage between the soma and the dendrite endings jumps by 100 mV. Current
flows out of the soma, through the intracellular fluid of the dendrite, and charges the dendrite's membrane capacitance. Let us begin by assuming the dendrite does not itself undergo activation, and
later adjust our calculations to account for activation.
After consulting Stuart et al, we assume a value of 2 TΩμm^2 for the resistivity of the membrane. From Piccione et al we obtain a total dendritic surface area of 20,000 μm^2. Thus the dendrite membranes together present an impedance of 100 MΩ. Using 0.01 pF/μm^2 for the specific membrane capacitance, the total dendritic capacitance will be 200 pF. After consulting Lopez-Aguado et al, Stuart et al, and Jaffe et al, we settle upon 3.0 Ωm (or 3 MΩμm) for the resistivity of the fluid within a dendrite. A dendrite 100 μm long and 3 μm in diameter will have impedance 50 MΩ. Twenty such dendrites in parallel will have impedance 2.5 MΩ. We can make a rough model of the dendrite using a 2.5-MΩ resistor, representing the impedance of the dendrite fluid, in series with a 200-pF capacitor, representing the membrane capacitance, and a second resistor of 100 MΩ in parallel with the capacitor, representing the membrane resistance. The membrane resistance is large compared to the fluid impedance, so the current flow will be dominated by the membrane capacitance and the fluid impedance. The time constant of the fluid impedance and the membrane capacitance is roughly 0.5 ms.
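Here is the same dendrite model in sketch form, using the dimensions and material constants above.

from math import pi

rho = 3.0              # intracellular resistivity, Ohm-m
L, dia = 100e-6, 3e-6  # dendrite length and diameter, m
R_one = rho * L / (pi * (dia / 2)**2)  # one dendrite, 42 MOhm, call it 50 MOhm
R_fluid = R_one / 20                   # twenty dendrites in parallel
A = 20000e-12                          # total dendritic membrane, m^2
R_mem = 2.0 / A                        # 2 TOhm-um^2 = 2 Ohm-m^2, giving 100 MOhm
C_mem = 0.01 * A                       # 0.01 F/m^2, giving 200 pF
print("R_fluid %.1f MOhm, R_mem %.0f MOhm, C %.0f pF, time constant %.1f ms"
    % (R_fluid / 1e6, R_mem / 1e6, C_mem * 1e12, R_fluid * C_mem * 1e3))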
If the soma potential jumped by 100 mV instantly, we would at first have roughly 40 nA flowing into the dendrites from the soma, charging their membrane capacitance, and flowing out of the membrane
capacitance and into the extracellular fluid. We call this the activation current. But it takes 1 ms for the soma potential to jump by 100 mV. During this 1 ms, the average activation current will be
closer to 20 nA. As we found above, the external impedance of a 100-μm long 3-μm diameter dendrite is 25 kΩ. We assumed twenty dendrites, so each will get 1 nA of the 20-nA activation current. For
this current to move away from the dendrite, the potential just outside the dendrite membranes must increase by +1 nA × 25 kΩ = +25 μV.
We now show that the current flowing out of the dendrites must be matched almost exactly by a current flowing into the soma from the extracellular fluid. We will prove our point by contradiction:
suppose this were not the case. Suppose the current flows out of the dendrites, but does not flow into the soma through either its membrane capacitance or its sodium channels. The soma will lose
charge at 20 nC/s. The capacitance of the soma, taken as a conducting object in a resistive dielectric, is what we call its body capacitance. We calculate the body capacitance of a spherical
electrode in a resistive medium here. The same calculation applies to the soma of a neuron. Its body capacitance will be 4πεa, where a is its effective radius. Suppose we use 10 μm for a and 700 pF/m for ε (permittivity of water). This gives us a soma body capacitance of roughly 90 fF. If we discharge this capacitance at 20 nC/s, its potential with respect to the surrounding fluid will drop at roughly 0.2 V/μs. In a mere 2 ns, the potential of the inside of the neuron will drop by 500 μV. This 500-μV drop will be communicated to the extracellular fluid, because by assumption there is no net current flowing
into the soma through its membrane capacitance or its sodium channels. But the external impedance of the soma is roughly 25 kΩ, so if the extracellular potential drops by 500 μV, we will see 20 nA
flowing towards the soma, which contradicts our assumption that no such current flows. Thus the current leaving the dendrites must be matched almost exactly by a current entering the soma.
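The timescale of this contradiction is easy to check in sketch form.

from math import pi

eps = 700e-12   # permittivity of water, F/m
a = 10e-6       # effective soma radius, m
C = 4 * pi * eps * a   # body capacitance, roughly 90 fF
dVdt = 20e-9 / C       # rate of discharge at 20 nC/s, V/s
t = 500e-6 / dVdt      # time to drop by 500 uV
print("C = %.0f fF, t = %.1f ns" % (C * 1e15, t * 1e9))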
We see that activation causes a current to circulate through the neuron and the extracellular fluid. We will call this current the extracellular activation current. It begins by passing through the capacitance of the dendrite membranes, enters the extracellular fluid, and re-enters the activating neuron at the soma. We can build an approximate model of the circulating current with two equal and
opposite current sources. The positive source we place at the center of the soma and the negative one we place at the approximate center of the region filled by the dendrites. The external impedance
of the soma is 25 kΩ, so −20 nA produces an extracellular potential at the soma of −500 μV. As we have already argued: each dendrite gets of order 1 nA and has external impedance 25 kΩ, so the
potential outside the dendrites is +25 μV. Thus we expect the +100-mV spike in membrane potential to generate a +25-μV extracellular spike just outside the dendrites and a −500 μV extracellular spike
outside the soma.
For our initial calculation of the extracellular activation current, we assumed the dendrites did not undergo activation themselves. But the dendrites of cortical neurons do undergo activation. The
current flowing from the soma into the dendrites will persist until the dendrite membrane potential reaches its activation threshold, at which point charge will rush across the membrane, raising the
membrane potential to 100 mV above its resting potential without any need for circulating current. The activation potential of a membrane tends to be roughly 15 mV above its resting potential. To a first approximation, therefore, we see that 15% of the jump in the dendrite membrane potential will be brought about by circulating current, while the remaining 85% will be generated by the local
movement of ions through the membrane. We have already demonstrated that this local movement of ions has a negligible effect upon the extracellular potential. It is only the extracellular activation
current, which is circulating, that affects the electrical potential of the extracellular fluid. To obtain a better estimate of the extracellular activation potential, therefore, we take our 20-nA
estimate of the extracellular activation current, multiply by 15%, and obtain an estimate of 3 nA.
With a 3-nA extracellular activation current, we will obtain a −75 μV extracellular activation potential just outside the soma of our example neuron, and +4 μV outside the dendrites. Buzsaki et al. report −100 μV and +10 μV respectively. Their measurements were, however, obtained from pyramidal neurons. Pyramidal neurons have a branching dendrite arrangement more complex, asymmetric, and far-reaching than the simple arrangement we consider here. They also appear to have synapses of increasing strength as we move farther along the dendrites from the soma.
In Gold et al, the authors consider many sources of current into and out of a neuron during the evolution of an action potential. By adjusting the parameters of a neuron model, they obtain good
agreement between simulated and observed extracellular action potentials. Thus we assume that such currents can, in principle, explain the extracellular action potential. The authors do not, however,
tell us how the currents of their simulation combine to produce agreement with observations. Thus we cannot say that the simulation performed by the authors either confirms or denies our analysis.
The authors do say that the morphology of the dendrites has very little effect upon the size or shape of the extracellular activation potential, which is consistent with the calculations we present above.
Dipole Potential
We can model the extracellular activation current, or any other current that circulates through a neuron and the extracellular fluid, with two equal and opposite current sources. These sources
represent the current entering the fluid from the cell and entering the cell from the fluid. Such an arrangement is a current dipole. We can obtain the electrical potential induced by a current
dipole by considering each of its current sources separately, and adding their effects together. We perform this calculation below. We use r to indicate the range from the mid-point between the two current sources.
Figure: The Electrical Potential Arising From A Current Dipole.
In the plane x = 0, the field is zero. When we are far from the dipole, moving in such a way that x increases with r, the potential induced by the dipole drops as 1/r^2. When we are very close to one of the current sources, so that our range to that source is much smaller than a, the field is dominated by that of the nearer current source, and the potential varies in inverse proportion to the range from that source. We see that the product Ia appears in our solution. This we call the dipole moment, in units of A-m. We express the strength of a dipole in terms of its dipole moment, because this value
dictates the field far from the dipole.
In the case of most cortical neurons, the soma is at the center of a sphere filled by its dendrites. Thus a will be zero and there will be no extracellular action potential outside the extent of
the dendrites. In the case of a pyramidal neuron, the distribution of dendrites is not symmetric. They extend off to one side of the soma, with the farthest endings up to 1000 μm away. The following
figure shows several reconstructed neurons, which we obtained from Rafa Yuste's cell database at Columbia University.
Figure: Cortical Neuron Reconstructions. Dendrites are red, axons are blue, and soma are black. Click on images for full size. All reconstructions taken from Rafa Yuste's cell database. The database names of the images are AM50_1-1 (top-left), AM52_3-3 (top-center), AM52_3-4 (top-right), AM54_1 (bottom-left), AM76-1 (bottom-center), and AM60_1 (bottom-right). So far as we can make out from the database comments, all are pyramidal neurons except for the bottom-right, which is a basket neuron.
Looking at the above reconstructions of pyramidal neurons, we see a cluster of dendrites around the soma, and another cluster roughly half a millimeter above the soma. Let us consider the
extracellular activation current flowing out of the upper cluster and into the soma below, through the apical dendrite. We let a = 500 μm. We estimated the extracellular activation current to be 3 nA
for a twenty-dendrite, radially-symmetric neuron, such as the basket neuron shown above. If we suppose that a pyramidal neuron has approximately the same activation current, and we suppose that two
thirds of it flows along the apical dendrite, we let I = 2 nA. With these values, the pyramidal neuron activation produces a dipole of 1000 pA-mm. When this neuron activates, its extracellular
activation current will generate a potential in the surrounding brain tissue. At location x = 0.5 mm, y = 0 mm, this potential will be roughly +1.0 μV.
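The figure above presents our derivation of the dipole potential. Assuming the standard two-monopole form, this sketch reproduces the estimate, and shows that at a range equal to the source separation the far-field approximation is only slightly low.

from math import pi

tau = 3.0    # resistivity, Ohm-m
I = 2e-9     # circulating current, A
a = 500e-6   # separation of source and sink, m
r = 500e-6   # on-axis range from the dipole mid-point, m
V_far = tau * I * a / (4 * pi * r**2)   # far-field dipole approximation
V_exact = tau * I / (4 * pi) * (1 / (r - a / 2) - 1 / (r + a / 2))
print("far field %.1f uV, exact %.1f uV" % (V_far * 1e6, V_exact * 1e6))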
Excitory Current
Neurons communicate with one another through synapses. As Dennis Kaetzel explains, a neuron's dendrites hold its input synapses and its axon holds its output synapses. One neuron's axon provides the
pre-synaptic membrane while another neuron's dendrite provides the post-synaptic membrane. When the first neuron activates, the activation propagates down the axon to the pre-synaptic membrane where
it provokes the release of neurotransmitter chemicals. These chemicals react with the post-synaptic membrane and cause current to flow through the membrane and into the dendrite. As current flows
into the dendrite, the electrical potential within it rises. Current flows away from the synapse towards the soma.
When current flows into the soma, its membrane potential rises. When the membrane potential reaches a threshold, it activates. Thus current flowing into a dendrite makes it more likely that a neuron
will activate, and so we call it an excitory post-synaptic current (EPSC). The rising potential within the post-synaptic membrane is an excitory post-synaptic potential (EPSP). In Fricker et al, we
see plots of EPSP in young rat hippocampal neurons. A typical EPSP in a pyramidal cell rises 10 mV in 10 ms, reaches a maximum, and falls 10 mV over the course of 100 ms.
In Extracellular Circulation we considered the circulation of the extracellular activation current through the soma, dendrites, and extracellular fluid. In that case, the current flowed out of the
soma, down the dendrites, into the extracellular fluid, and back into the soma. An excitory current flows in the opposite direction. The post-synaptic membrane potential drives current along the
dendrite, through the impedance of the intracellular fluid, charging up the membrane capacitance as it goes. Current leaving the base of the dendrite enters the soma and flows out through the soma
membrane capacitance and into the extracellular fluid. As we showed above, this same current must flow out of the extracellular fluid and into the post-synaptic membrane, or else the total
charge of the neuron will not be conserved.
Suppose we have a pyramidal neuron with a 1000-μm long dendrite, diameter 3 μm, internal resistivity 3 MΩμm, and membrane resistance 2 TΩμm^2. The dendrite resistance from end to end will be 400 MΩ,
while the resistance of its membrane will be 200 MΩ. The capacitance of the membrane will be around 100 pF, using 0.01 pF/μm^2. The time constant of the capacitance with the dendrite resistance is 40 ms, and with the membrane resistance is 20 ms. If we could ignore the membrane resistance and capacitance, the 10-mV excitory voltage would cause a 25-pA current to flow from the synapse to the soma. But most of this current leaks out along the way or enters the membrane capacitance. Let us say that 10 pA arrives at the soma. This current flows through the soma capacitance and into the extracellular fluid. An identical current flows into the synapse from the extracellular fluid. Thus we have a current source of −10 pA in the synapse, 500 μm from a current source of +10 pA in the soma. This
creates a dipole of strength −5 pA-mm. Here we are using a sign convention consistent with the one we used with the extracellular activation current, which produced a positive dipole moment.
According to the Institute of Neurology, a single pyramidal neuron synapse generates a current dipole of 20 pA-mm. In Leresche et al, the authors measured synaptic currents of up to 400 pA in
rat cortical neurons. Dennis Kaetzel says 10 pA for a synaptic current is more likely, and that 200 pA at the soma will cause the cell to activate. We will assume that a single synapse produces a
dipole of 10 pA-mm and that ten such synapses firing within 10 ms of one another will cause the neuron to activate. In the 10 ms before a pyramidal neuron fires, the excitory currents of ten synapses
will be acting together to stimulate the neuron, thus producing for 10 ms a dipole of −100 pA-mm.
We expect the excitory post-synaptic currents flowing into a pyramidal neuron to generate a current dipole of order −100 pA-mm in the extracellular fluid for roughly 10 ms before the neuron
activates. Once activation takes place, we expect a dipole of order +1000 pA-mm for roughly 1 ms. Thus we expect the total charge movement due to excitory currents to be of the same order but
opposite to the movement due to activation currents.
Here we attempt to predict the amplitude of the electroencephalography (EEG) signal we will record from the cortex of a live animal with a pair of electrodes. Our assumption so far is that the signal
we record is the extracellular potential generated by a combination of activation and excitory currents circulating through pyramidal neurons in the cortex. Both these currents pass through a neuron
to generate a current dipole in the extracellular fluid. The excitory current flows into a post-synaptic membrane on a dendrite, and out of the body of the neuron. The activation current flows into
the body of a neuron and out of the dendrite walls.
The strength of a current dipole is Ia, where I is the circulating current and a is the distance between the source and sink of the current. For any neuron with a symmetric distribution of dendrites
about its soma, a will be zero for both the excitory current and the activation current. The activation and excitation of such a neuron will have no significant effect upon the extracellular
potential outside the extent of the neuron. Because we are concerned with estimating the extracellular potential hundreds of microns from a neuron, we assume that the contribution of symmetric
neurons to the extracellular potential is zero.
We are left with the contribution to extracellular potential made by asymmetric neurons. By far the most common asymmetric neuron in the cortex is the pyramidal neuron. These are arranged in layers,
with their dendrites extending out towards the cortical surface. Thus their excitory currents will tend to circulate in one direction and their activation currents in the opposite direction. Let
us suppose that all the pyramidal neurons within radius R of a point on the top surface of the layer are producing the same extracellular current, I, at the same time. Let us further suppose that we
place a high-impedance electrode a distance h above the center of this coherent, circular region. We present this arrangement in the figure below.
Figure: The Electrical Potential Above A Coherent Layer of Pyramidal Neurons.
Each neuron within the circle x ≤ R produces a current +I at the top surface of the layer of pyramidal neurons, and −I at the bottom. The height of the layer is a, this being our estimate of the
length of the apical dendrite of the pyramidal neuron. We assume a uniform density of neurons, s, per unit area, and uniform resistivity, τ, for the extracellular fluid. Each neuron makes a
contribution to the extracellular potential at our electrode. We add these together by integration, and obtain an expression for the total potential, V. In the case where 2R >> a and R >> h, this
expression reduces to V = τsIa/2.
Suppose all the neurons within a large radius experience an excitory current I = −200 pA in the 10 ms leading up to activation. The sign of the current is negative because the excitory current flows
out of the soma, in the opposite direction to that indicated in our figure. Let us use a = 0.5 mm and τ = 3 Ωm. According to Dorph-Petersen et al., the density of pyramidal neurons in the gray matter
of the human auditory cortex is 2×10^10 m^−2 (20,000 per square millimeter). The thickness of the gray matter is of order 1.5 mm. Some neurons extend all the way through the gray matter. Others
extend only part-way through. Let us assume a density of 10^10 m^−2 for our 0.5-mm thick layer. With these values, assuming a large radius of coherent excitation, we arrive at V = −1.5 mV. The
negative sign is a result of the excitory current flowing into the top surface of the layer.
When R >> h, V is independent of h. But V always depends upon a and I. Now suppose R = 1 mm, h = 0.25 mm, and a = 0.5 mm. We no longer have R >> a, nor even R^2 >> (a + h)^2, so we must use our exact
solution to the integral in the derivation we present above. We obtain −0.84 mV. If we increase R to 2 mm we obtain −1.14 mV. The neurons between radius 1 mm and 2 mm contribute roughly 0.3 mV to the
voltage seen by our electrode. If we consider one quarter of this annulus of neurons, so that we are left with a quarter-annulus of inner radius 1 mm and outer radius 2 mm, this area of 2.4 mm^2 will
contribute roughly 75 μV to the voltage on an electrode 1.5 mm away from its center of activity. In general, an electrode a distance R from a coherent area of width R will detect an excitory
potential of order 100 μV.
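Our exact solution comes from integrating the monopole potential over two disks, one at the top of the layer and one at the bottom: a disk of radius R carrying surface current density sI, at distance d from the electrode, contributes (τsI/2)(√(R^2 + d^2) − d). The sketch below, written under that assumption, reproduces the three values quoted above, and setting I = +2 nA reproduces the +15-mV activation estimate we use later in this section.

from math import sqrt

def layer_potential(R, h, a, tau=3.0, s=1e10, I=-200e-12):
    # Potential at height h above the center of a coherent disk of radius R,
    # with current +I per neuron at the top of the layer and -I a distance
    # a below. Each disk contributes (tau*s*I/2)*(sqrt(R^2+d^2)-d).
    disk = lambda d: (tau * s * I / 2) * (sqrt(R**2 + d**2) - d)
    return disk(h) - disk(h + a)

for R in (1e-3, 2e-3, 100e-3):
    print("R = %.0f mm: V = %.2f mV"
        % (R * 1e3, layer_potential(R, h=0.25e-3, a=0.5e-3) * 1e3))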
In order to produce an extracellular potential of 1 mV, we need at least several square millimeters of a pyramidal layer acting coherently. That is to say, we need R to be of order 1 mm or larger. If
that is the case, then it is also true that for a region of diameter R over the center of this coherent region, the extracellular potential varies by less than 50%. We can place an electrode of
radius R/2 above the coherent region and we will cause little disturbance of the current flow, because the conducting surface of the electrode does not connect points in the brain that have greatly
differing potentials. Another way to arrive at the same conclusion is to consider a coherent region of radius R to be a source of extracellular potential with an external impedance of τ / 4πR. If we
place nearby an electrode of radius R/2 and therefore impedance τ / 2πR, we see that the flow of current away from the coherent region will indeed be affected by the presence of the electrode, but not reduced by more than 50%.
Thus we conclude that excitation of a layer of pyramidal neurons 1 mm in radius will generate an extracellular potential above the layer of order −1.5 mV. This we can detect with an electrode of diameter 1 mm or less. If our electrode is 1 mm away from such a region, it will detect a potential of order −150 μV.
So far we have considered the excitory current during the 10 ms prior to activation, and we estimate the magnitude of this current to be 200 pA for a typical pyramidal neuron. More than 10 ms prior
to activation, the excitory current will be less than 200 pA, but it will not be zero. Given the tens of thousands of synapses on every pyramidal neuron, connecting them to tens of thousands of other
neurons, some of which might be several millimeters away in another region of the cortex, we conclude that the excitory current will never be zero. Most likely, the average excitory current will be
much smaller than 200 pA, perhaps 20 pA. In the 100 ms prior to a coherent activation, however, the excitory current could be increasing steadily from 20 pA to 200 pA, so that our electrode would
record a drop in potential over the course of 100 ms to a minimum of −1.5 mV.
Following the excitation of our layer of pyramidal neurons, we can expect the neurons to activate. Suppose they all activated at the same time. Consider the simple, ideal case where 2R >> a and R >>
h, for which V = τsIa/2. We estimated that I for activation current is ten times the excitory current and in the opposite direction, which makes it +2 nA. Unlike the excitory current, which must
endure for at least 10 ms, and possibly 100 ms, the activation current lasts for roughly 1 ms. If, somehow, all neurons in our layer were to activate within 1 ms of one another, we would see V = +15
mV for 1 ms. Given that the activation current is the largest current we have so far identified in the extracellular fluid, this +15 mV is the largest signal possible in EEG. If, on the other hand, the activation of the neurons in the layer were spread out over a 10-ms interval, the average activation current would be ten times smaller, but it would be ten times as enduring, so we would see V = +1.5 mV for 10 ms.
Activation Frequency
We now consider how often a neuron can activate. The activation of a single neuron takes roughly 2 ms for its rising and falling edges. The 2-ms activation time puts a lower bound upon the period of
any repetition of the activation. The activation frequency of a neuron cannot be greater than 500 Hz.
The concentration of sodium ions within a neuron is a small fraction of the concentration in the extracellular fluid, while the concentration of potassium ions within a neuron is more than ten times
the concentration in the extracellular fluid (see here). It is this difference in concentration that motivates sodium ions to pass rapidly through sodium channels and into a neuron on the rising edge
of activation, and for potassium ions to pass rapidly through potassium channels out of the neuron on the falling edge of activation. Without this difference in concentration, activation will fail to occur.
As we estimated above, the concentration of sodium ion charge in the extracellular fluid is 16 pC/μm^3, and the sodium ion charge that flows through the membrane during a single activation is roughly
1 fC/μm^2. Meanwhile, the concentration of sodium ions inside the cell is roughly 10% of the external concentration. Let us suppose that activation remains possible so long as the sodium ion
concentration within the cell is no more than 20% of the concentration outside, or around 3.0 pC/μm^3.
Consider a pyramidal cell body that is roughly spherical with diameter 20 μm. Its surface area is roughly 1300 μm^2 and its volume is 4200 μm^3. During a single activation, 1.3 pC of sodium charge
flows in, raising the concentration within the cell by a mere 0.3 fC/μm^3. Even if there were no sodium pumps in the cell acting to lower the sodium concentration, this inflow could still occur five
thousand times before the sodium concentration within the cell would exceed 3.0 pC/μm^3 and, by assumption, activation would no longer be possible. Consider instead an axon of diameter 1.0 μm. Each 1-μm length of this axon has volume 0.79 μm^3 and surface area 3.1 μm^2. During a single activation, the concentration of sodium within the axon will rise by 4 fC/μm^3. This could occur a few hundred times
before activation would no longer be possible.
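A sketch of this budget, using the concentrations assumed above:

from math import pi

Q_A = 1.0      # activation charge, fC per square micron of membrane
c_in = 1.6e3   # initial internal sodium charge density, fC/um^3
c_max = 3.0e3  # assumed ceiling for activation, fC/um^3

def activations(area, volume):
    # activations before the internal concentration reaches the ceiling
    return (c_max - c_in) / (Q_A * area / volume)

r = 10.0   # soma radius, um
print("soma: %.0f" % activations(4 * pi * r**2, (4.0 / 3) * pi * r**3))
print("axon, per um of length: %.0f" % activations(pi * 1.0, pi * 0.25))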
Thus we can understand statements like, "The ions exchanged during an action potential, therefore, make a negligible change in the interior and exterior ionic concentrations," (see here). Even if we
ignore the sodium and potassium pumps that work to restore the equilibrium concentration of these ions in the neuron, it appears that the initial concentration differences are adequate to support
activation at 500 Hz for roughly one second.
Activation begins with the opening of sodium channels. So far as we know, these channels have three states, which we can describe as closed, open, and inactive. When a neuron is in its resting state,
with membrane potential roughly −70 mV, almost all the channels are in the closed state. When the membrane potential reaches −55 mV, many of them transition into the open state, which raises the
membrane potential further, and encourages still more of them to open. Each sodium channel remains open for only a fraction of a millisecond before it enters its inactive state. This inactive state
is believed to be the result of the open channel being blocked by a component of the channel protein. The probability of a transition out of the inactive state is low, so long as the membrane
potential remains above −70 mV. Once the membrane potential returns to −70 mV, the channel has a good chance of switching to its closed state within 10 ms. At the peak of an activation potential,
almost all the sodium channels in the membrane are inactive. The potassium channels, which are slower, are responding to the rising membrane potential, and are switching from closed to open. They
remain open until the membrane potential drops below −70 mV, at which point they close. Thus the activation potential ends with the sodium channels in their inactive state and the potassium channels
closed. After roughly 10 ms, the majority of the sodium channels will transition to their closed state, and the cell will be ready for another activation potential.
The inactivation of sodium channels prevents a neuron from re-activating immediately. We must allow for a refractory period before the neuron re-activates. The refractory period for sodium channels
in a neuron is of order 10 ms. If we allow 2 ms for the activation potential, and 8 ms for the sodium channels to return to their closed state, we arrive at a minimum neuron activation period of 10 ms.
So far as we can tell, EEG is generated by the coherent circulation of current through tens of thousands of pyramidal neurons. We can find no other mechanism by which a signal of more than a few
microvolts could be induced in a pair of skull screws. Thus it is our hypothesis that the EEG signal is generated solely by the extracellular excitory and activation currents of pyramidal neurons.
These two currents act in opposite directions, the excitory current preceding the activation current. The pyramidal neurons in the cortex are all oriented in the same direction, with their apical
dendrites growing towards the surface. Their excitory currents always tend to generate a negative potential at the surface, and their activation currents tend to produce a positive potential. In
order to develop a net potential of one millivolt in a pair of skull screws, tens of thousands of pyramidal neurons, beneath several square millimeters of the cortical surface, must be excited and
activated coherently.
If the pyramidal neurons in an area 6 mm in diameter are excited all at the same time, the extracellular potential above the cortex should drop by roughly 1.5 mV. The drop would endure for at least
10 ms, this being the duration of individual post-synaptic excitory currents, and also the time constant generated by the dendrite resistance and the soma membrane capacitance. The excitory current
might, however, build up over a hundred milliseconds, resulting in a 100-ms negative excursion in the extracellular potential with a minimum of −1.5 mV.
If all pyramidal neurons in the same region activate at the same time, we would expect to see the potential jump up by 15 mV. This jump would endure for roughly 1 ms. Given the 10-ms time-constant of
the excitory currents that provoke this activation, however, it appears to us impossible that all neurons in a cortical region could activate together in the same 1 ms period. If there were some
direct communication of activation through the extracellular fluid from one neuron to the next, such synchronous activation might be possible. But our study has convinced us that no such direct
communication exists. The activation of one neuron has no measurable effect upon any other neuron, except through the agency of axons, synapses, and dendrites. These agencies take tens of
milliseconds to convey an effect, and the effect of a single post-synaptic current is never sufficient on its own to provoke activation. Thus we claim that the activation of neurons in a coherent
cortical region will be spread out over an interval of at least 10 ms. When spread out over 10 ms, they will produce a potential of roughly 1.5 mV.
In order to generate an extracellular potential of 1 mV, several square millimeters of the cortex must act coherently. We are therefore able to detect such a potential with a skull screw of diameter
1 mm. We could also detect the same potential with the tip of a 50-μm wire. When the wire tip is placed directly above the center of a 1-mm diameter coherent region, the potential it detects might be
50% greater than would be detected by a screw. But if the wire tip were 2 mm from the center of the same region, it would detect only a fraction of the potential that would be recorded by a skull
screw in the same location. Thus we conclude that the skull screw is a good choice for EEG recording. It is small enough that it will not disturb the coherent activity required to produce a 1-mV EEG
signal, but large enough that it can detect such activity over a range of several millimeters.
The extracellular activation potential generated by a single neuron is, in theory, large enough for us to detect, provided that our electrode is small and located within a few microns of the cell
body. The electrode must be small so that its external impedance is large and will not disturb the extracellular currents that generate the extracellular activation potential. The electrode must be
near the cell because the individual extracellular activation voltage is significant only within a few tens of microns of the membrane. We estimate the extracellular activation voltage to be of order
−75 μV just outside the body of a neuron. This neuron does not have to be a pyramidal neuron; it can be any neuron.
Suppose we insert a 1-μm electrode into the cortex and place it a few microns from the body of a neuron. The external impedance of a 1-μm electrode in the extracellular fluid is 1 MΩ, while the external
impedance of the soma is only 25 kΩ. Because the external impedance of the probe is much larger than that of the soma, the probe will have only a slight effect upon the soma's extracellular
potential. We expect the electrode to detect the full −75 μV activation spike. But the electrode tip would have to be placed within a few microns of a cell body, which is impractical when working
through a cannula and a hole in the skull of a live animal. If the probe were near dendrites instead of the soma, it would detect a +4-μV spike, which would be lost in the electrical noise of our
recording system.
Now suppose we insert a 50-μm diameter, hemispherical wire tip in the same location, so as to increase the range of detection of our electrode. The external impedance of this electrode would be 20 kΩ, which we expect will reduce the neuron's extracellular activation spike by a factor of two. Nevertheless, if there is a neuron body within ten microns of any part of our 50-μm diameter
electrode, we will pick up an activation spike of order −30 μV. Given that the baseline amplitude of our EEG signals in the 60-160 Hz range is roughly 10 μV rms, we see that we can indeed detect such
a spike, provided we use a sensor with sufficient bandwidth to observe its full amplitude. With a 50-μm electrode, we can hope to detect the activation of several dozen individual neurons in contact
with the electrode, while at the same time monitoring the EEG signal generated by tens of thousands of pyramidal neurons within a few millimeters of the electrode.
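If we assume that a hemispherical tip of radius a has external impedance τ/2πa, which reproduces both the 1-MΩ and the 20-kΩ figures above, the comparison is a two-line sketch.

from math import pi

tau = 3.0   # resistivity of brain tissue, Ohm-m

def tip_impedance(diameter):
    # external impedance of a hemispherical electrode tip
    return tau / (2 * pi * (diameter / 2))

print("1-um tip: %.0f kOhm" % (tip_impedance(1e-6) / 1e3))     # roughly 1 MOhm
print("50-um tip: %.0f kOhm" % (tip_impedance(50e-6) / 1e3))   # roughly 20 kOhm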
Figure: Depth (Pink, 10 kΩ) and Surface (Blue, 2 kΩ) Potentials. The depth electrode is a 125-μm diameter wire tip. The surface electrode is a 500-μm diameter screw end. Both are recorded
simultaneously with an A3028A-HCC transmitter. The common electrode is also a 500-μm diameter screw. Recorded at the University of Edinburgh (M1443256453.ndf). The baseline amplitude of the depth
recording is around 400 μV rms, and of the surface recording is around 80 μV rms.
With a skull screw, we expect never to see any individual activation potential. The conducting surface of a 1.6-mm diameter skull screw presents an external impedance of only 800 Ω, which will reduce
the extracellular activation potential of a single neuron by a factor of thirty, to the point where it will be lost in the combined excitory and activation potentials of the tens of thousands of
other neurons near the conducting surface of the screw.
Having concluded that a skull screw is well-suited to detecting the coherent activity of tens of thousands of neurons, which is what we call Electroencephalograph (EEG) or Local Field Potential
(LFP), let us consider the second electrode required to define the difference in potential recorded by our sensor. Our sensor will measure the difference between the potential of the first and second
electrodes. A negative potential applied to the second electrode will appear as a positive potential in the difference. If we want our first electrode to record the activity in one region of the
cortex, we must arrange for our second electrode to record activity from a much larger area of the brain, to give us what we can regard as a ground potential, or average cortex potential. If we place
our second electrode over another region of the cortex, we will be observing simultaneously all coherent activity below both electrodes. We will be unable to distinguish between negative spikes
beneath one electrode and positive spikes beneath another.
There are several solutions to this problem. One is to use a skull electrode, such as a wire running along the top surface of the skull. Such an electrode makes contact with a large portion of the cortex without affecting the flow of current in the brain. The potential picked up by this electrode will to some degree approximate the average brain potential. When one region beneath the
electrode experiences excitation, another region may be experiencing activation, so that the average potential picked up by the electrode is zero. The disadvantage of the skull electrode is that
there is a resistance of order 10 kΩ between the wire and the brain, but only 100 Ω or so between the wire and the skin. As a result, the electrode becomes sensitive to voltages generated by movement
of the skin over the skull. We are likely to see transient voltages induced by grooming, chewing, and scratching.
Another solution is to use a second skull screw placed at a point on the brain where we know there is very little coherent activity. If we were to place two electrodes in this region, we would
observe a signal of no more than a few tens of microvolts. If such a region exists, we can place our reference electrode here and be confident of obtaining a good connection to the brain's
extracellular fluid, without the disturbance of local activity. So far as we can tell, the neighborhood of the bregma is such a region, and our collaborators have been using a screw near the bregma
as a reference electrode. We are, however, unable to find a paper to corroborate this claim.
The following figure shows fluctuations in the EEG that we call spindles. These were recorded in free-moving rats by Children's Hospital Boston with 1.2-mm diameter skull screws. Spindles are not
associated with epilepsy. We see them in healthy non-epileptic animals. One of the tasks of automatic seizure detection is to distinguish spindles from seizures.
Figure: Spindles in Rat Electroencephalograph. These patterns are seen in healthy animals as well as epileptic animals. The voltage scale is 400 μV/div and the time scale 400 ms/div.
The spindles produce a fluctuation of peak-to-peak amplitude 2 mV. The period of the fluctuations is of order 100 ms. Such signals are consistent with excitation of a pyramidal layer over an interval
of 50 ms by some other region of the cortex, followed by activation of the entire layer over an interval of 50 ms. Our calculations suggest that such coherent excitation and activation could produce
a signal with peak-to-peak amplitude up to 3 mV.
The following figure shows spikes in the signal recorded by Children's Hospital Boston from epileptic rats. These spikes are larger than the spindles, and they are asymmetric. They spike down in the
negative direction, but barely at all in the positive. These spikes occur when the animal is having a seizure. In these animals, the seizures were the result of impact damage to the cortex.
Figure: Epileptic Seizures in Rat Electroencephalography. These patterns are seen only in epileptic animals. The voltage scale is 400 μV/div and the time scale 400 ms/div.
These seizure spikes are consistent with the coherent excitation of pyramidal neurons by another region of the cortex, as in the spindles, but without the ensuing activation that creates the
symmetric spindle shape. The largest potential we observe in the spikes is of order −1.5 mV, which is consistent with our calculation for the coherent excitation of a layer of pyramidal neurons below
a screw electrode. If the layer does not activate during this excitation, there will be no activation current to offset the excitory current, so that the negative excursion will be greater, and there
will be no positive excursion to follow it.
The following figure shows sustained fluctuations of 60-100 Hz in the signal recorded by the Institute of Neurology in epileptic rats. We call these fluctuations hiss. We observe hiss in rats that
have been treated with tetanus toxin. Note the shorter time scale of 100 ms/div.
Figure: Hiss in Rat Electroencephalography. The voltage scale is 400 μV/div and the time scale 100 ms/div.
The peak-to-peak amplitude of the hiss is roughly 400 μV. Such activity is consistent with excitation of a pyramidal layer by itself. Over 5 ms the excitory current builds up, followed by activation
of the entire layer over another interval of 5 ms. During activation, the excitory current for the next cycle will start to build up. Thus we obtain oscillations of order 100 Hz. If we take the
Fourier transform of the hiss, we see a second and third harmonic of the fundamental 100 Hz oscillation. The power of the second harmonic at 200 Hz will be in proportion to the asymmetry of the oscillation. The power of the third harmonic will be in proportion to the sharpness of the edges of its extremes.
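We can illustrate this harmonic argument with synthetic waveforms (a sketch, not recorded hiss): an oscillation that is asymmetric about zero acquires a second harmonic, while one with flattened, sharp-edged extremes acquires a third.

import numpy as np

fs, f0, n = 1000, 100, 1000        # sample rate (Hz), fundamental (Hz), samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)     # pure 100-Hz oscillation
waves = {"asymmetric": x - 0.4 * x**2,   # pushed down on one side
         "sharp-edged": np.tanh(3 * x)}  # flattened, squared-off extremes
f = np.fft.rfftfreq(n, 1.0 / fs)
for name, y in waves.items():
    Y = np.abs(np.fft.rfft(y)) / n       # half-amplitude spectrum
    print(name, ["%d Hz: %.3f" % (h, Y[f == h][0]) for h in (100, 200, 300)])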
The maximum frequency of the hiss we observe in our epileptic rats is around 100 Hz. Our recordings have full sensitivity all the way up to 160 Hz. Many recent papers in neuroscience have presented what the authors believe are oscillations higher than 160 Hz in the EEG they record from human and animal subjects. They make these recordings with skin patch electrodes, screw electrodes, or micro-electrodes. Micro-electrodes can record not only EEG, but also individual activation potentials at the same time, giving rise to a brush of high frequency power on top of a spike or trough in
the EEG. So far as we can tell, however, EEG is generated entirely by post-synaptic excitory currents. These have a time constant of 10 ms. It is therefore impossible for EEG to contain oscillations
much higher than 100 Hz, even if the neural network is oscillating at a much higher frequency.
Papers such as Buzsaki et al. show evidence of local field potentials that contain bursts of 200 Hz. But they obtained these bursts after filtering to 50-250 Hz, and their bursts of 200 Hz are always accompanied by spikes in the EEG. Any such spike will excite the 50-250 Hz filter to ring at 50 Hz and 250 Hz, so it is not clear to us that the oscillations are present in the unfiltered data. In Bragin et al., the authors report observing oscillations in EEG recorded by micro-electrodes in rats. They obtain ripples when they filter the EEG to 250-500 Hz. In their Figure 2, they show a 100-ms negative pulse in the EEG signal, with high-frequency activity of amplitude roughly 100 μV near the bottom of the pulse. This suggests to us that excitation is mounting in the neighborhood of the micro-electrodes,
and that nearby neurons are activating, thereby generating 1-ms spikes in the extracellular potential. These spikes create a burst of high-frequency activity that stimulate a 250-500 Hz band-pass
filter to produce oscillations at 250 Hz and 500 Hz. The authors appear to believe that these oscillations are present in the original signal, when in fact they could simply be an artifact of their
band-pass filtering. Indeed, Benar et al. share our skepticism about the existence of high-frequency oscillations in EEG, demonstrating in a variety of ways that filtering can create the illusion
that such oscillations exist.
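This filtering artifact is easy to reproduce numerically. The following minimal sketch (our own illustration with an assumed sampling rate and spike shape, not data from any of the studies cited above) passes a lone spike through a 50-250 Hz band-pass filter:

```python
# A minimal sketch: band-pass filtering a single spike produces ringing that
# can be mistaken for an oscillation. Sampling rate and spike shape are
# assumptions for the demonstration.
import numpy as np
from scipy.signal import butter, lfilter

fs = 2000                                  # sampling rate, SPS (assumed)
n = 2000                                   # one second of signal
x = np.zeros(n)
x[n // 2] = 1.0                            # a lone 0.5-ms spike, no oscillation

# 4th-order Butterworth band-pass at 50-250 Hz, applied causally
b, a = butter(4, [50 / (fs / 2), 250 / (fs / 2)], btype="band")
y = lfilter(b, a, x)

# The output rings after the spike even though the input contained no
# oscillation; the ringing sits near the filter's corner frequencies.
print(np.max(np.abs(y[n // 2 + 20:])))     # still ringing ~10 ms after the spike
```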
The following extracellular field potential was recorded from the dentate gyrus of a rat following perforant pathway stimulation, but not during the stimulation. The electrode is an insulated 75-μm
steel wire cut off at the end to expose the conductor (see Norwood et al.).
Figure: Dentate Gyrus Spikes After Perforant Pathway Stimulation. Recorded at Philipps University, Marburg (M1422757041, 1831 s, No4) with a Subcutaneous Transmitter (A3028R) sampling at 512 SPS and
providing bandwidth 1-160 Hz.
In Andersen et al., the authors describe how a "perforant pathway volley", caused by electrical stimulation of the perforant pathway in the entorhinal area, generates sharp, short-lived, negative
spikes of up to 10 mV in the extracellular fluid at the same depth as the granule cell somata in the dentate gyrus. The authors argue that the initial negative spike is due to circulating, excitory
post-synaptic current, while the termination of the spike is due to circulating inhibitory post-synaptic current. As we have seen, however, excitory post-synaptic current flowing out of the somata
will cause a negative pulse in extracellular potential at the depth of the somata. Conversely, if inhibitory current flowed into the somata, it too would produce a negative pulse. The pulses observed
above are due to the circulation of activation current when an entire population of granule cells fire at once in response to a perforant pathway volley. The activation propagates up the dendrites
from the somata, which causes current to flow out through the dendrite membranes, through the extracellular fluid, and back to the somata. Thus we have a negative spike on the extracellular field
potential in the layer containing the granule somata, and a smaller, positive spike at the tips of the dendrites. In our section on excitory current we used the equation we derived for dipole
potential to predict that an entire layer of pyramidal neurons activating within 100 μs would generate a positive pulse at the brain surface of order 15 mV. The same equation predicts a negative
pulse in the layer of the pyramidal somata. Here we see simultaneous firing creates a negative pulse of 10 mV in the somata layer.
The EEG signal is generated by currents circulating in layers of pyramidal neurons of the cortex. When tens of thousands of such neurons act coherently, they can generate signals of several
millivolts. A symmetric fluctuation of 1 mV with frequency 10 Hz is consistent with coherent excitation of one cortical region by another. A sequence of −1 mV spikes with frequency 10 Hz is
consistent with the failure of the local neurons to activate after repeated excitation from another cortical region. Hiss of amplitude 400 μV and frequency 100 Hz is consistent with a cortical region
exciting itself.
When a layer of pyramidal neurons is not acting coherently, we still expect its excitory current will not be zero, so our baseline EEG signal will consist of fluctuations in the baseline excitory
current. What we observe in EEG is a baseline signal of order 30 μV, which suggests that the fluctuations in excitory current during periods of little activity are of order 2% of the maximum excitory
current during fully-coherent excitation.
Any oscillation contained in EEG will be suppressed by the time-constant of the post-synaptic excitation process. This time constant is of order 10 ms. Even if a network of neurons were oscillating
at 500 Hz, this oscillation would not appear in the EEG signal. The highest frequency oscillation we can expect in EEG will be of order 100 Hz. Our automated event detection can find high frequency
oscillations easily. In thousands of hours of EEG we have examined by eye, and hundreds of thousands of hours we have been through with automated detection, we have never seen an oscillation higher than 120
Hz. So far as we can tell, there exists no mechanism by which an EEG oscillation higher than 200 Hz could take place, neither in a group of neurons acting coherently, nor in a single neuron acting
alone. Reports of high frequency oscillations (HFOs) at 500 Hz are, we claim, artifacts of band-pass filtering, or individual activation currents detected by a high-impedance electrode, or they are
added to the EEG by distortion in the recording system.
If our investigation has overlooked one or more sources of the EEG signal, we can expect to observe signals in our EEG recordings that are inconsistent with the sources that we have identified. Until
then, however, we will trust in the accuracy of our analysis.
Screw electrodes are suitable for recording EEG. Smaller electrodes will detect EEG also, and perhaps the activation of individual neurons, although these activations will produce signals of only a
few tens of microvolts. In practice, we find that a bare wire held in place with a screw is more practical and picks up a stronger signal. Because the wire is not connected to the screw, we can turn
the screw to fasten it in place without twisting the wire. The wire itself can protrude a little farther down from the skull, where the EEG signal has greater amplitude.
Paper Title
Prediction Of Compressive Strength Of Light Weight Fiber Reinforced Concrete Using Artificial Neural Networks
Fiber reinforced concrete (FRC) is a type of concrete that contains discontinuous fibers distributed randomly throughout the concrete block. Lightweight concretes can be produced with an oven-dry density
range of approximately 300 to a maximum of 2000 kg/m3. In this paper, the Artificial Neural Networks are utilized to predict the effect of the addition of steel nails as fibers on the compressive
strength of lightweight concrete with crushed bricks used as coarse aggregate. The study involves testing of cubic concrete samples with various mixing proportions and water cement ratios. The
results showed that the highest value of the compressive strength of 7 days age for (1:2:4) proportion is obtained with fiber adding percentage 5% with w/c ratio 50% for fiber size 1". While for 1.5"
fiber size, the 10% fiber addition with w/c of 50% has the greatest value of concrete compressive strength. It is also shown that the highest value of the compressive strength of 28 days for (1:2:4)
proportion is obtained with fiber adding percentage 10% with w/c ratio 60% for fiber size 1". While for 1.5" fiber size, the 10% fiber addition with w/c of 50% has the greatest value of concrete
compressive strength. It is concluded that the highest value of the compressive strength of 7 days for (1:1.5:3) proportion is obtained with fiber adding percentage 10% with w/c ratio 60% for fiber
size 1". While for 1.5" fiber size, the 10% fiber addition with w/c of 50% has the greatest value of concrete compressive strength. Also, it is found that the highest value of the compressive
strength of 28 days age for (1:1.5:3) mixing is obtained with fiber adding percentage of 10% and w/c equal 50%. The results of prediction showed that for the mixing proportion (1:1.5:3), the
compressive strength decreases with increasing of fiber addition and the 1" nail size gives higher values of compressive strength than 1.5" size. Also the prediction results showed that for the
mixing proportion (1:2:4), the compressive strength decreases with increasing fiber addition ratio and the 1.5" nail size gives higher compressive strength than 1".
Keywords - Fiber Reinforced Concrete, Prediction of Compressive Strength, Neural Networks, Reinforced Concrete, Lightweight
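As an illustration of the prediction setup described in the abstract (a hedged sketch only: the paper does not state its network architecture, and the feature layout, strength values and scikit-learn implementation below are assumptions), such a model could be set up as:

```python
# Hedged sketch, not the paper's model: predicting compressive strength from
# mix parameters with a small neural network. All values are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature layout per cube sample:
# [mix code (0 = 1:2:4, 1 = 1:1.5:3), w/c ratio, fiber %, fiber length (in), age (days)]
X = np.array([
    [0, 0.50,  5.0, 1.0,  7],
    [0, 0.50, 10.0, 1.5,  7],
    [1, 0.60, 10.0, 1.0, 28],
])
y = np.array([14.0, 15.5, 21.0])   # placeholder strengths in MPa, not the paper's data

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[0, 0.50, 7.5, 1.0, 28]]))   # strength estimate for a new mix
```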
Linked List
A Linked List is a linear data structure, which consists of nodes which point from one to the next to the next.
• A singly linked list has nodes which only point forward to the next node.
• A doubly linked list has nodes which also point back to the previous node.
• A cyclical linked list is one in which the last node's "next" points to the first node.
Operations and their time complexity (big-O)
There doesn't seem to be consistent terminology across languages for these operations, so I'm using the names I find most intuitive.
• Prepend:
□ Adds an item to the beginning of the list
□ O(1): Constant time
• RemoveFirst:
□ Removes the first item from the list, and makes the second (if any) the first
□ O(1): Constant time
• Append:
□ Adds an item to the end of the list
□ O(n) in a naive version which traverses the entire list to reach the end
□ O(1) in an optimized version which maintains a reference to the last item
• RemoveLast:
□ Removes the last item from the list
□ O(n) in a singly linked list, even with a tail reference, because the node before the last must be found by traversal
□ O(1) in a doubly linked list, where the last node's "previous" pointer gives direct access to the new last node
• Reverse:
□ Reverses the order of the items in the list (see the sketch below)
□ O(n) because it must traverse the entire list once to reverse it
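A minimal Python sketch covering these operations (the names are my own; RemoveLast is omitted here because, as noted above, a singly linked list cannot do it in constant time):

```python
# Minimal singly linked list with a tail reference, so append is O(1).
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def prepend(self, value):            # O(1)
        node = Node(value)
        node.next = self.head
        self.head = node
        if self.tail is None:
            self.tail = node

    def remove_first(self):              # O(1)
        if self.head is None:
            raise IndexError("empty list")
        node = self.head
        self.head = node.next
        if self.head is None:
            self.tail = None
        return node.value

    def append(self, value):             # O(1) thanks to the tail reference
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def reverse(self):                   # O(n): one pass, flipping pointers
        prev, cur = None, self.head
        self.tail = self.head
        while cur:
            cur.next, prev, cur = prev, cur, cur.next
        self.head = prev
```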
How to build a spectrometer - theory
Spectroscopy is a non-invasive technique and one of the most powerful tools available to study tissues, plasmas and materials. This article describes how to model a lens-grating-lens (LGL)
spectrometer using paraxial elements, addressing the design process from the required parameters to the performance evaluation with Advanced OpticStudio features such as Multiple Configurations,
Merit Functions and ZPL macros.
Authored By Lorenz Martin
Optical spectrometers are instruments to measure the intensity of light as a function of wavelength. There is a variety of generic setups for spectrometers. This article features the
lens-grating-lens (LGL) spectrometer. After setting up the spectrometer in OpticStudio, its critical design parameters are identified and discussed.
Basic setup of an LGL spectrometer
The basic setup of an LGL spectrometer is as follows:
The polychromatic light enters the spectrometer through the entrance pinhole resulting in a divergent beam. The collimator lens is then used to generate parallel rays. The following transmission
diffraction grating is the core element of the spectrometer. It changes the direction of the light beam as a function of its wavelength (i.e. its colour). The focusing lens, finally, focuses the
light beams on the detector. Every wavelength has a different position on the detector, and by measuring the intensity as a function of position on the detector the spectrum of the light is obtained.
As a first approach, this setup is modelled in OpticStudio using paraxial elements. Doing so allows to ignore aberration and optimization issues, which are discussed in the Knowledgebase article "How
to build a spectrometer – implementation". On the other hand, our LGL spectrometer is suitable for understanding the basic physical concepts of a spectrometer and its resolution.
Modelling a paraxial LGL spectrometer in OpticStudio
System setup
Let’s start with setting the basic parameters of our design in the System Explorer. Set the Entrance Pupil Diameter as follows (we will see later how the aperture affects the performance of the spectrometer):
With our spectrometer, we want to analyse visible light in the range from λ[min] = 400 nm to λ[max] = 700 nm wavelength, resulting in a bandwidth of Δλ = 300 nm. Hence, we set three wavelengths, two
at the edge of the spectrum and the central wavelength λ[0] at 550 nm. The latter will also be the primary wavelength:
Collimator lens
This done, we can proceed with the first element in the spectrometer and add the first lines in the lens file. We are assuming that the light originates from a point source (corresponding to a
pinhole). Using a paraxial lens with a focal length of 30 mm positioned 30 mm behind the pinhole will produce a collimated beam. A second surface of 30 mm thickness is inserted to account for the
distance between collimating lens and diffraction grating:
The 3D Layout of our design will look like this:
Diffraction grating
The next element in the spectrometer is the transmission diffraction grating. Let’s have a closer look at the grating before implementing it in OpticStudio, since this is the crucial element of the
The grating is essentially a stop with several slits arranged in parallel and with equal distances between them. For the sake of simplicity, we first have a look at a grating with only two slits (top
The incident beam is collimated, so all the rays in the beam are parallel to each other. If we consider the two rays passing through the two slits (red arrows), we can calculate the path difference,
Δs, between these two rays (blue section) as a function of the distance between the two slits, d, the angle of incidence, α, and the diffraction angle, β:
Δs = d · (sin α + sin β)
We want this path difference to be one wavelength in order to have constructive interference between the two rays:
Δs = λ
The two previous equations enable us to calculate the diffraction angle:
sin β = λ/d − sin α (equation 1)
This formula describes how polychromatic light is split into its wavelengths in a spectrometer. As we can see, the diffraction angle only depends on the wavelength (for given α and d).
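As a quick numerical check (a short sketch added here, separate from the OpticStudio model), equation 1 reproduces the design angles of this article:

```python
# Numerical check of equation 1 with d = 0.5 um and alpha = 33.367 deg.
import math

d = 0.5e-6                                  # slit distance, m
alpha = math.radians(33.367)                # angle of incidence

def beta(wavelength_m):
    """Diffraction angle in degrees from sin(beta) = lambda/d - sin(alpha)."""
    return math.degrees(math.asin(wavelength_m / d - math.sin(alpha)))

print(beta(400e-9))                         # ~14.48 deg
print(beta(550e-9))                         # ~33.37 deg (central wavelength)
print(beta(700e-9))                         # ~58.21 deg
```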
The concept of the double slit can then be extended to a grid with many slits, which will concentrate more rays of a specific wavelength in the direction of the diffraction angle and thus enhance the
diffraction efficiency.
There is much more to say about diffraction gratings and their features such as efficiency, blazing angle, etc. This information can be found in the Knowledgebase article "Simulating diffraction
efficiency of surface-relief grating using the RCWA method". We just keep in mind that a diffraction grating is characterized by its distance between two adjacent slits and that it diverts the
collimated light beam as a function of its wavelength.
When implementing the diffraction grating in the spectrometer, the angle of incidence is typically chosen so that it is equal to the diffraction angle for the central wavelength, i.e.
α = β(λ[0]),
and using equation 1
sin α = λ[0] / (2d).
In our example we assume d = 0.5 µm and get α = 33.367°. With that in mind, we set up the diffraction grating in OpticStudio. First, we introduce a coordinate break in our lens file and set the Tilt
About X to 33.367° in order to tilt the rays by the angle of incidence. The next line to add is the Diffraction Grating. Set the Lines/µm (which is the inverse of d) to 2, and the diffraction order
to –1. Another Coordinate Break is needed to account for the diffraction angle. Here, we set a Chief Ray solve for Tilt About X to have the coordinates automatically follow the primary wavelength:
Focusing lens and detector
The last group of elements in the spectrometer is the focusing lens and detector. We add four lines to our lens file being space between grating and focusing lens (30 mm), the paraxial focusing lens
(focal length f[f] = 30 mm), space accounting for the focal length and the detector plane, respectively:
Our 3D Layout will now look like this, once you have adjusted the settings as shown here:
One last setting concerns the rays in the 3D Layout marked with the red circle in the previous image where OpticStudio draws too many lines. They can be eliminated by setting the properties of
surface 6 in the lens file:
Now we are done with the design of our paraxial LGL spectrometer and we can open a Standard Spot Diagram to view the spot size in the image plane (i.e. on the detector) at the three wavelengths we
chose initially:
It is seen that the spot size is infinitesimally small which is only possible because we chose paraxial lenses and used geometric ray tracing. In reality, the spots are larger due to diffractive
effects. That’s what we will address in the last part of this article. But first we have a closer look at the focusing lens and at the detector to understand how they must be dimensioned.
Spectrometer resolution
Detector width
The width of the detector is defined through three parameters: The bandwidth of the spectrometer, Δλ = λ[max] – λ[min], the slit distance of the grating, d, and the focal length f[f] of the focusing
lens. Whereas Δλ and d are typically prerequisites, the focusing lens can be chosen to match the geometry of the detector.
Taking the minimum and maximum wavelength of the spectrometer (in our example 400 nm and 700 nm, respectively), we can calculate the minimum and maximum angle of diffraction using equation 1. The
result is β[min] = 14.48° and β[max] = 58.21°, which can be verified in OpticStudio in the Single Ray Trace data, tracing the marginal ray at the minimum and maximum wavelength:
When the rays pass through the focusing lens under minimum and maximum angle, we have the following situation:
Where f[f] is the focal length of the focusing lens and L the detector width. Consequently, we can calculate the detector width using
L = f[f] · (tan(β[max] − β[0]) − tan(β[min] − β[0])) (equation 2)
In our example we get L = 24.16 mm. This result can again be verified in OpticStudio. A simple and approximative way is to measure it directly with the Measure tool in the 3D Layout:
A more sophisticated and precise way is to use operands. For this purpose, we open the Merit Function Editor, key in the following lines and update the window (red arrow):
With the REAY operand we get the real ray’s y-coordinate, in our example on surface 9 (the detector). We select the values for wave 1 and 3, which correspond to 400 nm and 700 nm wavelength,
respectively. The DIFF operand is used to calculate the difference between the two y-coordinates. The resulting value is now exactly what we calculated analytically before.
Let’s recall the essential outcome of the previous considerations: Once the bandwidth of a spectrometer is defined, the diffraction grating yields the minimum and maximum refraction angle (equation
1). The minimum and maximum diffraction angle, in turn, with the focal length of the focusing lens f[f] defines the detector width (equation 2). Large detectors call for a large f[f] and vice versa.
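Continuing the sketch above, equation 2 yields the detector width for the focal length chosen here (the beta function is the one defined in the previous snippet):

```python
# Equation 2: the focusing lens axis follows the central wavelength, so the
# ray angles are measured relative to beta_0.
f_f = 30.0                                  # focal length of the focusing lens, mm
beta_0 = beta(550e-9)

L = f_f * (math.tan(math.radians(beta(700e-9) - beta_0))
           - math.tan(math.radians(beta(400e-9) - beta_0)))
print(L)                                    # ~24.16 mm
```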
Remapping of the wavelengths on the detector
When we have a look at the Spot Diagram, we notice that the spots of the three wavelengths are not uniformly distributed on the detector surface, even though being uniformly distributed in the
wavelength range. This effect comes from the sine in equation 1 and must be accounted for in spectrometers by remapping the position on the detector to the corresponding wavelength.
We can calculate the mapping function (being the inverse of the remapping function) in OpticStudio by sweeping through the wavelengths of the spectrometer bandwidth and record the position of the ray
on the detector. An efficient way to do so is to use a Zemax Programming Language (ZPL) Macro. Download the attached macro Mapping_Function_Resolution.ZPL and save it in the folder Zemax\Macros. Open
it and have a look at the structure. The macro first gets the system wavelengths (operand WAVL) and then computes the y-coordinate of the ray on the detector (operand RAYY) while looping through the
wavelengths using multiple configurations. The resulting plot after execution shows the mapping function:
Spectral resolution
The macro Mapping_Function_Resolution.ZPL produces a second plot showing the spectral resolution of the spectrometer R, i.e., the fraction of bandwidth, δλ, per unit width of the detector, ΔL:
R = δλ / ΔL
The spectral resolution as it is defined here is the inverse of the derivative of the mapping function. For this reason, it is computed in the same macro:
The lower the spectral resolution, the less bandwidth we have per unit width of the detector. Multiplying the spectral resolution with the pixel width of the detector finally yields a measure for the
spectrometer resolution, being an important characteristic value of every spectrometer.
We could enhance the spectral resolution of the spectrometer by selecting a larger focal length for the focusing lens and thus spreading the spectrum over a larger detector width, according to
equation 2. However, this strategy wouldn’t work out. We must also consider that the spot size on the detector is limited by diffraction, introducing new constraints for spectrometer design.
Diffraction limit
A spectrometer can be considered as an optical system mapping an object (the entrance pinhole, i.e. a point source) to the image plane (the detector). Using rays to calculate the propagation of light
through the optical system as OpticStudio does is very efficient. But the result we get with ray tracing does not fully correspond to reality. Instead of an infinitesimally small point (corresponding
to a sharp image) the image of the point source will be blurred. This effect is due to diffraction and limits the resolution of all optical systems. The way an optical system like the spectrometer
maps a point source into the blurred image is referred to as point spread function.
OpticStudio has a variety of tools to take diffraction into account. Here we consider the Airy disk (being the diffraction limited spot size) in the Spot Diagram where it is plotted along with its
numerical value in the plot comments:
The Airy disk is also used for the Rayleigh criterion. The Rayleigh criterion states that the images of two-point sources can be discriminated as soon as the distance between them is larger than the
radius of their Airy disk. In a spectrometer, the distance between two point sources corresponds to the fraction of bandwidth, δλ, as introduced in the previous section.
The Rayleigh criterion has a direct impact on the choice of the pixel size of the detector. It is useless to have pixels smaller than half the Airy disk radius since they would oversample the
diffraction-limited resolution of the spectrometer.
The formula to calculate the radius of the Airy disk is
r[Airy] = 1.22 · λ · F# (equation 3),
where F# is the working f-number, which corresponds to the focal length of the focusing lens, f[f], divided by the system aperture. The consequences of this relation are the following:
1. The diffraction-limited resolution of the spectrometer varies with wavelength. This effect cannot be eliminated with the optical design.
2. Choosing a large focal length for the focusing lens, f[f], will increase the f-number which, in turn, increases the size of the Airy disk. This effect goes hand in hand with the detector width L
as discussed in the previous section (equation 2): the detector width will increase as well. In the end, we only get larger Airy disks on a larger detector and do not enhance the spectrometer resolution (see the sketch after this list).
3. Choosing a large system aperture will decrease the f-number which reduces the size of the Airy disk.
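The following continuation of the sketch makes the trade-off in consequence 2 concrete by evaluating equation 3 for the central wavelength; the aperture values are arbitrary examples:

```python
# Equation 3 for 550 nm: a larger aperture lowers the f-number and shrinks
# the Airy disk. f_f is the 30 mm focal length from the previous snippet.
wavelength_mm = 550e-6                      # 550 nm expressed in mm
for aperture_mm in (2.0, 4.0, 8.0):
    f_number = f_f / aperture_mm
    r_airy_mm = 1.22 * wavelength_mm * f_number
    print(aperture_mm, r_airy_mm * 1000)    # Airy radius in micrometres
```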
Choice of system parameters
Assuming that the bandwidth and the grating of our spectrometer are pre-set, we have two parameters we can tune to get the most out of our spectrometer:
System aperture
The system aperture has a direct impact on the size of the Airy disc, i.e. the diffraction-limited resolution of our spectrometer (equation 3). It is a good strategy to choose the aperture as large
as possible since this yields small Airy disks.
Focusing lens
The choice of the focal length of the focusing lens, f[f], is more delicate. The most important point is to illuminate the detector entirely (equation 2). If the detector is small, f[f] is also small and
we get a more compact spectrometer. On the other hand, small focal lengths entail more aberrations. Consequently, the detector should be chosen as large as possible. The diffraction-limited
resolution of the spectrometer is not affected by the focusing lens, since the size of the Airy disk scales with the detector width.
High-Resolution Discharge Forecasting for Snowmelt and Rainfall Mixed Events
Department of Geoinformatics, Faculty of Electronics, Telecommunication and Informatics, Gdansk University of Technology, Gabriela Narutowicza 11/12, 80-233 Gdansk, Poland
Author to whom correspondence should be addressed.
Submission received: 23 November 2017 / Revised: 22 December 2017 / Accepted: 3 January 2018 / Published: 10 January 2018
Discharge events induced by mixture of snowmelt and rainfall are strongly nonlinear due to consequences of rain-on-snow phenomena and snowmelt dependence on energy balance. However, they received
relatively little attention, especially in high-resolution discharge forecasting. In this study, we use Random Forests models for 24 h discharge forecasting in 1 h resolution in a 105.9 km²
urbanized catchment in NE Poland: the Biala River. The forcing data are delivered by the Weather Research and Forecasting (WRF) model in 1 h temporal and 4 × 4 km spatial resolutions. The discharge
forecasting models are set in two scenarios, with snowmelt and rainfall predictors and with rainfall only predictors, in order to highlight the effect of snowmelt on the results (both scenarios also use pre-forecast
discharge based predictors). We show that the inclusion of snowmelt decreases the forecast errors for longer forecast lead times. Moreover, the importance of discharge based predictors is higher in the
rainfall only models than in the snowmelt and rainfall models. We conclude that the role of snowmelt for discharge forecasting in mixed snowmelt and rainfall environments is in accounting for
nonlinear physical processes, such as initial wetting and rain on snow, which cannot be properly modelled by rainfall only.
1. Introduction
Discharge events induced by mixture of snowmelt and rainfall are strongly nonlinear due to consequences of rain-on-snow phenomena and snowmelt dependence on energy balance. Such events should be of
major consideration especially in urban catchments, where they can promote flooding []. The hydrological processes related to snowmelt and rainfall mixed events, mostly concerning the rain-on-snow phenomena, were simulated or analysed recently in a number of studies []. However, snowmelt and rainfall mixed events received relatively little attention in high-resolution discharge forecasting, especially in study areas where snow processes tend to be disregarded.
Discharge forecasts are often forced using numerical weather prediction (NWP) data in regional and local scale sites all over the world. Rogelis and Werner [] used Weather Research and Forecasting (WRF) model forecasting ensembles for timely prediction of flash floods in mountain areas in tropical regions. Li et al. [] coupled WRF with the hydrological Liuxihe model to extend the flood forecast lead time. A different approach was proposed by Tao et al. [], where an Integrated Precipitation and Hydrology Experiment (IPHEx-IOP) was performed to characterize flood predictability dedicated to complex terrains of the Southern Appalachians. This research was carried out using the Coupled surface–groundwater Hydrology Model (DCHM) forced by hourly precipitation fields and other forecasts produced by the NASA-Unified WRF (NU-WRF) model. Some studies focused on the real-time applicability of the system, for instance, an operational framework driven by meteorological forecasts for the Kavango River basin in Africa [] and a neuro-fuzzy system for flash flood warnings for the mountainous region of Rio de Janeiro state in Brazil []. These numerous studies show how NWP can be used for discharge forecasts; however, most of the attention is paid to the precipitation fields, which appear to be the major driver of discharge events.
On the other hand, NWP can provide a far more complete representation of data, including snow related variables such as snow cover, depth, water equivalent, albedo or melt. These variables can be used for enhancing the modelling of hydrological processes, such as floods, with snow processes. Numerous configurations of microphysical schemes allow for looking at the snow processes with various levels of complexity. Among the most often used models available in WRF that include snowpack processes are the Community Land Model (CLM) [], the Rapid Update Cycle (RUC) [] or Noah []. According to the literature, the models vary in the performance of snow process simulation; however, they all provide comparable, operationally valid output [].
WRF with the aforementioned microphysical configurations was used in cold environments: Förster et al. [] conducted catchment-scale simulations, where snow models were integrated into the hydrological modelling system PANTA RHEI forced by WRF meteorological forecasts in the Harz Mountains, Germany, whereas Wu et al. [] and Zhao et al. [] used WRF to force snowmelt runoff in snow dominated, high mountain ranges of Central Asia. The results indicated good skill of WRF derived snow related variables for hydrological modelling, yet the studies were conducted in snow dominated, not urbanized, environments, which do not exhibit extensive snowmelt and rainfall mixing.
Discharge response to snowmelt and rain-on-snow events in urbanized catchments is different than in other areas. Snowmelt and rainfall mixed events in urban catchments intensify discharge as urbanization increases []. Moreover, the onset of spring snowmelt discharge is quicker in urbanized than in non-urbanized catchments []. The complexity of urbanized catchments' response to snow related processes is depicted by the fact that, in such catchments, higher simulation performance (expressed in Nash–Sutcliffe efficiency) can be obtained when hydrological simulations are conducted with a full energy balance model in reference to the degree-day method []. Despite these contrasts between urbanized and non-urbanized catchments, there have been no attempts to predict mixed snowmelt and rainfall discharge events in these areas using forcing data from weather forecasting models such as WRF, although such experiments could highlight feedback between rainfall and snow processes that should be taken into account for modelling in other areas as well.
The aim of this study is to highlight the effect of neglecting snowmelt in high-resolution discharge forecasts in mixed snowmelt and rainfall catchments. We conduct the study in an urbanized catchment where the influence of snow processes on discharge is reinforced. Our aim is realized by simple post-processing of the WRF output in order to provide input data for a machine learning algorithm that forecasts discharge and highlights the predictors' importance.
2. Methods
2.1. Discharge Forecasting
Discharge at the catchment outlet is forecasted using Random Forests, a machine learning algorithm used for classification and regression []. The main principle of Random Forests is to find the optimal division of the predictors' hyperspace for discrimination or quantification of a predicted variable. This is achieved by growing a tree with a random subset of features and finding the best splitting point for each of the selected features. The random trees are then grown multiple times and their output is combined by voting for classification or averaging for regression. Use of Random Forests in this study was motivated by the following reasons:
• A catchment response, expressed in discharge, to meteorological and climatological forcing is nonlinear. Hence, a nonlinear model is required to represent this system.
• Our approach uses big models, with multiple predictors that may vary in their significance for the forecast output. Hence, an ensemble model resistant to overfitting is required to regress the discharge.
• Our aim was also to gain insight into the forecasting model behaviour, for which the predictors' importance estimation feature of Random Forests can be used.
The forecasting models predict discharge ($q_t^{sr}$ or $q_t^{r}$) [m³ s⁻¹] at the catchment outlet at forecast time $t \in \langle 1; 24 \rangle$ [h] based on catchment average snowmelt ($s_l$) and rainfall ($r_l$) fluxes [mm/h] at lag time $l \in \langle -24; t \rangle$ [h], mean discharge ($q_m$) [m³ s⁻¹] in the 25 h preceding the forecast, i.e., mean discharge between $l \ge -24$ and $l \le 0$, and $q_d$ [m³ s⁻¹] being the difference between $q_m$ and the discharge at $l = 0$ h (the last available discharge before the forecast):

$$q_t^{sr} = f(s_{-24}, \ldots, s_t, r_{-24}, \ldots, r_t, q_m, q_d). \tag{1}$$

The full model (Equation (1)) is further referred to as the snowmelt and rainfall model. To assess the effect of snowmelt flux inclusion in the model, we also use a forecasting scenario with rainfall and discharge predictors only (further referred to as the rainfall only model):

$$q_t^{r} = f(r_{-24}, \ldots, r_t, q_m, q_d). \tag{2}$$
Effectively, the number of predictors depends on the forecast time $t$: the snowmelt and rainfall model (Equation (1)) will consist of 54–100 predictors and the rainfall only model (Equation (2)) will consist of 28–51 predictors. This relatively big model design allows us to look at the predictors' importance at various lag times $l$ and at various forecast times $t$. We assess the importance with the increase in mean squared error (IncMSE) [%] measure that is calculated by the Random Forests algorithm for each predictor. IncMSE is calculated as the difference between the out-of-bag (OOB) mean squared error (MSE) of a forest with a permuted predictor and with the original predictor; next, this difference is normalized to the MSE of the forest with the original predictor and expressed as a percentage. The higher the IncMSE, the more important the predictor is. The advantage of this approach is that it gives an unbiased estimation of MSE and can be easily interpreted in terms of the importance of each predictor for the model [].
The training period for the model is 10 February 2013 to 30 May 2013 and the validation period is 24 October 2012 to 9 February 2013. This winter was chosen because it had the longest and most fluctuating snow cover among the period with available 1 h resolution discharge data (i.e., 2010–2016) []. Both in the training and the validation period rainfall only, snowmelt only and mixed snowmelt and rainfall events are observed (Figure 1). The forecast temporal resolution is 1 h and for each time step discharge is forecasted from $t = 1$ h to $t = 24$ h. The only observations used as predictors are the preceding discharge records: $q_m$ and $q_d$. Snowmelt and rainfall data for both positive and negative time lags ($l$) are derived from WRF output, i.e., they are meteorological forecasting results from the previous or forthcoming 24 h. Perhaps, in an operational model, the preceding 24 h forcing data could be obtained from interpolation of observations and only the forthcoming forcing data would be forecasted. However, in this study, our intention was to keep one source of predictors for an unequivocal analysis of their importance.
The discharge forecast errors are quantified using the root mean square error [m³ s⁻¹]:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(q_i - \hat{q}_i\right)^2},$$

and the mean error [m³ s⁻¹]:

$$\mathrm{ME} = \frac{1}{N}\sum_{i=1}^{N}\left(q_i - \hat{q}_i\right),$$

where $q_i$ and $\hat{q}_i$ are the observed and forecasted discharge [m³ s⁻¹] at time $i$ ranging from 1 to $N$ [h]. The RMSE quantifies the error magnitude, and the optimal RMSE is 0 m³ s⁻¹. The ME informs whether the observed discharge is underestimated (negative values) or overestimated (positive values), with the optimal value being 0 m³ s⁻¹.
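For completeness, both measures are straightforward to compute; the following minimal sketch (an illustration added here, not part of the original analysis) implements them:

```python
# Minimal sketch: RMSE and ME exactly as defined above.
import numpy as np

def rmse(q_obs, q_hat):
    q_obs, q_hat = np.asarray(q_obs), np.asarray(q_hat)
    return float(np.sqrt(np.mean((q_obs - q_hat) ** 2)))

def me(q_obs, q_hat):
    q_obs, q_hat = np.asarray(q_obs), np.asarray(q_hat)
    return float(np.mean(q_obs - q_hat))
```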
2.2. The WRF Model Setup and Output
For the purpose of the study, an operational setup of a WRF model instance that covers the Central Europe area was prepared. The model operates in two computational domains, namely a parent domain with a horizontal resolution of 12 × 12 km (d01), which provides the lateral boundary conditions, and a nested domain of 4 × 4 km resolution, which provides the model data used as the actual input in discharge forecasting. The nested (d02) domain consists of nearly 85 thousand discrete cells (298 × 285) with its geographical range presented in Figure 2. The model was started every day from the Global Forecast System (GFS) reanalysis dataset, producing 24 h simulations. In order to model sub-scale surface physical properties, the Noah land surface model scheme [] was used, and the Kain–Fritsch cumulus scheme was used in order to account for implicit parameterization of sub-scale precipitation events. In this study, we used the WRF hourly total rainfall ($r_l$) and snowmelt ($s_l$) fields as the forecasting models' (Equations (1) and (2)) predictors. The WRF model was set up to use the Ferrier operational microphysics scheme and the Mellor–Yamada–Janjic planetary boundary layer scheme []. In order to properly assess the convection parameterization, the utilized WRF instance provides a vertical grid distribution (VGD) of 39 pressure levels over the analysed computing domains with the top model layer at 5000 Pa. A proper VGD allows for explicit solving of advection schemes and better assessment of cloud microphysical processes and rainfall events. In order to account for the radiation budget, the model uses longwave and shortwave radiation schemes that are called every 30 min of the simulation (the default value in WRF simulations). In order to increase the efficiency of the model, we used the Rapid Radiative Transfer Model longwave radiation scheme [] and the simple downward integration shortwave radiation scheme [].
We estimate the accuracy of the WRF simulations using the observed precipitation and snow depth provided by the National Oceanic and Atmospheric Administration (NOAA) for a station located in the study area []. One run of the 24 h forecast using the WRF model in this configuration took about 80 min; however, including the data import and export time, it increased to 90–100 min.
2.3. Experimental Setup
Our experimental setup is rather simple due to using the machine learning ("black-box") approach as a key component (Figure 3).
The computations in our experiment were split across three computers: (1) WRF simulations were conducted on our WRF dedicated Ubuntu 10.04 workstation with a four core 3.30 GHz i5 processor and 8 GB RAM; (2) WRF output grid data were post-processed into tabular time series on a high-performance computing (HPC) node with a twelve core 2.3 GHz Xeon E5 processor and 256 GB RAM; (3) the random forest models for discharge predictions were trained and operated on a desktop computer using the R environment [] and the randomForest package []. Note that an operational implementation of such a system would not require three computers, including an HPC; one workstation would be enough to achieve operational status. Our setup was motivated by the requirements of the prototyping stage, in which we had to optimise our work by testing and repeating the experiment several times.
The post-processing of WRF output grid data into tabular time series was required in order to provide a proper training data-set format for the Random Forests models. In post-processing, we calculated a catchment average of WRF snowmelt and rainfall for a given hour, each of the preceding 24 h and each of the forthcoming 24 h (i.e., we calculated the $s_l$ and $r_l$ predictors for each $l$). At this step, we also included in the training data-set the $q_m$ and $q_d$ predictors, which were calculated (see Section 2.1) from the 1 h resolution discharge time series from the gauging station Zawady located at the Biala River catchment outlet (Figure 4) [].
The Random Forests models in the regression mode were set to grow 500 trees and to calculate importance for the predictors (i.e., the IncMSE measure). The OOB data were permuted once (default) while calculating importance. We did not use any predictor selection technique prior to Random Forests training because the importance measure effectively shows which predictors are important (and the aim of this study was to look at the importance pattern of all predictors).
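The following minimal sketch illustrates this setup. It is an illustration only: the original models were built with the R randomForest package, whereas the sketch below uses Python's scikit-learn as an analogue, and the column names are hypothetical:

```python
# Illustrative sketch, not the original R code: assembling the lagged
# predictor table of Equation (1) and training a 500-tree regression forest
# with a permutation-based importance measure of the IncMSE kind.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

def make_predictors(df, t):
    """Build predictors for forecast time t [h] from an hourly,
    datetime-indexed frame with 'snowmelt', 'rainfall', 'discharge' columns."""
    X = {}
    for lag in range(-24, t + 1):                 # s_l and r_l for l in <-24; t>
        X[f"s_{lag}"] = df["snowmelt"].shift(-lag)
        X[f"r_{lag}"] = df["rainfall"].shift(-lag)
    q_m = df["discharge"].rolling(25).mean()      # mean discharge over l in <-24; 0>
    X["q_m"] = q_m
    X["q_d"] = q_m - df["discharge"]              # q_d = q_m - q(l = 0)
    y = df["discharge"].shift(-t)                 # target: discharge at l = t
    return pd.DataFrame(X), y

# X, y = make_predictors(df, t=6)
# data = pd.concat([X, y.rename("q_t")], axis=1).dropna()
# rf = RandomForestRegressor(n_estimators=500, oob_score=True)
# rf.fit(data.drop(columns="q_t"), data["q_t"])
# imp = permutation_importance(rf, data.drop(columns="q_t"), data["q_t"], n_repeats=1)
```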
2.4. Study Area
The study area is the Biala River catchment (105.9 km²) located in northeastern Poland (Figure 4), where the longest persisting snow cover in lowland Poland occurs. The climate type is humid continental according to Köppen's classification; however, the study area is located at the most western part of this type, being effectively in a transition zone between the humid continental and temperate oceanic types []. The Biala River catchment is dominated by urban land-use (46%) with minor shares of agriculture (39%), forests (13%) and water (2%) []. The Biala River flows through Bialystok City, where almost every year flooding causes considerable losses. The biggest flooding is often driven by spring and summer storms, which produce high and quick discharge events; however, similarly quick events also occur during winter and spring, forced by mixed snowmelt and rainfall.
3. Results
3.1. WRF Results
The WRF forecasted rainfall and snowmelt matched the meteorological observations from a station located in the central part of the catchment well (Figure 5). WRF rainfall is significantly correlated with the meteorological observation ($\rho = 0.59$, $p < 2.2 \times 10^{-16}$) and has RMSE = 3.0 mm/day, which is equivalent to 13% of the observed rainfall range. Some false positive rainfall events of average magnitude equal to 0.5 mm/day can be observed in the WRF forecasts. These false positive events occur on 94 days in the training and validation period (42% of days). False negative events are rare and occur in total on five days (2% of days). The WRF snowmelt could not be validated against observations because of a lack of snow water equivalent data. Nonetheless, its relation with the snow depth depletion is significant ($\rho = 0.52$, $p < 2.2 \times 10^{-16}$), which confirms a good quality of the snowmelt forecasts.
The total rainfall in the calibration and validation period accounted for 269 mm and the total snowmelt for 55 mm, i.e., 20% of the total rainfall.
3.2. Discharge Forecasting
The RMSE of the forecasted total discharge in the validation period gradually increases from 0.19 m³ s⁻¹ (41% of the discharge std. dev.) and 0.14 m³ s⁻¹ (30% of the discharge std. dev.) for the snowmelt and rainfall and the rainfall only models, respectively (Figure 6). The RMSE of the snowmelt and rainfall models reaches a maximum of 0.43 m³ s⁻¹ (92% of the discharge std. dev.) at $t = 16$ h; however, from $t = 14$ h the RMSE stabilizes at around 0.42 m³ s⁻¹, whereas the RMSE for the rainfall only models continues to increase until the maximum RMSE = 0.48 m³ s⁻¹ (102% of the discharge std. dev.) at $t = 24$ h.
The ME of the forecasted total discharge in the validation period is at a similar level of around 0.01 m³ s⁻¹, with the optimum ME = 2 × 10⁻⁵ m³ s⁻¹ (0.004% of the discharge std. dev.) at $t = 24$ h and the pessimum ME = 0.02 m³ s⁻¹ (4% of the discharge std. dev.) at $t = 15$ h for the snowmelt and rainfall models (Figure 6), whereas the ME for the rainfall only models continuously increases from the optimum ME = 0.01 m³ s⁻¹ (2% of the discharge std. dev.) at $t = 1$ h until the pessimum 0.07 m³ s⁻¹ (15% of the discharge std. dev.) at $t = 24$ h.
The RMSE for the forecasted peak discharge in the validation period is similar for both model variants (Figure 6). It gradually increases from 0.43 m³ s⁻¹ (32% of the peak discharge std. dev.) and 0.41 m³ s⁻¹ (31% of the peak discharge std. dev.) for the snowmelt and rainfall and the rainfall only models, respectively. The peak discharge RMSE reaches a maximum of 1.33 m³ s⁻¹ (100% of the peak discharge std. dev.) at $t = 19$ h for the snowmelt and rainfall models and 1.34 m³ s⁻¹ (101% of the peak discharge std. dev.) at $t = 14$ h for the rainfall only models. However, from $t = 9$ h, both model variants have their RMSE stabilized around 1.3 m³ s⁻¹.
The ME for the forecasted peak discharge in the validation period is always closer to the optimal zero for the snowmelt and rainfall models than for the rainfall only models (Figure 6). The pessimal ME is −0.61 m³ s⁻¹ (46% of the peak discharge std. dev.) for the snowmelt and rainfall models, which is considerably better than the rainfall only models' ME = −0.70 m³ s⁻¹ (53% of the discharge std. dev.).
The forecasted discharge hydrograph matches the observed discharge well for both the snowmelt and rainfall and the rainfall only models (Figure 7). The low flows have good matching for all forecast times $t$. The peaks' matching decreases with increasing forecast time (cf. Figure 6). False positive discharge peaks can be depicted for both the snowmelt and rainfall and the rainfall only models. However, the false positive discharge peaks appear in different situations in each model variant.
In the snowmelt and rainfall models, the false positive peaks appear during long and intensive snowmelt events, e.g., at the beginning of November or in mid-December (Figure 7), whereas, in the rainfall only models, the false positive peaks appear after snowmelt followed by rainfall events, e.g., at the beginning of January or the beginning of February (Figure 7). Some minor false positive discharge peaks can be observed appearing the same in both model variants during the low flow periods. The magnitude of the false positive peaks increases with increasing forecast time, and this effect is especially visible for the mixed events in the rainfall only models.
3.3. The Forecasting Models Structure
The inclusion of snowmelt predictors in the snowmelt and rainfall models changes the predictors' importance patterns in reference to the rainfall only models (Figure 8). The predictor importance pattern shows that rainfall predictors are the most important at the lag time $l$ equal to the forecast time $t$. Other important rainfall predictors appear lagged 14 h from the forecast time. This is observed for both model variants. However, in the rainfall only model variants, the most lagged predictors, i.e., at $l = -24$ h, are clearly important, which is not the case in the snowmelt and rainfall models. The snowmelt predictors, opposite to the rainfall predictors, show relatively low importance at the lag time equal to the forecast time, whereas the snowmelt predictors lagging 17 h and 27 h from the forecast time have high importance. The importance of the snowmelt predictors at this lag is clearly higher than that of the corresponding rainfall predictors, and almost as high as that of the most important rainfall predictors. A clearly visible feature is that the importance of the least lagged meteorological predictors in the models forecasting at times $t = 1$ h and $t = 2$ h is lower than for the models forecasting at longer forecast times $t$.
The inclusion of snowmelt predictors in the forecast models changes the discharge-based predictors' pattern in reference to the rainfall only models (Figure 9). For the snowmelt and rainfall models, the importance of the discharge predictors clearly decreases with increasing forecast time $t$. On the contrary, the importance of the discharge-based predictors is similar for all forecast times in the rainfall only models. The discharge difference predictor ($q_d$) is clearly more important than the mean discharge predictor ($q_m$) in both model variants. However, in the snowmelt and rainfall models, the importance of both predictors is more balanced than in the rainfall only models.
4. Discussion
4.1. WRF Forecasts
The major issue concerning the WRF simulations is the false positive and false negative events observed both for snowmelt and rainfall. In our opinion, the false positive events in each case are due to the fact that WRF simulates rainfall and snowmelt fields on a 4 × 4 km grid and the observations are recorded at a single point using a rainfall gauge. Despite the high frequency of false positive events in the rainfall simulations, their magnitude is rather small and accounts for 0.5 mm/day on average. Given that the RMSE for rainfall accounts for 13% of the data range and the correlations of snowmelt and rainfall data with observations are significant and positive, in our opinion, the WRF simulation results match the observations well. Nonetheless, the false positive and negative events clearly negatively affect the Random Forests models' training and forecasting. Unfortunately, such a mismatch between NWP and observations is unavoidable. Several strategies exist for improving the general performance of NWP, such as ensemble forecasting schemes that are aimed at finding the best WRF setup for specific cases []. However, ensemble forecasting requires huge computing power (especially for high-resolution grids and larger areas) and, importantly, the optimal setup for re-analysis is not always optimal for operational forecasting []. Therefore, in our case, we used a standard WRF setup that in the longer term gives the best and most stable results for the analysed area [].
4.2. Random Forests Models
In this study, we used the Random Forests model for the discharge forecasts, and this approach can be criticized for a few reasons. First, the functioning of a trained Random Forests model is difficult to understand. This is similar to neural network models, but unlike linear regression models, which have a very clear structure. Next, the models we used are big, i.e., from 28 to 100 predictors, which can also be understood as another feature that hampers the understanding of the model structure. Finally, nonlinear models like Random Forests, but also neural networks or nonlinear regression, can provide erroneous predictions outside the training data range. This can be especially undesirable if an extreme event is to be predicted.
However, using Random Forests in our study is justified by the following arguments. We look at the model structure by analysing the predictors' importance, which, unlike looking at the Random Forests trees' structure, gives an easily interpretable insight into the model functioning. Moreover, using as many predictors as in our study design gives additional insight by analysing the importance of the water flux sources (rainfall or snowmelt) and their temporal dependencies. Finally, the problem with a proper training data range has to be identified during model development, as was the case in our study. For an operational model, this risk should be minimized by selecting the longest possible training period, or by setting up the model to forecast whether discharge will be above or below certain alarm thresholds.
According to our knowledge, this is the first application of Random Forests for high-resolution discharge forecasting. Several studies used Multi-Layer Neural Networks, Support Vector Regression, Self-organizing Maps or other nonlinear models for discharge forecasting []. However, Random Forests in river flow related applications were used for water level prediction [], identification of important variables for flood prediction [] and sediment concentration [].
4.3. Discharge Forecast and Predictors Importance
Discharge forecasts have lower errors for short forecast times than for long forecast times, with the errors generally stabilizing from forecast times of 7–10 h onwards (Figure 6). This effect can be explained by the fact that short forecast times are strongly dependent on the preceding observed discharge due to autocorrelation (Figure 10). The discharge autocorrelation rapidly decreases from lag = 1 h until lag = 10 h, when the decrease rate becomes lower (Figure 10). The effect of discharge autocorrelation is also visible in the predictors' importance: the discharge based predictors are clearly more important for the forecast times $t = 1$–7 h than for $t = 8$–24 h in the snowmelt and rainfall models (Figure 9), and the rainfall predictors gain more importance from lag $l = 7$ h onwards for all forecast times $t$ (Figure 8).
A drawback of the discharge forecasts are several false positive peaks (Figure 7). As can be expected, most of these peaks are caused by false positive heavy rainfall events in the WRF forecasts. The magnitude of both the false positive rainfall and discharge peaks is small, hence this drawback does not strongly influence the model performance. In our opinion, these small weather prediction errors are unavoidable; however, they can be minimized by optimizing the WRF setup. This aspect was, however, not in the scope of this study.
Another group of false positive discharge peaks occurs in the snowmelt and rainfall model during intensive snowmelt events. Unfortunately, we were not able to fully validate the snowmelt flux (e.g., with snow water equivalent data) because only snow depths were available at the meteorological station in the study area; however, we believe that these false positive peaks are a result of overestimated snowmelt in the WRF forecast. Similarly to the false positive rainfall, this problem could be diminished by changing the WRF configuration.
Notably, the magnitude of the peaks appearing during intensive snowmelt in the snowmelt and rainfall models is lower than the magnitude of the peaks appearing after snowmelt followed by rainfall in the rainfall only models. This group of false positive discharge peaks results from the lack of snowmelt in the rainfall only models. Regardless of snowmelt being considerably smaller (20% of the total rainfall) than rainfall, its role is clearly identified by the Random Forests models. Snowmelt predictors with relatively higher importance are lagged 17–22 h from the forecast time $t$. This can be interpreted twofold. First, the high snowmelt predictors' importance at these lags reflects the feedback of snowmelt creating antecedent soil moisture conditions, or initial wetting, that promote runoff []. Secondly, it can be a consequence of decreased snowmelt autocorrelation at lags from 17 h onwards (Figure 11); however, in this case, one would expect high importance of the snowmelt predictors for lags near the prediction time.
The rainfall has increased importance for predictors close to the forecast time (~0–3 h) and for those lagged ~12–16 h from the forecast time. The increased importance near the forecast time is due to the fact that the Biala River reacts quickly to rainfall, mostly due to extensive sewers and high imperviousness []. The second period of increased importance matches well with the decreased rainfall autocorrelation from 10 h lags onwards (Figure 12). The delayed rainfall is not correlated with the rainfall at the forecast time, hence it provides new information. It can also have a role in promoting runoff by forming antecedent soil moisture conditions, as we hypothesized above for snowmelt.
Despite the snowmelt flux being considerably lower than the rainfall flux, its influence on the forecasted discharge is beneficial. This is clear when comparing the importance of the discharge based predictors in the snowmelt and rainfall and the rainfall only models (Figure 9). As mentioned earlier, for the snowmelt and rainfall models the importance of the discharge based predictors decreases from forecast time $t = 7$ h onwards. This is not the case for the rainfall only models, where the importance of the discharge based predictors is at a similar level at all forecast times. This means that the inclusion of the snowmelt flux allowed the model to shift the weight from the preceding discharge data onto the forcing data, as is desired.
The effect of the forcing data being more important in the model than the discharge data is expressed in the ME being closer to optimum in the snowmelt and rainfall models than in the rainfall only models (Figure 6). The increased bias in the rainfall only models can lead to higher discharge forecast uncertainty, especially during extreme events.
4.4. Snowmelt in Discharge Forecasting
So far, hydrological simulations and forecasts using WRF in snow regime catchments have used the meteorological fields (e.g., precipitation, temperature, solar radiation, etc.) to drive snowmelt simulation in physical [] and empirical [] models. Our study shows that driving another model with WRF data is not necessarily crucial to obtaining reliable snowmelt estimates, and the snowmelt derived by the WRF–Noah model itself can be a valuable estimate.
The majority of the WRF driven hydrological forecasts reported in the literature are conducted in sites where snow processes do not occur (e.g., []). However, some studies that lack information on how snowmelt was handled report good hydrological forecasting results in areas where snow processes, including snowmelt, may occur next to or as a mix with rainfall (e.g., []). Rainfall fields can produce acceptable discharge forecasts in sites where snowmelt does not dominate over rainfall, as shown in our study. However, the inclusion of snowmelt in such case studies should not be neglected because it allows the physical processes' representation to be more complete, e.g., in taking into account the antecedent soil moisture conditions [], rain-on-snow [] or the runoff [].
The inclusion of climatic variables in machine-learning models improves the coefficient of determination of the predicted discharge with respect to models that do not use climatic variables [ ]. Moreover, it is known that snowmelt should be included in flood frequency analysis if snow accumulation is substantial [ ]. Our results are in agreement with these findings and point out that, even though snowmelt accounted for only 20% of the total rainfall in this study, its effect on the model structure and forecast errors is clearly positive.
4.5. Outlook and Applicability
We believe that the framework of short-term high-resolution discharge forecasting presented herein can be adopted in other research studies or as an operational model. One should, however, take care to properly train the Random Forests models, i.e., with a representative observation period. Another issue is the selection of other hydrological cycle components (e.g., groundwater recharge/discharge, or evapotranspiration) as predictors. Hydrological forecasting with a Random Forests model in the configuration presented in this study will not be appropriate if an extensively managed water reservoir operates in the catchment.
We intentionally selected a relatively small and homogeneous catchment as the study area because this allowed us to use a lumped model and neglect the distributed aspect of snowmelt, precipitation and runoff formation. This simplification highlighted the general importance of the snowmelt, rainfall and discharge predictors. However, in our earlier works, we showed that the data source (and thus also the spatial distribution) of snow related variables can considerably change the hydrological modelling results [ ]. Moreover, we showed that the spatially distributed snow cover has a strongly variable sensitivity within a catchment area [ ]. Hence, differentiating the rainfall and snowmelt predictors spatially would be an interesting extension that would allow one to: (1) identify the most important parts of a catchment for runoff formation and (2) potentially improve the forecasts by moving from lumped to distributed modelling.
5. Conclusions
Our study used a nonlinear machine learning algorithm and numerical weather forecasts of snowmelt and rainfall in order to forecast hourly discharge in an urbanized catchment. We tested two scenarios of the discharge forecast model (with rainfall only predictors and with snowmelt and rainfall predictors), which allowed for highlighting the effect of the snowmelt predictors in the forecasts. Both scenarios performed similarly in terms of hydrograph behaviour. However, the error analysis in consecutive forecast hours revealed that the rainfall only models were more biased and had higher absolute errors than the snowmelt and rainfall models. The scenario comparison also showed that the snowmelt predictors are of comparable importance to the rainfall predictors, even though the snowmelt volume is only 20% of the rainfall volume in our study. Moreover, the inclusion of the snowmelt prediction changed the pattern of the preceding discharge predictors by decreasing their importance with increasing forecast lead time.
We conclude that including snowmelt data in discharge forecasts for a mixed snowmelt and rainfall environment allows for accounting for nonlinearities and feedbacks such as (1) initial wetting by snowmelt related to air temperature and (2) rain-on-snow phenomena. The former indicates that antecedent snowmelt is important for discharge forced by precipitation, due to initial wetting and an increase in the effective rainfall. The latter indicates that accounting for snowpack properties, such as distribution and water equivalent, is crucial for snow- and rainfall-originated runoff.
The high importance of predictors related to preceding discharge in the rainfall only models showed that interpreting the model structure can lead to an erroneous conclusion (that these predictors are highly important at all forecast lead times in our case), even though the predictions had acceptable errors. We showed that the inclusion of snowmelt predictors changed the model's structure, in comparison to the rainfall only models, in a way that makes the predictors' importance pattern interpretable with reference to physical phenomena (e.g., the effect of antecedent soil moisture), even though our modelling approach was machine learning without the implementation of any physical equations.
Our approach of discharge forecasting using the Random Forests algorithm has shown high applicability. We used Random Forests for high-resolution discharge forecasting; however, this approach is similar to other methods, such as neural networks. A clear advantage of our approach is the functionality to measure the predictors' importance, which allows for optimizing the model structure and highlighting its flaws. Thereby, for future work, we are planning to apply the nonlinear forecast models spatially in order to highlight the catchment zones that are the most important for discharge formation.
Acknowledgments
The discharge time series for the Zawady Gauge at the Biala River was provided by the Institute of Meteorology and Water Management National Research Institute (IMGW-PIB). Snow depth and precipitation data for the Bialystok station were provided by the NOAA Climate Data Online service. Calculations were carried out at the Academic Computer Centre in Gdańsk. This research was supported by the Gdańsk University of Technology statutory activity: DS2017 032351. The authors acknowledge two anonymous Reviewers whose comments led to great improvement of this paper.
Author Contributions
Tomasz Berezowski designed and performed the hydrological simulations, analysed the results and prepared the paper; Andrzej Chybicki set up and performed the WRF simulations and revised the paper.
Conflicts of Interest
The authors declare no conflict of interest.
1. Buttle, J.M.; Xu, F. Snowmelt Runoff in Suburban Environments. Hydrol. Res. 1988, 19, 19–40.
2. Surfleet, C.G.; Tullos, D. Variability in effect of climate change on rain-on-snow peak flow events in a temperate climate. J. Hydrol. 2013, 479, 24–34.
3. Wever, N.; Jonas, T.; Fierz, C.; Lehning, M. Model simulations of the modulating effect of the snow cover in a rain-on-snow event. Hydrol. Earth Syst. Sci. 2014, 18, 4657–4669.
4. Cohen, J.; Ye, H.; Jones, J. Trends and variability in rain-on-snow events. Geophys. Res. Lett. 2015, 42, 7115–7122.
5. Langhammer, J.; Česák, J. Applicability of a Nu-Support Vector Regression Model for the Completion of Missing Data in Hydrological Time Series. Water 2016, 8, 560.
6. Rogelis, M.C.; Werner, M. Streamflow forecasts from WRF precipitation for flood early warning in mountain tropical areas. Hydrol. Earth Syst. Sci. Discuss. 2017, 2017, 1–32.
7. Li, J.; Chen, Y.; Wang, H.; Qin, J.; Li, J.; Chiao, S. Extending flood forecasting lead time in a large watershed by coupling WRF QPF with a distributed hydrological model. Hydrol. Earth Syst. Sci. 2017, 21, 1279–1294.
8. Tao, J.; Wu, D.; Gourley, J.; Zhang, S.Q.; Crow, W.; Peters-Lidard, C.; Barros, A.P. Operational hydrological forecasting during the IPHEx-IOP campaign—Meet the challenge. J. Hydrol. 2016, 541, 434–456.
9. Bauer-Gottwein, P.; Jensen, I.H.; Guzinski, R.; Bredtoft, G.K.T.; Hansen, S.; Michailovsky, C.I. Operational river discharge forecasting in poorly gauged basins: The Kavango River basin case study. Hydrol. Earth Syst. Sci. 2015, 19, 1469–1485.
10. De Lima, G.R.T.; Santos, L.B.L.; de Carvalho, T.J.; Carvalho, A.R.; Cortivo, F.D.; Scofield, G.B.; Negri, R.G. An operational dynamical neuro-forecasting model for hydrological disasters. Model. Earth Syst. Environ. 2016, 2, 1–9.
11. Oleson, K.W.; Lawrence, D.M.; Gordon, B.; Flanner, M.G.; Kluzek, E.; Peter, J.; Levis, S.; Swenson, S.C.; Thornton, E.; Feddema, J.; et al. Technical Description of Version 4.0 of the Community Land Model (CLM); National Center for Atmospheric Research: Boulder, CO, USA, 2010.
12. Lawrence, D.M.; Oleson, K.W.; Flanner, M.G.; Thornton, P.E.; Swenson, S.C.; Lawrence, P.J.; Zeng, X.; Yang, Z.L.; Levis, S.; Sakaguchi, K.; et al. Parameterization improvements and functional and structural advances in Version 4 of the Community Land Model. J. Adv. Model. Earth Syst. 2011, 3.
13. Benjamin, S.G.; Grell, G.A.; Brown, J.M.; Smirnova, T.G.; Bleck, R. Mesoscale Weather Prediction with the RUC Hybrid Isentropic-Terrain-Following Coordinate Model. Mon. Weather Rev. 2004, 132, 473–494.
14. Niu, G.Y.; Yang, Z.L.; Mitchell, K.E.; Chen, F.; Ek, M.B.; Barlage, M.; Kumar, A.; Manning, K.; Niyogi, D.; Rosero, E.; et al. The community Noah land surface model with multiparameterization options (Noah-MP): 1. Model description and evaluation with local-scale measurements. J. Geophys. Res. 2011, 116.
15. Wang, Z.; Zeng, X.; Decker, M. Improving snow processes in the Noah land model. J. Geophys. Res. 2010, 115, D20108.
16. Jin, J.; Wen, L. Evaluation of snowmelt simulation in the Weather Research and Forecasting model. J. Geophys. Res. Atmos. 2012, 117.
17. Förster, K.; Meon, G.; Marke, T.; Strasser, U. Effect of meteorological forcing and snow model complexity on hydrological simulations in the Sieber catchment (Harz Mountains, Germany). Hydrol. Earth Syst. Sci. 2014, 18, 4703–4720.
18. Wu, X.; Shen, Y.; Wang, N.; Pan, X.; Zhang, W.; He, J.; Wang, G. Coupling the WRF model with a temperature index model based on remote sensing for snowmelt simulations in a river basin in the Altay Mountains, north-west China. Hydrol. Process. 2016, 30, 3967–3977.
19. Zhao, Q.; Liu, Z.; Ye, B.; Qin, Y.; Wei, Z.; Fang, S. A snowmelt runoff forecasting model coupling WRF and DHSVM. Hydrol. Earth Syst. Sci. 2009, 13, 1897–1906.
20. Buttle, J.M. Effects of suburbanization upon snowmelt runoff. Hydrol. Sci. J. 1990, 35, 285–302.
21. Valtanen, M.; Sillanpää, N.; Setälä, H. Effects of land use intensity on stormwater runoff and its temporal occurrence in cold climates. Hydrol. Process. 2013, 28, 2639–2650.
22. Valeo, C.; Ho, C. Modelling urban snowmelt runoff. J. Hydrol. 2004, 299, 237–251.
23. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
24. Instytut Meteorologii i Gospodarki Wodnej—Państwowy Instytut Badawczy (IMGW-PIB). 1 h Discharge Time Series for Zawady (ID: 153230060) Gauging Station at Biała River (ID: 26168) for the Period 2010–2016; IMGW-PIB: Warsaw, Poland, 2017.
25. Nakanishi, M.; Niino, H. Development of an Improved Turbulence Closure Model for the Atmospheric Boundary Layer. J. Meteorol. Soc. Jpn. 2009, 87, 895–912.
26. Mlawer, E.J.; Taubman, S.J.; Brown, P.D.; Iacono, M.J.; Clough, S.A. Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res. Atmos. 1997, 102, 16663–16682.
27. Dudhia, J. Numerical Study of Convection Observed during the Winter Monsoon Experiment Using a Mesoscale Two-Dimensional Model. J. Atmos. Sci. 1989, 46, 3077–3107.
28. National Centers for Environmental Information-National Oceanic and Atmospheric Administration (NOAA-NCEI). Daily Precipitation, Temperature and Snow Depth Time Series for the Bialystok, PL Station (ID: PLM00012295) 2010–2014; NOAA-NCEI: Washington, DC, USA, 2017.
29. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2016.
30. Liaw, A.; Wiener, M. Classification and Regression by randomForest. R News 2002, 2, 18–22.
31. Kottek, M.; Grieser, J.; Beck, C.; Rudolf, B.; Rubel, F. World Map of the Köppen-Geiger climate classification updated. Meteorologische Zeitschrift 2006, 15, 259–263.
32. Berezowski, T.; Chormański, J.; Batelaan, O.; Canters, F.; Van de Voorde, T. Impact of remotely sensed land-cover proportions on urban runoff prediction. Int. J. Appl. Earth Obs. Geoinf. 2012, 16, 54–65.
33. Kioutsioukis, I.; de Meij, A.; Jakobs, H.; Katragkou, E.; Vinuesa, J.F.; Kazantzidis, A. High resolution WRF ensemble forecasting for irrigation: Multi-variable evaluation. Atmos. Res. 2016, 167, 156–174.
34. Yu, W.; Nakakita, E.; Kim, S.; Yamaguchi, K. Impact Assessment of Uncertainty Propagation of Ensemble NWP Rainfall to Flood Forecasting with Catchment Scale. Adv. Meteorol. 2016, 2016, 1384302.
35. García-Díez, M.; Fernández, J.; Fita, L.; Yagüe, C. Seasonal dependence of WRF model biases and sensitivity to PBL schemes over Europe. Q. J. R. Meteorol. Soc. 2012, 139, 501–514.
36. Chybicki, A.; Łubniewski, Z.; Kamiński, L.; Bruniecki, K.; Markiewicz, L. Numerical weather prediction-data fusion to GIS systems and potential applications. In The Future with GIS. Katedra Systemow Geoinformatycznych; Hrvatski Informatički Zbor—GIS Forum: Zagreb, Croatia, 2011; pp. 56–61.
37. Barbetta, S.; Coccia, G.; Moramarco, T.; Todini, E. Case Study: A Real-Time Flood Forecasting System with Predictive Uncertainty Estimation for the Godavari River, India. Water 2016, 8, 463.
38. Barge, J.; Sharif, H. An Ensemble Empirical Mode Decomposition, Self-Organizing Map, and Linear Genetic Programming Approach for Forecasting River Streamflow. Water 2016, 8, 247.
39. Peng, T.; Zhou, J.; Zhang, C.; Fu, W. Streamflow Forecasting Using Empirical Wavelet Transform and Artificial Neural Networks. Water 2017, 9, 406.
40. Sung, J.; Lee, J.; Chung, I.M.; Heo, J.H. Hourly Water Level Forecasting at Tributary Affected by Main River Condition. Water 2017, 9, 644.
41. Wang, J.; Shi, P.; Jiang, P.; Hu, J.; Qu, S.; Chen, X.; Chen, Y.; Dai, Y.; Xiao, Z. Application of BP Neural Network Algorithm in Traditional Hydrological Model for Flood Forecasting. Water 2017, 9, 48.
42. Li, B.; Yang, G.; Wan, R.; Dai, X.; Zhang, Y. Comparison of random forests and other statistical methods for the prediction of lake water level: A case study of the Poyang Lake in China. Hydrol. Res. 2016, 47, 69–83.
43. Albers, S.J.; Déry, S.J.; Petticrew, E.L. Flooding in the Nechako River Basin of Canada: A random forest modeling approach to flood analysis in a regulated reservoir system. Can. Water Resour. J. Rev. Can. Ressour. Hydr. 2015, 41, 250–260.
44. Francke, T.; López-Tarazón, J.; Schröder, B. Estimation of suspended sediment concentration and yield using linear models, random forests and quantile regression forests. Hydrol. Process. 2008, 22, 4892–4904.
45. Buttle, J. Soil moisture and groundwater responses to snowmelt on a drumlin sideslope. J. Hydrol. 1989, 105, 335–355.
46. Bengtsson, L.; Westerström, G. Urban snowmelt and runoff in northern Sweden. Hydrol. Sci. J. 1992, 37, 263–275.
47. Tyszewski, S.; Kardel, I. Studium Hydrograficzne Doliny Rzeki Białej z Wytycznymi Do Zagospodarowania Rekreacyjno-Wypoczynkowego I Elementami Małej Retencji Oraz Prace Hydrologiczne Niezbędne Do Sporządzenia Dokumentacji Hydrologicznej; Pro Woda: Warsaw, Poland, 2009. (In Polish)
48. Givati, A.; Lynn, B.; Liu, Y.; Rimmer, A. Using the WRF Model in an Operational Streamflow Forecast System for the Jordan River. J. Appl. Meteorol. Climatol. 2012, 51, 285–299.
49. Harr, R. Some characteristics and consequences of snowmelt during rainfall in western Oregon. J. Hydrol. 1981, 53, 277–304.
50. Chu, H.; Wei, J.; Li, J.; Qiao, Z.; Cao, J. Improved Medium- and Long-Term Runoff Forecasting Using a Multimodel Approach in the Yellow River Headwaters Region Based on Large-Scale and Local-Scale Climate Information. Water 2017, 9, 608.
51. Fassnacht, S.R.; Records, R.M. Large snowmelt versus rainfall events in the mountains. J. Geophys. Res. Atmos. 2015, 120, 2375–2381.
52. Berezowski, T.; Chormański, J.; Batelaan, O. Skill of remote sensing snow products for distributed runoff prediction. J. Hydrol. 2015, 524, 718–732.
53. Berezowski, T.; Nossent, J.; Chormanski, J.; Batelaan, O. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model. Hydrol. Earth Syst. Sci. 2015, 19, 1887–1904.
Figure 1. Training and validation time series of discharge (red), rainfall (blue) and snowmelt (pink) in 1 h resolution. The peak events used for error calculation are indicated with black dots. P
stands for rainfall and SM stands for snowmelt on the right-hand side vertical axis.
Figure 4. Biala River catchment with rivers and the discharge gauge indicated. The background natural-colour composite Sentinel-2 satellite image from 2017 presents urban areas in bright yellow and white, agriculture in bright green and grey, and forest in dark green.
Figure 5. Validation of WRF rainfall (left panel) and snowmelt (right panel) forecasts against meteorological observations. The WRF forecasts are aggregated from hourly to daily data in order to
match the meteorological records. Black line in the left panel is the 1:1 line.
Figure 6. The mean error (ME) and root mean square error (RMSE) of the discharge forecasted with the Random Forests models for the validation period, for forecast times from 1 to 24 h. One set of panels presents the errors for the peak discharge events indicated in Figure 1; the other set presents the errors for the total validation time series.
Figure 7. Forecasted discharge at four forecast times (t = 1, 6, 12, 24 h) for the snowmelt and rainfall models (top panel) and for the rainfall only models (bottom panel). P stands for rainfall and SM stands for snowmelt on the right-hand side vertical axes.
Figure 8. Importance of the forecasted meteorological predictors used in the Random Forests models (Equations ( ) and ( )) for the snowmelt and rainfall and the rainfall only model variants. The vertical axis presents the different forecast times $t ∈ 〈 1; 24 〉$ h; the horizontal axis presents the predictor lag time $l ∈ 〈 −24; t 〉$ h. The area of the rectangles is proportional to the increase of the mean squared error (IncMSE) of the Random Forests model after perturbing a predictor with reference to the original predictor, i.e., the higher the increase of the mean squared error, the higher the predictor importance for the model. Red rectangles present the rainfall predictors ($r_l$) and blue rectangles present the snowmelt predictors ($s_l$).
Figure 9. Importance of the discharge-based predictors used in the Random Forests models (Equations ( ) and ( )) for the snowmelt and rainfall and the rainfall only model variants. The vertical axis presents the different forecast times $t ∈ 〈 1; 24 〉$ h; the horizontal axis presents the predictor lag time $l ∈ 〈 −24; t 〉$ h, which is always equal to 0 for this group of predictors. The area of the rectangles is proportional to the increase of the mean squared error (IncMSE) of the Random Forests model after perturbing a predictor with reference to the original predictor, i.e., the higher the increase of the mean squared error, the higher the predictor importance for the model. Red rectangles present the discharge difference ($q_d$) predictor and blue rectangles the mean discharge ($q_m$) predictor (see Section 2.1).
Figure 10. Discharge autocorrelation at the catchment outlet calculated for the calibration and validation period. Horizontal blue lines indicate the autocorrelation significance bounds.
Figure 11. WRF forecasted snowmelt autocorrelation at the catchment outlet calculated for the calibration and validation period. Horizontal blue lines indicate the autocorrelation significance bounds.
Figure 12. WRF forecasted rainfall autocorrelation at the catchment outlet calculated for the calibration and validation period. Horizontal blue lines indicate the autocorrelation significance bounds.
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Timm Wrase
2024F: PHY 031: Introduction to Modern Physics
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
PHY 391: Special Topics In Physics: Quantum Field Theory, Gravity and String Theory
2024S: PHY 369 Quantum Mechanics II
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2023F: PHY 031: Introduction to Modern Physics
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2023S: ASTR 395 / PHY 395: Cosmology
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2022F: PHY 090-011 From Black Holes to Strings: the Early Universe and the Nature of Space and Time
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2022S: PHY 369 Quantum Mechanics II
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2021F: PHY 090-011 From Black Holes to Strings: the Early Universe and the Nature of Space and Time
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2021S: ASTR 395 / PHY 395: Cosmology
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2020F: ASTR 332 / PHY 332: High-Energy Astrophysics
PHY 472: Special Topics In Physics: High Energy and Condensed Matter Physics
2020S: PHY 031: Introduction to Modern Physics
2019S: Cosmology and particle physics
Seminar on Fundamental Interactions 2
2018W: Electrodynamics II - exercises
Seminar on Fundamental Interactions 1
2018S: Cosmology and particle physics
Electrodynamics I
Seminar on Fundamental Interactions 2
2017W: Seminar on Fundamental Interactions 1
2017S: Cosmology and particle physics
Electrodynamics I
Seminar on Fundamental Interactions 2
2016W: Seminar on Fundamental Interactions 1
2016S: Cosmology and particle physics
Seminar on Fundamental Interactions 2
2015W: Electrodynamics II - exercises
2015S: Cosmology and particle physics
Electrodynamics I
Seminar on Fundamental Interactions 2
What's Going on in Math?
New Skill:
Spiral Skills: Place value to the thousands place, writing numbers in 3 ways (standard, expanded form, and word form), odd and even numbers, rounding, and comparing and ordering numbers.
Websites:
Reflex Math: Practice building math fluency until you get the 'green light'.
AAA Math
Browse to find review lessons and practice exercises for topics learned in school.
Timed Fact Tests
Addition, Subtraction, Multiplication, and Division fact tests.
Math Playground
Practice addition, subtraction, multiplication, division, and fractions on this website!
Students can log in to practice the skills we are learning in class and also review past skills.
Prodigy - Math-type game to practice skills
Specify weights for each objective function
• Alias: multi_objective_weights
• Arguments: REALLIST
• Default: equal weights
For multi-objective optimization problems (where the number of objective functions is greater than 1), a weights specification provides a simple weighted-sum approach to combining multiple objectives into a single objective:
\[f = \sum_{i=1}^{n} w_{i}f_{i}\]
Length: The weights must have length equal to objective_functions. Thus, when scalar and/or field responses are specified, the number of weights must equal the number of scalars plus the number of
fields, not the total elements in the fields.
Default Behavior: If weights are not specified, then each response is given equal weighting:
\[f = \sum_{i=1}^{n} \frac{f_i}{n}\]
where, in both of these cases, a “minimization” sense will retain a positive weighting for a minimizer and a “maximization” sense will apply a negative weighting.
Usage Tips:
Weights are applied as multipliers; scales are applied as characteristic values / divisors.
When scaling is active, it is applied to objective functions prior to any weights and multi-objective sum formation. See the equations in objective_functions.
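A minimal sketch of the weighted-sum behaviour described above, in plain Python rather than Dakota input syntax; the function name and the 'sense' handling are our illustrative choices:

```python
def weighted_objective(f, weights=None, sense="minimize"):
    """Combine objective values f into a single scalar.
    Default: equal weights f_i / n; a 'maximization' sense
    applies a negative weighting, as described above."""
    n = len(f)
    w = weights if weights is not None else [1.0 / n] * n
    total = sum(w_i * f_i for w_i, f_i in zip(w, f))
    return total if sense == "minimize" else -total

print(weighted_objective([2.0, 4.0]))                  # equal weights -> 3.0
print(weighted_objective([2.0, 4.0], [0.25, 0.75]))    # explicit weights -> 3.5
```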
Polygons Archives | Wizako GMAT Prep Blog
Any n-sided convex polygon with more than 3 sides will have n(n-3)/2 diagonals. For instance, let us look at a square. A square has 4 sides and 2 diagonals. Let us apply this formula with n = 4. We get 4(4-3)/2 = 2 diagonals. Here is a question on finding the number of diagonals. If an n-sided convex polygon has 14 diagonals, how many sides does the polygon … [Read more...] about Number of diagonals in convex polygon : GMAT
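For reference, a worked solution to the excerpted question using the formula above (the algebra is ours):

$$\frac{n(n-3)}{2} = 14 \;\Rightarrow\; n^2 - 3n - 28 = 0 \;\Rightarrow\; (n-7)(n+4) = 0 \;\Rightarrow\; n = 7,$$

so the polygon has 7 sides (n = -4 is rejected, since the number of sides must be positive).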
Question 5 (5 pts): Elijah spent $5.25 for lunch every day for 5 school days. He spent $6.75 on Saturday. How much did he spend in all?
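For reference, the arithmetic works out as: 5 × $5.25 = $26.25 for the school-day lunches, and $26.25 + $6.75 = $33.00, so he spent $33.00 in all.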
Solving (6,6)-Kalaha
Kalaha is an abstract strategy game invented in 1940 by William Julius Champion, Jr. The notation (m,n)-Kalaha refers to Kalaha with m pits per side and n stones in each pit. In 2000, Kalaha was
solved for all m ≤ 6 and n ≤ 6, except (6,6).
Now, 11 years later, we have solved (6,6)-Kalaha.
Kalaha is played by two people on a board with 6 pits on each side and two stores, called kalahas. We refer to the two players as North and South.
In each pit, there are initially 6 stones. A move is made by taking all stones from a pit on your own side and sowing them one-by-one in counterclockwise direction. Your own kalaha is included in the
sowing, but the opponent's kalaha is skipped.
There are three possible outcomes of a turn:
• The sowing ends in your own kalaha: It is your turn to move again.
• The sowing ends in an empty pit on your own side: All stones in the opposite pit (on the opponent's side) along with the last stone of the sowing are placed into your kalaha and your turn is
• Otherwise (the sowing ends on the opponent's side or in a nonempty pit on your own side): Your turn is over.
If all pits on your side become empty, the opponent captures all of the remaining stones in his pits. These are placed in the opponent's kalaha and the game is over.
You win the game when you have 37 or more stones in your kalaha. If both players end up with 36 stones, the game is tied.
1. If all pits on your side become empty, you capture all of the remaining stones in your opponent's pits.
2. If your sowing ends in an empty pit on your own side, but the opposite pit has no stones, then you are not allowed to capture the last stone of the sowing.
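A minimal sketch of the move rule above in Python; the board layout, indices and function name are our own choices, the end-of-game sweep of remaining stones is omitted, and the capture line notes where variation 2 would differ:

```python
def sow(board, pit, player):
    """One Kalaha move on a 14-slot board.
    Slots 0-5: South's pits, 6: South's kalaha,
    slots 7-12: North's pits, 13: North's kalaha.
    player is 0 (South) or 1 (North); returns True
    if the same player moves again."""
    own_store = 6 if player == 0 else 13
    opp_store = 13 if player == 0 else 6
    stones, board[pit] = board[pit], 0
    pos = pit
    while stones:
        pos = (pos + 1) % 14
        if pos == opp_store:              # the opponent's kalaha is skipped
            continue
        board[pos] += 1
        stones -= 1
    if pos == own_store:                  # sowing ends in own kalaha: move again
        return True
    own_pits = range(0, 6) if player == 0 else range(7, 13)
    if pos in own_pits and board[pos] == 1:
        opposite = 12 - pos               # the pit facing pos on the other side
        # standard rules: capture the opposite pit plus the last stone;
        # under variation 2 this capture is skipped when board[opposite] == 0
        board[own_store] += board[opposite] + 1
        board[pos] = board[opposite] = 0
    return False
```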
We have proven that the first player always wins Kalaha with standard rules, variation 1, variation 2, and both variations 1 and 2. This means that we can fill out the last square in the diagram from Geoffrey Irving, Jeroen Donkers, and Jos Uiterwijk's article "Solving Kalah" from 2000:
1 D L W L W D
2 W L L L W W
3 D W W W W L
4 W W W W W D
5 D D W W W W
6 W W W W W W
Try to beat the perfectly playing computer (requires a modern browser and a good internet connection).
About the project
This project is part of Anders Carstensen's master's thesis at the University of Southern Denmark, supervised by Kim Skak Larsen. You can contact us at kalaha@kr
This announcement was first published April 14, 2011.
The IMI Colloquium Report, January 13, 2021
The IMI Colloquium in January was held on January 13, 2021.
The IMI Colloquium in January
Title : Discovery of Mobius Crystals and Topological Crystallography and Topological Science
Speaker : Prof. Satoshi Tanda, Dept. of Applied Physics, Hokkaido Univ.
Place : Only by Zoom
Prof. Satoshi Tanda, Hokkaido University, gave a talk entitled "Discovery of Mobius Crystals and Topological Crystallography and Topological Science".
The talk was an overview of his challenging research activities connecting and expanding physics and mathematics, inspired by the experimental discovery of materials with a new structure. A typical microscopic structure in materials is the crystal. Nowadays, geometrically diverse structures, observed for example in fullerenes and carbon nanotubes, are found as collections of crystal-like structures. Tanda's first topic was an introduction to the construction of crystals with ring structures, and the discovery of crystals with a torsional structure, observed in NbSe3, which are called "Mobius crystals". This new structure is constructed by taking chemical symmetry into account, but there is a non-trivial question of whether such an eccentric structure, geometrically different from typical crystal structures, should be called "a crystal" at all.
He then turned to the Bragg reflection property that "crystal structures" should possess. In his group's research, a texture pattern not observed in usual (quasi-)crystals was found in the new structure, and his group has proposed a new category of crystals called "topological crystals". Related to this category, he introduced a classification of topological crystals by means of topological invariants such as linking numbers, as well as embedding manifolds, which characterizes different types of topological crystals, including Mobius crystals and "Hopf-link crystals" consisting of two rings.
Linking numbers are related to the invariant "helicity", and his talk next addressed helicity as it is observed in many scientific settings, including crystal structures. Helicity is defined for fluid fields, gauge fields, spacetime and so on, and characterizes various macroscopic and microscopic phenomena. Tanda's group has proved the existence of helicity in Charge-Density-Wave (CDW) fields, which are known to be observed in topological crystals. This result implies a non-trivial relationship between helicity and material structure in topological crystals.
Tanda's last topic was his idea about the origin of helicity. He regards "fluctuations", in some sense, as the origin; they can determine physical quantities such as helicity and vorticity through mathematical structures, and characterize the nature of materials, fluids and the universe. He ended his talk by drawing a big picture of a research field named "Topological Science", which characterizes various scientific phenomena through topology.
Attendance: Staff 14, Students 15
Should I Prepare for Cambridge STEP or Oxford MAT?
STEP and MAT are written mathematics entrance exams used for mathematics, computer science and related degree programmes at top UK universities. This article compares the two tests across several dimensions for the reference of students and parents.
I. Which Universities and Programs Require STEP and MAT?
The table below lists the STEP and MAT requirements at various universities; the four-character combinations of letters and numbers in parentheses are UCAS course codes.
Mathematics (G100)
Mathematical Physics (G100)
Engineering* (H100)
Mathematics (G100)
Mathematics and Computer Science (GG14)
Mathematics and Philosophy (GV15)
Mathematics and Statistics (GG13)
Computer Science (G400)
Computer Science and Philosophy (IV15)
Mathematics and Computer Science (GG14)
Mathematics and Computer Science (GG41)
Computing (G400)
Computing (G401)
Computing (International Project) (G402)
Computing (Management and Finance) (G501)
Computing (Software Engineering) (G600)
Computing (Security and Reliability) (G610)
Computing (Artificial Intelligence and Machine Learning) (G700)
Computing (Visual Computing and Robotics) (GG47)
Mathematics (G100)
Mathematics with Mathematical Computation (G102)
Mathematics (G103)
Mathematics with a Year Abroad (G104)
Mathematics (Pure Mathematics) (G125)
Mathematics, Optimization and Statistics (GG31)
Mathematics with Applied Mathematics/Mathematical Physics (G1F3)
Mathematics with Statistics (G1G3)
Mathematics with Statistics for Finance (G1GH)
Mathematics** (G100)
Mathematics **(G103)
Mathematics and Philosophy **(GV15)
Mathematics and Statistics **(GG13)
Mathematics and Statistics **(GGC3)
Data Science **(7G73)
Mathematics** (G100)
Mathematics **(G103)
*Peterhouse College may include STEP 2 score requirements in the conditional admission.
* * It is not mandatory to take STEP or MAT, but it is easier to be admitted with STEP, MAT or TMUA scores.
Some universities and majors also require STEP (or MAT, TMUA), including:
• University College London
• University of Bristol
• University of Bath
• King’s College London
These universities will specify on their official websites whether these exams are required for relevant programs and the scores or grades needed. Please refer to the specific admission policies for
the respective programs in a given year.
II. Exam Date Comparison
The MAT is sat at the end of October or in early November; STEP papers are sat in June.
III. Exam Format Comparison
Both STEP and MAT are written exams taken offline on paper. The MAT paper mixes multiple choice and short answer: 10 multiple-choice questions (4 points each) and 4 short-answer questions (15 points each).
IV. Comparison of Examination Scope
To accommodate adjustments to the UK A Level Mathematics syllabus, significant changes were made to the STEP exam in 2019: the number of questions was reduced, and the syllabus underwent substantial
revision. The MAT syllabus of 2018 only added or removed a few knowledge points based on the 2007 syllabus, with the overall scope of the knowledge tested remaining largely unchanged.
The table below compares the examination scopes of STEP and MAT.
Pure Mathematics
Pure Mathematics
logical inference, number theory, proof methods, etc.
logical inference, number theory, proof methods, etc.
logical inference, algorithms, etc.
* Knowledge points beyond the GCSE Mathematics and A Level Mathematics (or Further Mathematics) syllabus.
It is especially noteworthy that the MAT exam for Computer Science includes topics such as trees, graphs, finite state machines, strings, algorithms, and more, which go far beyond the scope of A Level Mathematics and Further Mathematics.
V. Comparison of Exam Difficulty
The table below is a reference for the difficulty comparison between mainstream international high school mathematics
A Level Further Mathematics
Cambridge STEP 2 Mathematics
Cambridge STEP 3 Mathematics
The difficulty of the STEP exam is definitely higher than that of the MAT. However, when comparing STEP 2 and 3, STEP 2 is considered slightly more challenging. Despite covering fewer topics than
STEP 3, the questions in STEP 2 are more challenging, making it harder to achieve high scores (Grade S or 1). This is supported by data from past years’ STEP 2 results. For STEP’s performance data
over the years, please refer to:
VI. Comparison of Exam Features
The exams focus on the following:
• Emphasizes fundamental mathematical knowledge and derivation methods of basic theorems and formulas.
• The range of knowledge tested is relatively fixed, and the problem-solving methods and techniques are quite similar. The questions guide students to apply known knowledge and methods to solve problems, testing students' insight and transfer ability, and indirectly examining their basic mathematical literacy.
• The scope of knowledge is relatively fixed, and the methods and skills of solving problems are similar.
• Short answer questions focus more on the examination of fundamental mathematical knowledge and basic theorems.
• Stresses logical analysis and reasoning abilities. Compared to MAT, it places more emphasis on the examination of fundamental mathematical ideas, methods, and theorems, highlighting the basic
literacy and depth of thought in the mathematics discipline.
• Places great importance on knowledge transfer abilities, logical reasoning abilities, and problem-solving abilities.
• The amount of calculation is large, which requires considerable proficiency in mathematical operations.
• Examines a wide range of knowledge; the strategy for selecting which questions to attempt has a significant impact on on-the-spot performance and test scores.
• There are requirements on the presentation of written solutions and on the rigour of the reasoning process, on which it is easy to lose marks.
• The amount of calculation is moderate.
• Most multiple-choice questions are relatively simple, but individual multiple-choice questions are difficult and can contain traps.
• The background of at least one short-answer question is rarely encountered by candidates, and may even go beyond the scope of school mathematics; such questions are key to widening the gap between students.
VII. Comparison of Preparation Periods
75 or more or increase by 25 points or more
Select from 45 chapters
At least 33 chapters
Select from 60 chapters
At least 45 chapters
* Refer to Xie Tao’s Talking about Oxford MAT 2022 [5th Edition] and Xie Tao’s Talking about Cambridge STEP 2022 [5th Edition]. | {"url":"https://ueie.com/should-i-prepare-for-cambridge-step-or-oxford-mat/","timestamp":"2024-11-11T23:11:37Z","content_type":"text/html","content_length":"373728","record_id":"<urn:uuid:8499b7e7-22d9-48d3-8573-03a9bc8e7901>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00778.warc.gz"} |
On Choice
This is a follow-up to a Twitter discussion with John Baez that did not fit into the 260 character limit. And before starting, I should warn you that I have never studied set theory in any
seriousness and everything I am about to write here is only based on hearsay and is probably wrong.
I am currently teaching "Mathematical Statistical Physics" once more, a large part of which is to explain the operator algebraic approach to quantum statistical physics, KMS states and all that. Part of this is that I cover states as continuous linear functionals on the observables (positive and normalised) and, in the example of B(H), the bounded linear operators on a Hilbert space H, I mention that trace class operators $$\rho\in {\cal S}^1({\cal H})$$ give rise to those via $$\omega(a) = \mathrm{tr}(\rho a).$$
And "pretty much all" states arise in this way, meaning that the bounded operators are the (topological) dual to the trace class operators but, for infinite dimensional H, not the other way around, as the bi-dual is larger. There are bounded linear functionals on B(H) that don't come from trace class operators. But I have never seen one of those extra states being "presented" (without too much effort). I have very low standards here for "presented", meaning that I suspect you need to invoke the axiom of choice to produce them (and this is also what John said), and for class this would be sufficient. We invoke choice in other places as well, like (via Hahn–Banach) every C*-algebra having a faithful representation (or realising it as a closed subalgebra of some huge B(H)).
So much for background. I wanted to tell you about my attitude towards choice. When I was a student, I never had any doubt about it. Sure, every vector space has a basis, there are sets that are not Lebesgue measurable. A little abstract, but who cares. It was a blog post by Terence Tao that made me reconsider that (turns out, I cannot find that post anymore, bummer). It goes like this: On one of these islands, there is this prison where it is announced to the prisoners that the following morning, they will all be placed in a row and everybody will be given a hat that is either black or white. No prisoner can see his own hat, but each sees all of those in front of him. They each guess their color (and all other prisoners hear the guess). Those who guess right go free while those who guess wrong get executed. How many can go free?
Think about it.
The answer is: all but the last one, who has a 50% chance. The last one says "white" if he sees an even number of white hats in front of him. Then all the others can deduce their color from this parity plus the answers of the other prisoners behind them. So all but the last prisoner get out, and the last has a 50% chance.
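The finite strategy is easy to check by simulation; a minimal sketch (the encoding 0 = black, 1 = white and all names are ours):

```python
import random

def run_prisoners(n=20):
    """Parity strategy: prisoner n-1 (the back of the line) announces
    the parity of the white hats in front of him; everyone else deduces
    his own hat from that parity plus what he sees and hears."""
    hats = [random.randint(0, 1) for _ in range(n)]   # 0 = black, 1 = white
    said = [0] * n
    said[n - 1] = sum(hats[:n - 1]) % 2               # the one coin-flip guess
    for i in range(n - 2, -1, -1):                    # from back to front
        seen = sum(hats[:i])                          # hats prisoner i sees
        heard = sum(said[i + 1:n - 1])                # correct guesses behind him
        said[i] = (said[n - 1] - seen - heard) % 2
    return sum(said[i] == hats[i] for i in range(n - 1))

print(run_prisoners())   # always prints 19: everyone but the last is right
```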
But that was too easy. Now, this is a really big prison and there are countably infinitely many prisoners. How many go free? When they are placed in a row, the row extends to infinity only in one
direction and they are looking in this infinite direction. The last prisoner sees all the other prisoners.
Think about it.
In this case, the answer is: almost all, all but finitely many. Out of all infinite black/white sequences, form equivalence classes where two sequences are equivalent if they differ in at most finitely many places; you could say they are asymptotically equal. Out of these equivalence classes, the axiom of choice allows us to pick one representative of each. The prisoners memorise these representatives (there are only aleph1 many, they have big brains). The next morning, in the courtyard of the prison, all prisoners can see almost all other prisoners, so they know which equivalence class of sequences was chosen by the prison's director. Now, every prisoner announces the color of the memorised representative at his position and, by construction, only finitely many are wrong.
This argument has raised some doubts in me about whether I really want choice to be true. I came to terms with it and would describe my position as agnostic. I mainly try to avoid it and better not rely too much on constructions that invoke it. And for my physics applications this is usually fine.
But it can also be useful in everyday life: My favourite example of it is in the context of distributions. Those are, as you know, continuous linear functionals on test functions. The topology on the test functions, however, is a bit inconvenient, as you have to check that all (infinitely many) partial derivatives converge. So you might try to do the opposite thing: Let's study a linear functional on test functions that is not continuous. It turns out those are harder to get hold of than you might think: You might think this is like linear operators, where continuity is equivalent to boundedness. But this case is different: You need to invoke choice to find one. But this is good, since it implies that every concrete linear functional that you can construct (write down explicitly) is automatically continuous; you don't have to prove anything!
This type of argument is a little dangerous: You really need more than "the usual way to get one is to invoke choice". You really need that it is equivalent to choice. And choice says that for every collection of non-empty sets, the Cartesian product is non-empty. It is the "every" that is important. The collection that consists of copies of the set {apple} trivially has an element in the Cartesian product, it is (apple, apple, apple, ...). And this element is concrete, I just constructed it for you.
This caveat is a bit reminiscent of a false argument that you can read far too often: You show that some class of problems is NP-complete (recent incarnations: deciding if an isolated string theory vacuum has a small cosmological constant, deciding if a spin-chain model is gapped, determining the phase structure of some spin chain model, ...) and then arguing that these problems are "hard to
solve". But this does not imply that a concrete problem in this class is difficult. It is only that solving all problems in this class of problems is difficult. Every single instance of practical
relevance could be easy (for example because you had additional information that trivialises the problem). It could well be that you are only interested in spin chain Hamiltonians of some specific
form and that you can find a proof that all of them are gapped (or not gapped for that matter). It only means that your original class of problems was too big, it contained too many problems that
don't have relevance in your case. This could for example well be for the string theory vacua: In the paper I have in mind, that was modelled (of course actually fixing all moduli and computing the
value of the potential in all vacua cannot be done with today's technology) by saying there are N moduli fields and each can have at least two values with different values of its potential (positive
or negative) and we assume you simply have to add all those values. Is there one choice of the values of all moduli fields such that the sum of the energies is epsilon-close to 0? This turns out to
be equivalent to the knapsack-problem which is known to be NP-complete. But for this you need to allow for all possible values of the potential for the individual moduli. If, for example, you knew
the values for all moduli are the same, that particular incarnation of the problem is trivial. So just knowing that the concrete problem you are interested in is a member of a class of problems that
is NP-complete does not make that concrete problem hard by itself.
What is your attitude towards choice? When is the argument "Here is a concrete, constructed example of a thing x. I am interested in some property P of it. To show there are y that don't have property P, I need to invoke choice. Does this prove x has property P?" to be believed?
9 comments:
The reason you can't find it is because it wasn't a Terry Tao post (although he did comment on it). It's at the Everything Seminar. Also, after being very offended when I saw an economist use
Hahn-Banach to prove a theorem in economics (indicative of what's very wrong in econ theory, but that's another rant), I did a little digging, and you don't actually need choice (or the BPIT,
more precisely) in the case of a separable space, where there's a constructive version.
Main concepts
• The tests in this unit are different from the earlier ones in several ways. First of all, the focus is not on estimating a parameter, so there will be nothing about confidence intervals here.
Second, the distribution family we'll be using for our sampling distribution is not symmetric and bell-shaped like the normal and t distributions. Third, the data we'll be looking at are strictly
• We have seen several statistics that all had approximately t distributions. Similarly, in this unit we'll look at three contexts in which the preferred test statistic has a chi-squared distribution.
These tests are not the same even though they have the same name (chi-square) and approximately the same distribution. These three tests are the Test of Independence, the Test of Homogeneity and the
Goodness of Fit Test. Keep them distinct.
• The "goodness-of-fit test" is a way of determining whether a set of categorical data came from a claimed discrete distribution or not. The null hypothesis is that they did and the alternate
hypothesis is that they didn't. It answers the question: are the frequencies I observe for my categorical variable consistent with my theory? The goodness-of-fit test expands the one-proportion
z-test. The one-proportion z-test is used if the outcome has only two categories. The goodness-of-fit test is used if you have two or more categories.
• The "test of homogeneity" is a way of determining whether two or more sub-groups of a population share the same distribution of a single categorical variable. For example, do people of different
races have the same proportion of smokers to non-smokers, or do different education levels have different proportions of Democrats, Republicans, and Independent. The test of homogeneity expands on
the two-proportion z-test. The two proportion z-test is used when the response variable has only two categories as outcomes and we are comparing two groups. The homogeneity test is used if the
response variable has several outcome categories, and we wish to compare two or more groups.
• The "test of independence" is a way of determining whether two categorical variables are associated with one another in the population, like race and smoking, or education level and political affiliation. In the probability unit we looked at this question without paying attention to the variability of our sample. Now we will have a method for deciding whether our observed P(A|B) is "too far" from our observed P(A) to conclude independence (see the sketch after this list).
• If you're thinking, "homogeneity and independence sound the same!", you're nearly right. The difference is a matter of design. In the test of independence, observational units are collected at
random from a population and two categorical variables are observed for each unit. In the test of homogeneity, the data are collected by randomly sampling from each sub-group separately. (Say, 100
blacks, 100 whites, 100 American Indians, and so on.) The null hypothesis is that each sub-group shares the same distribution of another categorical variable. (Say, "chain smoker", "occasional
smoker", "non-smoker".) The difference between these two tests is subtle yet important.
• Note that in the test of independence, two variables are observed for each observational unit. In the goodness-of-fit test there is only one observed variable.
• As with all other tests, certain conditions must be checked before a chi-square test of anything is carried out. See the Teaching Tips for more on this.
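A minimal sketch of the goodness-of-fit and independence tests with made-up counts (assuming SciPy is available; the numbers are illustrative only):

```python
from scipy import stats

# Goodness of fit: are observed counts consistent with a claimed distribution?
observed = [18, 55, 27]                    # e.g. counts in three categories
expected = [0.25 * 100, 0.50 * 100, 0.25 * 100]
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"goodness of fit: chi2 = {chi2:.2f}, p = {p:.3f}")

# Independence: two categorical variables observed on one random sample
table = [[30, 20],                         # rows: smoker / non-smoker
         [10, 40]]                         # columns: two education levels
chi2, p, dof, exp_counts = stats.chi2_contingency(table)
print(f"independence: chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```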
Gravitational Potential Energy MDCAT MCQs with Answers - Youth For Pakistan
Welcome to the Gravitational Potential Energy MDCAT MCQs with Answers. In this post, we have shared Gravitational Potential Energy Multiple Choice Questions and Answers for PMC MDCAT 2024. Each
question in MDCAT Physics offers a chance to enhance your knowledge regarding Gravitational Potential Energy MCQs in this MDCAT Online Test.
Gravitational Potential Energy MDCAT MCQs Test Preparations
Gravitational potential energy is defined as the energy an object has due to:
a) Its position relative to a reference point
b) Its velocity
c) Its temperature
d) Its shape
a) Its position relative to a reference point
If the height of an object is increased from 2 meters to 4 meters, its gravitational potential energy:
a) Doubles
b) Quadruples
c) Halves
d) Remains the same
a) Doubles
The gravitational potential energy of a 10 kg object at a height of 5 meters is:
a) 50 Joules
b) 100 Joules
c) 500 Joules
d) 5 Joules
c) 500 Joules
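The numeric items in this set are based on PE = mgh; a minimal sketch of the arithmetic, taking g = 10 m/s² (our assumption, which matches the 500 J item above; the function name is ours):

```python
def gpe(mass_kg, height_m, g=10.0):
    """Gravitational potential energy PE = m * g * h, in joules."""
    return mass_kg * g * height_m

print(gpe(10, 5))   # 500.0 J, matching the item above
```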
Which of the following does NOT affect the gravitational potential energy of an object?
a) Mass
b) Height
c) Acceleration due to gravity
d) Speed
d) Speed
The unit of gravitational potential energy is:
a) Joule
b) Newton
c) Watt
d) Meter
a) Joule
If the mass of an object is doubled and its height remains constant, its gravitational potential energy:
a) Doubles
b) Quadruples
c) Halves
d) Remains the same
a) Doubles
Gravitational potential energy is greatest when an object is:
a) At its highest point
b) At its lowest point
c) In motion
d) At rest
a) At its highest point
The gravitational potential energy of an object is zero when:
a) It is at the reference point
b) It is moving
c) It is compressed
d) It is heated
a) It is at the reference point
To increase the gravitational potential energy of an object, you can:
a) Increase its height
b) Decrease its mass
c) Decrease its velocity
d) Decrease its height
a) Increase its height
The gravitational potential energy of a 2 kg object at a height of 8 meters is:
a) 16 Joules
b) 80 Joules
c) 8 Joules
d) 32 Joules
The work done to lift an object to a certain height is equal to its:
a) Gravitational potential energy
b) Kinetic energy
c) Elastic potential energy
d) Thermal energy
a) Gravitational potential energy
A 5 kg object at a height of 2 meters has a gravitational potential energy of:
a) 10 Joules
b) 50 Joules
c) 20 Joules
d) 100 Joules
d) 100 Joules
The gravitational potential energy of an object is directly proportional to:
a) Its height and mass
b) Its speed
c) Its temperature
d) Its volume
a) Its height and mass
The gravitational potential energy of an object is converted to:
a) Kinetic energy when it falls
b) Elastic potential energy when it is stretched
c) Chemical energy when it reacts
d) Thermal energy when heated
a) Kinetic energy when it falls
The gravitational potential energy of an object with a mass of 3 kg at a height of 4 meters is:
a) 12 Joules
b) 24 Joules
c) 30 Joules
d) 40 Joules
The potential energy of an object depends on:
a) The reference point chosen
b) The time it has been at height
c) Its initial velocity
d) Its temperature
a) The reference point chosen
When an object is lowered to a lower height, its gravitational potential energy:
a) Decreases
b) Increases
c) Remains constant
d) Becomes negative
a) Decreases
The gravitational potential energy of a 6 kg object at a height of 3 meters is:
a) 18 Joules
b) 60 Joules
c) 12 Joules
d) 30 Joules
To find the gravitational potential energy of an object, you need to know:
a) Its mass, height, and gravitational acceleration
b) Its temperature and volume
c) Its speed and shape
d) Its density and color
a) Its mass, height, and gravitational acceleration
The gravitational potential energy of an object is maximum at:
a) The highest point of its trajectory
b) The lowest point of its trajectory
c) The midpoint of its trajectory
d) The point of maximum speed
a) The highest point of its trajectory
The change in gravitational potential energy of a 4 kg object when it is raised by 2 meters is:
a) 8 Joules
b) 16 Joules
c) 4 Joules
d) 2 Joules
The gravitational potential energy of an object can be increased by:
a) Raising the object
b) Increasing its speed
c) Heating the object
d) Reducing its volume
a) Raising the object
The gravitational potential energy of an 8 kg object at a height of 10 meters is:
a) 80 Joules
b) 160 Joules
c) 40 Joules
d) 20 Joules
If the acceleration due to gravity were to decrease, the gravitational potential energy of an object would:
a) Decrease
b) Increase
c) Remain unchanged
d) Double
a) Decrease
The gravitational potential energy of a 1 kg object at a height of 15 meters is:
a) 15 Joules
b) 30 Joules
c) 45 Joules
d) 10 Joules
The work done to lift an object is equal to its change in:
a) Gravitational potential energy
b) Kinetic energy
c) Elastic potential energy
d) Thermal energy
a) Gravitational potential energy
The reference point for measuring gravitational potential energy is:
a) Arbitrary and can be chosen based on convenience
b) Always the ground
c) The center of the Earth
d) The top of the object
a) Arbitrary and can be chosen based on convenience
The gravitational potential energy of a 12 kg object at a height of 5 meters is:
a) 60 Joules
b) 120 Joules
c) 50 Joules
d) 100 Joules
The gravitational potential energy of an object decreases when:
a) It is lowered
b) It is raised
c) Its mass is increased
d) Its velocity is increased
a) It is lowered
The gravitational potential energy of a 3 kg object at a height of 7 meters is:
a) 21 Joules
b) 14 Joules
c) 30 Joules
d) 7 Joules
Which factor does NOT affect the gravitational potential energy of an object?
a) Its color
b) Its mass
c) Its height
d) Acceleration due to gravity
a) Its color
If an object of mass 2 kg is raised from 5 meters to 10 meters, the change in gravitational potential energy is:
a) 100 Joules
b) 50 Joules
c) 20 Joules
d) 30 Joules
A 4 kg object at a height of 2 meters has a gravitational potential energy of:
a) 8 Joules
b) 16 Joules
c) 12 Joules
d) 4 Joules
The potential energy of an object in a gravitational field depends on:
a) Its mass and height
b) Its volume and temperature
c) Its shape and speed
d) Its density and color
a) Its mass and height
The gravitational potential energy of a 7 kg object at a height of 9 meters is:
a) 63 Joules
b) 81 Joules
c) 72 Joules
d) 45 Joules
Which of the following increases the gravitational potential energy of an object?
a) Increasing the object’s height
b) Decreasing the object’s mass
c) Decreasing the object’s height
d) Reducing the object’s speed
a) Increasing the object’s height
The gravitational potential energy of an object at a height of 10 meters with a mass of 4 kg is:
a) 40 Joules
b) 100 Joules
c) 60 Joules
d) 20 Joules
An object with a mass of 5 kg is lifted from 3 meters to 8 meters. The work done is equal to:
a) 25 Joules
b) 50 Joules
c) 100 Joules
d) 20 Joules
If you are interested in enhancing your knowledge of Physics, Chemistry, Computer Science, and Biology, please click on the link for each category; you will be redirected to a dedicated website for each subject. | {"url":"https://youthforpakistan.org/gravitational-potential-energy-mdcat-mcqs/","timestamp":"2024-11-02T02:22:00Z","content_type":"text/html","content_length":"240695","record_id":"<urn:uuid:5d7f610c-add9-491d-bb52-6b6d061bb81d>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00197.warc.gz"}
Helping Tweens with Math
Tweens are at a school level where math goes beyond the scope of what parents remember. Fractions, word problems, and the dreaded subject of algebra make the task of helping a lot harder. Parents
can help overcome these math problems with these tips.
Remember that solving math problems is a process. The answer is far less important than how you arrive at the destination. In many cases, teachers will require that the work is shown. So when you
want to help, take a look at how similar questions are solved in either their math book or on the internet.
If you are stumped by math terminology, look up the definition of the words. It is fine if you as a parent have forgotten what a math word actually means. Many math terms are related to certain
equations. Once you figure out what the association is, you are that much closer to helping your Tween solve the problem.
The key to word problems is to associate words with their mathematical counterparts. The best approach is to start with the question and look at what the problem is asking you to solve. If you aren’t sure
what term a word relates to, then as with other terminology don’t be afraid to look it up.
Ask for help if the problem is outside your ability to understand it, or if you aren’t quite grasping the concept the teacher wants your Tween to learn. The best source of help is actually the teacher.
Don’t be afraid to approach them when asking for help. You can also find help online from various math forums. Just be aware that you might run into different solutions for the same problem.
Helping Tweens with math is a daunting task for many parents. You don’t have to be one of the helpless ones. Look at the process over the answer, look up math words you aren’t familiar with, and find
help when needed. You don’t need to know all the answers. All you really need is to find a workable solution with your Tween. | {"url":"http://www.yourparentinginfo.com/helping-tweens-with-math/","timestamp":"2024-11-08T04:22:49Z","content_type":"application/xhtml+xml","content_length":"57624","record_id":"<urn:uuid:95df879e-d872-433e-b520-3e1d7ae3883e>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00016.warc.gz"}
Using Card Sorts in Math Class
I like to use card sorts in my math classroom. I've been working on various sets. Recently I created a card sort for identifying the difference between linear and nonlinear functions. The card sort
has 20 cards - 12 equations and 8 tables. Get the link below to the set of cards!
When I use card sorts, I don't always tell students the categories I'm looking for. I might tell them how many categories I want them to create. Keeping the categories open allows for a rich
discussion about what they see on the cards, and how they choose to sort them. It allows students to show off their understanding of math vocabulary and to make connections that I might not have
thought of.
A card sort can be used at several different levels. It can be used as a discovery activity, a pre-test, and a post-test. The sort can be used as both a formative and a summative activity. The card
sort works well if you use math stations. Students can work on the card sorts individually or in pairs. Sorts can be glued into notebooks, or checked as they are completed on the desk.
After sorting the cards, I like to ask students to create another set of cards - for each category. In creating their own set, they demonstrate understanding of the concepts. You can use student
create cards for additional practice during the unit!
Check out the card sort ... it's free ... try it and let me know how it works for you! Links: on my Google Drive, or in my TpT Store.
PS ... How would you use this card sort? What cards would you add? What concepts work well as a sorting activity?
Comments:
1. Awesome! I wish there had been more activities like this when I was younger! Thank you for the freebie--it's a little much for my classroom of 3rd graders but I can use it with the older kids I
tutor. :)
~ Veronica
1. Hi Veronica ... thanks for stopping by. Definitely use this with your tutoring kids. I have a few other freebies at my TpT store that you might want to check out as well.
Funny ... I teach high school and tutor elementary students. I'm on the lookout for activities for grades 2, 3, and 5. I've started to create a few as well.
2. Love this! We did something similar with 7th graders, but they also had cards with a word problem and graph on them. We did it in our proportions lesson, but we hadn't gotten to algebra yet so
matching the formula was the hardest one for them. Thanks for the freebie!
1. Hi Melissa - thank you for stopping by! Love card sorts of all kinds. Your comment may inspire me to work on some word problems to match with equations, graphs, and solutions!
I may have some other activities (including free ones) at my TpT store that would be useful for middle school! | {"url":"http://algebrasfriend.blogspot.com/2013/07/using-card-sorts-in-math-class.html","timestamp":"2024-11-14T18:25:40Z","content_type":"text/html","content_length":"67218","record_id":"<urn:uuid:f26ba3f6-934b-4ea9-9ae6-515c59f80bd8>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00257.warc.gz"}
Rajeev Rajaram
With the Kent Campus from 2014-Current
With the Ashtabula Campus from 2008-2014
Ph.D. in applied mathematics from Iowa State University in 2005
M.S. in applied mathematics from Iowa State University in 2003
M.S. in electrical engineering from Iowa State University in 2001
B. E. (Hons.) in instrumentation engineering from B.I.T.S., Pilani, India in 1998
Scholarly, Creative & Professional Activities:
Books in Print
• Dister, C., Castellani, B., Rajaram, R., Modeling Social Complexity in Infrastructures: A Case-based Approach to Improving Reliability and Resiliency, Edward Elgar Handbook of Research Methods in
Complexity Science, London School of Economics, 2017
• Castellani, B., Rajaram, R., Buckwalter, J.G., Ball, M., Hafferty, F.W. Place and Health as Complex Systems (ISBN 978-3-319-09733-6), Springer Briefs on Public Health, 2014
Articles in Print
• Castellani, B., Griffiths, F., Rajaram, R., Gunn, J., Exploring comorbid depression and health trajectories: A case-based computational modelling approach, Journal of Evaluation in Clinical
Practice, 2018:1-17, doi: https://doi.org/10.1111/jep.13042
• Rajaram, R., Castellani, B. and Wilson A. N. Advancing Shannon entropy for measuring diversity in Systems, Complexity, vol. 2017, Article ID 8715605, 10 pages, 2017. doi:10.1155/2017/8715605
• Dasgupta, S., Vaidya, U.G., and Rajaram, R. Operator theoretic framework for optimal placement of sensors and actuators for control of nonequilibrium dynamics, Journal of Mathematical Analysis
and its Applications, 2016
• Castellani, B. and Rajaram, R. Past the power law: Complex systems and the limiting law of restricted diversity, Complexity, 2016
• Rajaram, R. and Castellani, B. An entropy based measure for comparing distributions of complexity doi:10.1016/j.physa.2016.02.007, Physica A, 2016
• Castellani, B., Rajaram R., Gunn, J., and Griffiths, F., Cases, clusters, densities: Modeling the nonlinear dynamics of complex health trajectories. Complexity, doi: 10.1002/cplx.21728, 2015
• Rajaram, R. and Castellani, B. The Utility of Non-equilibrium Statistical Mechanics, Specifically Transport Theory, for Modeling Cohort Data. Complexity, doi: 10.1002/cplx.21512, 2014
• Rajaram, R. and Vaidya, U. Lyapunov density for coupled systems. Applicable Analysis, doi: 10.1080/00036811.2014.886105, 2014
• Rajaram, R. and Vaidya, U. Robust stability analysis using Lyapunov density. International Journal of Control, 86(6): 1077-1085, 2013.
• Rajaram, R. and Castellani, B.Modeling Complex Systems Macroscopically: Case/Agent-Based Modeling, Synergetics and the Continuity Equation. Complexity. doi: 10.1002/cplx.21412, 2012
• Vaidya, U., Rajaram, R., & Dasgupta, S. Actuator and Sensor placement in a linear advection PDE, J. Math. Anal. Appl. 394, pp. 213-224, 2012
• Castellani, B. and Rajaram, R.: Case-based modeling and the sacs toolkit: A mathematical outline. Journal of Computational and Mathematical Organization Theory, 18(2): 153-174, 2012.
• Castellani, B., Rajaram, R., Buckwalter, JG., Ball, M., and Hafferty, F. “Place and Health as Complex Systems: A Case Study and Empirical Test.” Proceedings of the Complexity in Health Group,
Kent State University at Ashtabula, 1(1):1-35, 2012.
• Rajaram, R., Vaidya, U., Fardad, M., & Ganapathysubramanian, B. Stability in the almost everywhere sense: a linear transfer operator approach, J. Math. Anal. Appl., 368, pp. 144-156, 2010.
• Rajaram, R., & Najafi, M. Exact controllability of a system of coupled strings in parallel, Applicable Analysis, 89(5), pp. 677-691, May 2010.
• Rajaram, R., & Najafi, M. Exact controllability of wave equations in Rn coupled in parallel, J. Math. Anal. Appl., 356, pp. 7-12, 2009.
• Rajaram, R., & Najafi, M. Analytical treatment and convergence of the Adomian Decomposition Method for a system of coupled damped wave equations. Applied Mathematics and Computation, 212, pp.
72-81, 2009.
• Rajaram, R. Exact boundary controllability of the linear advection equation. Applicable Analysis, 88(1), pp. 121-129, January 2009.
• Rajaram, R. Exact boundary controllability results for a Rao-Nakra sandwich beam. Systems and Control Letters, 56(7-8), pp. 558-567, 2007.
• Rajaram, R. & S.W. Hansen Null controllability of a damped Mead-Markus sandwich beam. Discrete and Continuous Dynamical Systems (Supplemental Volume), pp: 746-755, 2005.
• Hansen, S.W., & Rajaram, R. Riesz basis property and related results for a Rao-Nakra Sandwich Beam. Discrete and Continuous Dynamical Systems (Supplemental Volume), pp.365-375, 2005.
Research Areas:
• Control theory of partial differential equations
• Complexity Science
Ph.D. (Applied Mathematics) - Iowa State University
M.S. (Applied Mathematics) - Iowa State University
M.S. (Electrical Engineering) - Iowa State University
B. E. (Hons., Instrumentation Engineering) - B.I.T.S., Pilani, India
Control Theory
Partial Differential Equations
Stability Theory
Ordinary Differential Equations
Complexity Science | {"url":"https://www.kent.edu/math/profile/rajeev-rajaram","timestamp":"2024-11-12T13:37:46Z","content_type":"text/html","content_length":"133833","record_id":"<urn:uuid:2dcd39d8-5e11-449b-832a-805e1708d0ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00627.warc.gz"} |
binary conversion calculator | {"url":"https://topic.alibabacloud.com/zqpop/binary-conversion-calculator_168183.html","timestamp":"2024-11-08T15:01:30Z","content_type":"text/html","content_length":"102428","record_id":"<urn:uuid:b24b258d-d5d4-48c5-9992-6de1b1599467>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00723.warc.gz"}
Open System
Ehresmann’s connection
Consider a fibre bundle
The chart
with the transformation laws:
and the inverse
Vector fields
Remark: In principle we should use a different notation for vector fields
Let us denote by
Remark: In Finsler geometry instead of
The functions
Suppose now we have two charts,
Substituting now the definitions we have
Comparing the terms we find that
and therefore
We can distinguish two special classes of transformations. First class consists of changes of coordinates on
The second class consists of transformations of the form
Remark: If
In this case the last formula reads:
Given an Ehresmann connection represented by horizontal vector fields
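For reference, a standard local-coordinate sketch of the objects discussed here (the conventions, including the symbol $N^a_i$ for the connection coefficients and the sign of the curvature, are assumptions, since notation varies between sources):

```latex
% Horizontal lifts of a nonlinear (Ehresmann) connection in local
% coordinates (x^i, y^a), and the curvature measuring non-integrability:
\delta_i = \frac{\partial}{\partial x^i} - N^a_i(x,y)\,\frac{\partial}{\partial y^a},
\qquad
[\delta_i, \delta_j] = -R^a_{ij}\,\frac{\partial}{\partial y^a},
\qquad
R^a_{ij} = \delta_i N^a_j - \delta_j N^a_i .
```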
More generally, for any two vector fields
Under coordinate transformations
Under fiber reparametrization we have:
Connection form
It is convenient to code the connection given by a horizontal distribution in one geometrical object. This can be done by introducing the curvature form – a one-form on
The curvature form vanishes automatically on the horizontal vectors | {"url":"https://arkadiusz-jadczyk.eu/blog/2010/12/20/","timestamp":"2024-11-03T01:18:10Z","content_type":"text/html","content_length":"78693","record_id":"<urn:uuid:3d4d1733-21df-4ec4-94c0-bd7e8eab512b>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00162.warc.gz"} |
discrete mathematical model
In this study, we propose a discrete time mathematical model (SEIQR) that describes the dynamics of monkeypox within a human population. The studied population is divided into five compartments:
susceptible ($S$), exposed ($E$), infected ($I$), quarantined ($Q$), and recovered ($R$). Also, we propose an optimal strategy to fight against the spread of this epidemic. In this sense we use
three controls which represent: 1) the awareness of vulnerable people through the media, civil society and education; 2) the quarantine of infected persons at home or, if required, in hospital. | {"url":"https://science.lpnu.ua/keywords-paper/discrete-mathematical-model","timestamp":"2024-11-13T13:58:31Z","content_type":"text/html","content_length":"16712","record_id":"<urn:uuid:f9f46d01-a279-46b6-a7c0-33077452f84b>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00632.warc.gz"}
[SNU Number Theory Seminar 6 ~ 7 Feb] Prismatic F-crystals and applications to p-adic Galois representations
• Date: 2023-02-06 ~ 02-07 11:00 ~ 12:00
• Place: 27-325 (SNU)
• Speaker: Yong Suk Moon (BIMSA)
• Title: Prismatic F-crystals and applications to p-adic Galois representations
• Abstract:
Prismatic cohomology, recently developed by Bhatt and Scholze, is a p-adic cohomology theory unifying étale, de Rham, and crystalline cohomology. In this series of two talks, we will discuss
its central object of study called prismatic F-crystals, and some applications to studying p-adic Galois representations. The first part will be mainly devoted to explaining motivational background
on the topic. Then we will discuss the relation between prismatic F-crystals and crystalline local systems on p-adic formal schemes, and talk about applications to the purity of crystalline local systems and to crystalline deformation rings. If time permits, we will also discuss recent work in progress on log prismatic F-crystals and semistable local systems. A part of the results is based on joint
work with Du, Liu, Shimizu. | {"url":"https://qsms.math.snu.ac.kr/index.php?mid=board_sjXR83&l=en&page=7&sort_index=readed_count&order_type=asc&document_srl=2450","timestamp":"2024-11-09T13:25:15Z","content_type":"text/html","content_length":"65784","record_id":"<urn:uuid:8bde0812-1910-404a-bbe3-b32bb44e7dd7>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00781.warc.gz"} |
Loads are the vehicle forces exerted on the pavement (e.g., by trucks, heavy machinery, airplanes). Since one of the primary functions of a pavement is load distribution, pavement design must account for the expected lifetime traffic loads. Loads can be characterized by tire loads, axle and tire configurations, load repetition, traffic distribution across the pavement and vehicle speed.
Load Characterization
• Tire Loads. Tire loads are the fundamental loads at the actual tire-pavement contact points.
• Axle and tire configurations. While the tire contact pressure and area is of concern, the number of contact points per vehicle and their spacing is critical. As tire loads get closer together
their influence areas on the pavement begin to overlap, at which point the design characteristic of concern is no longer the single isolated tire load but rather the combined effect of all the
interacting tire loads.
• Load repetition. Loads, along with the environment, damage pavement over time. The standard model asserts that each individual load inflicts a certain amount of unrecoverable damage. This
damage is cumulative over the life of the pavement and when it reaches some maximum value the pavement is considered to have reached the end of its useful service life.
• Traffic distribution. On any given road, one direction typically carries more loads than the other. Furthermore, within this one direction, each lane carries a different portion of the
loading. The outer most lane often carries the most trucks and therefore is usually subjected to the heaviest loading.
• Vehicle speed. In general, slower speeds and stop conditions allow a particular load to be applied to a given pavement area for a longer period of time resulting in greater damage. If mix
design or structural design have been inadequate, this behavior is sometimes evident at bus stops (where heavy buses stop and sit while loading/unloading passengers) and intersection approaches
(where traffic stops and waits to pass through the intersection).
WAPA Pavement Note on Loads
The Washington State load limits are:
□ Tires = 600 lbs/inch of tire width.
□ Single Axle = 20,000 lbs.
□ Single axle with dual tires = 500 lbs/inch of tire width.
□ Tandem axle = 34,000 lbs.
□ Gross vehicle weight = 105,500 lbs.
WSDOT has a publication on load limits and why they exist here.
Load Quantification
Pavement structural design requires a quantification of all expected loads a pavement will encounter over its design life. This quantification is usually done in one of two ways:
Equivalent single axle loads (ESALs). This approach converts wheel loads of various magnitudes and repetitions (“mixed traffic”) to an equivalent number of “standard” or “equivalent” loads based on
the amount of damage they do to the pavement. The commonly used standard load is the 18,000 lb. equivalent single axle load. Using the ESAL method, all loads (including multi-axle loads) are
converted to an equivalent number of 18,000 lb. single axle loads and this number is then used for design. A “load equivalency factor” represents the equivalent number of ESALs for the given
weight-axle combination. As a rule-of-thumb, the load equivalency of a particular load (and also the pavement damage imparted by a particular load) is roughly related to the load by a power of four
(for reasonably strong pavement surfaces). For example, a 36,000 lb. single axle load will cause about 16 times the damage as an 18,000 lb. single axle load. Table 1 shows some typical load
equivalencies (note that spreading a load out over two closely spaced axles reduces the number of ESALs). Figure 3, using some approximations, shows some general vehicle load equivalencies – note
that buses tend have high load equivalency factors because although they may be lighter than a loaded 18-wheeler, they only have two or three axles instead of five.
Table 1: Example Load Equivalencies
│ Load │ Number of ESALs │
│ 18,000 lb. single axle │ 1.000 │
│ 2,000 lb. single axle │ 0.0003 │
│ 30,000 lb. single axle │ 7.9 │
│ 18,000 lb. tandem axle │ 0.109 │
│ 40,000 lb. tandem axle │ 2.06 │
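The fourth-power rule-of-thumb described above can be sketched in a few lines of Python (an approximation only; actual design work uses the AASHTO load equivalency tables rather than this formula):

```python
def esals(axle_load_lb: float, standard_lb: float = 18000.0) -> float:
    """Approximate ESALs via the fourth-power rule: damage ~ (load/standard)^4."""
    return (axle_load_lb / standard_lb) ** 4

print(round(esals(30000), 1))  # ~7.7, close to the tabulated 7.9
print(round(esals(36000), 1))  # 16.0: double the load, roughly 16x the damage
```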
Load spectra. This approach characterizes loads directly by number of axles, configuration and weight. It does not involve conversion to equivalent values. Structural design calculations using
load spectra are generally more complex than those using ESALs because loading cannot be reduced to one equivalent number. Load spectra are used in both the AASHTO 1993/1998 Pavement Design
Methodology and the Mechanistic-Empirical Pavement Design Guide. Both approaches use the same type and quality of data but the load spectra approach has the potential to be more accurate in its load characterization. | {"url":"https://www.asphaltwa.com/loads/","timestamp":"2024-11-07T07:32:34Z","content_type":"text/html","content_length":"216541","record_id":"<urn:uuid:2750d0d1-cfb3-4446-b3ba-02fc8a0fd41c>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00708.warc.gz"}
A system of two linear equations is shown.
5x + 2y = -4
5x + 2y = 1
Which statement is true regarding the solution to this system of linear equations?
A. The system has no solution.
B. The system has one unique solution at (5, 2).
C. The system has one unique solution at (-4, 1).
D. The system has an infinite number of solutions. | {"url":"https://thibaultlanxade.com/general/a-system-of-two-linear-equations-is-shown-5x-2y-4-5x-2y-1-which-statement-is-true-regarding-the-solution-to-this-system-of-linear-equations-a-the-system-has-no-solution-b-the-system-has-one-unique-solution-at-5-2-c-the-system-ha","timestamp":"2024-11-09T00:58:07Z","content_type":"text/html","content_length":"30694","record_id":"<urn:uuid:3053d93a-92ea-4de7-8414-e164c6cc3bb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00393.warc.gz"}
Debate, Oracles, and Obfuscated Arguments — AI Alignment Forum
This post is about recent and ongoing work on the power and limits of debate from the computational complexity point of view. As a starting point our paper Scalable AI Safety via Doubly-Efficient
Debate gives new complexity-theoretic formalizations for debate. In this post we will give an overview of the model of debate in the paper, and discuss extensions to the model and their relationship
to obfuscated arguments.
High-level Overview
At a high level our goal is to create complexity-theoretic models that allow us to productively reason about different designs for debate protocols, in such a way as to increase our confidence that
they will produce the intended behavior. In particular, the hope would be to have debate protocols play a role in the training of aligned AI that is similar to the role played by cryptographic
protocols in the design of secure computer systems. That is, as with cryptography, we want to have provable guarantees under clear complexity-theoretic assumptions, while still matching well to the
actual in-practice properties of the system.
Towards this end, we model AI systems as performing computations, where each step in the computation can be judged by humans. This can be captured by the classical complexity theoretic setting of
computation relative to an oracle. In this setting the headline results of our paper state that any computation by an AI that can be correctly judged with $T$ queries to human judgement, can also be correctly judged with a constant (independent of $T$) number of queries when utilizing an appropriate debate protocol. Furthermore, whichever AI is arguing for the truth in the debate need only utilize $O(T \log T)$ steps of computation, even if the opposing AI debater uses arbitrarily many steps. Thus, our model allows us to formally prove that, under the assumption that the computation in question can be broken down into human-judgeable steps, it is possible to design debate protocols where it is harder (in the sense of computational complexity) to lie than to refute a lie.
One natural complaint about this result is that there may be computations which cannot be broken down into human-judgeable steps. However, if you believe the extended Church-Turing thesis (that any
computation can be simulated with only a polynomial-time slow-down on a Turing machine), then you cannot make the above complaint in its strongest form. After all, human judgement is a computation
and whatever the AI is doing is a computation, so there can only be a polynomial-time slow-down between the way the AI does a particular computation and the way that a human could. That said, it is
entirely valid to believe the extended Church-Turing thesis and make a weaker form of the above complaint, namely that the number of AI-judgeable steps might be polynomially less than the number of
human-judgeable steps! If this polynomial is, say, $T \mapsto T^2$, then the number of steps in the human-judgeable form of the computation can easily be so long as to be completely infeasible for the AI to produce,
even when the AI-judgeable form is quite short.
The fact that the human-judgeable version of a computation can be too long leads to the need for debate protocols that utilize the short AI-judgeable computation as a guide for exploring the long
human-judgeable form. In particular, one might try to recursively break an AI-judgeable computation down into simpler and simpler subproblems, where the leaves of the recursion tree are
human-judgeable, and then use some debate protocol to explore only a limited path down the tree. As we will later see, natural designs for such protocols run into the obfuscated arguments problem: it
is possible to break a problem down into subproblems where both AI debaters know that the answer to at least one of the subproblems is incorrect, but neither knows which one.
In the second half of this post, we will discuss how to formalize the obfuscated arguments problem in an extension of our complexity-theoretic model for debate, along with one possible approach
towards dealing with this problem. At a high-level the idea will be that whenever the first debater breaks a problem down into subproblems, the second debater should be allowed to challenge the first
debater to solve "related" subproblems, in order to demonstrate that this class of subproblems is actually solveable. The hope is that, as the second debater is generating the "related" subproblems,
it is possible to plant a "trapdoor" solution even when the class of subproblems is hard to solve in general. The definition of "related" and "trapdoor" can be appropriately formalized in our
extended model. We will later give examples where this hope succeeds and where it fails, with the future goal of converging towards a satisfying solution for handling obfuscated arguments.
The first section will recall the motivation for debate, and introduce the complexity theoretic model from our paper. In the next section, we will discuss the limitations of the model and possible
extensions to address these limitations. We will further show that natural attempts at using recursive debate protocols in the extended model run into the obfuscated arguments problem. In the last
section we will describe one approach toward dealing with obfuscated arguments in recursive debate, and the barriers that this approach faces.
1. (Doubly-Efficient) Debate
The promise of the original debate proposal was that competition between two AIs to convince a human judge can be a significantly more human-judge-efficient way to provide an effective reward signal
for training. The theoretical model of this original proposal assumes that the human judge is a polynomial time algorithm, and the two debating AIs are computationally unbounded. The setup of our
doubly-efficient debate paper differs in two ways. First, we explicitly model human judgements as an oracle that can be queried in one step by any computation in the model. This allows us to make
tighter statements about the number of human-judgement queries required to accurately judge a debate. Second, we require that the strategy used by the honest AI debater (i.e. whichever AI is arguing
for the correct answer) can be implemented computationally efficiently. In particular, the main results of our paper show that the number of queries to human judgement can be a constant, independent
of the original computational cost required to solve the problem in question, and that the honest AI need not spend more than a logarithmic factor more than this original computational cost.
The Efficiency Motivation for Debate
Generally when training AIs to perform complex tasks, we want to be confident that they will carry out these tasks in the way we intended. What is needed for this is the ability to very efficiently
provide feedback that a task was performed correctly. For Go AIs there are efficient algorithms that can determine which player won a game, given the final board configuration. For LLM pre-training
there is a simple formula to efficiently compute the KL-divergence between the actual next token in a document, and the language model's predicted probability distribution of the next token. Finally,
for RLHF tuning of LLMs, human raters can efficiently judge between two sample outputs from the model, which one is better according to fuzzier concepts like helpfulness or harmlessness.
As the tasks we want AIs to perform get more complicated and interact more with the real world, it seems that some amount of fuzzy human judgement is necessary to provide the training signal on what
exactly we want (and also don't want) from the AI performing the task. This immediately runs up against questions of efficiency: human judgement is slow and expensive, so how much of it is really
necessary to train an AI to act as intended?
At first this efficiency issue seems like quite a big problem. Take the example of using some future language model to draft the binding legal text of a law. As vividly demonstrated in "The Power
Broker: Robert Moses and the Fall of New York" by Robert Caro, a drafter of a law with sufficient knowledge can subtly place only a few key phrases in order to amass enormous personal power. Despite
the fact that Robert Moses held no elected office, he was able to write laws and get them passed by the New York State legislature, without the people sponsoring and voting for those laws knowing how
much power they assigned to Moses. Thus, in order to trust the text of a law written by a language model, it seems that one would have to have very knowledgeable human lawyers read and carefully
check the entire output. The cost to provide a supervision signal for just one training example would therefore be enormous. The hope for debate-like approaches to training is that by utilizing
competition between two opposing AIs, one can dramatically improve the efficiency. In the legal text example, this would entail having both Robert Moses and some sort of equally legally-competent
anti-Robert Moses involved in the drafting of laws.
The Complexity-Theoretic Model for Doubly-Efficient Debate
The overall goal is to train AIs that can do things that are too complex for us to judge directly. Here "do things" will mean "solve computational problems" and "complex" will be measured with the
standard notion of time complexity from computational complexity theory. The main question that we want to be able to ask (and partially answer) within our model is:
What is the largest class of problems for which human judgement can efficiently provide effective training supervision?
To make this question formal we have to decide how to model human judgement, what types of efficiency we are concerned with, and what it means for training supervision to be effective. We summarize
here the choices made for each of these.
Human judgement is modeled as an oracle $H$. This means that any computation in our model can, in one step, make a query $z$ to the human judgement oracle $H$ and receive a response $H(z) \in \{0, 1\}$, representing a yes or no answer to the query $z$.
There are two types of efficiency to consider:
• Efficiency for the AIs: How much more computationally expensive is it to use our training method to supervise solving a computational problem, relative to the cost of directly solving the problem?
• Efficiency for the human judge: How many queries to the human judgement oracle are needed for supervision?
• Effective training supervision should produce the same training incentives that would be produced if unlimited human judgement was available (i.e. if there was no limit to the number of queries
to $H$).
Note that the notation $\mathcal{O}$ is used for the oracle in our paper instead of $H$, but it will be useful to have different notation for the extensions of the model discussed later in this post.
Classes of Computational Problems in the Model
Now that we have made these modelling decisions we can describe the two main types of computational problems we want to solve.
Execute a "program" described in natural language: A problem from this class is described by a program , which is a set of natural language instructions that can further include calls to the
human-judgement oracle . The goal is given an input , to determine whether or . This class of problems corresponds to the complexity class , i.e. these are the problems where there exists an
efficient program to decide the answer when given black-box access to .
For example, suppose a researcher wants to perform a meta-analysis of the existing literature on whether caffeine improves learning in college students. The researcher could give a high-level plan in
natural language for how to accomplish this task:
1. Search the literature for studies on caffeine consumption and learning in college students.
2. Among the studies found, select those studies which meet the following quality criteria:
□ Do not include any studies that contain clear methodological flaws.
□ Only include randomized control trials which measure final exam performance after at least six weeks of intervention.
□ Perform the actual statistical analysis using a random effects model.
3. Measure sources of heterogeneity between the included studies using meta-regression.
The above plan is the program $P$, and the input $x$ is the existing scientific literature. Notice that steps such as the first bullet point under 2 require calls to the human judgement oracle $H$, because
whether a study contains "clear" methodological flaws is a question of human judgement. One can imagine giving these instructions to an LLM that has extensive pre-trained knowledge of human
judgement, and asking the LLM to follow the plan by predicting, for each oracle query to $H$, what a human expert would have answered, had they been asked the query. Of course, it is unclear why we would
trust the output of such a process without actually checking every one of the predicted oracle answers given by the LLM. Therefore it is at least naively prohibitively expensive to attempt to train a
meta-analysis LLM by providing a direct human supervision signal by answering all of the queries via human raters.
Find a solution to a problem, where the solution can be checked by a "program" described in natural language:
A problem from this class is again described by a program $P$, which is a set of natural language instructions that can further include calls to the human-judgement oracle $H$. However, the goal has changed. Instead we ask: given an input $x$, determine if there is a solution $w$ (also called a witness) so that $P^H(x, w) = 1$. This class of problems corresponds to the complexity class $\mathsf{NP}^H$, i.e. these are the problems where there exists an efficient program to check the correctness of a proposed solution when given black-box access to $H$.
To continue in the setting of the previous example, assume that the researcher just asks the question ``Does caffeine improve learning in college students? Produce a high-level plan to answer this
question, give a step-by-step justification of the correctness of the plan, and then execute the plan.'' Here again the input $x$ is the existing scientific literature, and the high-level plan plus the justification for its correctness is the witness $w$. The program $P$ is given by the natural-language instructions (implicit in the prompt above):
1. Go through every step of the plan and the corresponding justification and check that it is clearly correct.
2. Execute the plan.
Clearly many calls to the human-judgement oracle $H$ are necessary to execute $P$ with input $x$ and witness $w$. As before we could ask an LLM to solve such a problem by first producing the witness $w$ and then checking that it is correct by making predictions of human judgements for all the oracle queries made by $P$. Once again, it is unclear why we would trust the output of such a process without manually
checking that every predicted response to a query is correct, which would be a prohibitive human-query cost for training supervision.
Main Results: Debate Protocols
With our model in hand, we can describe the debate protocols we design. At a high-level a debate protocol is a game played between two AIs, where human judgements are used to determine the winner.
The goal is to design the rules of the game so that the winner reveals the true answer to a computational problem, and further that this can be done efficiently (both in terms of human judgements and
AI computational effort). The AIs will be trained via self-play to win the game defined by the protocol.
More formally, there are two AIs $A$ and $B$, along with a verifier $V$. Given some computational problem in $\mathsf{P}^H$ (defined by a program $P$) and an input $x$, the AI $A$ argues that $P^H(x) = 1$ and $B$ argues that $P^H(x) = 0$. In particular, the AIs $A$ and $B$ exchange messages over multiple rounds, and the verifier $V$ selectively reads parts of the transcript of exchanged messages. At the end of the interaction $V$ outputs 1 to indicate that $V$ has been convinced that $P^H(x) = 1$ and outputs 0 otherwise. Let $T$ be the time required to execute $P$. That is, $T$ represents the computational cost of finding the correct answer if we trust that every step has been performed correctly. Since an oracle query can be performed in one step, this means that the total human-judgement cost of executing $P$ could be as large as $T$. The effectiveness and efficiency goals of our debate protocols are then:
1. The honest AI (i.e. $A$ if $P^H(x) = 1$ or $B$ if $P^H(x) = 0$) can win the debate while running in time $O(T \log T)$.
2. The verifier $V$ makes only $O(1)$ oracle queries and runs in time polylogarithmic in $T$.
3. At the end of the debate $V$ outputs $P^H(x)$.
It is important to note that the human-judgement cost is measured solely by the number of queries $V$ makes to $H$. In particular, we model $A$ and $B$ as being allowed to make queries to $H$ in one step, but we do not count these towards the human-oracle cost. Instead, the queries to $H$, along with their answers, should be thought of as $A$ and $B$ predicting what a human judge would say, were they asked the query.
After all of this setup, the protocol achieving our goals is extremely simple:
1. $A$ writes out a transcript of the execution of $P$ on input $x$.
2. $B$ points out one step in the transcript where $A$ has allegedly not executed correctly (either by not following the instructions defined by $P$, or by giving a predicted answer to a query to $H$ incorrectly).
3. $V$ reads the step pointed to by $B$, and judges whether it is correct or not by making at most one query to $H$. If the step was correct, $V$ outputs 1, otherwise $V$ outputs 0.
This protocol can be immediately extended to problems in $\mathsf{NP}^H$, by having $A$ first output a witness $w$, and then running the above protocol with the transcript of the execution of $P$ on $(x, w)$. The main result in our paper extends what is described here by allowing the human judgement oracle $H$ to be stochastic. The main idea in this case is that $A$ should output a predicted probability for the answer from $H$ in each step of the transcript, and then use a coin-flipping protocol with $B$ to sample the answer from this predicted distribution. $B$ is then allowed to point out a step where $A$ has incorrectly predicted the probability, whereupon $V$ samples from $H$ in order to estimate whether the predicted probability is close enough to the correct answer.
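A minimal executable sketch of the deterministic version of this protocol, with the debaters and the human oracle mocked as Python callables. All names here are illustrative rather than from the paper, and the sketch only checks oracle-answer steps, eliding the check that $A$ followed the instructions of $P$:

```python
from typing import Callable, List, Tuple

Step = Tuple[str, int]        # one transcript step: (query to H, A's predicted answer)
Transcript = List[Step]

def verify_debate(
    transcript_from_A: Transcript,                   # A's claimed execution of P on x
    challenge_from_B: Callable[[Transcript], int],   # B picks one disputed step index
    human_oracle: Callable[[str], int],              # H; each call is one human judgement
) -> int:
    """Verifier V: spend a single human-judgement query on the challenged step."""
    i = challenge_from_B(transcript_from_A)
    query, predicted = transcript_from_A[i]
    return 1 if human_oracle(query) == predicted else 0
```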
2. Extensions of the Model and Obfuscated Arguments
One way to summarize the main results we have described so far is: given any problem where a correct answer can be verified with $T$ queries to human judgement, our debate protocol can spend only $O(1)$ queries to human judgement along with a total of $O(T \log T)$ computation time, and still get the correct answer. A very natural complaint at this point is that for the problems we care about the number of steps of human judgement required to verify a solution might be extremely long. In particular, it may be that AIs can solve the problem in some much smaller number of steps $T' \ll T$, but these steps are not human judgeable. For example, what if $T$ is on the order of the size of the entire pre-training dataset for an LLM, but during training the LLM has learned very efficient heuristics for the problem which take only time $T'$. This makes the guarantee of $O(T \log T)$ total computation time useless in practice.
Adding a Debater Oracle
In order to extend our model to capture this setting, we modify our original setup by introducing a second oracle. In addition to the human judgement oracle $H$, there will be a "debater AI heuristic" oracle $M$. To capture the fact that $M$ might know the answer to a larger set of queries than $H$, we further extend the range of both oracles so that for a query $z$ we have $H(z), M(z) \in \{0, 1, \perp\}$. In particular, outputting $\perp$ means "I don't know the answer," and we will assume that $H(z) \neq \perp$ implies $M(z) = H(z)$, i.e. that $M$ knows the answer to all the human judgements given by $H$. However, we will allow there to be $z$ such that $M(z) \neq \perp$ but $H(z) = \perp$, i.e. queries where $M$ knows the answer but $H$ does not.
Of course there are pairs of oracles where $M$ is much more powerful than $H$ and there is no hope of doing anything interesting with access only to $H$. However, we will make an assumption on the relative power of the two oracles. In particular we will suppose that there is a cost function $c$ and a program $Q$ that can simulate one oracle query $M(z)$ in $c(|z|)$ steps, where $|z|$ is the length of the query $z$. That is, $Q^H(z) = M(z)$ for all $z$, and $Q$ runs in time $c(|z|)$. In words, this means that any single query to $M$ can be broken down into a larger number of human-interpretable steps, where this number is determined by the cost function $c$.
Recursive Debate
In this new two-oracle setting, a natural first attempt to achieve better debater efficiency is via recursion. To give a simple example, suppose that $P^M$ can be executed in $T$ steps, that each query made to $M$ in the execution has length $n$, and that $c(n) = T$. In this case, one can produce a human-judgeable transcript for the execution of $P$ by running $Q$ for each query to $M$. The total length of the transcript is $T \cdot c(n) = T^2$. It is
then tempting to try to gain efficiency via the following depth-two recursive protocol:
1. $A$ writes out a transcript of the execution of $P^M$ on input $x$.
2. $B$ points out one step in the transcript of $P$ where $A$ has allegedly not executed correctly (either by not following the instructions defined by $P$, or by giving a predicted answer to a query to $M$ incorrectly).
3. $A$ writes out the transcript of the execution of $Q$ for the query to $M$ pointed out by $B$.
4. $B$ points out one step in the transcript of $Q$ where $A$ has allegedly not executed correctly.
5. $V$ reads the final step pointed to by $B$, and judges whether it is correct or not by making at most one query to $H$. If the step was correct, $V$ outputs 1, otherwise $V$ outputs 0.
The total number of steps simulated by $A$ in this protocol is $2T$ rather than $T^2$, a quadratic improvement in efficiency. As before, the protocol can be extended to problems in $\mathsf{NP}^M$, by requiring $A$ to first output a witness $w$.
The above protocol can be seen as the depth-two version of a fully recursive debate. To formalize the fully recursive case, we will assume that there is a hierarchy of debater oracles $H = M_0, M_1, \dots, M_k$ and programs $Q_1, \dots, Q_k$ such that
1. For any query $z$ such that $M_{\ell - 1}(z) \neq \perp$ we have $M_\ell(z) = M_{\ell - 1}(z)$.
2. For any query $z$ and $\ell \geq 1$ such that $M_\ell(z) \neq \perp$ we have $Q_\ell^{M_{\ell - 1}}(z) = M_\ell(z)$.
3. $Q_\ell$ runs in time $c(|z|)$.
Intuitively, any query that can be answered by $M_\ell$ can be answered by running the program $Q_\ell$ with access only to the oracle $M_{\ell - 1}$, and the human judgement oracle $H = M_0$ is the base case of this recursion. Given such a
setup, the first attempt at a fully recursive debate protocol would take the form:
1. $A$ writes out a transcript of the execution of $Q_\ell$ on the current query (at the top level, of $P^{M_k}$ on the input $x$).
2. $B$ points out one step in the transcript of $Q_\ell$ where $A$ has allegedly not executed correctly (either by not following the instructions defined by $Q_\ell$, or by giving a predicted answer to a query $z$ to $M_{\ell - 1}$ incorrectly).
3. If $\ell - 1 > 1$ then the protocol recurses by returning to step one with $\ell - 1$ in place of $\ell$, input $z$, and program $Q_{\ell - 1}$.
4. If $\ell - 1 = 1$ then $A$ writes out the transcript of the execution of $Q_1^H$ for the query $z$ pointed out by $B$.
5. $B$ points out one step in the transcript of $Q_1$ where $A$ has allegedly not executed correctly.
6. $V$ reads the final step pointed to by $B$, and judges whether it is correct or not by making at most one query to $H$. If the step was correct, $V$ outputs 1, otherwise $V$ outputs 0.
Recursive Debate: A writes out a transcript of a program, B selects a step to recurse corresponding to a query to a debater oracle. The recursion bottoms out with a program that only makes queries to
the human oracle H, and B points out a single human oracle query z for the verifier V to check.
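Continuing the previous sketch (same `Step`/`Transcript` aliases and imports, names still illustrative), the recursive verifier explores a single root-to-leaf path; the sketch elides the consistency check that each sub-transcript's claimed output matches the answer $A$ gave for the disputed step one level up:

```python
def verify_recursive_debate(
    query: str,
    level: int,                                       # M_level is claimed to answer `query`
    transcript_of: Callable[[str, int], Transcript],  # A expands a query at a given level
    challenge: Callable[[Transcript], int],           # B selects one step to recurse on
    human_oracle: Callable[[str], int],               # H = M_0, the base of the hierarchy
) -> int:
    """Verifier V: follow one root-to-leaf path down the recursion tree."""
    steps = transcript_of(query, level)   # transcript of Q_level; steps query M_{level-1}
    sub_query, predicted = steps[challenge(steps)]
    if level == 1:                        # level-1 steps query H directly: judge and stop
        return 1 if human_oracle(sub_query) == predicted else 0
    return verify_recursive_debate(sub_query, level - 1,
                                   transcript_of, challenge, human_oracle)
```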
Thus, it seems at first glance that one can achieve huge efficiency gains via recursion. For example, if $c(n) = T$ at every level, then a depth-$k$ debate protocol only requires total time $O(kT)$, whereas fully expanding all the queries in order to produce a purely human-judgeable transcript would take time $T^k$. However, there is a critical flaw in this plan, and in fact even the depth-two protocol above does not work.
Obfuscated Arguments
The problem with the recursive protocol described above is that in the two-oracle setting there is a new way for $A$ to lie about the transcript. Previously, $A$ would output a prediction of either $0$ or $1$ for each query made to $H$. However, introducing a second oracle requires extending the range of $M$ to include answering $\perp$ to queries where the truth is unknown. Thus, in step 1 when $A$ writes out the transcript of $P^M$, it may be that $M(z) = \perp$ for every query $z$ made in the transcript. In the worst case $A$ could give answers in $\{0, 1\}$ to all the queries such that exactly one answer was incorrect, but neither $A$ nor $B$ know which one (because $M$ always outputs $\perp$). At this point, $B$ has no ability to determine which step to recurse on, and $V$ is forced to check all steps in order to determine if $A$ has lied. This would again require $T \cdot c(n) = T^2$ total steps,
completely eliminating the advantage gained by recursion.
One might hope that it would not be possible for $A$ to achieve this worst-case behavior efficiently, but the cryptographic example of primality testing and factoring from the original post introducing obfuscated arguments shows that it is indeed possible for $A$ to efficiently pursue this strategy. To summarize this example in our context, consider the RSA problem of deciding whether an $n$-digit number $N$ is prime, where $N = pq$ for some appropriately sampled large primes $p, q$ which are kept secret. $A$ could attempt to write out a transcript dividing up the integers from $2$ to $\sqrt{N}$ into intervals, and claiming that the oracle $M$ can be used to determine that $N$ has no prime factor in each interval. While the original problem of primality testing has an efficient algorithm, determining whether $N$ has a prime factor in some interval is computationally hard. Thus, neither $A$ nor $B$ know which interval contains a prime factor of $N$ (i.e. which query to $M$ should output 1), yet $A$ can easily claim that all the intervals have no factors (i.e. that $M$ should always output 0). All that $V$ can do in this case is check every interval, as there will only be at most two intervals where $A$'s claimed answer is incorrect.
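A sketch of how cheaply this dishonest decomposition can be produced (illustrative code; a real obfuscated argument would use cryptographic-size $N$, where checking even one interval is infeasible):

```python
from math import isqrt

def obfuscated_subclaims(N: int, num_intervals: int):
    """Split 'N is prime' into interval subclaims 'N has no factor in [lo, hi]'.

    For N = p*q with secret primes, at most two subclaims are false, but
    locating them is as hard as factoring N -- neither debater knows which.
    """
    top = isqrt(N)
    step = max(1, (top - 1) // num_intervals)
    claims = []
    lo = 2
    while lo <= top:
        hi = min(lo + step - 1, top)
        claims.append(f"N has no prime factor in [{lo}, {hi}]")
        lo = hi + 1
    return claims
```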
Hopes for Avoiding Obfuscation in Recursive Debate
The previous section demonstrates that obfuscated arguments prevent us from realizing the hoped-for efficiency gains from recursion in debate. The hope at this point is that there are still protocols
(possibly with additional assumptions on the model) that allow us to regain this efficiency. One approach we have been exploring is to allow $B$ to not only point out an incorrect step in $A$'s transcript, but to additionally be able to demonstrate that $A$ is falsely claiming $M(z) \neq \perp$. This can easily be done in the cryptographic example above: $B$ can sample a new number $N' = p'q'$, and then present $N'$ to a fresh copy of $A$ that has not seen the transcript for $N$ so far. In this case, $B$ knows $p'$ and $q'$, because $B$ did the sampling, and thus $B$ can win the debate if $A$ lies. On the other hand, if $B$ does not sample from exactly the correct distribution for $p'$ and $q'$, it is possible that $A$ will be able to detect this and answer the queries by correctly claiming that $M(z) = \perp$. Thus, there are two key elements for an approach like this to work in general:
1. $B$ can generate trapdoor instances of the relevant queries, i.e. $B$ can generate a query $z$ and some extra witness $w$ that allows $B$ to efficiently answer the query even though $M(z) = \perp$.
2. $A$ cannot distinguish these trapdoor instances from the true distribution on problems that come up in the debate.
If the above two conditions are satisfied, then we could try the following modified protocol for recursive debate:
1. $A$ writes out a transcript of the execution of $Q_\ell$ on the current query.
2. $B$ has two possible options:
    - Either, $B$ points out one step in the transcript of $Q_\ell$ where $A$ has allegedly not executed correctly (either by not following the instructions defined by $Q_\ell$, or by giving a predicted answer to a query $z$ to $M_{\ell - 1}$ incorrectly).
    - Or, $B$ generates a trapdoor query $z'$ along with a witness $w$ that allows $B$ to efficiently answer $z'$:
        1. $B$ queries a copy of $A$ with fresh context for the answer to $z'$.
        2. If $A$ does not answer $\perp$ then $B$ may choose to have the protocol continue with $z'$ as the query.
        3. If $A$ answers $\perp$ then $B$ is required to choose a query from the transcript of $Q_\ell$ as in the "Either" case above.
3. If $\ell - 1 > 1$ the protocol recurses by returning to step one with $\ell - 1$ in place of $\ell$, the chosen query as input, and program $Q_{\ell - 1}$.
4. If $\ell - 1 = 1$ then $A$ writes out the transcript of the execution of $Q_1^H$ for the query pointed out by $B$.
5. $B$ points out one step in the transcript of $Q_1$ where $A$ has allegedly not executed correctly.
6. $V$ reads the final step pointed to by $B$, and judges whether it is correct or not by making at most one query to $H$. If the step was correct, $V$ outputs 1, otherwise $V$ outputs 0.
arguments. Thus, one plausible approach in order to avoid obfuscated arguments is to find a reasonable set of assumptions on the distribution of queries that will allow the efficient sampling of
trapdoor instances by .
3. Counterexamples and nearby non-Counterexamples for trapdoors
In this section we discuss simple counterexamples to the strongest form of the trapdoor hope for dealing with obfuscated arguments in recursive debate, along with possible weaker hopes and their
respective difficulties. In particular we will give a counterexample showing that it can be computationally hard to sample trapdoor instances, then we will discuss some initial attempts at possible
restrictions on the distribution of problems we consider that might allow for tractable trapdoors.
Let $h : \{0,1\}^{2n} \to \{0,1\}^n$ be an appropriately chosen cryptographic hash function and consider the classical proof-of-work problem:
Given $y$, find a witness $x$ such that $h(x) = y$.
Clearly this problem is in NP, as $h$ can be evaluated efficiently. So the distribution on instances where $y$ is sampled uniformly at random from $\{0,1\}^n$ is a distributional NP problem. However, sampling a pair $(y, x)$ such that both
1. $h(x) = y$, and
2. the marginal distribution of $y$ is uniform on $\{0,1\}^n$
is hard because $h$ is a cryptographic hash function. In fact even finding one such pair is hard.
Related non-counterexample
Here's a related problem.
Given $y$, find a witness $x$ such that $h(x) = y$.
The distributional version is just to sample $y = h(x)$ for $x$ drawn uniformly at random from $\{0,1\}^{2n}$.
Now there is an easy trapdoor distribution $D$ on pairs $(y, x)$:
1. Sample $x$ uniformly at random.
2. Set $y = h(x)$.
Note that here $x$ and $y$ are not independent when drawn from $D$, but without seeing $x$ it is computationally infeasible to distinguish $y$ from uniform random $y \in \{0,1\}^n$.
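A concrete sketch of this trapdoor sampler, with SHA-256 standing in for $h$ (the hash choice and sizes are illustrative assumptions):

```python
import hashlib
import os

def sample_trapdoor_pair(input_bytes: int = 64):
    """Plant a solution: sample x first, then set y = h(x).

    The instance y alone is computationally hard to distinguish from a
    uniform random 32-byte string, yet the sampler holds a trapdoor x.
    """
    x = os.urandom(input_bytes)       # uniform x
    y = hashlib.sha256(x).digest()    # instance y = h(x) with planted preimage
    return y, x
```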
What should we think about this?
What makes the second distributional problem trapdoorable while the earlier counterexample is not? One possible "explanation" is that flexibility in choosing the output is what is necessary. This is
also how the trapdoor distributions for $k$-SAT, planted Clique and other combinatorial problems work, where the "output" in these cases is what negations are applied to variables in $k$-SAT, and what
edges are added to the graph in planted Clique.
More generally suppose you have a distribution on NP search problems of the form:
Given $y$, find $x$ such that $V(x, y) = 1$.
Then one can try to make a similar trapdoor distribution as $D$, although you run into issues with getting a "matching" distribution because now it is unclear what joint distribution on $(x, y)$ will produce a distribution on pairs that is computationally indistinguishable from the original problem distribution on $y$.
An unreasonable restriction on the problem distribution
Suppose we are interested in NP decision problems that look like:
Given , find such that .
Then one thing we might try is to require that the distribution of problems satisfies . Then we immediately have a good method to generate a distribution of trapdoor instances:
1. Sample and from any distribution we like.
2. If output otherwise output .
This distribution isn't off by more than a polynomial factor from so it can be used in combination with rejection sampling to satisfy our requirements. However, this is an unreasonable distribution
to require a computationally-bounded adversary to answer. For example consider the shortest path problem:
Given an -vertex graph , number and vertices , is there a simple path of length at most from to ?
This is a tractable problem that we would expect a polynomial time adversary to be able to solve. The verifier program takes in all the input, plus a path in and checks if it has length at most .
However, if we negate the verifier for this problem we get longest path:
Given an -vertex graph , number and vertices , is there a simple path of length more than from to ?
This problem is NP-hard and in general we shouldn't expect a computationally bounded adversary to solve it. In other words, our requirement on the distribution in fact requires that the models can
solve computationally intractable problems, and so cannot be used as a requirement for debate protocols between computationally bounded debaters.
4. Summary and Discussion
In summary, we have described a formal setup where fuzzy human judgements are modeled by an oracle in the classical complexity theoretic sense. In this setup it is possible to design debate protocols
that can provably allow for very efficient use of human judgements, while still arriving at the correct answer. However, in the setting where the human-judgeable version of a computation is too
complex to produce, but the AI-judgeable version is tractable, these protocols are not useful. The natural attempt at a fix for this problem is to use debate to recursively break down a short
AI-judgeable argument into human-judgeable pieces, while only exploring a single path down the recursion tree. The obfuscated arguments problem causes this natural attempt to fail.
We then discussed one potential approach to salvage this recursive fix, by allowing one debater to use trapdoor instances in order to show that the other debater has falsely claimed to be able to
solve an intractable class of subproblems. It is not currently clear exactly which class of problems can be solved by debates that utilize this new approach, though we do have some counterexamples
demonstrating that it cannot be used in complete generality. We would be very interested in both better counterexamples (e.g. a problem that can naturally be broken down into hard, non-trapdoorable
subproblems) as well as any cleaner definitions that allow us to understand exactly when this approach might work. More generally, it seems that the fact that there is a natural distribution over
problem instances (e.g. from the training data) might be productively utilized in the design of debate protocols to avoid the obfuscated arguments problem. So any attempts to design new debate
protocols along these lines could be interesting.
To return to the high-level motivation for this work, the goal is that we design protocols that can be used for self-play during training, so that winning in the training game corresponds to making
human-judgeable arguments for the correct answer to any computational problem that the AI can solve. While this certainly does not guarantee that an AI trained via a particular debate protocol will
behave exactly as intended, it is a setting where we can get rigorous mathematical evidence for choosing one training setup over another. Furthermore, having such provable guarantees can provide
clear limits on the ways in which things can go wrong.
Thank you!
I think my intuition is that weak obfuscated arguments occur often in the sense that it’s easy to construct examples where Alice thinks for a certain amount time and produces her best possible answer
so far, but where she might know that further work would uncover better answers. This shows up for any task like “find me the best X”. But then for most such examples Bob can win if he gets to spend
more resources, and then we can settle things by seeing if the answer flips based on who gets more resources.
What’s happening in the primality case is that there is an extremely wide gap between nothing and finding a prime factor. So somehow you have to show that this kind of wide gap only occurs along with
extra structure that can be exploited.
This is a great post, very happy it exists :)
Quick rambling thoughts:
I have some instinct that a promising direction might be showing that it's only possible to construct obfuscated arguments under particular circumstances, and that we can identify those
circumstances. The structure of the obfuscated argument is quite constrained - it needs to spread out the error probability over many different leaves. This happens to be easy in the prime case, but
it seems plausible it's often knowably hard. Potentially an interesting case to explore would be trying to construct an obfuscated argument for primality testing and seeing if there's a reason that's
difficult. OTOH, as you point out, "I learnt this from many relevant examples in the training data" seems like a central sort of argument. Though even if I think of some of the worst offenders here
(e.g. translating after learning on a monolingual corpus) it does seem like constructing a lie that isn't going to contradict itself pretty quickly might be pretty hard. | {"url":"https://www.alignmentforum.org/posts/DGt9mJNKcfqiesYFZ/debate-oracles-and-obfuscated-arguments-3","timestamp":"2024-11-03T12:23:24Z","content_type":"text/html","content_length":"1048971","record_id":"<urn:uuid:60ce496f-8bea-479a-8f16-69e7ce3c11f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00588.warc.gz"} |
Rollup Formula
I have the following Roll-Up formula for Status
=IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(OR(CONTAINS("In Progress", CHILDREN()), AND(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(), "Not Started") > 0)), "In
Progress", "Not Started"))
I want to add a status called "Not Applicable" - since not all line items from my template PP apply to all projects.
What would the formula be in adding this additional status - and for roll up to remain accurate?
What would the master - top line - roll up formula be, so it would essentially ignore any "Not Applicable"
Best Answers
• Hi @CJU
Try adding this criteria to the COUNT that has to do with the overall Children, so we can exclude "Not Applicable" from that count.
Like so:
=IF(COUNTIF(CHILDREN(), <> "Not Applicable") = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(OR(CONTAINS("In Progress", CHILDREN()), AND(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(),
"Not Started") > 0)), "In Progress", "Not Started"))
Let me know if this gives you the desired result.
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• My apologies! I missed that the row which says "Parent" could also potentially be blank. Yes, we can include that in.
It's just a matter of telling the Category column what to search for. We want to see if the cell is "", but also, if the cell is NOT the "parent":
=COUNTIFS(Status:Status, "In Progress", [Category]:[Category], OR(@cell = "", @cell <> "PARENT"))
You should just need to change out the Status that you're looking for in the quotes.
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• For your newest formula question, we can also add in the logic to look and see if all Children are "Not Applicable" like so:
=IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Not Applicable"), "Not Applicable", IF(OR(CONTAINS("In Progress", CHILDREN()), AND
(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(), "Not Started") > 0)), "In Progress", "Not Started")))
It's the same statement as when you look for all the "Complete" children.
Are there any other adjustments you need to make?
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Good morning, @CJU !
Can you copy/paste the exact formula you're using for the Not Applicable rollup field?
=COUNTIFS(Status:Status, "Not Applicable", [Category]:[Category], OR(@cell = "", @cell <> "PARENT"))
If the text in quotes is even one character different from what's in your sheet you'll receive a Count of 0. Is it possible there's a small typo?
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Of course! The order of operations is really important. We just need to change this around.
Put the "Not Applicable" first.
=IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Not Applicable"), "Not Applicable", IF(COUNTIF(CHILDREN(), <> "Not Applicable") = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(COUNTIF(CHILDREN(),
<> "Not Applicable") = COUNTIF(CHILDREN(), "Not Started"), "Not Started", "In Progress")))
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• You are really great. Thanks
• Hi @CJU
Try adding this criteria to the COUNT that has to do with the overall Children, so we can exclude "Not Applicable" from that count.
Like so:
=IF(COUNTIF(CHILDREN(), <> "Not Applicable") = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(OR(CONTAINS("In Progress", CHILDREN()), AND(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(),
"Not Started") > 0)), "In Progress", "Not Started"))
Let me know if this gives you the desired result.
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• Thanks, it does work. A second part of this question though; in the Project Summary Roll-up, the count is still including header / parent rows.
• Hi @CJU
If there's a helper column in your sheet which identifies Parent rows vs Children, we can use this as a filter for rows to exclude in the formula.
Can you post a screen capture of what you're referring to? (But please block out sensitive data).
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
In the Project Summary you don't have an option to filter. It's just that automatic formula. You see there are 33 lines, so it is counting the header / parent lines.
• If we can filter here, I do have a helper column. I have assigned the header / parent with PARENT. I would like these excluded from the count.
• Hi @CJU
My apologies for not being clear - what is the formula in that Summary Field? We can add the criteria of the Category NOT being "Parent" within that formula in order to "filter out" those rows
inside the formula.
=COUNTIFS([Category]:[Category], <> "PARENT", Status:Status, "Not Started")
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• The formula in the summary field is =COUNTIF(Status:Status, "In Progress") etc, for each
• Perfect, thank you!
Did you try my formula above?
=COUNTIFS([Category]:[Category], <> "PARENT", Status:Status, "In Progress")
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• No it did not work
I changed line 3 to in progress and the "In Progress" count formula, as you have above, but the result is 0, but should be 1. It should count line 3
• It works if the Category is not Blank; i.e. I added a value to line 3 and the formula now works, counting line 3, but not its parent
• Is there a way to avoid this, in case a line does not have a category?
• Hi
Essentially there are three formula required
Sheet Summary
=COUNTIFS(Category:Category, <>"PARENT", Status:Status, "In Progress")
This works but only if Category is not blank. How can it be modified to not consider Blank?
Top task / Header (line 1)
=IF(COUNTIF(CHILDREN(), <>"Not Applicable") = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(OR(CONTAINS("In Progress", CHILDREN()), AND(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(),
"Not Started") > 0)), "In Progress", "Not Started"))
This works and counts all tasks excluding the phase or parent. No change is required.
Phase and parent (within phase)
=IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(OR(CONTAINS("In Progress", CHILDREN()), AND(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(), "Not Started") > 0)),
"In Progress", "Not Started"))
What do I need to add to include "Not Applicable" so the roll-up for the phase or parent within phase, will show "Not Applicable" if all sub-tasks below this parent, are all "Not Applicable". The
formula currently works for the other 3 Status
e.g. below, line 4 is a parent; the 2 children (lines 5 and 6) are both Not Applicable, so the roll up for this task (line 4) should also be "Not Applicable"
• My apologies! I missed that the row which says "Parent" could also potentially be blank. Yes, we can include that in.
It's just a matter of telling the Category column what to search for. We want to see if the cell is "", but also, if the cell is NOT the "parent":
=COUNTIFS(Status:Status, "In Progress", [Category]:[Category], OR(@cell = "", @cell <> "PARENT"))
You should just need to change out the Status that you're looking for in the quotes.
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
• For your newest formula question, we can also add in the logic to look and see if all Children are "Not Applicable" like so:
=IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Complete"), "Complete", IF(COUNT(CHILDREN()) = COUNTIF(CHILDREN(), "Not Applicable"), "Not Applicable", IF(OR(CONTAINS("In Progress", CHILDREN()), AND
(COUNTIF(CHILDREN(), "Complete") > 0, COUNTIF(CHILDREN(), "Not Started") > 0)), "In Progress", "Not Started")))
It's the same statement as when you look for all the "Complete" children.
Are there any other adjustments you need to make?
Need more help? 👀 | Help and Learning Center
こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions
Thank you so much
The roll-up of PP is now correct, providing the rolled-up status.
The counter in Project Summary still has some issues:
I have a "Not Applicable Counter". With the updated formula it is not counting "Not Applicable". When I change a line to "Not Applicable" the count is not shown; although the "Not Started" count
goes down correctly.
e.g. below - I change line 6. You will see in image 2, the fx of "Not Applicable" in the Project Summary stays at 0. The fx of "Not Started" did correctly go from 24 to 23.
Image 1
Image 2
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/82962/rollup-formula","timestamp":"2024-11-09T06:11:31Z","content_type":"text/html","content_length":"489866","record_id":"<urn:uuid:705e73a5-678f-4b56-b5d0-5322360dea4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00707.warc.gz"} |
Understanding Convolutional Filters and Convolutional Kernels - Programmathically
Understanding Convolutional Filters and Convolutional Kernels
This post will introduce convolutional kernels and discuss how they are used to perform 2D and 3D convolution operations. We also look at the most common kernel operations, including edge detection,
blurring, and sharpening.
A convolutional filter is a filter that is applied to manipulate images or extract structures and features from an image. Convolutional filters are typically used to blur or sharpen sections of an
image or to detect edges in them.
Convolutional Filters
In the post on the convolution operation, I introduced the convolutional kernel as a grid containing numbers that we slide over another number grid to generate an output.
The convolutional filter is a multidimensional version of the convolutional kernel, although the two terms are often used interchangeably in the computer vision community.
2D Convolution
2D convolutions are essential for the processing of 2D data such as images. An image is basically a 2-dimensional grid of pixel values. Standard RGB images have pixel values ranging from 0 to 255 and
three channels (red, green, and blue) which adds a third dimension. But to simplify things a bit, we only look at one channel, which leaves us with a 2D grid of pixels which is enough to represent
grayscale images.
To manipulate images, we can convolve the image with a 2-dimensional kernel.
Convolving an image represented by a matrix
As you see in the image, the kernel, in this case, is a smaller 2D grid. To compute the convolution, we slide the kernel over the image and calculate the convolution across two dimensions.
Starting in the upper-left corner, we slide the kernel over the image and perform an element-wise multiplication with the image followed by a summation.
1\times255 + 0\times255 +(-1)\times255 \\ + 1\times255 + 0\times255 +(-1)\times255 \\ +1\times255 + 0\times255 +(-1)\times255 \\ = 0
Next, we slide the kernel to the right and repeat the convolution operation.
1\times255 + 0\times255 +(-1)\times0 \\ + 1\times255 + 0\times255 +(-1)\times0 \\ +1\times255 + 0\times255 +(-1)\times0 \\ = 765
You continue this process, sliding the kernel to the right and downwards until you reach the lower-right corner. In each step, you convolve the kernel with the part of the image.
The results of the convolution operations can be neatly represented in a 4×4 matrix.
As you can see, the two columns in the center contain very high numbers, whereas the pixel values on the margins contain zeros. This indicates that there is a bright vertical edge running through the
center. This particular kernel that we have used performs vertical edge detection. What type of operation the kernel performs depends on the numbers used in the kernel and their ordering. We will
discuss different types of convolutional kernels later in this article.
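To make the sliding computation concrete, here is a short NumPy sketch (our own illustration; the 6×6 toy image with a bright left half is an assumption chosen to reproduce the example above):

import numpy as np

# Toy 6x6 image: bright left half, dark right half (a vertical edge).
image = np.zeros((6, 6))
image[:, :3] = 255

# The vertical edge-detection kernel used above.
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])

# "Valid" cross-correlation: slide the kernel over every 3x3 patch.
h, w = image.shape
kh, kw = kernel.shape
out = np.zeros((h - kh + 1, w - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)

print(out)  # the two centre columns hold 765, the margins 0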
Mathematical Representation of the 2D Convolution
Mathematically, we can represent the 2D convolution as follows:
(I * K) (i, j)= \sum_m \sum_n I(m,n)K(i-m,j-n)
This operation is commutative. As a consequence, we can flip the kernel and write it like this.
(K * I) (i, j)= \sum_m \sum_n I(i-m,j-n)K(m,n)
In many practical applications, cross-correlation is used instead of the convolution operation. Cross-correlation slides the kernel without flipping it:
(K \star I) (i, j)= \sum_m \sum_n I(i+m,j+n)K(m,n)
The cross-correlation is not commutative. In purely mathematical terms, this is an important distinction. But in practice, the distinction doesn’t really matter, which is why the term convolution is
often used when referring to cross-correlation.
3D Convolution
When performing 3D convolution, you are sliding a 3-dimensional kernel over a 3-dimensional input. The kernel needs to have the same depth as the input. You calculate the convolution of each channel
in the kernel with each corresponding channel in the image.
Essentially, you need to perform the 2D convolution operation three times over, and then you sum up the results to get the final kernel output.
Why Do We Use Odd Kernels?
In the previous examples, we’ve used 3×3 kernels. While differently sized kernels are used, the size is almost always odd. The reason for using odd kernels is symmetry around the origin. If you are
using an evenly sized kernel, there is no clear center point.
Sliding convolutional filters over an image allows you to manipulate an image in various ways. In the remainder of this post, we will go through some of the more commonly used convolutional filters
and their effects.
Edge Detection Kernels
The kernel we’ve used above is a simple vertical edge detector known as the Prewitt Operator.
The Prewitt Operator
The Prewitt operator for vertical edge detection appears in the form of the following matrix.
\begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix}
If we apply the vertical Prewitt operator to a real image, the result looks like this.
There is a strong vertical color contrast between the river and the cliffs, which is prominently visible.
To apply horizontal edge detection, we can rotate the kernel by 90 degrees.
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix}
Now, the horizontal edges are more visible.
The Sobel Operator
The Sobel operator emphasizes the edges more than the Prewitt operator by replacing the 1s in the center of the kernel with 2s.
Here is the Sobel operator for vertical edge detection.
\begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}
For horizontal edge detection, you can use the following kernel.
\begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}
The Laplacian Operator
The Laplacian filter is an approximation to the 2nd spatial derivative of the image. If that sounds confusing, don’t worry. In practice, it basically means that the Laplacian filter highlights areas
where the intensity of the pixel values is changing drastically. Consequently, it is a very popular filter for detecting both horizontal and vertical edges at once.
The Laplacian is most commonly approximated with the following filter.
\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}
The filter is frequently combined with Gaussian blurring or smoothing (the Laplacian of Gaussian), because the Laplacian is sensitive to noise; smoothing first lets it amplify the true edges even more.
Smoothing and Blurring Kernels
Blurring is an important technique in image processing that makes the transition between different pixel values smooth rather than sharp. Therefore, the technique is also called smoothing. It is
especially useful when you want to shrink the size of an image. Some sharp details will inevitably be lost. With smoothing, you can distribute the color transition over more pixels which preserves
the edges even if the image is smaller overall.
Gaussian Filter
The Gaussian filter weighs intensities according to a normal or Gaussian distribution. A Gaussian distribution has the characteristic form of a bell curve. The curve peaks at the center and flattens
out the further you get away from the center. Thus, the center of the filter contains the highest value while the values further away are smaller.
The following kernel is a discrete approximation to the Gaussian distribution.
\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}
Box Filter
The box kernel is a simple filter that calculates the mean of the pixels in the area covered by the filter. This also has a smoothing effect.
Contrary to the Gaussian filter, which weighs pixels according to a normal distribution, the box filter weighs all pixels equally. The box filter is faster and easier to calculate than the Gaussian filter.
\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}
Sharpen Filter
Filters to sharpen images accentuate edges. They essentially do the opposite of blurring. A common kernel for sharpening images is the following one.
\begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}
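In practice these kernels are rarely applied with hand-written loops; a library call does the sliding for you. Here is a minimal sketch using SciPy (the random image is a stand-in for a real grayscale array):

import numpy as np
from scipy.signal import convolve2d

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

image = np.random.rand(64, 64)  # stand-in for a real grayscale image
# This kernel is symmetric, so convolution and cross-correlation coincide.
sharpened = convolve2d(image, sharpen, mode="same", boundary="symm")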
Convolutional kernels and filters are the building blocks of many computer vision applications. More advanced algorithms such as Canny edge detection build on combining several convolutional kernel types, such as those used for smoothing and edge detection. Kernels are also at the heart of the most advanced computer vision technologies, such as the convolutional neural networks used in deep learning.
This article is part of a blog post series on deep learning for computer vision. For the full series, go to the index. | {"url":"https://programmathically.com/understanding-convolutional-filters-and-convolutional-kernels/","timestamp":"2024-11-11T09:33:55Z","content_type":"text/html","content_length":"114115","record_id":"<urn:uuid:b1260279-4e8d-4093-90cc-df47eca70def>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00564.warc.gz"} |
The structure of binary matroids with no induced claw or Fano plane restriction | Published in Advances in Combinatorics
Matroid theory
October 30, 2019 EDT
The structure of binary matroids with no induced claw or Fano plane restriction
Bonamy, Marthe, František Kardoš, Tom Kelly, Peter Nelson, and Luke Postle. 2019. "The Structure of Binary Matroids with No Induced Claw or Fano Plane Restriction." Advances in Combinatorics, October.
A well-known conjecture of András Gyárfás and David Sumner states that for every positive integer $m$ and every finite tree $T$ there exists $k$ such that all graphs that do not contain the clique
$K_m$ or an induced copy of $T$ have chromatic number at most $k$. The conjecture has been proved in many special cases, but the general case has been open for several decades.
The main purpose of this paper is to consider a natural analogue of the conjecture for matroids, where it turns out, interestingly, to be false. Matroids are structures that result from abstracting
the notion of independent sets in vector spaces: that is, a matroid is a set $M$ together with a nonempty hereditary collection $\mathcal I$ of subsets deemed to be independent where all maximal
independent subsets of every set are equicardinal. They can also be regarded as generalizations of graphs, since if $G$ is any graph and $\mathcal I$ is the collection of all acyclic subsets of $E(G)
$, then the pair $(E(G),\mathcal I)$ is a matroid. In fact, it is a binary matroid, which means that it can be represented as a subset of a vector space over $\mathbb F_2$. To do this, we take the
space of all formal sums of vertices and represent the edge $vw$ by the sum $v+w$. A set of edges is easily seen to be acyclic if and only if the corresponding set of sums is linearly independent.
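As a concrete illustration of this construction (our own sketch, not from the paper), one can encode each edge as a 0/1 vector over $\mathbb F_2$ and test acyclicity by computing the rank of those vectors with Gaussian elimination mod 2:

import numpy as np

def gf2_rank(rows):
    """Rank of a list of 0/1 vectors over GF(2), by elimination."""
    rows = [r.copy() for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = (rows[i] + rows[rank]) % 2
        rank += 1
    return rank

def edge_vector(v, w, n):
    """Represent the edge vw as the sum of basis vectors v + w."""
    e = np.zeros(n, dtype=int)
    e[v], e[w] = 1, 1
    return e

# Triangle on 3 vertices: its edges form a cycle, so they are dependent.
edges = [edge_vector(0, 1, 3), edge_vector(1, 2, 3), edge_vector(0, 2, 3)]
print(gf2_rank(edges))  # 2 < 3: the edge set contains a cycle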
There is a natural analogue of an induced subgraph for matroids: an induced restriction of a matroid $M$ is a subset $M’$ of $M$ with the property that adding any element of $M-M’$ to $M’$ produces a
matroid with a larger independent set than $M’$. The natural analogue of a tree with $m$ edges is the matroid $I_m$, where one takes a set of size $m$ and takes all its subsets to be independent.
(Note, however, that unlike with graph-theoretic trees there is just one such matroid up to isomorphism for each $m$.)
Every graph can be obtained by deleting edges from a complete graph. Analogously, every binary matroid can be obtained by deleting elements from a finite binary projective geometry, that is, the set
of all one-dimensional subspaces in a finite-dimensional vector space over $\mathbb F_2$.
Finally, the analogue of the chromatic number for binary matroids is a quantity known as the critical number introduced by Crapo and Rota, which in the case of a graph $G$ turns out to be $\lceil\
log_2(\chi(G))\rceil$ – that is, roughly the logarithm of its chromatic number.
One of the results of the paper is that a binary matroid can fail to contain $I_3$ or the Fano plane $F_7$ (which is the simplest projective geometry) as an induced restriction, but also have
arbitrarily large critical number. By contrast, the critical number is at most two if one also excludes the matroid associated with $K_5$ as an induced restriction. The main result of the paper is a
structural description of all simple binary matroids that have neither $I_3$ nor $F_7$ as an induced restriction.
| {"url":"https://www.advancesincombinatorics.com/article/10256-the-structure-of-binary-matroids-with-no-induced-claw-or-fano-plane-restriction","timestamp":"2024-11-02T01:31:52Z","content_type":"text/html","content_length":"154880","record_id":"<urn:uuid:99651935-eef0-4bf0-9add-d33eb8ecbda3>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00824.warc.gz"}
Measurement of the W⁺W⁻ production cross section in pp collisions at a centre-of-mass energy of √s = 13 TeV with the ATLAS experiment
The production of opposite-charge W-boson pairs in proton–proton collisions at √s = 13 TeV is measured using data corresponding to 3.16 fb⁻¹ of integrated luminosity collected by the ATLAS detector at
the CERN Large Hadron Collider in 2015. Candidate W-boson pairs are selected by identifying their leptonic decays into an electron, a muon and neutrinos. Events with reconstructed jets are not
included in the candidate event sample. The cross-section measurement is performed in a fiducial phase space close to the experimental acceptance and is compared to theoretical predictions. Agreement
is found between the measurement and the most accurate calculations available.
| {"url":"https://profiles.wustl.edu/en/publications/measurement-of-the-wsupsupwsupsup-production-cross-section-in-pp-","timestamp":"2024-11-11T16:06:42Z","content_type":"text/html","content_length":"54736","record_id":"<urn:uuid:fa05ee3d-6bde-415a-ae14-5acf5144897e>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00218.warc.gz"}
10 Decameter to Kilometer Calculator | Calculator Bit
10 Decameter to Kilometer Calculator
10 Decameter = 0.1 Kilometer (km)
Rounded: Nearest 4 digits
10 Decameter is 0.1 Kilometer (km)
10 Decameter is 100 m
How to Convert Decameter to Kilometer (Explanation)
• 1 decameter = 0.01 km (Nearest 4 digits)
• 1 kilometer = 100 dam (Nearest 4 digits)
There are 0.01 Kilometer in 1 Decameter. To convert Decameter to Kilometer all you need to do is multiply the Decameter value by 0.01.
In the formulas, length is denoted by L.
The length L in Kilometer (km) is equal to 0.01 times the length in Decameter (dam):
L [(km)] = L [(dam)] × 0.01
Formula for 10 Decameter (dam) to Kilometer (km) conversion:
L [(km)] = 10 dam × 0.01 => 0.1 km
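The same conversion as a tiny Python function (a sketch for illustration):

def decameter_to_kilometer(dam):
    # 1 dam = 0.01 km, so multiply by 0.01
    return dam * 0.01

print(decameter_to_kilometer(10))  # 0.1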
How many Kilometer in a Decameter
One Decameter is equal to 0.01 Kilometer
1 dam = 1 dam × 0.01 => 0.01 km
How many Decameter in a Kilometer
One Kilometer is equal to 100 Decameter
1 km = 1 km / 0.01 => 100 dam
The decameter (symbol: dam) is a unit of length in the International System of Units (SI), equal to 10 meters. The decameter is a less frequently used unit; it appears mainly in volumetric form as the cubic decameter (dam³), which is equal to 1 megalitre (ML).
The kilometer (symbol: km) is a unit of length in the International System of Units (SI), equal to 1000 meters. The kilometer is the most commonly used unit for measuring the distance between physical places around the world.
Cite, Link, or Reference This Page
If you found information page helpful you can cite and reference this page in your work.
• <a href="https://www.calculatorbit.com/en/length/10-decameter-to-kilometer">10 Decameter to Kilometer Conversion</a>
• "10 Decameter to Kilometer Conversion". www.calculatorbit.com. Accessed on November 8 2024. https://www.calculatorbit.com/en/length/10-decameter-to-kilometer.
• "10 Decameter to Kilometer Conversion". www.calculatorbit.com, https://www.calculatorbit.com/en/length/10-decameter-to-kilometer. Accessed 8 November 2024.
• 10 Decameter to Kilometer Conversion. www.calculatorbit.com. Retrieved from https://www.calculatorbit.com/en/length/10-decameter-to-kilometer.
Decameter to Kilometer Calculations Table
Now by following above explained formulas we can prepare a Decameter to Kilometer Chart.
Decameter (dam) Kilometer (km)
6 0.06
7 0.07
8 0.08
9 0.09
10 0.1
11 0.11
12 0.12
13 0.13
14 0.14
15 0.15
Nearest 4 digits
Convert from Decameter to other units
Here are some quick links to convert 10 Decameter to other length units.
Convert to Decameter from other units
Here are some quick links to convert other length units to Decameter.
More Decameter to Kilometer Calculations
More Kilometer to Decameter Calculations
FAQs About Decameter and Kilometer
Converting from Decameter to Kilometer, or from Kilometer to Decameter, sometimes gets confusing.
Here are some Frequently asked questions answered for you.
Are there 0.01 Kilometers in 1 Decameter?
Yes, 1 Decameter has 0.01 (Nearest 4 digits) Kilometer.
What is the symbol for Decameter and Kilometer?
Symbol for Decameter is dam and symbol for Kilometer is km.
How many Decameter makes 1 Kilometer?
100 Decameter is equal to 1 Kilometer.
How many Kilometer in 10 Decameter?
10 Decameter is 0.1 Kilometer.
How many Kilometer in a Decameter?
1 Decameter has 0.01 (Nearest 4 digits) Kilometer. | {"url":"https://www.calculatorbit.com/en/length/10-decameter-to-kilometer","timestamp":"2024-11-08T06:10:55Z","content_type":"text/html","content_length":"52937","record_id":"<urn:uuid:bb70b859-b219-4251-bf7e-fbc7a5c5f124>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00300.warc.gz"}
Statistical - Study guides, Class notes & Summaries
Looking for the best study guides, study notes and summaries about Statistical? On this page you'll find 23505 study documents about Statistical.
• APEA 3P Actual Exam Test Bank for 2024 FREE
• Exam (elaborations) • 36 pages • 2024
• APEA 3P Actual Exam Test Bank Stuvia Is Available For Download After Purchase. In Case You Encounter Any Difficulties With The Download, Please Feel Free To Reach Out To Me. I Will Promptly Send It To You Through Google Doc or Email.
Thank You.
When performing a visual acuity test the nurse practitioner notes 20/30 in the left eye and 20/40 in the right eye using the Snellen eye chart. This means:
a) have the patient returning in two weeks for a follow up vision screen
b) dilated the eye an...
• $17.99
• 55x sold
• AED3701 Assignment 1 QUIZ (100% COMPLETE ANSWERS) 2024 (535743) - DUE 8 April 2024
• Exam (elaborations) • 42 pages • 2024 Popular
AED3701 Assignment 1 QUIZ (COMPLETE ANSWERS) 2024 (535743) - DUE 8 April 2024; 100% TRUSTED workings, explanations and solutions. for assistance ........
QUESTION 1
This assessment does not focus on the results of learning, but on the whole process of learning to make teaching and learning meaningful for the learner.
A. Assessment in learning
B. Assessment as learning
C. Assessment of learning
D. Assessment for learning
QUESTION 2
Post-moderatio...
• $2.93
• 33x sold
• Solutions Manual for Probability and Statistical Inference 10th Edition By Robert Hogg, Elliot Tanis, Dale Zimmerman (All Chapters, 100% Original Verified, A+ Grade)
• Exam (elaborations) • 140 pages • 2024
This Is Original 10th Edition of Solutions Manual From Original Author. All Other Files in the market are fake/old Edition. Other Sellers Have changed old Edition Number to new But solutions Manual is old Edition.
Solutions Manual for Probability and Statistical Inference 10th Edition By Robert Hogg, Elliot Tanis, Dale Zimmerman (All Chapters, 100% Original Verified, A+ Grade)
Solutions Manual for Probability and Statistical Inference 10th Edition By Robert Hogg, Elliot Tanis, Dale Zimme...
• $12.49
• 8x sold
• Solutions Manual for Statistical Methods for the Social Sciences 5th Edition By Alan Agresti (All Chapters, 100% Original Verified, A+ Grade)
• Exam (elaborations) • 132 pages • 2024
This Is Original 5th Edition of Solutions Manual From Original Author. All Other Files in the market are fake/old Edition. Other Sellers Have changed old Edition Number to new But solutions Manual is old Edition.
Solutions Manual for Statistical Methods for the Social Sciences 5th Edition By Alan Agresti (All Chapters, 100% Original Verified, A+ Grade)
Solutions Manual for Statistical Methods for the Social Sciences 5e By Alan Agresti (All Chapters, 100% Original Verified, A+ Grade)
• $10.49
• 4x sold
• Solutions Manual for Probability and Statistical Inference 9th Edition By Robert Hogg Elliot Tanis Dale Zimmerman (All Chapters, 100% Original Verified, A+ Grade)
• Exam (elaborations) • 140 pages • 2024
This Is Original 9th Edition of Solutions Manual From Original Author. All Other Files in the market are fake/old Edition. Other Sellers Have changed old Edition Number to new But solutions Manual is old Edition.
Solutions Manual for Probability and Statistical Inference 9th Edition By Robert Hogg Elliot Tanis Dale Zimmerman (All Chapters, 100% Original Verified, A+ Grade)
Solutions Manual for Probability and Statistical Inference 9e By Robert Hogg Elliot Tanis Dale Zimmerman (All Chapte...
• $8.49
• 4x sold
• NR 503 Week 8 Final Quiz Population Health Epidemiology and Statistical Principles practice exam questions and answers
• Exam (elaborations) • 12 pages • 2023
• NR 503 Week 8 Final Quiz Population Health Epidemiology and Statistical Principles practice exam questions and answers
• $19.99
• 11x sold
• Solutions Manual For John E. Freund's Mathematical Statistics with Applications 8th Edition By Irwin Miller, Marylees Miller (All Chapters, 100% Original Verified, A+ Grade)
• Exam (elaborations) • 261 pages • 2024
Solutions Manual For John E. Freund's Mathematical Statistics with Applications 8th Edition By Irwin Miller, Marylees Miller (All Chapters, 100% Original Verified, A+ Grade)
Solutions Manual For John E. Freund's Mathematical Statistics with Applications 8e By Irwin Miller, Marylees Miller (All Chapters, 100% Original Verified, A+ Grade)
• $28.49
• 3x sold
• Solution Manual for An Introduction to Statistical Methods and Data Analysis 7th Edition by R. Lyman Ott Michael T. Longnecker
• Exam (elaborations) • 510 pages • 2024
Solution Manual for An Introduction to Statistical Methods and Data Analysis 7th Edition by R. Lyman Ott Michael T. Longnecker
• $17.49
• 2x sold
• Summary Exam 2 Scientific and Statistical Reasoning UvA Year 2
Summary of Scientific and Statistical Reasoning Exam 2
• Summary Exam 1 Scientific and Statistical Reasoning UvA Year 2
Summary of Scientific and Statistical Reasoning Exam 1
• Test Bank For Statistics for Nursing A Practical Approach 3rd Edition Heavey | 9781284142013 | All Chapters with Answers and Rationals
• Exam (elaborations) • 81 pages • 2023
Test Bank For Statistics for Nursing A Practical Approach 3rd Edition Heavey | 9781284142013 | All Chapters with Answers and Rationals.
Instant Delivery.
• $16.49
• 8x sold
• SOLUTIONS MANUAL for Probability and Statistical Inference 10th Edition by Robert Hogg; Elliot Tanis and Dale Zimmerman
• Exam (elaborations) • 148 pages • 2024
SOLUTIONS MANUAL for Probability and Statistical Inference 10th Edition by Robert Hogg; Elliot Tanis and Dale Zimmerman
• $29.80
• 2x sold
| {"url":"https://www.stuvia.com/en-us/search?s=statistical","timestamp":"2024-11-08T22:03:42Z","content_type":"text/html","content_length":"306965","record_id":"<urn:uuid:089bee01-b1e9-4424-acb9-b0e1e8fa692f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00738.warc.gz"}
Effect of free-stream turbulence on turbulent properties of a separation-reattachment flow
A study of the effect of free-stream turbulence on the turbulent structure of a separation bubble formed at the leading edge of a blunt plate with right-angled corners is presented. The free-stream
turbulence was introduced by a thin circular rod placed upstream of the plate along its stagnation streamline. This paper is a sequel to previous papers by Kiya and Sasaki (1983) and Kiya et al.
(1984), in which time-averaged properties of the separation bubble are determined in terms of the turbulence intensity at a reference point near an edge of the plate. The longitudinal and spanwise
integral length scales of vortices in the separation bubble are given as functions of the free-stream turbulence intensity. The cross correlations between the surface-pressure and velocity
fluctuations suggested that large-scale vortices are similar in shape independently of the free-stream turbulence intensity. Moreover, the maximum rms surface pressure was estimated in terms of the
longitudinal gradient of the time-mean surface-pressure profile and the longitudinal length scale of vortices in the reattaching zone.
JSME International Journal Series B
Pub Date: April 1985
Keywords: Blunt Leading Edges; Flow Stability; Free Flow; Reattached Flow; Separated Flow; Turbulent Flow; Cross Correlation; Flow Measurement; Pressure Distribution; Turbulent Wakes; Fluid Mechanics and Heat Transfer | {"url":"https://ui.adsabs.harvard.edu/abs/1985JSMEB..28..610S/abstract","timestamp":"2024-11-13T16:49:13Z","content_type":"text/html","content_length":"37011","record_id":"<urn:uuid:d5c892a2-b667-4878-afc7-22a810af6fbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00333.warc.gz"}
Optical Flow
In this chapter,
• We will understand the concepts of optical flow and its estimation using Lucas-Kanade method.
• We will use functions like cv2.calcOpticalFlowPyrLK() to track feature points in a video.
Optical Flow
Optical flow is the pattern of apparent motion of image objects between two consecutive frames caused by the movement of the object or camera. It is a 2D vector field where each vector is a displacement vector showing the movement of points from the first frame to the second. Consider the image below (Image Courtesy: Wikipedia article on Optical Flow).
It shows a ball moving in 5 consecutive frames. The arrow shows its displacement vector. Optical flow has many applications in areas like :
• Structure from Motion
• Video Compression
• Video Stabilization ...
Optical flow works on several assumptions:
1. The pixel intensities of an object do not change between consecutive frames.
2. Neighbouring pixels have similar motion.
Consider a pixel \(I(x,y,t)\) in the first frame. (Note that a new dimension, time, is added here; earlier we were working with images only, so there was no need for time.) It moves by a distance \((dx,dy)\) in the next frame, taken after a time \(dt\). Since those pixels are the same and the intensity does not change, we can say,
\[I(x,y,t) = I(x+dx, y+dy, t+dt)\]
Then take the Taylor series approximation of the right-hand side, \(I(x+dx, y+dy, t+dt) \approx I(x,y,t) + \frac{\partial I}{\partial x}dx + \frac{\partial I}{\partial y}dy + \frac{\partial I}{\partial t}dt\), cancel the common term \(I(x,y,t)\) on both sides, and divide by \(dt\) to get the following equation:
\[f_x u + f_y v + f_t = 0 \;\]
\[f_x = \frac{\partial f}{\partial x} \; ; \; f_y = \frac{\partial f}{\partial y}\]
\[u = \frac{dx}{dt} \; ; \; v = \frac{dy}{dt}\]
Above equation is called Optical Flow equation. In it, we can find \(f_x\) and \(f_y\), they are image gradients. Similarly \(f_t\) is the gradient along time. But \((u,v)\) is unknown. We cannot
solve this one equation with two unknown variables. So several methods are provided to solve this problem and one of them is Lucas-Kanade.
Lucas-Kanade method
We have seen an assumption before: all the neighbouring pixels have similar motion. The Lucas-Kanade method takes a 3x3 patch around the point, so all 9 points have the same motion. We can find \((f_x, f_y, f_t)\) for these 9 points, so our problem becomes solving 9 equations in two unknown variables, which is over-determined. A better solution is obtained with the least-squares fit method. Below is the final solution, a two-equation, two-unknown problem whose solution is:
\[\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} \sum_{i}{f_{x_i}}^2 & \sum_{i}{f_{x_i} f_{y_i} } \\ \sum_{i}{f_{x_i} f_{y_i}} & \sum_{i}{f_{y_i}}^2 \end{bmatrix}^{-1} \begin{bmatrix} - \sum_
{i}{f_{x_i} f_{t_i}} \\ - \sum_{i}{f_{y_i} f_{t_i}} \end{bmatrix}\]
(Check the similarity of the inverse matrix with the Harris corner detector; it indicates that corners are better points to be tracked.)
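To see the normal equations in action, here is a small NumPy sketch that estimates \((u,v)\) for a single 3x3 patch (an illustration of the math above, not part of OpenCV; the random gradients stand in for real image derivatives):

import numpy as np

def lucas_kanade_patch(fx, fy, ft):
    # Stack the 9 gradient samples into the least-squares system A [u v]^T = b
    A = np.stack([fx.ravel(), fy.ravel()], axis=1)  # shape (9, 2)
    b = -ft.ravel()                                 # shape (9,)
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)      # solves (A^T A) uv = A^T b
    return uv                                       # [u, v]

rng = np.random.default_rng(0)
print(lucas_kanade_patch(rng.standard_normal((3, 3)),
                         rng.standard_normal((3, 3)),
                         rng.standard_normal((3, 3))))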
So from the user's point of view the idea is simple: we give some points to track, and we receive the optical flow vectors of those points. But again there are some problems. Until now, we were dealing with small motions, so the method fails when there is large motion. To handle this, we again use pyramids: when we go up in the pyramid, small motions are removed and large motions become small motions. So applying Lucas-Kanade there, we get optical flow along with the scale.
Lucas-Kanade Optical Flow in OpenCV
OpenCV provides all these in a single function, cv2.calcOpticalFlowPyrLK(). Here, we create a simple application which tracks some points in a video. To decide the points, we use
cv2.goodFeaturesToTrack(). We take the first frame, detect some Shi-Tomasi corner points in it, then we iteratively track those points using Lucas-Kanade optical flow. For the function
cv2.calcOpticalFlowPyrLK() we pass the previous frame, previous points and the next frame. It returns the next points along with some status numbers, which have a value of 1 if the next point is found, else zero. We iteratively pass these next points as previous points in the next step. See the code below:
We iteratively pass these next points as previous points in next step. See the code below:
import numpy as np
import cv2
cap = cv2.VideoCapture('slow.flv')
# params for ShiTomasi corner detection
feature_params = dict( maxCorners = 100,
qualityLevel = 0.3,
minDistance = 7,
blockSize = 7 )
# Parameters for lucas kanade optical flow
lk_params = dict( winSize = (15,15),
maxLevel = 2,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
# Create some random colors
color = np.random.randint(0,255,(100,3))
# Take first frame and find corners in it
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(old_gray, mask = None, **feature_params)
# Create a mask image for drawing purposes
mask = np.zeros_like(old_frame)
while(1):
    ret, frame = cap.read()
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # calculate optical flow
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    # Select good points
    good_new = p1[st==1]
    good_old = p0[st==1]
    # draw the tracks
    for i,(new,old) in enumerate(zip(good_new,good_old)):
        a,b = new.ravel()
        c,d = old.ravel()
        mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)
        frame = cv2.circle(frame,(a,b),5,color[i].tolist(),-1)
    img = cv2.add(frame,mask)
    cv2.imshow('frame',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
    # Now update the previous frame and previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1,1,2)
cv2.destroyAllWindows()
cap.release()
(This code doesn't check how correct the next keypoints are. So even if a feature point disappears from the image, there is a chance that optical flow finds a nearby point which merely looks like it. For robust tracking, corner points should therefore be re-detected at regular intervals. OpenCV ships such a sample, which re-detects feature points every 5 frames and also runs a backward check on the optical flow points to keep only the good ones. Check samples/python/lk_track.py.)
See the results we got:
Dense Optical Flow in OpenCV
Lucas-Kanade method computes optical flow for a sparse feature set (in our example, corners detected using Shi-Tomasi algorithm). OpenCV provides another algorithm to find the dense optical flow. It
computes the optical flow for all the points in the frame. It is based on Gunnar Farnebäck's algorithm, which is explained in "Two-Frame Motion Estimation Based on Polynomial Expansion" by Gunnar Farnebäck in 2003.
The sample below shows how to find the dense optical flow using the above algorithm. We get a 2-channel array with optical flow vectors, \((u,v)\). We find their magnitude and direction and color-code the result for better visualization: direction corresponds to the Hue value of the image, and magnitude corresponds to the Value plane. See the code below:
import cv2
import numpy as np
cap = cv2.VideoCapture("vtest.avi")
ret, frame1 = cap.read()
prvs = cv2.cvtColor(frame1,cv2.COLOR_BGR2GRAY)
hsv = np.zeros_like(frame1)
hsv[...,1] = 255
while(1):
    ret, frame2 = cap.read()
    next = cv2.cvtColor(frame2,cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prvs,next, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[...,0], flow[...,1])
    hsv[...,0] = ang*180/np.pi/2
    hsv[...,2] = cv2.normalize(mag,None,0,255,cv2.NORM_MINMAX)
    bgr = cv2.cvtColor(hsv,cv2.COLOR_HSV2BGR)
    cv2.imshow('frame2',bgr)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
    elif k == ord('s'):
        cv2.imwrite('opticalfb.png',frame2)
        cv2.imwrite('opticalhsv.png',bgr)
    prvs = next
cap.release()
cv2.destroyAllWindows()
See the result below:
OpenCV comes with a more advanced sample on dense optical flow, please see samples/python/opt_flow.py.
Additional Resources
1. Check the code in samples/python/lk_track.py. Try to understand the code.
2. Check the code in samples/python/opt_flow.py. Try to understand the code. | {"url":"https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_lucas_kanade.html","timestamp":"2024-11-13T01:19:56Z","content_type":"application/xhtml+xml","content_length":"22694","record_id":"<urn:uuid:357ae853-ead4-485d-81f1-e55be0f05d25>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00819.warc.gz"} |
The area of the plane region bounded by the curves x + 2y² = 0 and x + 3y² = 1 | Filo
The area of the plane region bounded by the curves x + 2y² = 0 and x + 3y² = 1 is equal to (a) 4/3 sq units (b) 5/3 sq units (c) 1/3 sq unit (d) 2/3 sq unit
Exp. (a)
Given curves are x + 2y² = 0 .....(i)
and x + 3y² = 1 .....(ii)
On solving Eqs. (i) and (ii), we get -2y² = 1 - 3y², i.e. y² = 1, so y = ±1.
Required area = ∫₋₁¹ [(1 - 3y²) - (-2y²)] dy = ∫₋₁¹ (1 - y²) dy
= 2 ∫₀¹ (1 - y²) dy, [since the integrand is an even function]
= 2 [y - y³/3]₀¹ = 2(1 - 1/3) = 4/3 sq units
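A quick way to verify this result is to compute the integral symbolically; here is a short SymPy sketch (our own check, using the curves reconstructed above):

import sympy as sp

y = sp.symbols('y')
x_left = -2 * y**2        # from x + 2y^2 = 0
x_right = 1 - 3 * y**2    # from x + 3y^2 = 1

# The curves meet where -2y^2 = 1 - 3y^2, i.e. y = -1 and y = 1.
area = sp.integrate(x_right - x_left, (y, -1, 1))
print(area)  # 4/3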
Question Text The area of the plane region bounded by the curves x + 2y² = 0 and x + 3y² = 1 is equal to (a) 4/3 sq units (b) 5/3 sq units (c) 1/3 sq unit (d) 2/3 sq unit
Updated On Dec 8, 2023
Topic Application of Integrals
Subject Mathematics
Class Class 12
Answer Type Text solution:1 Video solution: 2
Upvotes 273
Avg. Video Duration 7 min | {"url":"https://askfilo.com/math-question-answers/the-area-of-the-plane-region-bounded-by-the-curves-x2-y20-and-x3-y21-is-equal","timestamp":"2024-11-02T11:11:23Z","content_type":"text/html","content_length":"765651","record_id":"<urn:uuid:93a57e40-d49d-4cc0-9d60-50185ff32b6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00441.warc.gz"} |
What is: Receiver Operating Characteristic (ROC) Curve
What is the Receiver Operating Characteristic (ROC) Curve?
The Receiver Operating Characteristic (ROC) Curve is a graphical representation used to evaluate the performance of binary classification models. It illustrates the trade-off between sensitivity
(true positive rate) and specificity (1 – false positive rate) across various threshold settings. The ROC curve is particularly useful in determining how well a model can distinguish between two
classes, making it a fundamental tool in fields such as statistics, data analysis, and data science.
Understanding Sensitivity and Specificity
Sensitivity, also known as the true positive rate, measures the proportion of actual positives that are correctly identified by the model. In contrast, specificity measures the proportion of actual
negatives that are correctly identified. The ROC curve plots these two metrics against each other, allowing analysts to visualize the performance of a classification model at different threshold
levels. A model with high sensitivity and high specificity is ideal, as it accurately identifies both positive and negative cases.
Plotting the ROC Curve
To plot the ROC curve, one must first calculate the true positive rate (TPR) and false positive rate (FPR) for various threshold values. The TPR is calculated as the number of true positives divided
by the sum of true positives and false negatives, while the FPR is calculated as the number of false positives divided by the sum of false positives and true negatives. By varying the threshold and
plotting the TPR against the FPR, a curve is generated that provides insights into the model’s performance across different classification thresholds.
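To make the computation concrete, here is a small hedged sketch (not from the original article; the labels, scores, and thresholds are made up):

import numpy as np

def roc_points(y_true, scores, thresholds):
    """Return (FPR, TPR) pairs, one per threshold."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    P = np.sum(y_true == 1)              # actual positives
    N = np.sum(y_true == 0)              # actual negatives
    points = []
    for t in thresholds:
        pred = scores >= t               # predict positive at/above the threshold
        tp = np.sum(pred & (y_true == 1))
        fp = np.sum(pred & (y_true == 0))
        points.append((fp / N, tp / P))  # (FPR, TPR)
    return points

y = [0, 0, 0, 0, 1, 1, 1, 1]                       # made-up labels
s = [0.1, 0.3, 0.35, 0.8, 0.4, 0.6, 0.7, 0.9]      # made-up model scores
print(roc_points(y, s, thresholds=[0.0, 0.25, 0.5, 0.75, 1.01]))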
Interpreting the ROC Curve
The area under the ROC curve (AUC) is a crucial metric for evaluating the overall performance of a classification model. An AUC of 1 indicates perfect classification, while an AUC of 0.5 suggests no
discriminative ability, equivalent to random guessing. The closer the AUC is to 1, the better the model is at distinguishing between the positive and negative classes. Analysts often use the AUC as a
single scalar value to compare different models and select the best-performing one.
ROC Curve and Class Imbalance
One of the advantages of using the ROC curve is its robustness in the presence of class imbalance. In scenarios where one class significantly outnumbers the other, traditional accuracy metrics can be
misleading. The ROC curve, however, focuses on the true positive and false positive rates, providing a more balanced view of model performance. This makes it particularly useful in fields such as
medical diagnostics, fraud detection, and any other domain where class imbalance is prevalent.
Applications of ROC Curves in Data Science
ROC curves are widely used in various applications within data science, including medical diagnosis, credit scoring, and machine learning model evaluation. In medical diagnostics, for instance, ROC
curves help determine the effectiveness of tests in identifying diseases. In machine learning, they are employed to assess the performance of classifiers, guiding data scientists in model selection
and hyperparameter tuning. The versatility of ROC curves makes them an essential tool in the data analyst’s toolkit.
Limitations of the ROC Curve
Despite its advantages, the ROC curve has limitations that analysts should be aware of. One significant limitation is that it does not provide information about the actual predicted probabilities of
the positive class. Additionally, the ROC curve can be overly optimistic in cases of extreme class imbalance, where the number of negative instances far exceeds the number of positive instances.
Therefore, it is often recommended to use ROC curves in conjunction with other evaluation metrics, such as precision-recall curves, to obtain a comprehensive understanding of model performance.
ROC Curve in Machine Learning Frameworks
Many machine learning frameworks and libraries, such as Scikit-learn in Python, provide built-in functions to compute and visualize ROC curves. These tools allow data scientists to easily generate
ROC curves and calculate the AUC for their models. By leveraging these libraries, analysts can streamline the evaluation process, enabling them to focus on model improvement and feature engineering
rather than manual calculations. This integration of ROC analysis into machine learning workflows enhances productivity and facilitates better decision-making.
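For instance, scikit-learn's roc_curve and roc_auc_score compute all of this directly (a brief sketch with made-up labels and scores):

from sklearn.metrics import roc_curve, roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.2, 0.4, 0.35, 0.8, 0.1, 0.9, 0.65, 0.55]

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # points on the ROC curve
print("AUC =", roc_auc_score(y_true, y_score))      # area under the curve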
Conclusion on ROC Curve Usage
In summary, the Receiver Operating Characteristic (ROC) Curve is an invaluable tool for assessing the performance of binary classification models. By visualizing the trade-offs between sensitivity
and specificity, analysts can make informed decisions about model selection and optimization. Its ability to handle class imbalance and its widespread applicability across various domains underscore
its importance in statistics, data analysis, and data science. As the field continues to evolve, the ROC curve will remain a cornerstone of model evaluation and performance assessment. | {"url":"https://statisticseasily.com/glossario/what-is-receiver-operating-characteristic-roc-curve/","timestamp":"2024-11-11T19:50:21Z","content_type":"text/html","content_length":"139235","record_id":"<urn:uuid:e7cb5314-4d9c-4a45-b9fa-22368f774478>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00161.warc.gz"} |
Analog Elapsed Time
Each of the printable PDF elapsed time worksheets on this page includes an answer key with the time for each clock face as well as the actual time elapsed between the two values.
Start From Full Hours
36 Analog Elapsed Time Worksheets
Elapsed time worksheets with times starting with a whole hour.
Start From Full Hours
Start From Half Hours
20 Analog Elapsed Time Worksheets
Worksheets with problems computing elapsed times starting with a whole or half hour.
Start From Half Hours
Start From Quarter Hours
20 Analog Elapsed Time Worksheets
Elapsed time worksheets starting from times on fifteen minute intervals.
Start From Quarter Hours
Start From Five Minute Intervals
24 Analog Elapsed Time Worksheets
Printable Elapsed time worksheets starting with five minute intervals and calculating intervals within and past the current hour.
Start From Five Minute Intervals
Use These Telling Time Worksheets to Teach How to Read Clock Faces!
One of the strangest mechanical devices around is of course the analog clock. We see them everywhere because for so long they represented the pinnacle of design and engineering.
The miniaturization of the analog clock down to the wrist watch remains today a stunning achievement, rivaled in many respects only by the microchip in its expression of human ingenuity.
Being able to tell time from an analog clock remains a skill that will have relevance well into the digital age, if for no other reason than the analog clock continues to represent a noble
achievement, whether it resides on a clock tower or on your wrist.
Telling Time Worksheets to the Hour and Minutes
Building up to competency in telling time requires a lot of practice, and these worksheets are here to help!
Start with the worksheets that tell time on the whole hours, then progress through variations that deal with 15 minute intervals. Finally, work on telling time to the minutes to become completely
comfortable with reading any position on the clock face.
The time worksheets here include versions that require reading the clock face as well as drawing it in appropriately given a numeric time. You will also find versions that have clock faces with and
without numbers.
Adding Time Worksheets
One of the most common types of time problems is being able to calculate the time a certain interval from now. This requires understanding how the hours can change as you cross the 60 minute (12
o'clock) boundary on the clock face.
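A quick sketch of that boundary-crossing arithmetic using Python's standard library (the times are made up):

from datetime import datetime, timedelta

# Starting at 2:30 and adding 45 minutes crosses the 3 o'clock boundary:
start = datetime(2024, 1, 1, 2, 30)      # the date part is a dummy placeholder
print((start + timedelta(minutes=45)).strftime("%I:%M"))   # 03:15

# Elapsed time between two clock readings:
end = datetime(2024, 1, 1, 4, 5)
print(end - start)                        # 1:35:00, i.e. 1 hour 35 minutes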
Like the other time worksheets on this page, the adding time worksheets start with simple tasks like adding a number of minutes to a whole hour and then proceed up through adding arbitrary minutes to
times that do not cross hour boundaries, before presenting problems with calculated times that span hours. | {"url":"https://dadsworksheets.com/worksheets/analog-elapsed-time.html","timestamp":"2024-11-10T21:38:33Z","content_type":"text/html","content_length":"106764","record_id":"<urn:uuid:f091f980-3b66-4c2a-9d50-0542dc655794>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00653.warc.gz"} |
Hipster Dynamics#
Published: June 28, 2017 · Updated: November 1, 2023
The Hipster Effect: An IPython Interactive Exploration#
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed. It has been adapted to use HoloViews by Philipp Rudiger.
This week I started seeing references all over the internet to this paper: The Hipster Effect: When Anticonformists All Look The Same. It essentially describes a simple mathematical model which
models conformity and non-conformity among a mutually interacting population, and finds some interesting results: namely, conformity among a population of self-conscious non-conformists is similar to
a phase transition in a time-delayed thermodynamic system. In other words, with enough hipsters around responding to delayed fashion trends, a plethora of facial hair and fixed gear bikes is a
natural result.
Also naturally, upon reading the paper I wanted to try to reproduce the work. The paper solves the problem analytically for a continuous system and shows the precise values of certain phase
transitions within the long-term limit of the postulated system. Though such theoretical derivations are useful, I often find it more intuitive to simulate systems like this in a more approximate
manner to gain hands-on understanding.
Mathematically Modeling Hipsters#
We’ll start by defining the problem, and going through the notation suggested in the paper. We’ll consider a group of \(N\) people, and define the following quantities:
• \(\epsilon_i\) : this value is either \(+1\) or \(-1\). \(+1\) means person \(i\) is a hipster, while \(-1\) means they’re a conformist.
• \(s_i(t)\) : this is also either \(+1\) or \(-1\). This indicates person \(i\)'s choice of style at time \(t\). For example, \(+1\) might indicate a bushy beard, while \(-1\) indicates a clean-shaven face.
• \(J_{ij}\) : The influence matrix. This is a value greater than zero which indicates how much person \(j\) influences person \(i\).
• \(\tau_{ij}\) : The delay matrix. This is an integer telling us the length of delay for the style of person \(j\) to affect the style of person \(i\).
The idea of the model is this: on any given day, person \(i\) looks at the world around him or her, and sees some previous day’s version of everyone else. This information is \(s_j(t - \tau_{ij})\).
The amount that person \(j\) influences person \(i\) is given by the influence matrix, \(J_{ij}\), and after putting all the information together, we see that person \(i\)’s mean impression of the
world’s style is
\[ m_i(t) = \frac{1}{N} \sum_j J_{ij} \cdot s_j(t - \tau_{ij}) \]
Given the problem setup, we can quickly check whether this impression matches their own current style:
• if \(m_i(t) \cdot s_i(t) > 0\), then person \(i\) matches those around them
• if \(m_i(t) \cdot s_i(t) < 0\), then person \(i\) looks different than those around them
A hipster who notices that their style matches that of the world around them will risk giving up all their hipster cred if they don’t change quickly; a conformist will have the opposite reaction.
Because \(\epsilon_i\) = \(+1\) for a hipster and \(-1\) for a conformist, we can encode this observation in a single value which tells us which way the person will lean that day:
\[ x_i(t) = -\epsilon_i m_i(t) s_i(t) \]
Simple! If \(x_i(t) > 0\), then person \(i\) will more likely switch their style that day, and if \(x_i(t) < 0\), person \(i\) will more likely maintain the same style as the previous day. So we have
a formula for how to update each person’s style based on their preferences, their influences, and the world around them.
But the world is a noisy place. Each person might have other things going on that day, so instead of using this value directly, we can turn it in to a probabilistic statement. Consider the function
\[ \phi(x;\beta) = \frac{1 + \tanh(\beta \cdot x)}{2} \]
We can plot this function quickly:
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh', 'matplotlib')
x = np.linspace(-1, 1, 1000)
curves = hv.NdOverlay(kdims=['$\\beta$'])
for beta in [0.1, 0.5, 1, 5]:
curves[beta] = hv.Curve(zip(x, 0.5 * (1 + np.tanh(beta * x))),
'$x$', '$\\phi(x;\\beta)$')
curves.opts(opts.NdOverlay(aspect=1.5, fig_size=200, legend_position='top_left'))
This gives us a nice way to move from our preference \(x_i\) to a probability of switching styles. Here \(\beta\) is inversely related to noise. For large \(\beta\), the noise is small and we
basically map \(x > 0\) to a 100% probability of switching, and \(x<0\) to a 0% probability of switching. As \(\beta\) gets smaller, the probabilities get less and less distinct.
The Code#
Let’s see this model in action. We’ll start by defining a class which implements everything we’ve gone through above:
class HipsterStep(object):
"""Class to implement hipster evolution
initial_style : length-N array
values > 0 indicate one style, while values <= 0 indicate the other.
is_hipster : length-N array
True or False, indicating whether each person is a hipster
influence_matrix : N x N array
Array of non-negative values. influence_matrix[i, j] indicates
how much influence person j has on person i
delay_matrix : N x N array
Array of positive integers. delay_matrix[i, j] indicates the
number of days delay between person j's influence on person i.
"""
def __init__(self, initial_style, is_hipster,
influence_matrix, delay_matrix,
beta=1, rseed=None):
self.initial_style = initial_style
self.is_hipster = is_hipster
self.influence_matrix = influence_matrix
self.delay_matrix = delay_matrix
self.rng = np.random.RandomState(rseed)
self.beta = beta
# make s array consisting of -1 and 1
self.s = -1 + 2 * (np.atleast_2d(initial_style) > 0)
N = self.s.shape[1]
# make eps array consisting of -1 and 1
self.eps = -1 + 2 * (np.asarray(is_hipster) > 0)
# create influence_matrix and delay_matrix
self.J = np.asarray(influence_matrix, dtype=float)
self.tau = np.asarray(delay_matrix, dtype=int)
# validate all the inputs
assert self.s.ndim == 2
assert self.s.shape[1] == N
assert self.eps.shape == (N,)
assert self.J.shape == (N, N)
assert np.all(self.J >= 0)
assert np.all(self.tau > 0)
@staticmethod
def phi(x, beta):
return 0.5 * (1 + np.tanh(beta * x))
def step_once(self):
N = self.s.shape[1]
# iref[i, j] gives the index for the j^th individual's
# time-delayed influence on the i^th individual
iref = np.maximum(0, self.s.shape[0] - self.tau)
# sref[i, j] gives the previous state of the j^th individual
# which affects the current state of the i^th individual
sref = self.s[iref, np.arange(N)]
# m[i] is the mean of weighted influences of other individuals
m = (self.J * sref).sum(1) / self.J.sum(1)
# From m, we use the sigmoid function to compute a transition probability
transition_prob = self.phi(-self.eps * m * self.s[-1], beta=self.beta)
# Now choose steps stochastically based on this probability
new_s = np.where(transition_prob > self.rng.rand(N), -1, 1) * self.s[-1]
# Add this to the results, and return
self.s = np.vstack([self.s, new_s])
return self.s
def step(self, N):
    for i in range(N):
        self.step_once()
    return self.s
Now we’ll create a function which will return an instance of the HipsterStep class with the appropriate settings:
def get_sim(Npeople=500, hipster_frac=0.8, initial_state_frac=0.5, delay=20, log10_beta=0.5, rseed=42):
rng = np.random.RandomState(rseed)
initial_state = (rng.rand(1, Npeople) > initial_state_frac)
is_hipster = (rng.rand(Npeople) > hipster_frac)
influence_matrix = abs(rng.randn(Npeople, Npeople))
influence_matrix.flat[::Npeople + 1] = 0
delay_matrix = 1 + rng.poisson(delay, size=(Npeople, Npeople))
return HipsterStep(initial_state, is_hipster, influence_matrix, delay_matrix=delay_matrix,
beta=10 ** log10_beta, rseed=rseed)
Exploring this data#
Now that we’ve defined the simulation, we can start exploring this data. I’ll quickly demonstrate how to advance simulation time and get the results.
First we initialize the model with a certain fraction of hipsters:
sim = get_sim(hipster_frac=0.8)
To run the simulation a number of steps, we execute sim.step(Nsteps), giving us a matrix of identities for each individual at each timestep.
result = sim.step(200)
array([[-1, 1, 1, ..., -1, 1, 1],
[ 1, 1, 1, ..., 1, 1, 1],
[ 1, 1, -1, ..., -1, 1, -1],
[ 1, 1, 1, ..., -1, -1, 1],
[ 1, 1, 1, ..., 1, -1, 1],
[ 1, 1, 1, ..., 1, -1, 1]])
Now we can simply go right ahead and visualize this data using an Image Element type, defining the dimensions and bounds of the space.
hv.Image(result.T, ['Time', 'individual'], 'State', bounds=(0, 0, 100, 500)).opts(opts.Image(width=600))
Now that you know how to run the simulation and access the data have a go at exploring the effects of different parameters on the population dynamics or apply some custom analyses to this data. Here
are two quick examples of what you can do:
hipster_frac = hv.HoloMap(kdims='Hipster Fraction')
hipster_curves = hipster_frac.clone(shared_data=False)
for i in np.linspace(0.1, 1, 10):
sim = get_sim(hipster_frac=i)
img = hv.Image(sim.step(200).T.astype('int8'), ['Time', 'individual'], 'Bearded',
bounds=(0, 0, 500, 500), group='Population Dynamics')
hipster_frac[i] = img
agg = img.aggregate('Time', function=np.mean, spreadfn=np.std)
hipster_curves[i] = hv.ErrorBars(agg) * hv.Curve(agg)
(hipster_frac + hipster_curves) | {"url":"https://examples.holoviz.org/gallery/hipster_dynamics/hipster_dynamics.html","timestamp":"2024-11-07T20:03:21Z","content_type":"text/html","content_length":"1049279","record_id":"<urn:uuid:52c2165f-86eb-4d55-b6e0-d796096bd167>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00208.warc.gz"} |
588 m/s to ft/s - How fast is 588 meters per second in feet per second? [CONVERT] ✔
588 meters per second in feet per second
Conversion in the opposite direction
The inverse of the conversion factor is that 1 foot per second is equal to 0.000518367346938775 times 588 meters per second.
It can also be expressed as: 588 meters per second is equal to $\frac{1}{0.000518367346938775}$ feet per second.
An approximate numerical result would be: five hundred and eighty-eight meters per second is about one thousand, nine hundred and twenty-nine point one three feet per second, or alternatively, a foot
per second is about zero point zero zero zero five two times five hundred and eighty-eight meters per second.
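To check the arithmetic yourself, here is a small Python sketch; 0.3048 m per foot is the exact definition of the international foot:

M_PER_FT = 0.3048              # meters per foot (exact by definition)

mps = 588.0
fps = mps / M_PER_FT           # meters per second -> feet per second
print(round(fps, 2))           # 1929.13
print(M_PER_FT / mps)          # inverse factor, ~0.000518367346938775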
[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point).
Results may contain small errors due to the use of floating point arithmetic. | {"url":"https://converter.ninja/velocity/meters-per-second-to-feet-per-second/588-mps-to-fps/","timestamp":"2024-11-09T01:32:30Z","content_type":"text/html","content_length":"18041","record_id":"<urn:uuid:54b5806e-8e72-426f-add7-27f8662a5c04>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00138.warc.gz"} |
Axiomatizing resource bounds for measure
Resource-bounded measure is a generalization of classical Lebesgue measure that is useful in computational complexity. The central parameter of resource-bounded measure is the resource bound Δ, which
is a class of functions. Most applications of resource-bounded measure use only the "measure-zero/measure-one fragment" of the theory. For this fragment, Δ can be taken to be a class of type-one
functions. However, in the full theory of resource-bounded measurability and measure, the resource bound Δ also contains type-two functionals. To date, both the full theory and its zero-one fragment
have been developed in terms of a list of example resource bounds. This paper replaces this list-of-examples approach with a careful investigation of the conditions that suffice for a class Δ to be a
resource bound.
Original language English (US)
Title of host publication Models of Computation in Context - 7th Conference on Computability in Europe, CiE 2011, Proceedings
Pages 102-111
Number of pages 10
State Published - 2011
Event 7th Conference on Computability in Europe, CiE 2011 - Sofia, Bulgaria
Duration: Jun 27 2011 → Jul 2 2011
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 6735 LNCS
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Other 7th Conference on Computability in Europe, CiE 2011
Country/Territory Bulgaria
City Sofia
Period 6/27/11 → 7/2/11
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science | {"url":"https://experts.syr.edu/en/publications/axiomatizing-resource-bounds-for-measure","timestamp":"2024-11-14T05:47:56Z","content_type":"text/html","content_length":"48306","record_id":"<urn:uuid:109f3588-fb20-4990-9a0f-dbf7198a763c>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00448.warc.gz"}
Row Function in Excel | Excelchat
Good day. I am looking for a formula. On excel I have to match number 1 2 or 3 with 13 numbers with commas in 500 rows. Sample is the right line 1,3,1,2,2,3,1,2,3,1,1,2,1 What is the possibility to
match it according to the row Ex 1,3,1,2,2,3,1,2,3,1,1,2,1 To show maybe 1,2,2,2,2,3,1,2,3,1,1,2,1 The 2,2 is wrong or Ex 1,3,1,2,2,3,1,2,3,1,1,2,1 To show maybe 1,2,2,2,2,3,1,2,3,1,1,2,1 is right
Please help me
Solved by I. Y. in 21 mins | {"url":"https://www.got-it.ai/solutions/excel-chat/excel-help/row-function?page=5","timestamp":"2024-11-07T16:14:58Z","content_type":"text/html","content_length":"352673","record_id":"<urn:uuid:7a8eddb3-516d-4a28-b75b-0a06286c8640>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00447.warc.gz"} |
TensorFlow Tutorial for Beginners - reason.town
TensorFlow Tutorial for Beginners
This TensorFlow tutorial for beginners will show you how to get started with this open-source machine learning framework. You will learn how to install TensorFlow, how to create and run your first
TensorFlow graph, and how to save and restore a TensorFlow model.
Introduction to TensorFlow
TensorFlow is a powerful tool for machine learning and artificial intelligence. In this tutorial, we’ll introduce TensorFlow and show you how to get started with it.
TensorFlow Basics
In this TensorFlow tutorial, we will be covering the following topics:
– What is TensorFlow?
– Installing TensorFlow
– Tensors
– Variables
– Graphs and sessions
– Getting started with TensorFlow
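As a taste of those topics, here is a minimal hedged sketch using the TensorFlow 2.x API, where tf.function stands in for the graphs and sessions of TensorFlow 1.x (assumes TensorFlow is installed):

import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])    # an immutable tensor
v = tf.Variable(tf.zeros([2, 2]))            # a mutable variable
v.assign(t * 2.0)                            # update the variable in place

@tf.function                                 # traces the Python function into a graph
def matmul_sum(a, b):
    return tf.reduce_sum(tf.matmul(a, b))

print(matmul_sum(t, v).numpy())              # scalar result as a NumPy value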
Building a simple TensorFlow model
TensorFlow is a powerful tool for building machine learning models. In this tutorial, you will learn how to build a simple TensorFlow model for regression.
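A minimal sketch of such a regression model with Keras follows; the synthetic data and hyperparameters are illustrative assumptions rather than anything prescribed by this tutorial:

import numpy as np
import tensorflow as tf

# Synthetic data around the line y = 3x + 2
x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 2.0 + 0.05 * np.random.randn(256, 1).astype("float32")

# One dense unit is a plain linear model
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=100, verbose=0)

w, b = model.layers[0].get_weights()
print(w.ravel(), b)    # should be near [3.0] and [2.0]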
TensorFlow for deep learning
TensorFlow is an open-source software library for machine learning. It was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence
research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
TensorFlow offers APIs for beginners and experts to develop for desktop, mobile, web, and cloud. See the sections below to get started.
TensorFlow for image recognition
This TensorFlow tutorial is for beginners who want to learn how to create image recognition systems. You will learn how to use TensorFlow, an open source library for numerical computation and
large-scale machine learning.
TensorFlow for natural language processing
Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. In NLP,
computers are trained to perform human-like tasks such as identification of different language structures, translation between languages, and extracting meaning from text.
TensorFlow is an open-source software library for NLP that was developed by Google. It offers a range of tools and techniques that can be used to build NLP applications. In this tutorial, we will
focus on how to use TensorFlow for text classification. We will first briefly describe what text classification is and why it is useful. We will then go through a simple example of how to use
TensorFlow for text classification. Finally, we will provide some tips on how to improve the performance of your text classifier.
TensorFlow for time series analysis
TensorFlow is a powerful tool for analyzing time series data. In this tutorial, we will cover the basics of using TensorFlow to build models for time series analysis. We will start with a simple
example of using TensorFlow to predict the next value in a time series. We will then explore more complex examples of using TensorFlow to build models for advanced time series analysis.
TensorFlow for reinforcement learning
Reinforcement learning is a type of machine learning that focuses on training models to make decisions in environments where there is a clear goal or reward. TensorFlow is a powerful tool for
reinforcement learning because it allows you to easily create complex models and train them on data efficiently.
In this tutorial, we’ll show you how to use TensorFlow to train a model for reinforcement learning. We’ll go over the basic concepts of reinforcement learning and show you how to create a simple
Q-learning model using TensorFlow. Finally, we’ll discuss some of the challenges involved in reinforcement learning and ways to overcome them.
TensorFlow for unsupervised learning
Unsupervised learning is a branch of machine learning that deals with data that is not labeled or classified. This means that there is no correct answer for the algorithm to learn from. Instead, the
algorithm has to find patterns and structure in the data on its own.
One popular unsupervised learning algorithm is called TensorFlow. TensorFlow is a powerful tool that can be used for a variety of tasks, including image recognition, natural language processing, and
even predictive modeling.
In this tutorial, we will focus on how to use TensorFlow for unsupervised learning. We will go over some of the basics of the TensorFlow framework and then we will apply it to a simple problem:
finding clusters in a dataset. By the end of this tutorial, you should have a good understanding of how to use TensorFlow for unsupervised learning tasks.
TensorFlow for transfer learning
One popular way to use TensorFlow is for transfer learning. This is where you take a pre-trained model, such as one that has been trained on the ImageNet dataset, and use it as the basis for a new
model that is trained on your own dataset.
The great thing about transfer learning is that it can help you get good results even if your dataset is small, because the pre-trained model has already learned features that are generalizable to
many different tasks.
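As a hedged sketch of that freeze-then-reuse recipe in Keras (MobileNetV2, the input size, and the binary head are our own illustrative choices):

import tensorflow as tf

# Pre-trained ImageNet feature extractor, without its classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                       # freeze the learned features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new head for a binary task
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=5)   # train only the new head on your own data
# base.trainable = True           # optionally unfreeze afterwards to fine-tune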
There are two ways to do transfer learning in TensorFlow: using the pre-trained model as-is, or fine-tuning the pre-trained model. In this tutorial, we’ll focus on fine-tuning, because it’s generally
more effective than using the pre-trained model as-is. | {"url":"https://reason.town/tensorflow-tutorial-for-beginners/","timestamp":"2024-11-07T13:39:18Z","content_type":"text/html","content_length":"94339","record_id":"<urn:uuid:426e3958-c002-47c8-b532-eb629f055b47>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00266.warc.gz"} |
Maths Tutors in Leeds | Key Stage GCSE and A Level Maths Tutors
Maths Tutor in Sheffield
Sheffield Tutor Company has some of the best KS2 Maths tutors in Sheffield. Our Maths tutors are passionate about helping students of all ages, but especially at Key Stage Two, such a fundamental
stage of a student's learning life. If you are looking for a KS2 Maths tutor in Sheffield, Contact Us.
With the introduction of the new course with first exams in 2017, GCSE Maths has seen significant changes in the exam structure, grading system as well as content. This has increased the spectrum of
possible grades with an increased difficulty in the GCSE Maths material. Topics previously seen on A Level Maths papers are now going to be on the GCSE Maths papers. Additionally, students will no
longer have access to a Maths formula sheet in the exam and therefore will be required to learn approximately 20 more equations. Our GCSE Maths tutors have studied the changes in depth and are
confident that all of our Sheffield students will do really well in the upcoming GCSE Maths exams.
A level Maths has become the most popular A level subject with over 10% of A level students selecting it each year. This is because universities are putting more focus on academic A level subjects as
competition for places in the top universities is rife.
Our A level Maths tutors in Sheffield are focused on helping each and every student achieve the grade their ability and hard work deserves. Whether that is a C or an A*, it doesn’t matter as our A
level Maths tutors are committed to each and every pupil achieving their potential. | {"url":"https://sheffieldtutorcompany.co.uk/maths-tutor-in-sheffield/","timestamp":"2024-11-03T13:11:14Z","content_type":"text/html","content_length":"41253","record_id":"<urn:uuid:ee34b7fa-f62d-44a6-8951-2bdae89db21d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00576.warc.gz"} |
Chord of a circle
Use other angle facts to determine other necessary angles.
As the perpendicular from the centre of a circle to a chord bisects the chord this means that the length AE is the same as the length CE .
As the tangent and the radius meet at 90°, the angle EBF = 90°. This means that we can calculate the angle ABE:
\[ABE=90-77=13^{\circ}\]
{"url":"https://thirdspacelearning.com/gcse-maths/geometry-and-measure/chord-of-a-circle/","timestamp":"2024-11-13T18:33:21Z","content_type":"text/html","content_length":"272680","record_id":"<urn:uuid:f813bf2a-f7a0-4e3c-904a-656ff071b4e3>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00640.warc.gz"}
Re: Distributing square-root (1/2) power through exponential equation
• To: mathgroup at smc.vnet.net
• Subject: [mg101322] Re: Distributing square-root (1/2) power through exponential equation
• From: pfalloon <pfalloon at gmail.com>
• Date: Wed, 1 Jul 2009 06:34:52 -0400 (EDT)
• References: <h2cprm$agj$1@smc.vnet.net>
On Jun 30, 8:36 pm, Steven Matthew Anderson <AdAstr... at mac.com> wrote:
> I'm playing with normal distributions, Two random points 1 and 2 with x and y coordinates given by:
> px1=PDF[NormalDistribution[Mu,Sx],X1]
> px2=PDF[NormalDistribution[Mu,Sx],X1]
> py1=PDF[NormalDistribution[Mu,Sy],Y1]
> py2=PDF[NormalDistribution[Mu,Sy],Y2]
> The square of the Euclidean Distance between them is
> SqD = (px2-px1)^2+(py2-py1)^2
> Take the square root and expand of that to get
> Dist = Sqrt[Expand[SqD]]
> Now the question:
> How do I get the square root to act just like another power so I can simplify this mess? I have tried PowerExpand, FullSimplify, Expand, Simplify, and various combinations. Not sure what I'm missing here.
I think the general answer is that you CAN'T get square root to behave
like "just another power" -- if by that you mean an integer power.
For example: Sqrt[x^2] is NOT the same thing as x (try x=-1), and it
is one of the great advantages of Mathematica's implementation that
this distinction is carefully respected.
More generally, branch cuts of complex-valued functions are handled in
a consistent manner, which can lead to some bewildering expressions in
simple cases, but can be extremely powerful.
One thing that you may find useful is to provide Assumptions when you
think that will help to simplify an expression. So, in the trivial
case I just mentioned, if x > 0 then you can get Mathematica to
simplify appropriately:
Simplify[Sqrt[x^2], x > 0]
There are many cases where this is useful (e.g. Sin[n*Pi], (-1)^(2*n
+1), where n is an integer).
However, for the example you mention, I can't see how you would expect
it to simplify? The best that I can imagine is pretty much what's
returned when you give assumptions (I'm assuming there was a typo in
your definition of px2? and note that the built-in EuclideanDistance function can be used here):
assumptions = Element[{Mu,X1,X2,Y1,Y2}, Reals] && Sx>0 && Sy>0;
FullSimplify[EuclideanDistance[{px1,py1}, {px2,py2}], assumptions] | {"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jul/msg00024.html","timestamp":"2024-11-08T01:53:55Z","content_type":"text/html","content_length":"32381","record_id":"<urn:uuid:4ced0617-7174-4356-97fb-fe694079350b>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00678.warc.gz"} |
Statistics Calculators
Formulas and calculators for statistics and probability functions can be used to perform or verify the results of statistical or probability-related calculations. The statistics and probability
formula reference sheet contains most of the important functions for data analysis. The main objective of these reference sheets and calculators is to help students, professionals, and researchers
quickly perform or verify the important calculations involved in statistics and probability theory. They are all indispensable tools used for gathering, organizing, and
mechanics of complex systems. | {"url":"https://dev.ncalculators.com/statistics/","timestamp":"2024-11-12T18:37:11Z","content_type":"text/html","content_length":"56948","record_id":"<urn:uuid:cd2c004b-8444-44fa-b37e-0eb56eac47ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00724.warc.gz"} |
Liquid Filling Machine – Liquid Filler Part 4 | Hardware To Software - Part 2
Liquid Filling Machine – Liquid Filler Part 4
It is time to calibrate the device.
To do this you need a digital scale and a big container.
Position the container on the digital scale and decide on a weight step. For a 1Kg load cell a 100gr step is OK (so you will do 10 measurements). For a 20Kg load cell we decided on a 250gr step (8.5 Kg that we needed / 0.25 Kg per step = 34 measurements).
Fill the container with your weight step and then place it to the weight board of the liquid filler.
Read the weight analog value from the LCD and write it down to a paper. Keep doing this for all the range you want.
Calibration of Liquid Filler
Below you can see the 1Kg table with the analog readings
Normal Calibration Table using the 1Kg load cell
Notice that in Autofill mode these readings change a little bit. This is caused by the valve's current absorption.
It is always good to repeat the measurements in autofill mode (valve open) with the valve attached.
Autofill Calibration table using 1Kg load cell
With the help of SciDAVis we can plot the graphs
Load cell 1Kg normal plot
Load cell 1Kg autofill plot
Scidavis give us the linear formula :
y=133,9982+1,2134*x where in this case x is analog read and y is grams
in the normal case
Below are both plots in one diagram :
Load cell 1Kg both diagrams
Detail of the diagrams - 1Kg load cell
Notice that both curves differ by about 4 points on average.
All SciDAVis diagrams for the 1Kg load cell can be downloaded from here : SciDAVis-LoadCell-1Kg
In 20Kg load cell case we do exactly the same procedure.
Below is located the "normal" calibration table
20 kg load cell - normal calibration table
The graph this time is :
Load cell 20Kg plot diagram
In this case, after testing, the autofill curve had a difference of 60 points.
Notice that if your graph is not linear, you should focus on its linear zone. In the case of the 20Kg load cell, for example, we had a complete graph like this:
Linear zone 20 Kg Load Cell
So as you can see, we have selected the part from 1250gr and up in order to calculate our formula.
This time the formula is: y = -92.6365 + 0.1129*x, where this time y is the analog reading and x is grams.
The 20Kg load cell SciDAVis files can be found here : SciDAVis-LoadCell-20Kg
Knowing the conversion formula, we are able to re-program our ATmega with the help of the Arduino board.
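If you prefer scripting the fit instead of using SciDAVis, the same least-squares line can be computed with NumPy. The readings below are hypothetical stand-ins for rows of the calibration table, not the actual measured values:

import numpy as np

# (analog reading, grams) pairs as read off the LCD at each 100 g step
analog = np.array([-110, -28, 54, 137, 219, 302])
grams  = np.array([   0, 100, 200, 300, 400, 500])

slope, intercept = np.polyfit(analog, grams, 1)   # 1st-degree least-squares fit
print(f"grams = {intercept:.4f} + {slope:.4f} * analog_read")

def to_grams(analog_read):
    """Convert a raw load-cell reading to grams using the fitted line."""
    return intercept + slope * analog_read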
Pages: Page 1, Page 2, Page 3 | {"url":"https://www.hw2sw.com/2011/10/27/liquid-filling-machine-part-4/2/","timestamp":"2024-11-04T16:44:42Z","content_type":"text/html","content_length":"72250","record_id":"<urn:uuid:39c4dc7c-9fc8-49ae-aeca-cd3972b97b0b>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00832.warc.gz"} |
How Well Do We Know the Neutron-Matter Equation of State at the Densities Inside Neutron Stars? A Bayesian Approach with Correlated Uncertainties
We introduce a new framework for quantifying correlated uncertainties of the infinite-matter equation of state derived from chiral effective field theory (χEFT). Bayesian machine learning via
Gaussian processes with physics-based hyperparameters allows us to efficiently quantify and propagate theoretical uncertainties of the equation of state, such as χEFT truncation errors, to derived
quantities. We apply this framework to state-of-the-art many-body perturbation theory calculations with nucleon-nucleon and three-nucleon interactions up to fourth order in the χEFT expansion. This
produces the first statistically robust uncertainty estimates for key quantities of neutron stars. We give results up to twice nuclear saturation density for the energy per particle, pressure, and
speed of sound of neutron matter, as well as for the nuclear symmetry energy and its derivative. At nuclear saturation density, the predicted symmetry energy and its slope are consistent with
experimental constraints.
Physical Review Letters
Pub Date:
November 2020
• Nuclear Theory;
• Astrophysics - High Energy Astrophysical Phenomena;
• High Energy Physics - Phenomenology;
• Nuclear Experiment
7 pages, 2 figures, supplemental material | {"url":"https://ui.adsabs.harvard.edu/abs/2020PhRvL.125t2702D/abstract","timestamp":"2024-11-07T04:22:14Z","content_type":"text/html","content_length":"40849","record_id":"<urn:uuid:48d47f97-02c8-4912-b5fd-b9f6c74838b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00382.warc.gz"} |
Debt To Equity Ratio Calculator
Calculating the debt to equity ratio is essential for understanding a company’s financial health and risk. This ratio compares a company’s total debt to its total equity, indicating how much debt a
company is using to finance its operations relative to the value of shareholder equity.
How to Use
To use the debt to equity ratio calculator, simply input the company’s total debt and total equity in the respective fields below and click the “Calculate” button to obtain the result.
The debt to equity ratio formula is: Debt to Equity Ratio = Total Debt / Total Equity
Example Solve
Suppose a company has a total debt of $500,000 and total equity of $1,000,000. Let's calculate the debt to equity ratio:
Debt to Equity Ratio = 500,000 / 1,000,000 = 0.5
So, the debt to equity ratio for this company is 0.5.
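The same computation as a small Python helper (a sketch; the zero-equity guard is our own addition):

def debt_to_equity(total_debt: float, total_equity: float) -> float:
    """Return total debt divided by total equity."""
    if total_equity == 0:
        raise ValueError("total equity must be non-zero")
    return total_debt / total_equity

print(debt_to_equity(500_000, 1_000_000))   # 0.5, matching the example above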
Q: What does the debt to equity ratio indicate?
A: The debt to equity ratio shows the proportion of debt a company is using to finance its operations compared to the value of shareholder equity.
Q: How is a low or high debt to equity ratio interpreted?
A: A low ratio suggests a conservative financing strategy with less risk, while a high ratio indicates more aggressive financing with higher risk.
Q: Is a high debt to equity ratio always bad?
A: Not necessarily. It depends on the industry norms and the company’s financial stability. Some industries typically operate with higher debt levels.
Q: Can the debt to equity ratio be negative?
A: No, the debt to equity ratio cannot be negative as it is a comparison of two positive values.
The debt to equity ratio is a crucial financial metric that provides insights into a company’s capital structure and risk profile. Use this calculator to quickly evaluate a company’s financial
leverage and make informed investment decisions. | {"url":"https://calculatordoc.com/debt-to-equity-ratio-calculator/","timestamp":"2024-11-12T06:04:17Z","content_type":"text/html","content_length":"91325","record_id":"<urn:uuid:252488ee-fda7-4a64-a040-4ff9d077f694>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00661.warc.gz"} |
Solve many optimization problems
One of the strengths of the SAS/IML language is its flexibility. Recently, a SAS programmer asked how to generalize a program in a previous article. The original program solved one optimization
problem. The reader said that she wants to solve this type of problem 300 times, each time using a different set of parameters. Essentially, she wants to loop over a set of parameter values, solve
the corresponding optimization problem, and save each solution to a data set.
Yes, you can do this in SAS/IML, and the technique is not limited to solving a series of optimization problems. Any time you have a parameterized family of problems, you can implement this idea.
First, you figure out how to solve the problem for one set of parameters, then you can loop over many sets of parameters and solve each problem in turn.
Solve the problem one time
As I say in my article, "Ten tips before you run an optimization," you should always develop and debug solving ONE problem before you attempt to solve MANY problems. This section describes the
problem and solves it for one set of parameters.
For this article, the goal is to find the values (x[1], x[2]) that maximize a function of two variables:
F(x[1], x[2]; a, b) = 1 - (x[1] - a)^2 - (x[2] - b)^2 + exp(-(x[1]^2 + x[2]^2)/2);
The function has two parameters, (a, b), which are not involved in the optimization. The function is a sum of two terms: a quadratic function and an exponentially decreasing function that looks like
the bivariate normal density.
Because the exponential term rapidly approaches zero when (x[1], x[2]) moves away from the origin, this function is a perturbation of a quadratic function. The maximum will occur somewhat close to
the value (x[1], x[2]) = (a, b), which is the value for which the quadratic term is maximal. The following graph shows a contour plot for the function when (a, b) = (-1, 2). The maximum value occurs
at approximately (x[1], x[2]) = (-0.95, 1.9).
The following program defines the function in the SAS/IML language. In the definition, the parameters (a, b) are put in the GLOBAL statement because they are constant during each optimization. The
SAS/IML language provides several nonlinear optimization routines. This program uses Newton-Raphson optimizer to solve the problem for (a, b) = (-1, 2).
proc iml;
start Func(x) global (a, b);
return 1 - (x[,1] - a)##2 - (x[,2] - b)##2 +
exp(-0.5*(x[,1]##2 + x[,2]##2));
a = -1; b = 2; /* set GLOBAL parameters */
/* test functions for one set of parameters */
opt = {1, /* find maximum of function */
2}; /* print a little bit of output */
x0 = {0 0}; /* initial guess for solution */
call nlpnra(rc, result, "Func", x0, opt); /* find maximal solution */
print result[c={"x1" "x2"}];
As claimed earlier, when (a, b) = (-1, 2), the maximum value of the function occurs at approximately (x[1], x[2]) = (-0.95, 1.9).
Solve the problem for many parameters
After you have successfully solved the problem for one set of parameters, you can iterate over a sequence of parameters. The following DATA step specifies five sets of parameters, but it could
specify 300 or 3,000. These parameters are read into a SAS/IML matrix. When you solve a problem many times in a loop, it is a good idea to suppress any output. The SAS/IML program suppresses the
tables and iteration history for each optimization step and saves the solution and the convergence status:
/* define a set of parameter values */
data Params;
input a b;
-1 2
proc iml;
start Func(x) global (a, b);
return 1 - (x[,1] - a)##2 - (x[,2] - b)##2 +
exp(-0.5*(x[,1]##2 + x[,2]##2));
/* read parameters into a matrix */
varNames = {'a' 'b'};
use Params; read all var varNames into Parms; close;
/* loop over parameters and solve problem */
opt = {1, /* find maximum of function */
0}; /* no print */
Soln = j(nrow(Parms), 2, .);
returnCode = j(nrow(Parms), 1, .);
do i = 1 to nrow(Parms);
/* assign GLOBAL variables */
a = Parms[i, 1]; b = Parms[i, 2];
/* For now, use same guess for every parameter. */
x0 = {0 0}; /* initial guess for solution */
call nlpnra(rc, result, "Func", x0, opt);
returnCode[i] = rc; /* save convergence status and solution vector */
Soln[i,] = result;
print Parms[c=varNames] returnCode Soln[c={x1 x2} format=Best8.];
The output shows the parameter values (a,b), the status of the optimization, and the optimal solution for (x[1], x[2]). The third column (returnCode) has the value 3 or 6 for these optimizations. You
can look up the exact meaning of the return codes, but the main thing to remember is that a positive return code indicates a successful optimization. A negative return code indicates that the
optimization terminated without finding an optimal solution. For this example, all five problems were solved successfully.
Choosing an initial guess based on the parameters
If the problem is not solved successfully for a certain parameter value, it might be that the initial guess was not very good. It is often possible to approximate the objective function to obtain a
good initial guess for the solution. This not only helps ensure convergence, but it often improves the speed of convergence. If you want to solve 300 problems, having a good guess will speed up the
total time.
For this example, the objective functions are a perturbation of a quadratic function. It is easy to show that the quadratic function has an optimal solution at (x[1], x[2]) = (a, b), and you can
verify from the previous output table that the optimal solutions are close to (a, b). Consequently, if you use (a, b) as an initial guess, rather than a generic value like (0, 0), then each problem
will converge in only a few iterations. For this problem, the GetInitialGuess function returns (a,b), but in general the function would return a function of the parameter or even solve a simpler set
of equations.
/* Choose an initial guess based on the parameters. */
start GetInitialGuess(Parms);
return Parms; /* for this problem, the solution is near (x1,x2)=(a,b) */
do i = 1 to nrow(Parms);
/* assign GLOBAL variables */
a = Parms[i, 1];
b = Parms[i, 2];
/* Choose an initial guess based on the parameters. */
x0 = GetInitialGuess(Parms[i,]); /* initial guess for solution */
call nlpnra(rc, result, "Func", x0, opt);
returnCode[i] = rc;
Soln[i,] = result;
print Parms[c=varNames] returnCode Soln[c={x1 x2} format=Best8.];
The solutions and return codes are similar to the previous example. You can see that changing the initial guess changes the convergence status and slightly changes the solution values.
For this simple problem, choosing a better initial guess provides only a small boost to performance. If you use (0,0) as an initial guess, you can solve 3,000 problems in about 2 seconds. If you use
the GetInitialGuess function, it takes about 1.8 seconds to solve the same set of optimizations. For other problems, providing a good initial guess might be more important.
In conclusion, the SAS/IML language makes it easy to solve multiple optimization problems, where each problem uses a different set of parameter values. To improve convergence, you can sometimes use
the parameter values to compute an initial guess.
2 Comments
My friend and colleague, Rob Pratt, sent me this PROC OPTMODEL code that performs the same optimizations but uses the COFOR statement in PROC OPTMODEL to solve the problems in parallel on
multiple threads. Very nice!
proc optmodel printlevel=0;
set OBS;
num a {OBS};
num b {OBS};
read data Params into OBS=[_N_] a b;
num a_this, b_this;
var X {1..2};
max Z = 1 - (X[1]-a_this)^2 - (X[2]-b_this)^2 + exp(-0.5*(X[1]^2 + X[2]^2));
num obj {OBS};
str solstatus {OBS};
num Xsol {OBS, 1..2};
cofor {i in OBS} do;
a_this = a[i];
b_this = b[i];
put a_this= b_this=;
obj[i] = _OBJ_;
solstatus[i] = _solution_status_;
for {j in 1..2} Xsol[i,j] = X[j];
print a b obj solstatus Xsol;
For SAS customers who have access to SAS Viya 3.5, you can solve this problem by using BY-group processing in the runOptmodel action:
data sascas1.Params;
set Params;
proc cas;
source pgm;
num a, b;
read data Params into a b;
var X {1..2};
max Z = 1 - (X[1]-a)^2 - (X[2]-b)^2 + exp(-0.5*(X[1]^2 + X[2]^2));
put a= b=;
print X;
create data summary from obj=_OBJ_ solstatus=_solution_status_ X1=X[1] X2=X[2];
action optimization.runOptmodel result=r / printlevel=0 code=pgm groupBy={'a','b'};
proc print data=sascas1.summary;
{"url":"https://blogs.sas.com/content/iml/2019/09/25/solve-many-optimization-problems.html","timestamp":"2024-11-07T16:36:09Z","content_type":"text/html","content_length":"65826","record_id":"<urn:uuid:069873d8-e87c-4d85-a3d3-db77f8a6039e>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00275.warc.gz"}
BPSK Modulation And Demodulation- Complete Matlab Code With Explanation
Binary Phase Shift Keying (BPSK) is a type of digital modulation technique in which we are sending one bit per symbol i.e., ‘0’ or a ‘1’. Hence, the bit rate and symbol rate are the same. Depending
upon the message bit, we can have a phase shift of 0^o or 180^o with respect to a reference carrier as shown in the figure above.
For example, we can have the following transmitted band-pass symbols:
$S_1=\sqrt{\frac{2E}{T}}\cos{(2\pi f t)}\rightarrow represents \mbox{ }'1'$
$S_2=\sqrt{\frac{2E}{T}}\cos{(2\pi f t+\pi)}\rightarrow represents \mbox{ }'0'$
$S_2=-\sqrt{\frac{2E}{T}}\cos{(2\pi f t)}\rightarrow represents \mbox{ }'0'$
Where ‘E’ is the symbol energy, ‘T’ is the symbol time period, f is the frequency of the carrier. Using Gram-schmidt orthogonalization, we get a single orthonormal basis function, given as:
$\psi_1=\sqrt{\frac{2}{T}}\cos{(2\pi f t)}$
Hence, the resulting constellation diagram can be given as:
Constellation Diagram Of BPSK Signal
There are only two in-phase components and no quadrature component.
Now, we can easily see that the two waveform of S[o] and S[1] are inverted with respect to one another and we can use following scheme to design a BPSK modulator:
BPSK modulator
First the NRZ encoder converts these digital bits into impulses to add a notion of time into them. Then NRZ waveform is generated by up-sampling these impulses. Afterwards, multiplication with the
carrier (orthonormal basis function) is carried out to generate the modulated BPSK waveform.
Demodulator Design:
We do coherent demodulation of the BPSK signal at the receiver. Coherent demodulation requires the received signal to be multiplied with the carrier having the same frequency and phase as at the
transmitter. The phase synchronization is normally achieved using Phase Locked Loop (PLL) at the receiver. PLL implementation is not done here, rather we assume perfect phase synchronization. Block
diagram of BPSK modulator is shown in the figure below. After the multiplication with the carrier (orthonormal basis function), the signal is integrated over the symbol duration ‘T’ and sampled.
Then thresholding is applied to determine if a ‘1’ was sent (+ve voltage) or a ‘0’ was sent (-ve voltage).
BPSK Receiver Design
The Matlab simulation code is given below. Here for the sake of simplicity, the bit rate is fixed to 1 bit/s (i.e., T=1 second). It is also assumed that Phased Locked Loop (PLL) has already achieved
exact phase synchronization.
clear all;
close all;
%Nb is the number of bits to be transmitted
Nb=5;
T=1;%Bit rate is assumed to be 1 bit/s;
%bits to be transmitted
b=[1 0 1 0 1]
%Rb is the bit rate in bits/second
Rb=1/T;
%Vp is the peak voltage +v of the NRZ waveform
Vp=1; %assumed value; the original assignment was lost in the page extraction
%Here we encode input bitstream as Bipolar NRZ-L waveform
NRZ_out=[];
for index=1:size(b,2)
    if b(index)==1
        NRZ_out=[NRZ_out ones(1,200)*Vp];
    elseif b(index)==0
        NRZ_out=[NRZ_out ones(1,200)*(-Vp)];
    end
end
%Generated bit stream impulses
figure(1);
stem(b);
xlabel('Time (seconds)-->')
ylabel('Amplitude (volts)-->')
title('Impulses of bits to be transmitted');
%Time vector with 200 samples per bit period
t=T/200:T/200:size(b,2)*T;
figure(2);
plot(t,NRZ_out);
xlabel('Time (seconds)-->');
ylabel('Amplitude (volts)-->');
title('Generated NRZ signal');
%Frequency of the carrier (5 Hz, as discussed in the comments below)
f=5;
%Here we generate the modulated signal by multiplying it with
%carrier (basis function)
Modulated=NRZ_out.*(sqrt(2/T)*cos(2*pi*f*t));
figure(3);
plot(t,Modulated);
xlabel('Time (seconds)-->');
ylabel('Amplitude (volts)-->');
title('BPSK Modulated signal');
%We begin demodulation by multiplying the received signal again with
%the carrier (basis function)
demodulated=Modulated.*(sqrt(2/T)*cos(2*pi*f*t));
%Here we perform the integration over time period T using trapz
%Integrator is an important part of correlator receiver used here
y=[];
for i=1:200:size(demodulated,2)
    y=[y trapz(t(i:i+199),demodulated(i:i+199))];
end
%Threshold at zero: a positive integral means '1', a negative one means '0'
received=y>0;
figure(4);
stem(received);
title('Impulses of Received bits');
xlabel('Time (seconds)-->');
ylabel('Amplitude (volts)')
Impulses of bits to be transmitted
Generated NRZ signal
BPSK Modulated Signal
If you have any comments or questions, you can discuss them below.
16 responses to “BPSK Modulation And Demodulation- Complete Matlab Code With Explanation”
1. scatter(); %error!! -> Not enough input arguments……
2. Thank you very much, that is very helpful!
but, there is one problem….
scatter(); % error !! -> Not enough input arguments……..
4. Hi Dr. Moazzam,
First of all, thanks for this. I was looking for an easy to understand BPSK Matlab implementation so was glad I found this. I just have a question on the NRZ_out and Modulated plots. Shouldn’t
the xlabel be samples instead of time? Because I don’t think the NRZ stream could have changed the frequency of the carrier which is what the plot seems to suggest. Hope to hear from you and
thanks again.
5. Hi, quick question.
You specify a frequency of 5hz…but your graph shows one of about 50. Could you explain why that is?
thank you
□ That’s just an artifact of the plotting – plot(t, Modulated) and you would see a frequency of 5Hz.
6. please help ..
Problem 1.
I want to Write a code in Matlab which creates a constant envelop PSK signal waveform that generate for M=8 (M stands for modulation), so that amplitude of the signal can reach up to sqrt(2). I
want to Plot a graph which showing that there is no difference except in their phases
Problem 2.
I want to Write a code in Matlab which will generate a 500 random numbers to represent our symbols; and then divide them into 4 intervals. Whereby each interval corresponds to a symbol A0, A1,
A2, A3, then plot a stem of 50 random symbols generated in accordance to the interval division.
7. i have error in scatter plot. while BPSK modulation. whether anyone help me to correct scatterplot.
8. i don’t understand this part
for i=1:200:size(demodulated,2)
y=[y trapz(t(i:i+199),demodulated(i:i+199))];
□ The symbol duration is 200, here we integrate the demodulated signal with respect to i each of size 200. hope that helps!
9. how to make bpsk program run repeatedly?
And I am also in need of 2:1 Multiplexer. please help me with the code
10. this code not giving the propar phase shift .
11. veryhelpful,thank you
12. Hi Moazzam I love the code it is Tidy and very well commented, my question is: Since C++ is included within the usual MATLAB commands code how can the corresponding Simulink Block Model can be
generated is there a simple way of achieving the Block Diagram at all??
Thank You
13. Thank you so much… I was searching for a code …. I finally found it
14. really helpful | {"url":"http://drmoazzam.com/matlab-code-bpsk-modulation-and-demodulation-with-explanation","timestamp":"2024-11-14T21:35:22Z","content_type":"text/html","content_length":"99105","record_id":"<urn:uuid:7c388a60-4c23-4302-be8c-99de5c1b617c>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00898.warc.gz"} |
EE 396: Lecture 14-15 Ganesh Sundaramoorthi April 9, 12, 2011
Mumford and Shah Segmentation
We discuss a segmentation approach that generalizes Region Competition, before going on to object detection. The model is as follows:

- The image is composed of $N$ regions $R_i$, $i = 1, \ldots, N$ (with $\cup_{i=1}^N R_i = \Omega$ and $R_i \cap R_j = \emptyset$ for $i \neq j$), and
$$I(x) = u_i(x) + \eta_i(x), \quad x \in R_i,$$
where
$$u_i \in \left\{ u : R_i \to \mathbb{R} \;:\; \int_{R_i} |\nabla u(x)|^2 \, dx < \infty \right\} \quad (1)$$
$$\frac{\partial u}{\partial n}(x) = 0, \quad x \in \partial R_i, \quad (2)$$
and the noise $\eta_i(x) \sim N(0, \sigma)$ is iid in both $x$ and $i$. That is, unlike region competition, we have smooth functions representing the regions rather than just constants.

- The derivatives $\partial_{x_1} u_i(x), \partial_{x_2} u_i(x) \sim N(0, \sigma_p)$ are iid in $x$ and $i$. Note that one may use other priors, for example a Laplacian prior, as it better fits natural image statistics. The Gaussian prior implies
$$p(u_i) \propto \exp\left( -\alpha \int_{R_i} |\nabla u_i(x)|^2 \, dx \right). \quad (3)$$

- The prior on $R_i$ is the same as in region competition, that is, curves of large length are penalized:
$$p(R_i) \propto \exp\left( -\beta L(\partial R_i) \right). \quad (4)$$
Calculating the posterior probability $p(\{R_i, u_i\}_{i=1}^N \mid I)$ using Bayes' Rule, and then calculating the MAP estimate, is equivalent to minimizing the energy
$$E(\{R_i, u_i\}_{i=1}^N) = \sum_{i=1}^N \left[ \int_{R_i} (I(x) - u_i(x))^2 \, dx + \alpha \int_{R_i} |\nabla u_i(x)|^2 \, dx + \beta L(\partial R_i) \right]. \quad (5)$$
The case of $N = 2$ is the energy considered in [3, 4]. The Euler-Lagrange equation in $u_i$ (holding $R_i$ fixed) is the same as we saw for denoising:
$$\begin{cases} I(x) - u_i(x) = \alpha \Delta u_i(x) & x \in R_i \\ \frac{\partial u_i}{\partial n}(x) = 0 & x \in \partial R_i. \end{cases} \quad (6)$$

Also, similar to region competition, the Euler-Lagrange equation with respect to the region boundary $c_i$ of $R_i$ is¹
$$\nabla_{c_i \cap c_j} E(\{u_i, R_i\}_{i=1}^N) = \left( (I - u_i)^2 - (I - u_j)^2 + \alpha (|\nabla u_i|^2 - |\nabla u_j|^2) \right) N_i - 2\beta \kappa_i N_i \quad (7)$$
$$= \left( 2(u_j - u_i)\left( I - \frac{u_i + u_j}{2} \right) + \alpha (|\nabla u_i|^2 - |\nabla u_j|^2) \right) N_i - 2\beta \kappa_i N_i, \quad (8)$$
when $R_i$ is adjacent to $R_j$.

Thus, similar to region competition, we perform an iterative minimization where we

1. make a guess of the initial regions $R_i$,
2. solve for $u_i$ in $R_i$ for all $i$ using (6),
3. perturb the regions $R_i$ by (7),
4. iterate 2 and 3 until convergence.

Notice that the process is extremely slow, since at each iteration we have to solve (6), which we saw from denoising is costly. For implementation details using level set methods and fast methods of solving the problem, see [7, 8].
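As a rough illustration of step 2 (a sketch of mine, not code from the lecture), one can relax toward the optimality condition of the data-plus-smoothness energy on a rectangular region, using a five-point Laplacian and replicated borders for the Neumann condition:

import numpy as np

def smooth_region(I, alpha=1.0, n_iters=2000, dt=0.1):
    # Gradient descent on int (I-u)^2 + alpha*|grad u|^2 dx; at convergence
    # u satisfies the Euler-Lagrange condition with du/dn = 0 on the border.
    u = I.astype(float).copy()
    for _ in range(n_iters):
        up = np.pad(u, 1, mode='edge')  # replicated edges: Neumann boundary
        lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
               up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u)
        u += dt * ((I - u) + alpha * lap)
    return u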
Object Detection From Images

In order to detect objects in images, we cannot simply use image segmentation schemes that divide the image based on homogeneous image statistics (e.g., intensity, texture, edges, etc.). If we are to detect humans, cars, birds, etc., our algorithm must have knowledge about how the object looks. Indeed, if I were to look at a generic image and had no notion of what a bird looks like, then I would not be able to detect the bird in the image. Thus, we need to incorporate a prior on the object appearance and shape into our object detection/segmentation model. It would be nice if we had a model of object shape and appearance; we discuss one possible way to build a model of object shape in this lecture, and incorporate that into an object detection algorithm.

Suppose that we have a training database of $n$ different samples of an object that is to be detected. For example, if the goal is to detect a hand, the training set could include examples of multiple peoples' hands, with multiple examples of each person's hand in different poses and viewed from several viewpoints. How to obtain such a training database is obviously a big question, and in many applications (such as medical imaging) such training data may be difficult to obtain. For our hand example, one could take pictures of several peoples' hands from different vantage points and then hand-segment the images to extract the object, and that could serve as a training database. For now, we will only derive a model of the object's shape, and thus disregard the appearance information, e.g., the image intensity of the object.

What do we mean by "shape" of an object? The subject of shape has a considerable history (going back at least a hundred years, e.g., [5]). Perhaps one of the first to formalize the notion of shape as being part of a space was [2], where the author was motivated by the problem of classifying rocks for geological purposes. We will not explore too much the right definition of shape, but we will follow the definition given in [9]. We say first that a "pre-shape" of an object in the imaging plane is simply the boundary of the region defined by the object in the imaging plane. Notice that the boundary is sufficient to determine the region². We represent such pre-shapes as $\gamma_1, \ldots, \gamma_n$, where $\gamma_i \subset \mathbb{R}^N$ (for now $N = 2$, so that these pre-shapes are in the plane, but the same definitions will hold for higher dimensions). We assume that the $\gamma_i$ are both compact and closed (in two dimensions this means that $\gamma_i$ is a simple, closed curve). We denote by $\{\Psi_1, \ldots, \Psi_n\}$ the signed distance representations of the pre-shapes. Note that $\Psi_i$ only contains pre-shape information of the object and not appearance information.

[Footnote 1: The computation is trickier than in the case of region competition, since the functions $u_i$ and $u_j$ only have meaning in the regions $R_i$ and $R_j$, respectively; thus, when perturbing $c_i$ to compute the variation of the energy, we are also implicitly perturbing $u_i$, and this interaction must be considered, which was not needed for region competition. Nevertheless, the variation computed without this interaction turns out to be the same as the variation considering this interaction.]
Shape of an Object and Shape Average

The pre-shapes $\gamma_1, \ldots, \gamma_n$ are at different locations, orientations, and scales, and we would like a model of shape that is invariant to such transformations, as the notion of shape should not depend on orientation, location, or scale³. We can do this by aligning all the $\gamma_i$ with respect to a canonical shape, which we call the shape average, $\mu$. We refer to this process of alignment of $\gamma_i$ as canonization of the pre-shape $\gamma_i$; this is simply choosing a scale, orientation, and location of $\gamma_i$ that is in some sense closest to the average shape. The scale, orientation, and location are called nuisances, as they are irrelevant to the definition of shape. These ideas of alignment to a canonical shape are found in [9].
Let us be more precise in defining the notion of shape average and canonization. In order to canonize the shape with respect to the average shape, the nuisances must form a group⁴. Notice that scale changes, orientation changes (i.e., rotations), and location changes (translations) form a group; indeed,
$$G = \{ g : \Omega \to \Omega \;:\; g(x) = lRx + T, \; l \in \mathbb{R}^+, \; R \in \mathbb{R}^{2\times 2}, \; T \in \mathbb{R}^2, \; R^T R = \mathrm{Id}_{2\times 2}, \; \det R = 1 \} \quad (10)$$
is a group with the multiplication defined by function composition, i.e., $g_1 g_2 = g_1 \circ g_2$, i.e., $g_1 \circ g_2(x) = g_1(g_2(x))$. This group represents all possible scale, orientation, and position changes of the shape in the imaging plane. The group defined above can be factored into three separate groups: $SO(2)$ (rotations), $\mathbb{R}^2$ (translations), and scale changes $l > 0$. The group $SO(2)$ is called the special orthogonal group. We may write for the particular group defined above (10) that
$$G = SE(2) \times \mathbb{R}^+ = SO(2) \times \mathbb{R}^2 \times \mathbb{R}^+. \quad (11)$$

We say that $G$ acts on pre-shapes by the following action: $g \circ \gamma$, i.e., $g \circ \gamma(s) = g(\gamma(s))$.

[Footnote 2: For the case of a simply connected region, this is the case. For non-simply connected regions, we would have to add the information of whether the interior of each curve comprising the boundary is inside or outside the object.]

[Footnote 3: This depends on the application. For example, if our notion of shape is invariant to rotation, then the number '6' and the number '9' have the same shape.]

[Footnote 4: A group is denoted $G$, and it is a set whose elements are denoted $g \in G$. A group has the following properties:
1. There is a "multiplication" defined, i.e., for any two $g_1, g_2 \in G$, the operation $g_1 g_2$ is defined, and $g_1 g_2 \in G$.
2. If $g_1, g_2, g_3 \in G$, then $(g_1 g_2) g_3 = g_1 (g_2 g_3)$.
3. There exists an identity element $e \in G$ such that $ge = g$ for all $g \in G$.
4. For each $g \in G$, there exists an element $g^{-1} \in G$, called the inverse, such that $g g^{-1} = e$.]
We now need to define what it means for two pre-shapes to be close. This is at the heart of shape analysis, and we will not properly review the methods in the literature; instead we choose a simple approach found in [9] that is based on the ideas of "Deformable Templates" pioneered by Grenander in his works on Pattern Theory [1]. The idea is that two pre-shapes differ by the amount of deformation of the map $h : \Omega \to \Omega$ that maps one pre-shape onto the other. The map $h \in H$, where $H$ is called the space of deformations, and we let
$$H = \{ h : \Omega \to \Omega \;:\; h, h^{-1} \text{ are 1-1 and onto, and smooth} \}; \quad (12)$$
such maps are called diffeomorphisms. Note that $H$ is a group under composition. Suppose that we have a function $D : H \to \mathbb{R}$ that measures the amount that $h \in H$ deforms a shape. Note that we can equivalently define a map $D$ on $H$ or, equivalently, on the two pre-shapes $\gamma_1, \gamma_2$ such that $\gamma_1 = h \circ \gamma_2$. Therefore, we will abuse notation and write $D(h) = D(\gamma_1, \gamma_2)$. Examples of $D$ are:

1. Let $Dh(x)$ denote the Jacobian of $h$, i.e.,
$$Dh(x) = \begin{pmatrix} h^1_{x_1}(x) & h^1_{x_2}(x) \\ h^2_{x_1}(x) & h^2_{x_2}(x) \end{pmatrix}, \quad (13)$$
where $h = (h^1, h^2)$ are the components of the map. Then
$$D(h) = \int_\Omega |Dh(x)|^2 \, dx = \int_\Omega \sum_{ij} |h^i_{x_j}(x)|^2 \, dx, \quad (14)$$
where $|\cdot|$ is the Frobenius norm in the first expression.

2. Another example is the signed distance score. Let $\Psi_1$ and $\Psi_2$ be the signed distance representations of $\gamma_1$ and $\gamma_2$, and suppose that $\gamma_2 = h \circ \gamma_1$. Then
$$D(h) = D(\gamma_1, \gamma_2) = \int_{\{x : \Psi_1(x) \le 0\}} \Psi_2(x) \, dx. \quad (15)$$
The idea is that when the regions enclosed by $\gamma_1$ and $\gamma_2$ are identical, $D$ is as negative as possible. On the contrary, if they don't overlap, $D$ will be a large positive number.

2.1.3 Shape Average and Canonization
We are now ready to define the shape average and the shape of $\gamma_i$ (a canonical pre-shape with respect to the shape average).

Definition 1. Let $\gamma_1, \ldots, \gamma_n \subset \Omega \subset \mathbb{R}^N$ be compact hypersurfaces (in the case $N = 2$ they will just be simple closed curves), which are pre-shapes. Let $H$ be the space of diffeomorphisms acting on $\Omega$, let $D : H \to \mathbb{R}$, and let $G$ be a finite-dimensional group acting on $\Omega$. We say that $\hat{g}_1, \ldots, \hat{g}_n \in G$ are the motions undergone by the $\gamma_i$ if there exists a pre-shape $\hat{\mu}$ such that
$$\hat{g}_1, \ldots, \hat{g}_n, \hat{\mu} = \arg\min_{g_1, \ldots, g_n, \mu} \sum_{i=1}^n D(h_i) = \arg\min_{g_1, \ldots, g_n, \mu} \sum_{i=1}^n D(g_i \circ \mu, \gamma_i). \quad (16)$$

The pre-shape $\hat{\mu}$ is called the shape average, and $\hat{g}_i^{-1}(\gamma_i)$ is called the shape of $\gamma_i$ (or the canonization of $\gamma_i$ with respect to $\hat{\mu}$).
Creating a Model of Shape

Now that the pre-shapes $\gamma_1, \ldots, \gamma_n$ are all aligned together ($g_1^{-1}(\gamma_1), \ldots, g_n^{-1}(\gamma_n)$) with respect to the shape average $\hat{\mu}$, we are going to create a model of shapes from which shapes not in the training database can also be generated. We follow the ideas of [6]. Let $\Psi_1, \ldots, \Psi_n$ denote the signed distance representations of $g_1^{-1}(\gamma_1), \ldots, g_n^{-1}(\gamma_n)$, and let $\Psi$ denote the signed distance representation of $\hat{\mu}$. The idea is that the database of shapes is large ($n$ large); however, because the database consists of many instances of the same object (e.g., various instances of hands), it seems natural that it can be described by a small number of variations of the average shape $\Psi$. We are therefore going to perform the simplest approach to dimensionality reduction, called principal component analysis (PCA). Let⁵
$$\tilde{\Psi}_i = \Psi_i - \Psi \quad (17)$$
denote the variation of $\Psi_i$ from the shape average. Then the shape variability matrix is
$$S = (\tilde{\Psi}_1, \ldots, \tilde{\Psi}_n), \quad (18)$$
and define
$$(SS^T)_{x, y \in \Omega} = \sum_{k=1}^n \tilde{\Psi}_k(x) \tilde{\Psi}_k(y). \quad (19)$$
Performing an eigen-decomposition on the matrix above, we have that
$$(SS^T)_{x, y \in \Omega} = \sum_{k=1}^n \sigma_k v_k(x) v_k(y), \quad (20)$$
where $v_k : \Omega \to \mathbb{R}$ are eigenfunctions of $SS^T$, called the principal modes of the PCA, and $\sigma_1 \ge \cdots \ge \sigma_n \ge 0$ are the singular values. Note that for computational purposes (the matrix $SS^T$ is very large) one could instead do an eigen-decomposition of
$$(S^T S)_{ij} = \int_\Omega \tilde{\Psi}_i(x) \tilde{\Psi}_j(x) \, dx, \quad (21)$$
which has the same singular values as $SS^T$; the eigenvectors of $S^T S$ are related to the eigenfunctions of $SS^T$ by $v_i = S d_i$, where $d_i$ is an eigenvector of $S^T S$. Our model of the shape of an object is then the zero level set of
$$\Psi_w = \Psi + \sum_{i=1}^k w_i v_i, \quad (22)$$
where $w \in \mathbb{R}^k$ and $k \ll n$. Thus, to generate a new instance of the shape of an object, we simply sample $w \in \mathbb{R}^k$ and generate the new shape by the equation above. Note that the $w_i$ cannot be too large; otherwise the zero level set of $\Psi_w$ would deviate too much from the shapes in the database, and could possibly not look like a realistic shape of the object.

[Footnote 5: The space of signed distance functions is not a linear space, and thus, mathematically, it does not make sense to add and subtract these functions. Nevertheless, when $\Psi_1$ and $\Psi_2$ are aligned with respect to each other, the zero level set of $(\Psi_1 + \Psi_2)/2$ is a sensible average.]
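As a concrete sketch of (17)-(22) in NumPy (my own illustration; the variable names are hypothetical), the small-matrix trick looks like this:

import numpy as np

def shape_model(Psis, Psi_bar):
    # Psis: (n, m) array of n aligned signed distance maps, each flattened
    # to length m (m >> n); Psi_bar: the average shape's SDF, length m.
    S = (Psis - Psi_bar).T              # m x n variability matrix, eq. (18)
    StS = S.T @ S                       # small n x n matrix, eq. (21)
    sig, d = np.linalg.eigh(StS)        # eigen-decomposition
    order = np.argsort(sig)[::-1]       # largest singular values first
    V = S @ d[:, order]                 # v_i = S d_i, the principal modes
    V /= np.linalg.norm(V, axis=0)      # normalize the eigenfunctions
    return V, sig[order]

A new shape, as in (22), is then Psi_bar + V[:, :k] @ w for a small w in R^k.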
Object Detection

Our model for object detection is now:

1. The image consists of two regions $R_1, R_2$ (the object and background regions) and
$$I(x) = u_i(x) + \eta_i(x), \quad (23)$$
where $u_i$ could either be a function (as in Mumford-Shah) or a constant (as in region competition), $x \in R_i$, and $\eta_i(x) \sim N(0, \sigma)$ is iid in both $i$ and $x$.

2. We now put a prior on $R_1$ (the foreground region) so that $R_1$ is consistent with our model of shape (22). That is,
$$R_1 = \{ x \in \Omega : \Psi_w(g^{-1}(x)) \le 0 \}, \quad (24)$$
where $g \in G$ and $p(w)$, $p(g)$ are uniform. Note that $\Psi_w$ is aligned with respect to the shape average, and since objects in the image we wish to segment can be at different locations / orientations / scales, we allow the composition by $g \in G$.

Using our machinery for the MAP estimate, we find that maximizing $p(R_1 \mid I)$ is equivalent to minimizing the energy
$$E(w, g) = \int_{R_1} (I(x) - u_1)^2 \, dx + \int_{R_2} (I(x) - u_2)^2 \, dx, \quad (25)$$
which we write as
$$E(w, g) = \int_{R_1} f_{\mathrm{in}}(x) \, dx + \int_{R_2} f_{\mathrm{out}}(x) \, dx. \quad (26)$$
The energy is not convex in $w$ nor $g$, and thus we resort to a gradient descent minimization.

2.3.1 Computing the w Gradient

We first compute the gradient with respect to $w$. Let $H : \mathbb{R} \to \mathbb{R}$ denote the Heaviside function:
$$H(x) = \begin{cases} 1 & x \ge 0 \\ 0 & x < 0. \end{cases} \quad (27)$$
Then
$$E(w, g) = \int_\Omega f_{\mathrm{in}}(x) \left( 1 - H(\Psi_w(g^{-1}(x))) \right) dx + \int_\Omega f_{\mathrm{out}}(x) \, H(\Psi_w(g^{-1}(x))) \, dx. \quad (28)$$
Therefore, we see that
$$\frac{\partial}{\partial w_i} E(w, g) = \int_\Omega \left( f_{\mathrm{out}}(x) - f_{\mathrm{in}}(x) \right) \delta(\Psi_w(g^{-1}(x))) \, \frac{\partial}{\partial w_i} \Psi_w(g^{-1}(x)) \, dx, \quad (29)$$
where $\delta : \mathbb{R} \to \mathbb{R}$ is the Dirac delta function. Note that
$$\frac{\partial}{\partial w_i} \Psi_w(g^{-1}(x)) = v_i(g^{-1}(x)), \quad (30)$$
and also note, by the co-area formula,
$$\int_\Omega f(x) \, \delta(\Psi_w(g^{-1}(x))) \, |\nabla(\Psi_w(g^{-1}(x)))| \, dx = \int_{\{x \in \Omega : \Psi_w(g^{-1}(x)) = 0\}} f(x) \, ds, \quad (31)$$
and therefore
$$\frac{\partial}{\partial w_i} E(w, g) = \int_{\{x \in \Omega : \Psi_w(g^{-1}(x)) = 0\}} \left( f_{\mathrm{out}}(x) - f_{\mathrm{in}}(x) \right) \frac{v_i(g^{-1}(x))}{|\nabla(\Psi_w(g^{-1}(x)))|} \, ds(x), \quad (32)$$
where $ds$ denotes the arclength element of $\{x \in \Omega : \Psi_w(g^{-1}(x)) = 0\}$.
2.3.2 Computing the g Gradient

Next, we compute the gradient with respect to the group element $g$. To do this in a numerically stable manner, we note that $G$ is in addition a matrix Lie group, which permits us to write each element of $G$ as the exponential of a matrix. In the sequel we assume $G = SE(3) \times \mathbb{R}^+$ (for the case of surfaces; the case of curves is a special case). To understand each element of $G$, we write each $g \in SE(3)$ (excluding the scale component for now) as
$$g = \begin{pmatrix} R & T \\ 0 & 1 \end{pmatrix}. \quad (33)$$
Note that $g\bar{x}$, where $\bar{x} = (x, 1)^T$ (the homogeneous coordinates of $x$), corresponds to $(Rx + T, 1)^T$. Note also that group composition is equivalent to multiplication of the above matrices. Therefore, we see that
$$g g^{-1} = \mathrm{Id}_{4\times 4}, \quad \text{where } g^{-1} = \begin{pmatrix} R^T & -R^T T \\ 0 & 1 \end{pmatrix}. \quad (34)$$
Suppose we let $g : [0, 1] \to SE(3)$ be a time-varying path in $SE(3)$. Then $g(t) g^{-1}(t) = \mathrm{Id}_{4\times 4}$, or
$$\dot{g} g^{-1} + g \, \dot{(g^{-1})} = 0, \quad (35)$$
and, computing the two terms,
$$\dot{g} g^{-1} = \begin{pmatrix} \dot{R} R^T & -\dot{R} R^T T + \dot{T} \\ 0 & 0 \end{pmatrix}, \qquad g \, \dot{(g^{-1})} = \begin{pmatrix} R \dot{R}^T & -R \dot{R}^T T - \dot{T} \\ 0 & 0 \end{pmatrix}.$$
Therefore $\dot{R} R^T + R \dot{R}^T = 0$, i.e., $\dot{R} R^T = -(\dot{R} R^T)^T$, which says that $\dot{R} R^T =: \hat{\omega}$ is skew symmetric; that is, there exists an $\omega \in \mathbb{R}^3$ such that
$$\hat{\omega} = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}. \quad (38)$$
Therefore $\dot{R} = \hat{\omega} R$, and we define $v = -\hat{\omega} T + \dot{T}$. Then
$$\dot{g} g^{-1} = \begin{pmatrix} \hat{\omega} & v \\ 0 & 0 \end{pmatrix} = \hat{\xi} \;\Rightarrow\; \dot{g} = \hat{\xi} g, \quad (39)$$
for which we see that
$$g = \exp \hat{\xi} = \sum_{i=0}^\infty \frac{\hat{\xi}^i}{i!}.$$
The above can be evaluated using Rodrigues' formula:
$$g = \exp \hat{\xi} = \begin{pmatrix} e^{\hat{\omega}} & \frac{(\mathrm{Id}_{3\times 3} - e^{\hat{\omega}}) \hat{\omega} v + \omega \omega^T v}{|\omega|^2} \\ 0 & 1 \end{pmatrix}, \qquad e^{\hat{\omega}} = \mathrm{Id}_{3\times 3} + \frac{\hat{\omega}}{|\omega|} \sin(|\omega|) + \frac{\hat{\omega}^2}{|\omega|^2} (1 - \cos|\omega|). \quad (43)$$
Let $S(x) = lx = (\exp \xi_7) x$ (where $l = \exp \xi_7$) be the scaling operator; then any element of $SE(3) \times \mathbb{R}^+$ can be represented as
$$g(x) = \exp \hat{\xi} \circ S(x), \quad g \in SE(3) \times \mathbb{R}^+. \quad (44)$$
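As a quick numerical sketch (mine, not from the notes) of the hat map (38) and Rodrigues' formula (43):

import numpy as np

def hat(w):
    # skew-symmetric matrix of w, as in (38)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Rodrigues' formula (43); returns a rotation matrix in SO(3)
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    W = hat(w)
    return np.eye(3) + (np.sin(th) / th) * W + ((1.0 - np.cos(th)) / th**2) * (W @ W)

As a sanity check, R = exp_so3(np.array([0.1, -0.2, 0.3])) satisfies R @ R.T close to the identity and det(R) close to 1.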
We now represent any $g \in SE(3) \times \mathbb{R}^+$ by $\xi \in \mathbb{R}^7$, and compute the gradient of $E$ with respect to $\xi_i$. Note that
$$E(w, g) = \int_{R_1} (f_{\mathrm{in}}(x) - f_{\mathrm{out}}(x)) \, dx + \int_\Omega f_{\mathrm{out}}(x) \, dx, \quad (45)$$
and the second term does not depend on $g$, so its derivative with respect to $\xi_i$ is zero. We let $f = f_{\mathrm{in}} - f_{\mathrm{out}}$. Note also that
$$R_1 = \{ gx \in \Omega : \Psi_w(x) \le 0 \} = g \{ x \in \Omega : \Psi_w(x) \le 0 \}, \quad (46)$$
and so
$$\int_{R_1} f(x) \, dx = \int_{R_1'} f(g(x)) \, |\det Dg(x)| \, dx, \quad (47)$$
where $R_1' = \{ x \in \Omega : \Psi_w(x) \le 0 \}$ and we have performed a change of variables in the last equality. Note that $\det Dg(x) = l = e^{\xi_7}$. Now,
$$\frac{\partial}{\partial \xi_i} E(w, g) = \frac{\partial}{\partial \xi_i} \int_{R_1} f(x) \, dx = \frac{\partial e^{\xi_7}}{\partial \xi_i} \int_{R_1'} f(g(x)) \, dx + e^{\xi_7} \int_{R_1'} \frac{\partial}{\partial \xi_i} f(g(x)) \, dx. \quad (48)$$
Now, using the Divergence Theorem, we have
$$\int_{R_1'} \frac{\partial}{\partial \xi_i} f(g(x)) \, dx = \int_{R_1'} \nabla f(g(x)) \cdot \frac{\partial g(x)}{\partial \xi_i} \, dx = \int_{\partial R_1'} f(g(x)) \, N(x) \cdot \frac{\partial g(x)}{\partial \xi_i} \, ds(x) - \int_{R_1'} f(g(x)) \, \mathrm{div}\left( \frac{\partial g(x)}{\partial \xi_i} \right) dx, \quad (49)$$
where $N$ is the outward normal of $R_1'$.⁶ Note that
$$\mathrm{div}\, \frac{\partial g(x)}{\partial \xi_i} = \begin{cases} 0 & i \ne 7 \\ e^{\xi_7} (1 + 2 \cos|\omega|) & i = 7. \end{cases} \quad (50)$$
Therefore, we have
$$\frac{\partial}{\partial \xi_i} E(w, g) = l \int_{\partial R_1'} f(g(x)) \, N(x) \cdot \frac{\partial g(x)}{\partial \xi_i} \, ds(x) + \left( l - l^2 (1 + 2 \cos|\omega|) \right) \delta_{i,7} \int_{R_1'} f(g(x)) \, dx. \quad (51)$$

[Footnote 6: We avoid derivatives of $f$: since $f$ contains the image, which is noisy, it is better to avoid derivatives of the image for numerical reasons.]
References

[1] U. Grenander. Elements of Pattern Theory. Johns Hopkins Univ. Press, 1996.
[2] D.G. Kendall. Shape manifolds, procrustean metrics, and complex projective spaces. Bulletin of the London Mathematical Society, 16(2):81, 1984.
[3] D. Mumford and J. Shah. Boundary detection by minimizing functionals. In IEEE Conference on Computer Vision and Pattern Recognition, volume 17, pages 137-154, 1985.
[4] D. Mumford and J. Shah. Optimal approximations by piecewise smooth functions and associated variational problems. Comm. Pure Appl. Math., 42(5):577-685, 1989.
[5] D.A.W. Thompson. On Growth and Form (abridged edition), 1917.
[6] A. Tsai, A. Yezzi Jr., W. Wells, C. Tempany, D. Tucker, A. Fan, W.E. Grimson, and A. Willsky. A shape-based approach to the segmentation of medical imagery using level sets. IEEE Transactions on Medical Imaging, 22(2):137-154, 2003.
[7] A. Tsai, A. Yezzi Jr., and A.S. Willsky. Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. IEEE Transactions on Image Processing, 10(8):1169-1186, 2001.
[8] L.A. Vese and T.F. Chan. A multiphase level set framework for image segmentation using the Mumford and Shah model. International Journal of Computer Vision, 50(3):271-293, 2002.
[9] A.J. Yezzi and S. Soatto. Deformotion: Deforming motion, shape average and the joint registration and approximation of structures in images. International Journal of Computer Vision, 53(2):153-167, 2003. | {"url":"https://p.pdfkul.com/ee-396-lecture-14-15_5a14eb071723dd5769cf37e4.html","timestamp":"2024-11-05T12:52:20Z","content_type":"text/html","content_length":"72504","record_id":"<urn:uuid:d27bbf07-c5d1-4830-817e-b97954264ef8>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00532.warc.gz"}
Algebra 1 Common Core Answers Chapter 12 Data Analysis and Probability Exercise 12.7 - A Plus Topper
Algebra 1 Common Core Answers Chapter 12 Data Analysis and Probability Exercise 12.7
Algebra 1 Common Core Solutions
Chapter 12 Data Analysis and Probability Exercise 12.7 1LC
Consider events when rolling a number cube.
Find the theoretical probability that the number 4 comes up on the face of the cube; that is, find the value of P(4).
Chapter 12 Data Analysis and Probability Exercise 12.7 2LC
Consider events when rolling a number cube.
Find the theoretical probability that a number less than 3 comes up on the face of the cube; that is, find the value of P(less than 3).
Chapter 12 Data Analysis and Probability Exercise 12.7 3LC
Consider events when rolling a number cube.
Find the theoretical probability of the complement of the event that a 3 comes up on the face of the cube; that is, find the value of P(not 3).
Chapter 12 Data Analysis and Probability Exercise 12.7 4LC
Consider events when rolling a number cube.
Find the theoretical probability of the complement of the event that a number greater than 4 comes up on the face of the cube; that is, find the value of P(not greater than 4).
Chapter 12 Data Analysis and Probability Exercise 12.7 5LC
Consider an experiment in which darts are tossed at a dartboard 500 times and a dart hits the bull's-eye 80 times.
Find the experimental probability that a dart hits the bull's-eye.
Chapter 12 Data Analysis and Probability Exercise 12.7 6LC
Chapter 12 Data Analysis and Probability Exercise 12.7 7LC
Chapter 12 Data Analysis and Probability Exercise 12.7 8LC
Chapter 12 Data Analysis and Probability Exercise 12.7 9LC
Certain Event:
January of every coming year contains 31 days. This is a certain event, since January always has 31 days.
Highly Unlikely Event: February of a coming year containing 29 days is a highly unlikely event, since in most years February contains only 28 days.
Chapter 12 Data Analysis and Probability Exercise 12.7 10E
Chapter 12 Data Analysis and Probability Exercise 12.7 11E
Chapter 12 Data Analysis and Probability Exercise 12.7 12E
Chapter 12 Data Analysis and Probability Exercise 12.7 13E
Chapter 12 Data Analysis and Probability Exercise 12.7 14E
Chapter 12 Data Analysis and Probability Exercise 12.7 15E
Chapter 12 Data Analysis and Probability Exercise 12.7 16E
Chapter 12 Data Analysis and Probability Exercise 12.7 17E
Chapter 12 Data Analysis and Probability Exercise 12.7 18E
Chapter 12 Data Analysis and Probability Exercise 12.7 19E
Consider events when landing a spinner. The spinner is divided into six equal parts numbered 1 to 6, with three parts colored blue, one part colored green, no parts white, and two parts colored red. Find the theoretical probability of the complement event, that is, of landing on a part that is not colored red; in other words, find the value of P(not red).
Chapter 12 Data Analysis and Probability Exercise 12.7 20E
Chapter 12 Data Analysis and Probability Exercise 12.7 21E
Chapter 12 Data Analysis and Probability Exercise 12.7 22E
Chapter 12 Data Analysis and Probability Exercise 12.7 23E
Chapter 12 Data Analysis and Probability Exercise 12.7 24E
Chapter 12 Data Analysis and Probability Exercise 12.7 25E
Chapter 12 Data Analysis and Probability Exercise 12.7 26E
Chapter 12 Data Analysis and Probability Exercise 12.7 27E
Chapter 12 Data Analysis and Probability Exercise 12.7 28E
Consider the result of a survey of 100 randomly selected students at a 2000-student high school. 24 students responded that they plan to go to community college. Find the experimental probability that a student plans to go to community college after graduation; that is, find the value of P(community college).
Chapter 12 Data Analysis and Probability Exercise 12.7 29E
Consider the result of a survey of 100 randomly selected students at a 2000-student high school.
43 students responded that they plan to go to a 4-year college.
Find the experimental probability that a student plans to go to a 4-year college after graduation; that is, find the value of P(4-year college).
Chapter 12 Data Analysis and Probability Exercise 12.7 30E
Consider the result of a survey of 100 randomly selected students at a 2000-student high school.
15 students responded that they plan to go to trade school. Find the experimental probability that a student plans to go to trade school after graduation; that is, find the value of P(trade school).
Chapter 12 Data Analysis and Probability Exercise 12.7 31E
Consider the result of a survey of 100 randomly selected students at a 2000-student high school. 15 students responded that they plan to go to trade school.
Find the experimental probability of the complement of the event that a student plans to go to trade school after graduation; that is, find the value of P(not trade school).
Chapter 12 Data Analysis and Probability Exercise 12.7 32E
Consider the result of a survey of 100 randomly selected students at a 2000-student high school.
15 students plan to go to trade school after graduation, and 24 students plan to go to community college after graduation.
Find the experimental probability that a student plans to go to trade school or community college after graduation; that is, find the value of P(trade school or community college).
Chapter 12 Data Analysis and Probability Exercise 12.7 33E
Chapter 12 Data Analysis and Probability Exercise 12.7 34E
Chapter 12 Data Analysis and Probability Exercise 12.7 35E
Chapter 12 Data Analysis and Probability Exercise 12.7 36E
Consider a company's transportation data: 80 workers at a company are surveyed, and 17 walk to work.
Chapter 12 Data Analysis and Probability Exercise 12.7 37E
Chapter 12 Data Analysis and Probability Exercise 12.7 38E
Chapter 12 Data Analysis and Probability Exercise 12.7 39E
Chapter 12 Data Analysis and Probability Exercise 12.7 40E
Chapter 12 Data Analysis and Probability Exercise 12.7 41E
Chapter 12 Data Analysis and Probability Exercise 12.7 42E
Chapter 12 Data Analysis and Probability Exercise 12.7 43E
Chapter 12 Data Analysis and Probability Exercise 12.7 44E
Chapter 12 Data Analysis and Probability Exercise 12.7 45E
Chapter 12 Data Analysis and Probability Exercise 12.7 46E
Chapter 12 Data Analysis and Probability Exercise 12.7 47E
Chapter 12 Data Analysis and Probability Exercise 12.7 48E
Chapter 12 Data Analysis and Probability Exercise 12.7 49E
Consider a situation: a basketball team has 11 players, and you select 5 players to form a group. Determine how many different 5-player groups can be formed.
Chapter 12 Data Analysis and Probability Exercise 12.7 50E
Chapter 12 Data Analysis and Probability Exercise 12.7 51E
Chapter 12 Data Analysis and Probability Exercise 12.7 52E
Chapter 12 Data Analysis and Probability Exercise 12.7 53E
Chapter 12 Data Analysis and Probability Exercise 12.7 54E
Chapter 12 Data Analysis and Probability Exercise 12.7 55E
Chapter 12 Data Analysis and Probability Exercise 12.7 56E
Chapter 12 Data Analysis and Probability Exercise 12.7 57E
Chapter 12 Data Analysis and Probability Exercise 12.7 58E
Chapter 12 Data Analysis and Probability Exercise 12.7 59E | {"url":"https://www.aplustopper.com/algebra-1-common-core-answers-chapter-12-data-analysis-and-probability-exercise-12-7/","timestamp":"2024-11-11T17:26:30Z","content_type":"text/html","content_length":"78786","record_id":"<urn:uuid:b47cc20f-d201-44af-8c83-402cfedb9acd>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00658.warc.gz"} |
Writing a Image Processing Codes from Python on Scratch
1.1 What am I using?
• Numpy for array operations
• imageio builtin library for reading image
• warnings to show warning
• matplotlib for visualizing
1.2 What does this blog include?
• Converting an image into Grayscale from RGB.
• Convolution of an image using different kernels.
2 Steps
• Initializing a ImageProcessing class.
• Adding a read method
• Adding a show method
• Adding a color conversion method
• Adding a convolution method
Initializing a ImageProcessing class
import warnings
import imageio
import numpy as np
import matplotlib.pyplot as plt

class ImageProcessing:
    def __init__(self):
        self.readmode = {1: "RGB", 0: "Grayscale"}  # this dictionary will hold readmode values
Adding a read method
    def read_image(self, location="", mode=1):
        """
        Uses imageio on the back end.
        location: Directory of image file.
        mode: Image read mode {1: RGB, 0: Grayscale}.
        """
        img = imageio.imread(location)
        if mode == 1:
            pass  # imageio already returns RGB
        elif mode == 0:
            # weighted (luminosity-style) conversion to grayscale
            img = 0.21 * img[:, :, 0] + 0.72 * img[:, :, 1] + 0.07 * img[:, :, 2]
        else:
            raise ValueError(f"Readmode not understood. Choose from {self.readmode}.")
        return img
• This method only wraps imageio, but I am applying a concept of RGB-to-grayscale conversion.
• By default, imageio reads in RGB format.
• A typical RGB-to-grayscale conversion can be done with the concepts below (taken from):
□ Average Method:
$Grayscale = \frac{R + G + B}{3}$
All channels are given 33% contribution.
□ Weighted method or luminosity method
$Grayscale = 0.3*R + 0.59*G + 0.11*B$
The red channel gets a 30% contribution, green 59%, and blue 11%.
But I am using a different version of the method (taken from).
• If the user enters an unknown mode, an error is raised.
Adding a show method
    def show(self, image, figsize=(5, 5)):
        """
        Uses Matplotlib.pyplot.
        image: An image to be shown.
        figsize: How big an image to show. From plt.figure().
        """
        fig = plt.figure(figsize=figsize)
        im = image
        plt.imshow(im, cmap='gray')
        plt.show()  # display the figure window
Nothing to say here, docstring is enough.
Color conversion
    def convert_color(self, img, to=0):
        if to == 0:
            # RGB to grayscale, same weights as read_image
            return 0.21 * img[:, :, 0] + 0.72 * img[:, :, 1] + 0.07 * img[:, :, 2]
        raise ValueError("Color conversion not understood.")
I still have not thought about grayscale-to-RGB conversion. Even using OpenCV's cv2.cvtColor(img, cv2.COLOR_GRAY2BGR), we cannot get a complete BGR image back.
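If a three-channel array is needed from a grayscale image, one workaround (not part of the class above, and it recovers no color information) is simply to repeat the channel:

rgb_like = np.stack([gray_img, gray_img, gray_img], axis=-1).astype(np.uint8)  # gray_img is a hypothetical 2D array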
Adding a convolution method
    def convolve(self, image, kernel=None, padding="zero", stride=(1, 1), show=False, bias=0):
        """
        image: An image to be convolved.
        kernel: A filter/window of odd shape for convolution. Uses Sobel(3, 3) by default.
        padding: Border operation. Available: zero, same, none.
        stride: How frequently to do convolution?
        show: Whether to print the convolution settings.
        bias: A bias term (used in Convolutional NNs).
        """
        if len(image.shape) > 3:
            raise ValueError("Only 2 and 3 channel image supported.")
        if type(kernel) == type(None):
            warnings.warn("No kernel provided, trying to apply Sobel(3, 3).")
            kernel = np.array([[1, 0, -1],
                               [1, 0, -1],
                               [1, 0, -1]])
            kernel += kernel.T
        kshape = kernel.shape
        if kshape[0] % 2 != 1 or kshape[1] % 2 != 1:
            raise ValueError("Please provide odd length of 2d kernel.")
        if type(stride) == int:
            stride = (stride, stride)
        shape = image.shape
        if padding == "zero":
            zeros_h = np.zeros(shape[1]).reshape(-1, shape[1])
            zeros_v = np.zeros(shape[0]+2).reshape(shape[0]+2, -1)
            padded_img = np.vstack((zeros_h, image, zeros_h))  # add rows
            padded_img = np.hstack((zeros_v, padded_img, zeros_v))  # add cols
            image = padded_img
            shape = image.shape
        elif padding == "same":
            h1 = image[0].reshape(-1, shape[1])
            h2 = image[-1].reshape(-1, shape[1])
            padded_img = np.vstack((h1, image, h2))  # add rows
            v1 = padded_img[:, 0].reshape(padded_img.shape[0], -1)
            v2 = padded_img[:, -1].reshape(padded_img.shape[0], -1)
            padded_img = np.hstack((v1, padded_img, v2))  # add cols
            image = padded_img
            shape = image.shape
        elif padding is None:
            pass  # no border handling; border pixels are lost
        rv = 0
        cimg = []
        for r in range(kshape[0], shape[0]+1, stride[0]):
            cv = 0
            for c in range(kshape[1], shape[1]+1, stride[1]):
                chunk = image[rv:r, cv:c]
                soma = (np.multiply(chunk, kernel) + bias).sum()
                try:
                    chunk = int(soma)
                except Exception:
                    chunk = int(0)
                # clip the response into the displayable 0-255 range
                if chunk < 0:
                    chunk = 0
                if chunk > 255:
                    chunk = 255
                cimg.append(chunk)
                cv += stride[1]
            rv += stride[0]
        cimg = np.array(cimg, dtype=np.uint8).reshape(int(rv/stride[0]), int(cv/stride[1]))
        if show:
            print(f"Image convolved with \nKernel:{kernel}, \nPadding: {padding}, \nStride: {stride}")
        return cimg
What is happening above?
• First the kernel is checked; if not given, a 3-by-3 Sobel kernel is used.
• If the given kernel shape is not odd, an error is raised.
• For padding, NumPy stack methods are used.
• An empty list is initialized to store the convolved values.
• For convolution,
□ we loop through the rows in steps of the stride, up to the total image rows
□ we loop through the columns in steps of the stride, up to the total image columns
□ we take the current chunk of the image and multiply its elements with the kernel's elements
□ if the current sum is greater than 255, we set it to 255
□ we append the sum to the list
• Finally, the list is converted into an array of the right shape.
Recall the mathematics of Convolution Operation
$g(x, y) = f(x,y) * h(x,y)$
Where f is an image function and h is a kernel, mask, or filter.
What happens in convolution can be made clear from the matrix form of the operation.
Let's take a 5x5 image and a 3x3 Sobel-y kernel.
(Using KaTex for Matrix was hard so I am posting image instead.)
We have to move the kernel over each and every pixel of the image from the top left to the bottom, placing the kernel over a chunk of the image and taking the elementwise matrix product of the kernel and the image chunk of the kernel's shape. In most cases we use an odd-shaped kernel; by using an odd-shaped kernel, we can place the center of the kernel on the center of the image chunk.
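In NumPy terms, one placement of the kernel is just an elementwise product followed by a sum; with made-up values:

chunk = np.array([[1, 10, 11], [12, 200, 152], [100, 190, 11]])   # a 3x3 image patch
sobel_y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
value = np.sum(chunk * sobel_y)   # -459 here, which the clipping step turns into 0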
Now we try to start from the corner pixel, but since our kernel is 3 by 3, we don't have any pixels facing the first row of the kernel. So we have to work with the concept of padding, or we will lose those border pixels. For the sake of simplicity, let's take zero padding.
Now the first chunk of image will be:
Now the convolution operation:
Similarly, the final image will be like below after sliding through row then column:
But we will clip to 255 all values which exceed 255.
A better visualisation of a convolution operation can be seen in the gif below (I don't own this gif):
Finally, visualizing our convolved image:
ip = ImageProcessing()
img = np.array([1, 10, 11, 200, 30, 12, 200, 152, 223, 60, 100,
190, 11, 20, 10, 102, 207, 102, 223, 50, 18, 109, 117, 200, 30]).reshape(5, 5)
cv = ip.convolve(img)
If we print the output of this code, i.e., cv, we will see an array like the one above.
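Putting the class to work on a real photo might look like this (the file name is hypothetical):

ip = ImageProcessing()
gray = ip.read_image("sample.jpg", mode=0)      # hypothetical image path
edges = ip.convolve(gray, padding="zero", stride=1)
ip.show(edges)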
I have written a code to do Convolution Neural Network from scratch using Python too, please read it here.
| {"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/qviper/writing-a-image-processing-codes-from-python-on-scratch-4kd6","timestamp":"2024-11-13T12:46:43Z","content_type":"text/html","content_length":"104131","record_id":"<urn:uuid:28b25243-1dbd-4b03-9f44-c17fb2e66f80>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00864.warc.gz"}
Inter-row calculations on grouping subsets
Find out the stocks that reached the daily trading limit (price gains 10%) for three consecutive days.
WITH A AS
 (SELECT code, trade_date,
         close_price/lag(close_price)
           OVER(PARTITION BY code ORDER BY trade_date)-1 rising_range
  FROM stock_price),
B AS
 (SELECT code,
         CASE WHEN rising_range>=0.1 AND
                   lag(rising_range) OVER(PARTITION BY code
                       ORDER BY trade_date)>=0.1 AND
                   lag(rising_range,2) OVER(PARTITION BY code
                       ORDER BY trade_date)>=0.1
              THEN 1 ELSE 0 END three_consecutive_days_gains
  FROM A)
SELECT DISTINCT code FROM B WHERE three_consecutive_days_gains=1
Use a window function to compute the growth rates, then apply window functions to the growth rates to get a flag indicating three consecutive days of gains, and then use multi-level nested subqueries to get the final result. An additional level of grouping dramatically increases the complexity, but luckily the user can use the WITH clause to split the nested SQL statement into what looks like stepwise computations.
Compute the result gradually by using a loop statement.
    A                                                                           B    C
1   =demo.query("select * from stock_price").group(code).(~.sort(trade_date))        =[]
2   for A1                                                                      =0
3       if A2.pselect(B2=if(close_price/close_price[-1]>=1.1,B2+1,0):3)>0
4           >C1=C1|A2.code

(C1 holds the result set; line 3 tests whether there are three consecutive days of gains.)
Alternatively, compute the result by using the sub-computation statement.
    A                                                                           B
1   =demo.query("select * from stock_price").group(code).(~.sort(trade_date))
2   ==A1.select(??)                                                             =0
3       =~.pselect(B2=if(close_price/close_price[-1]>=1.1,B2+1,0):3)>0
4   =A2.(code)
With intrinsic support for stepwise computation, SPL does not become much more complex even when an additional level of grouping is needed. By using the sub-computation statement or a loop to handle the outer layer, the inner-layer computation becomes as easy as computations with fewer levels, so it adds no extra difficulty when the user works out how to code it.
Besides, the above SPL code can be easily extended to find out the stocks that reach the daily trading limit for any number of consecutive days, and it will stop calculating the growth rates once it
finds the first day of the consecutive days of price gains. Unlike SPL, the above SQL syntax is very difficult to expand, and the window function must complete all of the inter-row computations
before it can filter data.
In order to simplify the SQL statements as much as possible, the window functions of the SQL 2003 standard are widely used in the examples; accordingly, the Oracle database syntax, which has the best support for SQL 2003, is adopted in this essay. | {"url":"https://www.scudata.com/spl-vs-sql/inter-row-calculations-on-grouping-subsets/","timestamp":"2024-11-09T10:17:58Z","content_type":"application/xhtml+xml","content_length":"17461","record_id":"<urn:uuid:e0f377be-798b-4607-89f5-d0c57f677286>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00234.warc.gz"}
Distance Between
Location Information
Additional location information such as GPS coordinates, latitude and longitude for Springdale, Arkansas and Santa Rosa, California.
Springdale, AR, USA
City Springdale
County Washington County
State Arkansas
Latitude 36.18674420
Longitude -94.12881410
GPS Coordinates (DMS) 36° 11.205 N -94° 7.729 W
Santa Rosa, CA, USA
City Santa Rosa
County Sonoma County
State California
Latitude 38.44042900
Longitude -122.71405480
GPS Coordinates (DMS) 38° 26.426 N -122° 42.843 W | {"url":"https://www.distance-between.com/from/us/arkansas/springdale/to/us/california/santa-rosa","timestamp":"2024-11-11T19:55:32Z","content_type":"text/html","content_length":"39309","record_id":"<urn:uuid:692d34db-df04-4c55-be23-bae82c10cac1>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00549.warc.gz"} |
How To Spell Out Numbers In An Essay - SpellingNumbers.com
How To Spell Out Numbers In An Essay
How To Spell Out Numbers In An Essay – The art of spelling numbers may be difficult. The right resources can make the process of learning to spell easier. There are many resources available to assist
you in spelling. They include workbooks as well as tips and games online.
The Associated Press format
It is recommended that you write numbers using the AP style when writing for a newspaper or any other printed media. To simplify your writing, the AP style gives guidelines for writing numbers, among
other things.
The Associated Press Stylebook was first published in 1953. Since then, hundreds of changes have been made. The stylebook is now in its 55th edition. The majority of American periodicals, newspapers,
and online news outlets use this stylebook.
AP Style is a collection of punctuation and language guidelines frequently applied in journalism; its guidelines include the use of capitalization and citations.
Regular numbers
An ordinal number is an integer that indicates the exact position of an item in a list. These numbers are frequently used to signify size, importance, or the passing of time, and they also show which item occupies which position.
Depending on the circumstances and how it is used, an ordinal number can be expressed verbally as well as numerically; a specific suffix distinguishes the two main forms.
To make an ordinal number, add "th" (or the appropriate ending) to the end; for example, the ordinal for 31 is 31st.
You can use ordinals for a variety of purposes, such as names and dates. It is crucial to understand the difference between using a cardinal and an ordinal.
Both trillions and billions
Large numbers can be used in many contexts, such as the markets, geology, and the development of our planet; millions and billions are two examples. A million is 1,000,000, a billion follows 999,999,999, and a trillion follows 999,999,999,999.
The annual revenue of a corporation is often expressed in millions, as is the value of a stock, fund, or other asset. Billions can also be used to describe a company's market capitalization. You can check the accuracy of your estimates by converting between millions and billions with a unit-conversion calculator.
Fractions are used in English to denote particular parts of a whole. A fraction has two parts: the numerator, which shows how many equal pieces are taken, and the denominator, which shows how many equal pieces the whole was divided into.
Fractions can be expressed either mathematically or in words. You must be careful when writing fractions out in words; this can be difficult, especially with large numbers.
A few straightforward principles help when writing fractions as words: spell out a fraction that begins a sentence, and consider writing fractions in decimal format when that is clearer.
Many Years
Whatever you are writing, a thesis, an email, or a research paper, you will need to write out years. A few tricks and strategies will help you avoid repeating the same number and ensure correct formatting.
Numbers must be written clearly in formal writing, and a variety of style guides can assist you. The Chicago Manual of Style, for instance, recommends spelling out numbers from one through one hundred and using numerals for larger numbers.
Of course, exceptions exist. One example is the American Psychological Association's (APA) style guide. Although not a specialized publication, this guide is commonly employed in scientific writing.
Date and Time
The Associated Press style manual gives some general guidelines for styling numbers: spell out numbers below 10, and use numerals for 10 and above. There are, however, a few exceptions.
Both the Chicago Manual of Style and the AP Stylebook give detailed rules about numbers, and the two guides differ in places. Having worked under AP style myself, I can confirm that there is a distinction.
Always refer to a stylebook to check any rules you may have missed, including the conventions for writing dates and times.
Gallery of How To Spell Out Numbers In An Essay
When To Spell Out Numbers Via Grammarly Https www grammarly blog
When To Spell Out Numbers In Scientific Writing ButlerSciComm
Numbers In Written Form Writing Out Numbers Number Words Writing | {"url":"https://www.spellingnumbers.com/how-to-spell-out-numbers-in-an-essay/","timestamp":"2024-11-01T19:45:51Z","content_type":"text/html","content_length":"63440","record_id":"<urn:uuid:ccd87d91-c57c-4ed6-8000-97df80cc3b78>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00791.warc.gz"} |
Upslope area – Recursive calculation procedure
So far in this series I've talked about calculating pixel flow directions and handling plateaus, but I haven't yet discussed the actual upslope area calculation. The Tarboton paper presents a
recursive formulation of the upslope area calculation (from page 313):
Procedure DPAREA(i, j)
    if AREA(i, j) is known
        no action
    otherwise
        AREA(i, j) = 1   (the area of a single pixel)
        for each neighbor (location in, jn)
            p = proportion of neighbor (in, jn) that drains to pixel (i, j) based on angle
            if (p > 0) then
                call DPAREA(in, jn)   (the recursive call to calculate the area for the neighbor)
                AREA(i, j) = AREA(i, j) + p x AREA(in, jn)
I had to read the surrounding text a couple of times to figure out exactly how the author intends to calculate p. Consider the flow direction for a particular neighbor, (in, jn). If the flow
direction points directly at pixel (i, j), then the corresponding weight p is 1. In other words, pixel (i, j) gets all of the flow from (in, jn). If the flow direction is pi/4 or greater away from
the direction toward (i, j), then the weight p is 0. Pixel (i, j) gets none of the flow from (in, jn). In between an angular difference of 0 and pi/4, the weight p varies proportionally between 1 and
For example, consider the computation of upslope area for pixel number 5 in this set of 9 pixels:
If the flow direction for pixel 2 is zero radians, its flow is pointing directly at pixel 5, so the corresponding weight is 1.0. If the flow direction for pixel 8 is pi/4 radians, then its flow is
pointing directly at pixel 4. The corresponding weight for pixel 5 is 0.0. If the flow direction is pi/8 radians, its flow is pointing between pixels 5 and 4, and the weight for the pixel 5
computation is 0.5.
It also took me a few minutes to figure out how to compute the radial difference between two angles, properly taking into account the 2*pi periodicity of angles. (For example, the radial difference
between pi/8 and 2*pi - pi/8 should be pi/4.) Here's what I came up with:
radial_difference = abs(mod(theta1 - theta2 + pi, 2*pi) - pi)
(Does anyone else have a better way to compute this quantity?) To calculate the weight for a given neighbor of pixel (i,j), first determine the angle that points directly from that neighbor to the
pixel (i, j). Call that angle theta_c. Then find the flow direction for that neighbor; call it theta_f. Now compute the radial difference:
radial_difference = abs(mod(theta_c - theta_f + pi, 2*pi) - pi)
And finally compute the weight:
p = max(1 - (radial_difference * 4 / pi), 0)
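For example, the weight for a single neighbor might be computed like this in MATLAB (a sketch of mine; the flow-direction array R and the sign conventions are assumptions, not from the post):

theta_c = atan2(in - i, j - jn);   % angle pointing from neighbor (in,jn) to (i,j)
theta_f = R(in, jn);               % the neighbor's flow direction, in radians
radial_difference = abs(mod(theta_c - theta_f + pi, 2*pi) - pi);
p = max(1 - radial_difference * 4 / pi, 0);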
So, are we ready to start writing a recursive MATLAB function based on DPAREA above? Nope! I don't actually want to use a recursive formulation at all.
Here's a different way to think about it:
AREA(i,j) = 1 + p1*AREA(i1,j1) + p2*AREA(i2,j2) + ... + p8*AREA(i8,j8)
If we write down this equation for each pixel in the image, we'll end up with a system of N linear equations in N unknowns. Although the system is very large (N-by-N, where N is the number of
pixels), it is also very sparse, because each equation involves no more than nine of the unknowns.
So we are getting very close to calculating upslope area. We "just" have to set up an extremely large sparse system of equations and then solve it.
Next time I'll tackle that.
| {"url":"https://blogs.mathworks.com/steve/2007/07/27/upslope-area-part-7/?s_tid=blogs_rc_2&from=kr","timestamp":"2024-11-12T21:54:49Z","content_type":"text/html","content_length":"147345","record_id":"<urn:uuid:62fe0483-b791-415f-b5a0-40b553c3a608>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00555.warc.gz"}
Ordinal Numbers Flashcar - OrdinalNumbers.com
Ordinal Numbers Flashcar
Ordinal Numbers Flashcar – Ordinal numbers can be used to enumerate any number of sets, and the concept can also be generalized to transfinite ordinals.
Ordinal numbers are among the most fundamental concepts in mathematics. An ordinal number is a value that represents the position of an object within a list; in everyday use this is typically a number between one and twenty. While ordinal numbers can serve many functions, they are most frequently employed to indicate the order in which items are placed in a list.
Ordinal numbers can be represented using charts, words, numerals, and other methods. They can also specify how a group of pieces is arranged.
Most ordinal numbers fall under one of two categories: transfinite ordinals are usually represented with lowercase Greek letters, while finite ordinals are represented by Arabic numerals.
According to the well-ordering axiom, every properly ordered collection contains a least element. For instance, the first student to complete the class takes the top position, and the student with the highest score is declared the contest's winner.
Combinational ordinal numbers
Compound ordinal numbers are multi-digit numbers in which only the final element takes the ordinal ending (as in "twenty-first"). They are used most often for classification and dates, and, unlike cardinal numbers, they do not give every digit its own ending.
Ordinal numbers identify the order in which elements are located in a collection, and they can also serve to label the items in the collection. The two kinds of ordinals are regular and suppletive.
Regular ordinals are made by adding a suffix to the cardinal number, which is then written out as a word, sometimes with a hyphen; several suffixes are used (-st, -nd, -rd, -th).
Suppletive ordinals are irregular forms, such as "first" and "second", that do not follow the regular suffix pattern.
Ordinal limit
A limit ordinal is a nonzero ordinal that is not the successor of any ordinal; equivalently, it has no maximum element. Such ordinals arise as the union of a nonempty set of ordinals with no greatest element.
Limit ordinals also appear in definitions by transfinite recursion. In the von Neumann model, every infinite cardinal is a limit ordinal.
A limit ordinal is equal to the supremum of all the ordinals below it. Limit ordinals can be characterized using ordinal arithmetic, and the smallest one is the order type of the natural numbers.
Ordinal numbers serve to organize data: they describe the numerical position of an object and are used in set theory and arithmetic contexts. Although related, transfinite ordinals are not natural numbers.
The von Neumann approach uses well-ordered sets. Suppose that fy is a subfunction of a function g' that is given as a single function; if g' is compatible with the specification, then g' is a limit ordinal for fy when fy is the lone function (ii).
The Church-Kleene ordinal works in the same manner: it is itself a limit ordinal, and the collection of smaller ordinals it contains is well ordered.
Common numbers are often used in stories
Ordinal numbers are often used to display the structure of entities and objects. They are crucial for organising, counting, and ranking, and they can be used to show the order of events as well as the exact position of objects.
An ordinal number is usually identified by the suffix "-th", though "-st", "-nd", and "-rd" are substituted where appropriate. It is common to find ordinal numbers in the titles of books.
Ordinal numbers may be written out as words, but they are usually employed in list format; they may also appear as abbreviations and numerals, which are easier to read than the spelled-out forms.
Ordinal numbers come in three distinct varieties. Through practice and games you can learn more about them, and learning them is an important part of improving your arithmetic ability. A fun and easy way to practise is a coloring exercise, and a handy marking sheet is a great way to record your progress.
Gallery of Ordinal Numbers Flashcar
Ordinal Numbers ESL Flashcards
Ordinal Number Flashcard Ordinal Numbers Kids Math Worksheets
Ordinal Number Flashcard Ordinal Numbers Numbers Kindergarten
| {"url":"https://www.ordinalnumbers.com/ordinal-numbers-flashcar/","timestamp":"2024-11-04T22:07:44Z","content_type":"text/html","content_length":"64442","record_id":"<urn:uuid:886d9764-8a3b-4ca5-8051-6c4d1a4ebe39>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00784.warc.gz"}
Number & place value in Year 6 (age 10–11) - Oxford Owl for Home
Number & place value in Year 6 (age 10–11)
In Year 6, your child will read, write, and compare numbers up to 10,000,000. They will be able to round any whole number to a required degree of accuracy and solve complex problems using place
The key words for this section are number and place value.
What your child will learn
Take a look at the National Curriculum expectations for number and place value in Year 6 (ages 10–11):
Read, write, order and compare numbers up to 10,000,000
Your child will read, write, compare, and order numbers up to ten million. They will be expected to know the value of each digit in numbers up to ten million (for example, they will understand that
the 6 in 83,634,813 means the number includes six
hundred thousands
Note that your child will use the word ones and not units when talking about place value.
Round any whole number to a required degree of accuracy
Your child will learn to round any whole number to the nearest 10, 100, 1,000, 10,000, 100,000, and 1,000,000.
72,145,674 rounded to the nearest 100,000 would be 72,100,000.
72,145,674 rounded to the nearest 1,000,000 would be 72,000,000.
Use negative numbers in context
Your child will count forwards and backwards in positive and negative whole numbers. They will need to be able to count forwards and backwards through zero.
Count backwards from 6 to ⁻3: 6, 5, 4, 3, 2, 1, 0, -1, -2, -3.
Your child will now be expected to calculate (for example, add and subtract) with negative numbers too. For example, they might be asked to calculate 3 – 7 = -4, or -2 + 4 = 2. Using a number line is
a great way to visualise calculations with negative numbers.
Solve increasingly complex number problems
Your child will solve problems involving:
□ Counting
□ Ordering
□ Comparing
□ Rounding
□ Negative numbers.
Their knowledge of place value will be very useful for this. They will use physical objects, drawings, diagrams, and mathematical symbols to visualise problems.
How to help at home
There are lots of ways you can help your child to understand number and place value. Here are just a few ideas:
1. Talk about large numbers
In Year 6, your child should be able to use the whole number system, including saying, reading, and writing numbers accurately.
Talk about large numbers in the real world, such as house prices, electricity meters, or football transfers. When you see big numbers like these, see if your child can read the number out loud.
Activity: Place value pandemonium
Have fun with this fast-moving game and help your child get to grips with place value.
2. Use place value charts
Place value charts can be a great way to help your child represent numbers and understand how the number system works. These charts will help your child to read, write and compare numbers, as well as
to understand how zero works as a placeholder.
For example, 3,210,421 could be represented as follows:
3. Compare and order numbers
When comparing numbers up to 10,000,000, help your child understand that they need to look at the digit with the largest value first. For example, 2,132,654 is more than 1,123,432 as 2,132,654 has
two millions, whereas 1,123,432 only has one million. If the largest value of both numbers is the same, then move on to the second, and then the third, and so on.
Try this game to practise comparing numbers. Write down 10 numbers up to 10,000,000 and the ‘>’ and ‘<’ symbols on separate pieces of paper. Deal your child two numbers, face down. Ask them to turn
over the pieces of paper and to use the ‘>’ and ‘<’ symbols to show which number is bigger or smaller.
If your child would like an extra challenge, you could set a timer! How many pairs can they order correctly in 30 seconds? Now can they put all of the numbers in order? Ask them to explain how they
ordered the numbers.
4. Make estimates
Being able to make accurate estimates is a valuable skill we use in everyday life. When calculating, encourage your child to use their rounding skills to estimate the answer before calculating
precisely. This will help them to check if their answer seems reasonable.
If you are out shopping and you have picked up a few items, ask your child to estimate the total cost of the items. They will need to round the cost of each item to find the estimated total. For
example, if you bought items costing £3.82, 82p, £4.10, and £2.45, your child could round each of these items to the nearest pound:
£3.82 to £4
82p to £1
£4.10 to £4
£2.45 to £2
Your child could then add the rounded amounts of £4, £1, £4, and £2 to find the estimated total of £11. When they calculate precisely and come to £11.19, they can feel fairly confident that their
calculation was correct. | {"url":"https://home.oxfordowl.co.uk/maths/primary-number-place-value/number-place-value-year-6-age-10-11/","timestamp":"2024-11-10T14:52:59Z","content_type":"text/html","content_length":"96132","record_id":"<urn:uuid:979e6a57-3920-4fee-8843-de08b5e1a73c>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00549.warc.gz"} |
ECCC - Kuan Cheng
We study two variants of seeded randomness extractors. The first one, as studied by Goldreich et al. \cite{goldreich2015randomness}, is seeded extractors that can be computed by $AC^0$ circuits. The
second one, as introduced by Bogdanov and Guo \cite{bogdanov2013sparse}, is (strong) extractor families that consist of sparse transformations, i.e., functions that ... more >>> | {"url":"https://eccc.weizmann.ac.il/author/997/","timestamp":"2024-11-11T23:00:59Z","content_type":"application/xhtml+xml","content_length":"20332","record_id":"<urn:uuid:f2aa1d60-4b72-4359-a059-b4aa0d626137>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00425.warc.gz"} |
Electrical Circuit Practice Quiz 06
Welcome to the ECE Board Exam Practice Quiz for Electrical Circuit. This set, practice quiz 6, will test your competence on the theories, concepts, and problem solving involving Electrical Circuit. The objectives and goals of this quiz are to help engineering students familiarize themselves with concepts and theories as well as develop problem-solving skills when answering questions related to Electrical Circuits.
Electrical Circuit Test Bank
In this section are compiled practice quizzes for Electrical Circuit that can enhance your knowledge and skills in your preparation for the ECE Board Exam. The goal is to add thousands of questions for you to become familiar with and, hopefully, to help you be prepared. I hope this practice quiz will give you additional confidence in taking your upcoming Engineering Board Exam.
• Read and understand each question before choosing the best answer.
• The correct answer will be revealed after you have chosen your answer for every question.
• There is no time limit, answer the questions at your own pace.
• Once all questions are answered, you will be shown your rating.
• You have to get 70% of the total items to pass the quiz.
• You can re-take the quiz as many times as you want or until you are satisfied of your rating.
• Comment us your thoughts, scores, ratings, and questions about the quiz in the comments section below!
Choose the letter of the best answer in each question.
An electric circuit contains
Both active and passive elements
What is the form factor of a triangular wave?
The internal resistance of an ideal voltage source is
Equal to the load resistance
An inductive circuit of resistance 16.5 Ω and inductance of 0.14 H takes a current of 25 A. If the frequency is 50 Hz, find the supply voltage.
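For reference, a short worked check of this question (not part of the original quiz; the variable names are ours):

```python
# Hedged worked check: supply voltage of a series RL circuit.
import math

r, l, i, f = 16.5, 0.14, 25, 50   # ohms, henries, amperes, hertz
xl = 2 * math.pi * f * l          # inductive reactance, ~43.98 ohms
z = math.hypot(r, xl)             # impedance sqrt(R^2 + XL^2), ~46.98 ohms
print(round(i * z))               # supply voltage, ~1174 V
```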
In a series circuit with unequal resistances the
Highest R has the highest V
Lowest R has the highest V
Lowest R has the highest I
Highest R has the highest I
The Q-factor of a parallel resonant circuit is also known as
Current magnification factor
Voltage magnification factor
An open resistor when checked with an ohmmeter reads
High but within the tolerance
A capacitor opposes change in
Neither voltage nor current
The ratio of the flux density to the electric field intensity in the dielectric is called
What value of R is needed with a 0.05 μF C for an RC time constant of 0.02 s?
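Likewise, a quick check for this one (not part of the original quiz):

```python
# Hedged worked check: for an RC time constant, R = tau / C.
tau = 0.02       # seconds
c = 0.05e-6      # farads (0.05 microfarads)
print(tau / c)   # 400000.0 -> R = 400 kilohms
```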
Series resonant circuit is sometimes known as
A series-parallel combination of identical resistors will
Increase the power rating compared with one resistor alone
Increase the voltage rating compared with one resistor alone
Reduce the voltage rating compared with resistor alone
Result in an expensive circuit
Which of the following describes the action of a capacitor?
Opposes changes in current flow
Which statement is true about a passive circuit?
A circuit with neither a source of current nor a source of potential difference
A circuit with a voltage source
A circuit with a current source
A circuit with only resistance as a load
For parallel capacitors, total charge is
The sum of individual charges
Equal to the charge of either capacitors
Equal to the product of the charges
The quotient of the charges
Electric energy refers to
Which of the following dielectric materials makes the highest-capacitance capacitor?
Barium-strontium titanite
The ratio of maximum value to the effective value of an alternating quantity is called
The reason why alternating current can induce voltage is
It has a stronger magnetic field than direct current
It has a constant magnetic field
It has a varying magnetic field
The admittance of a parallel RLC circuit is found to be the ______ sum of conductance and susceptances.
Which of the following is not a factor affecting capacitance of a basic capacitor?
A real current source has
Infinite internal resistance
Large internal resistance
Small internal resistance
If an emf in circuit A produces a current in circuit B, then the same emf in circuit B produces the same current in circuit A. This theorem is known as
Maximum power transfer theorem
When two pure sine waves of the same frequency and the same amplitude which are exactly 180˚ out-of-phase are added together, the result is
A wave with twice the amplitude
A wave with half the amplitude
A wave with twice the frequency
Electrical Circuit – Quiz 06 | {"url":"https://froydwess.com/electrical-circuit-practice-quiz-06/","timestamp":"2024-11-02T07:59:41Z","content_type":"text/html","content_length":"263908","record_id":"<urn:uuid:eae37708-62ff-4b3d-b95c-0883fd40e969>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00094.warc.gz"}
How to Calculate Discount Rate in Accounting: A Step-by-Step Guide
Discount rate calculation in accounting is a process for determining the present value of future cash flows by considering the time value of money. For instance, a company planning to buy a new
machine in five years, costing $100,000 with a 5% discount rate, would calculate the present value to be roughly $78,352.
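As a minimal sketch of that arithmetic (the function name is ours, for illustration only):

```python
# Hedged sketch of the discounting formula: PV = FV / (1 + r)^n.
def present_value(future_cash_flow, rate, years):
    return future_cash_flow / (1 + rate) ** years

print(round(present_value(100_000, 0.05, 5), 2))  # ~78352.62
```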
This calculation is crucial for various accounting applications, such as capital budgeting, financial analysis, and impairment testing. It helps businesses make informed decisions by weighing the
present value of future cash flows against current costs. Notably, the concept of a discount rate originated in the early 20th century with the development of bond pricing models.
This article delves into the formulas, methods, and factors influencing the calculation of discount rates in accounting, providing a comprehensive guide for accountants, finance professionals, and
business owners seeking to accurately value future cash flows.
How to Calculate Discount Rate in Accounting
Understanding the intricacies of discount rate calculation in accounting is crucial for accurate financial analysis. Key aspects to consider include:
• Time value of money
• Present value
• Future cash flows
• Weighted average cost of capital
• Risk-free rate
• Risk premium
• Inflation
• Project horizon
• Sensitivity analysis
These aspects are interconnected, influencing the accuracy and reliability of discount rate calculations. For instance, the time value of money emphasizes that a dollar today is worth more than a
dollar in the future, while the weighted average cost of capital considers both debt and equity financing costs. By understanding these key aspects, accountants and finance professionals can
effectively calculate discount rates, enabling informed decision-making and accurate financial modeling.
Time Value of Money
Within the context of accounting, the time value of money (TVM) plays a pivotal role in calculating discount rates. TVM posits that the value of money today differs from its value in the future due
to factors such as inflation and investment opportunities. This concept forms the foundation for determining the present value of future cash flows, a crucial aspect of discount rate calculation.
To illustrate, consider a company that anticipates receiving $10,000 in five years. With an annual inflation rate of 2%, the future value of $10,000 in five years would be $11,041. However, the present value of $11,041 five years from now, using a 5% discount rate, is only about $8,651. This difference highlights the time value of money and its impact on discount rate calculation.
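A quick check of those figures (a sketch, not part of the original article):

```python
# Hedged check: future value under 2% inflation, then the present
# value of that amount at a 5% discount rate.
fv = 10_000 * 1.02 ** 5       # ~11040.81
pv = fv / 1.05 ** 5           # ~8650.76
print(round(fv), round(pv))   # 11041 8651
```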
In practice, TVM allows accountants to compare cash flows occurring at different points in time. By determining the present value of future cash flows, they can make informed decisions about capital
budgeting, project evaluation, and investment analysis. Discount rate calculation, with its inherent consideration of TVM, enables businesses to accurately assess the value of long-term investments
and make strategic financial choices.
Present Value
In the realm of accounting, present value (PV) holds immense significance in calculating discount rates. It represents the current worth of a future sum of money, considering the time value of money
and the applicable discount rate. Understanding the intricacies of present value is indispensable for accurate financial analysis and decision-making.
• Time Value of Money: PV underscores that the value of money today differs from its value in the future, as time exposes it to factors like inflation and investment opportunities.
• Discounting: To determine the PV of future cash flows, the concept of discounting is employed. Discounting involves multiplying future cash flows by a discount factor, which is derived from the
discount rate.
• Types of Present Value: PV can be categorized into single-period PV and multi-period PV. Single-period PV considers cash flows occurring within one accounting period, while multi-period PV
considers cash flows spanning multiple accounting periods.
• Applications in Accounting: PV finds extensive use in accounting practices such as capital budgeting, investment analysis, and project evaluation. It aids in comparing cash flows occurring at
different points in time, enabling informed financial decisions.
In essence, present value serves as a crucial element in discount rate calculation, allowing accountants and finance professionals to determine the current worth of future cash flows. This
understanding empowers them to make sound financial decisions and accurately assess the viability of long-term investments.
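To make the multi-period case concrete, here is a minimal sketch (our own illustration, with invented cash flows):

```python
# Hedged sketch of multi-period present value: discount each cash flow
# by its own period and sum the results.
def pv_of_stream(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

print(round(pv_of_stream([1_000, 1_000, 1_000], 0.05), 2))  # ~2723.25
```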
Future Cash Flows
In the context of calculating discount rates in accounting, future cash flows play a pivotal role. They represent the anticipated cash inflows that a business or investment is expected to generate over a specific
period. Accurately estimating future cash flows is crucial for determining the present value of those cash flows and, subsequently, the appropriate discount rate.
The connection between future cash flows and discount rate calculation is twofold. Firstly, future cash flows are a critical component of calculating discount rates. The timing and magnitude of future cash
flows directly influence the present value of those cash flows and, consequently, the discount rate. Secondly, the discount rate used in the calculation affects the present value of future cash
flows. A higher discount rate results in a lower present value, while a lower discount rate results in a higher present value.
In practice, businesses use various methods to estimate future cash flows. These methods include historical analysis, trend analysis, and scenario analysis. The choice of method depends on the
availability of historical data, the nature of the business, and the level of uncertainty associated with the future cash flows. Understanding the relationship between future cash flows and discount
rate calculation allows accountants and financial analysts to make informed decisions about the appropriate discount rate to use, ensuring accurate and reliable financial analysis.
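As one hedged illustration of a trend-based estimate (the figures below are invented for demonstration):

```python
# Hedged sketch: extend the average historical growth rate forward.
history = [100, 110, 121]                                  # illustrative past cash flows
periods = len(history) - 1
growth = (history[-1] / history[0]) ** (1 / periods) - 1   # ~10% per period
forecast = [history[-1] * (1 + growth) ** t for t in range(1, 4)]
print([round(cf, 1) for cf in forecast])                   # [133.1, 146.4, 161.1]
```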
Weighted average cost of capital
Weighted average cost of capital (WACC) plays a crucial role in the calculation of discount rates in accounting. It represents the average cost of a company’s capital, considering both debt and
equity financing. Understanding WACC is essential for accurate financial analysis and decision-making.
• Cost of Debt: The cost of debt refers to the interest rate a company pays on its borrowed funds. It is typically represented by the yield-to-maturity of the company’s outstanding bonds.
• Cost of Equity: The cost of equity reflects the return required by investors for providing equity financing to the company. It can be estimated using various methods, such as the capital asset
pricing model (CAPM) or the dividend discount model.
• Weighting: The cost of debt and equity are weighted based on their respective proportions in the company’s capital structure. This weighting reflects the relative importance of each financing source.
• Tax Adjustment: In the calculation of WACC, the cost of debt is usually adjusted for taxes, as interest payments are tax-deductible. This adjustment ensures that the WACC reflects the after-tax
cost of capital.
Understanding WACC is crucial for calculating discount rates because it provides an overall measure of the cost of capital for a specific company. The discount rate, in turn, is used to determine the
present value of future cash flows, which is a critical component of capital budgeting and investment analysis. By considering the WACC in discount rate calculations, businesses can make informed
decisions about the viability and profitability of long-term investments.
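A minimal sketch of that calculation (our own function and inputs, for illustration only):

```python
# Hedged sketch of after-tax WACC:
# WACC = E/V * cost_of_equity + D/V * cost_of_debt * (1 - tax_rate)
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    v = equity + debt
    return (equity / v) * cost_of_equity + (debt / v) * cost_of_debt * (1 - tax_rate)

print(round(wacc(600, 400, 0.10, 0.06, 0.25), 4))  # 0.078 -> 7.8%
```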
Risk-free rate
When calculating discount rates in accounting, the risk-free rate is essential to consider. It is the theoretical rate of return on an investment with zero risk, providing a benchmark against which
other investments can be compared.
• Government Bonds: Government bonds often serve as a proxy for the risk-free rate, as they are backed by the full faith and credit of the issuing government.
• Inflation: The risk-free rate is typically adjusted for inflation to provide a real rate of return.
• Time Horizon: The risk-free rate may vary depending on the time horizon of the investment.
• Country: The risk-free rate can vary across countries, reflecting differences in economic stability and growth prospects.
Understanding the risk-free rate is crucial in discount rate calculation. It provides a stable reference point for assessing the riskiness of other investments and helps businesses make informed
decisions about capital budgeting and investment analysis. By incorporating the risk-free rate into discount rate calculations, accountants can ensure accurate and reliable financial modeling,
leading to sound investment decisions.
Risk premium
In the context of discount rate calculation in accounting, risk premium holds significant importance. It represents the additional return required by investors for bearing the risk associated with an
investment. Understanding the connection between risk premium and discount rate calculation is critical for making informed financial decisions.
The risk premium is a key component of the discount rate formula. It is added to the risk-free rate to arrive at the appropriate discount rate for a specific investment. This is because the discount
rate should reflect not only the time value of money but also the level of risk associated with the investment.
In practice, the risk premium can vary depending on several factors, such as the type of investment, the industry, the economic climate, and the investor’s risk tolerance. For instance, investments
in emerging markets typically carry a higher risk premium than investments in developed markets. Similarly, investments in start-up companies generally have a higher risk premium than investments in
established corporations.
Understanding the risk premium is crucial for accurate discount rate calculation. By incorporating the risk premium into the calculation, accountants and financial analysts can determine the
appropriate discount rate to use for a specific investment. This, in turn, allows them to make informed decisions about capital budgeting, investment analysis, and project evaluation.
Inflation
Inflation, a persistent increase in the general price level, bears a significant relationship to the calculation of discount rates in accounting. It serves as a critical component in determining the
appropriate discount rate for future cash flows.
The impact of inflation on discount rate calculation stems from its effect on the time value of money. As inflation erodes the purchasing power of money over time, a dollar today is worth less than a
dollar in the future. Consequently, to accurately compare future cash flows with present values, the discount rate must account for the inflationary effect. A higher inflation rate warrants a higher
discount rate to reflect the diminished value of future cash flows.
In practice, the adjustment for inflation in discount rate calculation is achieved by adding the expected inflation rate to the real risk-free rate. The resulting nominal rate represents the return required by investors to compensate for both the time value of money and the loss of purchasing power due to inflation. By incorporating inflation into the discount rate calculation, accountants can ensure
that future cash flows are appropriately discounted, leading to informed investment decisions.
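A simple additive build-up, as a hedged sketch of the adjustment described above (the component values are invented):

```python
# Hedged sketch: nominal discount rate from a real risk-free rate,
# expected inflation, and a risk premium (simple additive build-up).
real_risk_free = 0.02
expected_inflation = 0.03
risk_premium = 0.04
discount_rate = real_risk_free + expected_inflation + risk_premium
print(round(discount_rate, 2))  # 0.09 -> 9%
```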
Understanding the connection between inflation and discount rate calculation is crucial for accurate financial analysis. It enables businesses to make sound investment decisions, assess the viability
of long-term projects, and mitigate the impact of inflation on their financial plans. By considering inflation when calculating discount rates, accountants and financial analysts can provide reliable
insights and recommendations, guiding businesses toward sustainable growth and profitability.
Project horizon
In the realm of accounting, project horizon plays a pivotal role in the calculation of discount rates. Project horizon refers to the period over which a project or investment is expected to generate
cash flows. This duration directly influences the discount rate used in the calculation of present value.
The relationship between project horizon and discount rate stems from the concept of time value of money. As the project horizon lengthens, the present value of future cash flows decreases. This is
because the farther into the future a cash flow occurs, the less valuable it is today due to the effects of inflation and opportunity cost. Consequently, a longer project horizon necessitates a
higher discount rate to accurately reflect the time value of money.
In practice, project horizon is a critical component of discount rate calculation in capital budgeting and investment analysis. For instance, a company evaluating a long-term infrastructure project
with a 20-year horizon would require a higher discount rate compared to a short-term marketing campaign with a one-year horizon. This difference in discount rates reflects the varying time frames and
the associated risk and uncertainty involved in each project.
Understanding the connection between project horizon and discount rate calculation enables accountants and financial analysts to make informed decisions about capital allocation and investment
strategies. By considering the project horizon when calculating discount rates, they can accurately assess the present value of future cash flows and make sound financial recommendations that align
with the long-term goals and risk tolerance of the organization.
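To see the horizon effect numerically (a sketch with invented figures):

```python
# Hedged sketch: the same $10,000 cash flow, discounted at 8%, is worth
# less in present-value terms as the horizon lengthens.
for years in (1, 5, 20):
    print(years, round(10_000 / 1.08 ** years, 2))
# 1 9259.26 | 5 6805.83 | 20 2145.48
```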
Sensitivity analysis
Sensitivity analysis is an essential aspect of discount rate calculation in accounting, allowing professionals to evaluate the impact of changing assumptions on the final discount rate. By conducting
sensitivity analysis, accountants can assess the robustness of their discount rate calculations and make more informed decisions.
• Impact on Present Value: Sensitivity analysis helps determine how changes in the discount rate affect the present value of future cash flows. It shows how the present value is sensitive to
changes in the discount rate, providing insights into the reliability of the calculated present value.
• Key Assumptions: Sensitivity analysis identifies the key assumptions that significantly impact the discount rate. By varying these assumptions, accountants can assess the sensitivity of the
discount rate to different scenarios and make informed decisions about which assumptions to prioritize.
• Uncertainty and Risk: Sensitivity analysis quantifies the uncertainty and risk associated with different discount rates. It helps accountants understand how variations in the discount rate can
affect the viability and risk profile of an investment or project.
• Decision-Making: Sensitivity analysis supports decision-making by providing a range of possible outcomes based on different discount rates. Accountants can use this information to make more
informed decisions about capital budgeting, investment analysis, and other financial planning activities.
In conclusion, sensitivity analysis is a powerful tool that enhances the accuracy and reliability of discount rate calculations in accounting. By considering the sensitivity of the discount rate to
various assumptions, accountants can make more informed decisions, mitigate risks, and improve the overall quality of their financial analysis.
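A one-way sensitivity table is straightforward to sketch (our own illustration, reusing the article's $100,000, five-year example):

```python
# Hedged sketch: recompute the present value across a range of discount rates.
for rate in (0.04, 0.05, 0.06, 0.07):
    print(f"{rate:.0%}: {100_000 / (1 + rate) ** 5:,.2f}")
# 4%: 82,192.71 | 5%: 78,352.62 | 6%: 74,725.82 | 7%: 71,298.62
```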
Frequently Asked Questions (FAQs) About Discount Rate Calculation in Accounting
This section addresses common questions and clarifies important aspects related to the calculation of discount rates in accounting. These FAQs aim to provide concise and informative answers to
enhance your understanding of this crucial concept.
Question 1: What is the significance of the discount rate in accounting?
The discount rate plays a vital role in accounting as it allows us to determine the present value of future cash flows, which is essential for making informed investment and financing decisions.
Question 2: How do I calculate the weighted average cost of capital (WACC)?
The WACC is calculated by multiplying the cost of debt by its weight in the capital structure and the cost of equity by its weight, then summing the results. The weights represent the proportions of
debt and equity financing used.
Question 3: What is the relationship between the risk-free rate and the discount rate?
The risk-free rate serves as the foundation for the discount rate. The discount rate is typically derived by adding a risk premium to the risk-free rate, which compensates investors for the
additional risk associated with the investment.
Question 4: How does inflation impact the discount rate?
Inflation erodes the purchasing power of money over time, so a higher inflation rate requires a higher discount rate to accurately reflect the time value of money and ensure that future cash flows
are appropriately discounted.
Question 5: What is the purpose of sensitivity analysis in discount rate calculation?
Sensitivity analysis helps us assess the impact of changing assumptions on the discount rate and the resulting present value. It provides insights into the robustness of the discount rate calculation
and allows us to make more informed decisions.
Question 6: What are some common mistakes to avoid when calculating discount rates?
Common mistakes include using an inappropriate discount rate, ignoring inflation, and failing to consider the project horizon or risk factors. It’s important to carefully consider all relevant
factors to ensure accurate discount rate calculation.
These FAQs provide a concise overview of key considerations and potential pitfalls in discount rate calculation in accounting. Understanding these aspects is crucial for making well-informed
financial decisions and ensuring the accuracy of financial analysis.
In the following section, we will delve deeper into practical applications of discount rate calculation, exploring various scenarios and case studies to further enhance your understanding.
Tips for Accurate Discount Rate Calculation in Accounting
To ensure precise and reliable discount rate calculations, consider implementing the following practical tips:
Tip 1: Determine the Appropriate Risk-Free Rate: Select a risk-free rate that aligns with the project’s duration and currency, considering government bonds or treasury bills.
Tip 2: Estimate a Realistic Risk Premium: Assess the project’s specific risks, industry factors, and market conditions to determine a suitable risk premium.
Tip 3: Consider Inflation: Adjust the discount rate for expected inflation to ensure future cash flows are appropriately discounted.
Tip 4: Match the Discount Rate to the Project Horizon: Longer-term projects typically require higher discount rates to reflect the time value of money and associated risks.
Tip 5: Conduct Sensitivity Analysis: Test the sensitivity of the discount rate to changes in assumptions, such as inflation or risk premium, to assess the robustness of the calculation.
Tip 6: Use Consistent Assumptions: Maintain consistency in assumptions used for discount rate calculation across different projects to ensure comparability.
Tip 7: Seek Professional Guidance: Consult with experienced accountants or financial analysts for complex projects or when uncertainty exists.
By adhering to these tips, accountants and finance professionals can enhance the accuracy and reliability of their discount rate calculations, leading to more informed decision-making and effective
financial analysis.
In the concluding section, we will explore advanced applications of discount rate calculation, including project evaluation, capital budgeting, and investment analysis, to further demonstrate its
practical significance in accounting and finance.
In summary, calculating discount rates in accounting involves considering various factors such as time value of money, present value, future cash flows, weighted average cost of capital, risk-free
rate, risk premium, inflation, project horizon, and sensitivity analysis. Understanding the interconnections between these elements is crucial for determining the appropriate discount rate, which
plays a vital role in financial analysis and decision-making.
By accurately calculating discount rates, accountants and finance professionals can make informed choices about capital budgeting, investment analysis, project evaluation, and impairment testing.
This enables businesses to allocate resources efficiently, assess the viability of long-term investments, and mitigate financial risks. Discount rate calculation remains a fundamental aspect of
accounting, providing a solid foundation for sound financial planning and strategic decision-making. | {"url":"https://www.gospel10.com/how-to-calculate-discount-rate-in-accounting-a-step-by-step-guide/","timestamp":"2024-11-02T08:31:56Z","content_type":"text/html","content_length":"208463","record_id":"<urn:uuid:a586f0f1-b130-4bfc-a250-3c7001a89cb1>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00480.warc.gz"}
How To Find Missing Angles Of Congruent Triangles - TraingleWorksheets.com
Finding Missing Sides Of Congruent Triangles Worksheet – Triangles are among the most fundamental shapes in geometry. Understanding triangles is crucial to mastering more advanced geometric concepts. In this blog post we will explain the various kinds of triangles, their angles, and how to calculate the area and perimeter of any triangle. Types of Triangles There are three kinds of triangles: equilateral, isosceles, and scalene. Equilateral triangles are … Read more | {"url":"https://www.traingleworksheets.com/tag/how-to-find-missing-angles-of-congruent-triangles/","timestamp":"2024-11-10T18:19:53Z","content_type":"text/html","content_length":"48359","record_id":"<urn:uuid:d15bfc73-abf3-4441-ae07-88d3ea6e0a7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00738.warc.gz"}
Last week, a colleague pointed out to my team an online developer recruitment challenge. As I find it fun and there was no need to disclose one's email, I decided to try, just to check if I could do it. The problem is quite simple but not easy: consider a rectangular maze of finite size. One has to find a specific cell on the board - the exit, starting from the origin. One has 2 move options:
one cell at a time in one of the 4 cardinal points or jumping to any previously visited cell. Of | {"url":"https://blog.frankel.ch/tag/algorithm/","timestamp":"2024-11-10T11:06:35Z","content_type":"text/html","content_length":"18769","record_id":"<urn:uuid:d1fab23e-3dde-4f5f-9144-6cab1ac2f2f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00538.warc.gz"} |
Adult Ideal Weight
Work Time
Use this rule of thumb: an adult’s ideal weight in kilograms is 100 less than his or her height in centimeters.
• Let h = an adult’s height in centimeters. Write an algebraic expression for the person’s ideal weight in kilograms. Evaluate your expression to find the ideal weight, in kilograms, for an adult
who is 150 cm tall.
• Write and solve an equation to find the height, in centimeters, of an adult who has an ideal weight of 30 kg.
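A minimal sketch of the rule, with our own function name (for illustration only; students' own solutions may vary):

```python
# Hedged sketch of the rule of thumb above: ideal weight = h - 100.
def ideal_weight(h):
    # h is height in centimeters; the result is in kilograms
    return h - 100

print(ideal_weight(150))  # 50 -> a 150 cm adult's ideal weight is 50 kg
# Solving h - 100 = 30 just reverses the rule: h = 30 + 100 = 130 cm.
print(30 + 100)           # 130
```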
• An algebraic expression can combine arithmetic operations, numbers, and letters. Letters are used to represent variables. These are examples of algebraic expressions that contain variables: a, 3b, and 4x + 5. The variables in the expressions are a, b, and x.
• To evaluate an algebraic expression, replace each variable in the expression with a number and find the value of the expression. For example, to evaluate the expression 4x + 5 when x = 7, replace x with 7 and find the value of 4 • 7 + 5, which is 28 + 5, or 33.
• An equation is a statement where two expressions are equal. It is formed by placing an equals sign between the two equivalent expressions.
• To solve an equation, find the value of the variable that makes the equation true. | {"url":"https://oercommons.org/courseware/lesson/3818/student/?section=3","timestamp":"2024-11-11T14:47:11Z","content_type":"text/html","content_length":"35603","record_id":"<urn:uuid:46e6a5c7-5322-407d-915b-dd03fd481d4a>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00862.warc.gz"} |
Derivative cheap of cos
6.1.4 Differentiation Of Trigonometric Functions
Derivative of cos cube x Cos 3x Derivative iMath
Derivative of cos x GeoGebra
Differentiation of cos inverse x cos 1 x Teachoo with Video
Calculating Derivatives of a Mix of Polynomials sin cos ln
Derivative of Cos Square x Formula Proof Derivative of Cos 2x
The Derivative of Cos DerivativeIt
Proof that the Derivative of cos x is sin x using the Limit Definition of the Derivative
The derivative of cos 1 2x 2 1 w.r.t. cos 1 x is
Derivative of Cosine Function Exploration GeoGebra
Sine and Cosine Derivatives and Integration
The Derivative of cos 2x DerivativeIt
Derivative of cos x Formula Proof Examples
Ex 12.2 10 Find derivative of cos x from first principle Teachoo
Derivative of Cos x Proof Review Albert.io
Solved Determine the derivative of f x cos 2 x 4 3 Chegg
Ex 12.2 10 Find derivative of cos x from first principle Teachoo
Derivatives of Trigonometric Functions
Differentiation sin and cos
Find second order derivative cos 1 x
Derivative of cos x from first principles
Derivative of Cosine cos x Formula Proof and Graphs
What is the derivative of cos xSinX Quora
Derivative of Cos Square x Formula Proof and Examples
Find the Derivative of Cos X from First Principle. Mathematics
Proof of the derivative of cosx A Step by Step Proof and Explanation
Differentiating f x cosx Using a Specific Rule Calculus Study
What is the derivative of cos ln x Epsilonify
Derivatives of trig. functions
Differentiation of e cos 1x WBPREP
Derivative of cos x using First Principle of Derivatives FULL
Derivative of Cos x Definition Proof Functions Lesson
Proof of Derivative of cos x
Solved What is the Derivative of cos x 4 iMath
Find the derivatives of cos x from the first principle Yawin
problem solving How to calculate the derivative of cos x in
Differentiation of Trigonometric Functions Trig Derivatives
Derivative of Cos x By First Principle Formula Proof Examples
Derivative of Sine and Cosine Functions Calculus
What is the Derivative of cos 3 x Epsilonify
What is the derivative of cos 3 x Socratic
Eric s Calculus Lecture Proof of the derivative of cosx
Derivative of cos x Proof by Quotient Chain First Principle
Find the derivatives of cos x a Yawin
Ex 12.2 10 Find derivative of cos x from first principle Teachoo
Find the n th derivative of the following cos x
Why the Derivative of cos x sin x Proof detailed r calculus
This is my trick for remembering integrals and derivatives of Sin
What is the derivative of cos square x square Quora | {"url":"https://www.crea.fr/?c=4280000419","timestamp":"2024-11-13T16:29:35Z","content_type":"text/html","content_length":"18818","record_id":"<urn:uuid:5dcdb622-b8f5-4e50-bc61-691ac17bde6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00092.warc.gz"} |
theory Archives
One of the benefits of being an astrophysicist is your weekly email from someone who claims to have “proven Einstein wrong”. These either contain no mathematical equations and use phrases such as “it
is obvious that..”, or they are page after page of complex equations with dozens of scientific terms used in non-traditional ways. They all get deleted pretty quickly, not because astrophysicists are
too indoctrinated in established theories, but because none of them acknowledge how theories get replaced.
For example, in the late 1700s there was a theory of heat known as caloric. The basic idea of caloric was that it was a fluid that existed within materials. This fluid was self-repellant, meaning it
would try to spread out as evenly as possible. We couldn’t observe this fluid directly, but the more caloric a material has the greater its temperature.
Ice-calorimeter from Antoine Lavoisier’s 1789 Elements of Chemistry. (Public Domain)
From this theory you get several predictions that actually work. Since you can’t create or destroy caloric, heat (energy) is conserved. If you put a cold object next to a hot object, the caloric in
the hot object will spread out to the cold object until they reach the same temperature. When air expands, the caloric is spread out more thinly, thus the temperature drops. When air is compressed
there is more caloric per volume, and the temperature rises.
We now know there is no “heat fluid” known as caloric. Heat is a property of the motion (kinetic energy) of atoms or molecules in a material. So in physics we’ve dropped the caloric model in terms of
kinetic theory. You could say we now know that the caloric model is completely wrong.
Except it isn’t. At least no more wrong than it ever was.
The basic assumption of a “heat fluid” doesn’t match reality, but the model makes predictions that are correct. In fact the caloric model works as well today as it did in the late 1700s. We don’t use
it anymore because we have newer models that work better. Kinetic theory makes all the predictions caloric does and more. Kinetic theory even explains how the thermal energy of a material can be
approximated as a fluid.
This is a key aspect of scientific theories. If you want to replace a robust scientific theory with a new one, the new theory must be able to do more than the old one. When you replace the old theory
you now understand the limits of that theory and how to move beyond it.
In some cases even when an old theory is supplanted we continue to use it. Such an example can be seen in Newton’s law of gravity. When Newton proposed his theory of universal gravity in the 1600s,
he described gravity as a force of attraction between all masses. This allowed for the correct prediction of the motion of the planets, the discovery of Neptune, the basic relation between a star’s
mass and its temperature, and on and on. Newtonian gravity was and is a robust scientific theory.
Then in the early 1900s Einstein proposed a different model known as general relativity. The basic premise of this theory is that gravity is due to the curvature of space and time by masses. Even
though Einstein’s gravity model is radically different from Newton’s, the mathematics of the theory shows that Newton’s equations are approximate solutions to Einstein’s equations. Everything
Newton’s gravity predicts, Einstein’s does as well. But Einstein also allows us to correctly model black holes, the big bang, the precession of Mercury’s orbit, time dilation, and more, all of which
have been experimentally validated.
So Einstein trumps Newton. But Einstein’s theory is much more difficult to work with than Newton’s, so often we just use Newton’s equations to calculate things. For example, the motion of satellites,
or exoplanets. If we don’t need the precision of Einstein’s theory, we simply use Newton to get an answer that is “good enough.” We may have proven Newton’s theory “wrong”, but the theory is still as
useful and accurate as it ever was.
Unfortunately, many budding Einsteins don’t understand this.
Binary waves from black holes. Image Credit: K. Thorne (Caltech), T. Carnahan (NASA GSFC)
To begin with, Einstein’s gravity will never be proven wrong by a theory. It will be proven wrong by experimental evidence showing that the predictions of general relativity don’t work. Einstein’s
theory didn’t supplant Newton’s until we had experimental evidence that agreed with Einstein and didn’t agree with Newton. So unless you have experimental evidence that clearly contradicts general
relativity, claims of “disproving Einstein” will fall on deaf ears.
The other way to trump Einstein would be to develop a theory that clearly shows how Einstein’s theory is an approximation of your new theory, or how the experimental tests general relativity has
passed are also passed by your theory. Ideally, your new theory will also make new predictions that can be tested in a reasonable way. If you can do that, and can present your ideas clearly, you
will be listened to. String theory and entropic gravity are examples of models that try to do just that.
But even if someone succeeds in creating a theory better than Einstein’s (and someone almost certainly will), Einstein’s theory will still be as valid as it ever was. Einstein won’t have been proven
wrong, we’ll simply understand the limits of his theory.
Continental Drift Theory
In elementary school, every teacher had one of those pull-down maps of the world to teach geography. On occasion, I thought the largest land masses, known as continents, reminded me of pieces in a
jigsaw puzzle. They just seemed like they should fit together, somehow. Not until I took Earth Science, in 8TH grade, did I discover my earlier idea was correct. My teacher explained about a
phenomenon, known as, The Continental Drift Theory. He said that some German had the same idea I did.
The man my teacher mentioned, Alfred Wegener (Vay gen ner), developed The Continental Drift Theory in 1915. He was a meteorologist and a geologist. His theory basically said that, at one time, there existed one large supercontinent called Pangea ("pan" meaning all-encompassing, and "gea" meaning the Earth). He went on to suggest that seismic activity, such as earthquakes, volcanic eruptions, and tsunamis, also called tidal waves, eventually created fissures, or cracks, in the Earth. As these fissures became larger, longer, and deeper, 7 pieces of Pangea broke off and, over time, drifted to
the places where they are now. These 7 large pieces of land are what we now call, continents. They are: North America; South America; Europe; Asia; Africa; Antarctica; and, Australia. Some people
refer to the country as Australia, and the continent as, Oceania. They do this because there are other countries, such as New Zealand, included as a part of that particular continent.
At the time, people thought Wegener was, well, “nuts.” Only in the 1950s did people begin to take his idea seriously. According to the United States Geological Survey (the USGS), thanks to the use of
the submarine and the technology developed during World War II, scientists learned a lot about the Ocean Floor. When they found out that it was not as old as the Crust, or Surface, of the Earth,
scientists had to ask themselves, "Why?"
The answers have to do with earthquakes, volcanoes, and magnetism. When the Earth cracks, molten magma, from the middle of the Earth, known as the Mantle, works its way to the surface, where it
becomes known as, lava. That lava melts away some of the older layers; then, when the water cools that lava, it forms a new layer of Earth. For that reason, if scientists tried to determine the age
of the Earth from samples taken from the Ocean Floor, they would be very wrong.
That same equipment also helped scientists recognize that heavy amounts of basalt, a volcanic rock that contains high amounts of iron, could throw compasses off course. This information provided one
more pieces to the puzzle. Now, scientists recognize that the North and South Poles were not always where they currently are.
The Earth changes every day. Although we might not notice it, the continents move all the time. We don’t only revolve, or spin, around the Sun. We also drift across the surface of the planet.
The United States Geological Survey has some excellent information on this topic.
Universe Today has some other fabulous material about this and related topics, including Earth, Barely Habitable?, by Fraser Cain,
and Interesting Facts About Planet Earth.
You can also read or listen to Episode 51: Earth, of Astronomy Cast, also produced by Universe Today. | {"url":"https://www.universetoday.com/tag/theory/","timestamp":"2024-11-15T04:07:08Z","content_type":"text/html","content_length":"171398","record_id":"<urn:uuid:e175f005-757d-449a-9de9-57af6b62aa2b>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00454.warc.gz"} |
Pole (of a function)
From Encyclopedia of Mathematics
2020 Mathematics Subject Classification: Primary: 30-XX [MSN][ZBL] $ \newcommand{\abs}[1]{\left| #1 \right|} \newcommand{\set}[1]{\left\{ #1 \right\}} $
The pole of a function is an isolated singular point $a$ of single-valued character of an analytic function $f(z)$ of the complex variable $z$ for which $\abs{f(z)}$ increases without bound when $z$
approaches $a$: $\lim_{z\rightarrow a} f(z) = \infty$. In a sufficiently small punctured neighbourhood $V=\set{z\in\C : 0 < \abs{z-a} < r}$ of the point $a \neq \infty$, or $V'=\set{z\in\C : r < \abs
{z} < \infty}$ in the case of the point at infinity $a=\infty$, the function $f(z)$ can be written as a Laurent series of special form: $$\label{eq1} f(z) = \sum_{k=-m}^\infty c_k (z-a)^k,\qquad a \neq \infty,\ c_{-m} \neq 0,\ z \in V,$$ or, respectively, $$\label{eq2} f(z) = \sum_{k=-m}^\infty \frac{c_k}{z^k},\qquad a = \infty,\ c_{-m} \neq 0,\ z \in V',$$ with finitely many negative exponents if $a \neq \infty$, or, respectively, finitely many positive exponents if $a = \infty$. The natural number $m$ in these expressions is called the order, or multiplicity, of the pole $a$; when $m=1$ the pole is called simple. The expressions \ref{eq1} and \ref{eq2} show that the function $p(z)=(z-a)^m f(z)$ if $a\neq\infty$, or $p(z)=z^{-m}f(z)$ if $a=\infty$, can be analytically continued to a full neighbourhood of the pole $a$, and, moreover, $p(a) \neq 0$. Alternatively, a pole $a$ of order $m$ can also be characterized by the fact that the function $1/f(z)$ has a zero of multiplicity $m$ at $a$.
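For example (an illustration not in the original entry): $f(z) = \frac{\sin z}{z^2}$ has the Laurent expansion $\frac{1}{z} - \frac{z}{6} + \frac{z^3}{120} - \cdots$ around $z = 0$, so it has a simple pole ($m = 1$) at $a = 0$; correspondingly, $1/f(z) = z^2/\sin z$ has a simple zero there, and $p(z) = z\,f(z) = \frac{\sin z}{z}$ continues analytically to $z = 0$ with $p(0) = 1 \neq 0$.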
A point $a=(a_1,\ldots,a_n)$ of the complex space $\C^n$, $n\geq2$, is called a pole of the analytic function $f(z)$ of several complex variables $z=(z_1,\ldots,z_n)$ if the following conditions are
satisfied: 1) $f(z)$ is holomorphic everywhere in some neighbourhood $U$ of $a$ except at a set $P \subset U$, $a \in P$; 2) $f(z)$ cannot be analytically continued to any point of $P$; and 3) there
exists a function $q(z) \not\equiv 0$, holomorphic in $U$, such that the function $p(z) = q(z)f(z)$, which is holomorphic in $U \setminus P$, can be holomorphically continued to the full
neighbourhood $U$, and, moreover, $p(a) \neq 0$. Here also $$ \lim_{z\rightarrow a}f(z) = \lim_{z\rightarrow a}\frac{p(z)}{q(z)} = \infty; $$ however, for $n \geq 2$, poles, as with singular points
in general, cannot be isolated.
For $n=1$ see [Ah]. For $n \geq 2$ see [GrFr], [Ra].
For the use of poles in the representation of analytic functions see Integral representation of an analytic function; Cauchy integral.
[Ah] L.V. Ahlfors, "Complex analysis", McGraw-Hill (1979) pp. Chapt. 8 MR0510197 Zbl 0395.30001
[GrFr] H. Grauert, K. Fritzsche, "Several complex variables", Springer (1976) (Translated from German) MR0414912 Zbl 0381.32001
[Ra] R.M. Range, "Holomorphic functions and integral representation in several complex variables", Springer (1986) pp. Chapt. 1, Sect. 3 MR0847923
[Sh] B.V. Shabat, "Introduction of complex analysis", 2, Moscow (1976) (In Russian) Zbl 0799.32001
This article was adapted from an original article by E.D. Solomentsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. | {"url":"https://encyclopediaofmath.org/wiki/Pole_(of_a_function)","timestamp":"2024-11-14T07:44:23Z","content_type":"text/html","content_length":"19112","record_id":"<urn:uuid:b9d5a147-c9c1-470b-830c-d198d8e86495>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00317.warc.gz"}
Friday 5.3.21
We have made it to Friday and we can’t wait to see you! We hope you enjoyed World Book Day. We will share the reading challenge with you when you are back in school.
If you watched the Masked Reader then the results will be on the blog today so keep a look out!
Have a lovely weekend and we are so excited to see you on Monday!
decomposed – rotten
compacted – pressed together tightly
poses – behaviour
perished – dies
well-preserved – in good condition
Mrs Denny has written her own diary but she has not taken the time to read back through her work and has made many mistakes.
You need to check through and edit with a purple pen. You need to look for:
• spelling mistakes
• homophones – words that sound the same but are spelled differently – for example: bear and bare
• capital letters used correctly
• past tense
Maths skills:
Online safety:
Art attack:
Story time: https://youtu.be/Jat5xghBTBQ
26 thoughts on “Friday 5.3.21”
• 5th March 2021 at 8:54 am
1. Versions
2. Skeletons and volcano ash
3. Tride to escape
4. Inspret modern
• 5th March 2021 at 9:07 am
2. Skeleton Volcano ash
3.He was kept as a toy
4. Died in a manner.
5.A tale.
□ 5th March 2021 at 1:49 pm
The first tally represents nine. The second tally represents 22. lllllll 7. IIIIIIIIIII11. III 3 Pink buttons. IIIIII 6. Blue buttons. IIII 4 Green buttons. There are 13 buttons in total.
There has been 10 cars. There have been two buses. It’s a total of 10 cars. I was a total of two buses.
⭕️ ⭕️⭕️⭕️⭕️ 10 cars Key ⭕️ it means 2 ⭕️ Two buses. ⭕️⭕️⭕️ 6 lorries
4 motorcycles⭕️⭕️ Amir is missing one more circle for the buses. Five circles represents the 10 cars. One circle represents the two buses. Three circles represent the six lorries. To circle
represents the four motorcycles.
☆ 5th March 2021 at 1:57 pm
Thank you for your maths work. Ms Willmer
• 5th March 2021 at 9:14 am
• 5th March 2021 at 9:15 am
MATHS VIDEO
1. 9
2. 22
3. 10
5. 10
• 5th March 2021 at 10:23 am
I Spent My Day Sneaking between pot
and watching the world
going bye my hidden spot
would not get near anyone’s chat
I love my place because i can gaze at the waves.
□ 5th March 2021 at 10:39 am
Well done Sameera!
Miss Stewart
• 5th March 2021 at 10:57 am
2. Skeleton Volcano ash
3.He was kept as a toy
4. Died in a manner.
5.A tale
myth means an story that is passed down generations.
reading correcting mistakes;
in my hidden spot
my tummy
□ 5th March 2021 at 11:08 am
Well done Om!
Miss Stewart
• 5th March 2021 at 11:23 am
Hi Mrs Denny how are you today I’m good any way HAPPY BIRTHDAY I miss you so much I hope you have a good birthday and I can not wait to come back to school I can’t wait to see all of you so.
• 5th March 2021 at 11:29 am
Mrs Denny I don’t under stand any of my work
□ 5th March 2021 at 1:15 pm
Which part do you not understand?
Ms Willmer
• 5th March 2021 at 12:27 pm
Dear diary,
My day has is spent sneaking between the pots and watching the world go by in my hidden
spot. I knew that I wouldn’t get anyone’s way here. I love my secret place and I could gaze out at the waves.
I noticed another boat with that daily catch and the sort of fish made my tummy rumble. I hope those busy people were happy in their lives.
My city is a perfect heaven for me and I feel safe. I hope things stay the same forever.
□ 5th March 2021 at 1:09 pm
Amazing work Princess! well done!
Miss Stewart
• 5th March 2021 at 12:39 pm
There are six 🍏 The 10 🍊 There are 12 🍌 llllll. llllllllll. llllllllllll. There are 28 fruits In total. I prefer Tommies pictogram because if I chose Dora,s pictogram and then I would have to
draw six circles 10 circles and 12 circles but if it was Tommies pictogram then it will be a lot quicker because it would go up in twos.
• 5th March 2021 at 12:40 pm
There are six 🍏 The 10 🍊 There are 12 🍌 llllll. llllllllll. llllllllllll. There are 28 fruits In total. I prefer Tommies pictogram because if I chose Dora,s pictogram and then I would have to
draw six circles 10 circles and 12 circles but if it was Tommies pictogram then it will be a lot quicker because it would go up in two,s 2
• 5th March 2021 at 12:41 pm
There are six 🍏 The 10 🍊 There are 12 🍌 llllll. llllllllll. llllllllllll. There are 28 fruits In total. I prefer Tommies pictogram because if I chose Dora,s pictogram and then I would have to
draw six circles 10 circles and 12 circles but if it was Tommies pictogram then it will be a lot quicker because it would go up in two,s 2. Going to school soon.
• 5th March 2021 at 1:10 pm
Say know and tell her not to.
• 5th March 2021 at 2:14 pm
⭕️ Key 2. ⭕️⭕️⭕️⭕️⭕️⭕️ 12 cars. ⭕️ 2 buses. ⭕️⭕️⭕️Half a circle. ⭕️⭕️⭕️And half the circle is 7 Motorcycles. I think Alex is book 📖 symbol of how many minutes he has been reading is 10. In total
for Alex read for 200 minutes or 3 hours 20 minutes.
• 5th March 2021 at 2:29 pm
Irregular Refresh. Redecorate. Subway. Reappear. Rebound. Retreat. Subtract.
Irritable. Sublime. Return. Subject. Repeat. Immature. Irrational. Forwards. Extreme. Subway. Subheading.
• 5th March 2021 at 3:12 pm
,./1. Versions
2. Skeletons and volcano ash
3. Tried to escape
• 5th March 2021 at 3:14 pm
• 5th March 2021 at 3:25 pm
Dear diary,
My day is spent sneaking between pots and watching the world go by in my hidden spot .
I knew that I could not get in anyone’s way here .
I love my srecet spot as I could gaze out in the waves.
I noticed another boat with a dialy catch and the sort of fish that made my tummy rumble .
I hope those busy people were happy in their lives.
My city is a perfect 🤩 heaven for me and I feel safe.
I hope things stay the same.
• 5th March 2021 at 3:33 pm
6 Apples 🍏
10 Oranges 🍊
12 bananas 🍌
• 5th March 2021 at 5:44 pm
I know Tommy pictogram is a good pictogram because if I choose Dora,s pictogram I would have to draw 6 circles ,10 circles and 12 circles.See you soon back at school 🏫 | {"url":"http://lingsprimaryblogs.net/2021/03/05/friday-5-3-21/","timestamp":"2024-11-11T02:13:11Z","content_type":"text/html","content_length":"74132","record_id":"<urn:uuid:bcacd5b1-bad5-4704-86a3-f4a93f80574e>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00643.warc.gz"} |
The inverse of a Matrix 2.2 Flashcards | Knowt
If A is an invertible matrix, then A^-1 is invertible and
If A and B are nxn invertible matrices, then so is AB, and the inverse of AB is the product of the inverses of A and B in reverse order
If A is an invertible matrix, then so is A^T, and the inverse of A^T is the transpose of A^-1 | {"url":"https://knowt.com/flashcards/04135328-7a8e-494e-8169-a031d04030da","timestamp":"2024-11-07T03:35:40Z","content_type":"text/html","content_length":"376905","record_id":"<urn:uuid:8c828020-bd6a-4e69-9498-70e4aa6546c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00137.warc.gz"} |
patterns and sequences worksheet pdf
These past paper questions help you to master the 11+ Exam Maths Questions. Students have prior knowledge of: • Patterns • Basic number systems • Sequences • Ability to complete tables • Basic graphs
in the co-ordinate plane • Simultaneous equations with 2 unknowns. Whether it's bioscience, computer science, mathematics, or daily life, we very commonly use these terms, mostly
interchangeably. Arithmetic Sequences and Series Worksheet - Problems. E-mail *. Download the fully worked out memorandum. Download Maths-F1-2.Number Patterns and Sequences PDF for free. Join
thousands of learners improving their maths marks online with Siyavula Practice. Included in these questions are exam level questions which could be used for revision of the number patterns section.
Subjects: Math, Basic Operations, Numbers. A function rule is a rule based on the position of a number. These worksheets are appropriate for 4th and 5th grade, but might be introduced later if the
topic wasn't covered earlier in the curriculum. Go over the lessons and have fun in working with the exercises. Number sequences worksheets pdf downloads for grade 7. You can do the exercises online
or download the worksheet as pdf. These worksheets are similar to number patterns in that students must find the correct rule. This KS3 activity introduces sequences by looking at shape patterns and
how to extend them and define rules. A series of activities, all of which test
pupils' ability to place shapes in a particular order. The worksheet given in this section is very useful for students who would like to practice problems on
arithmetic sequences and series. Don't get left behind. It has an answer key attached on the second page. All worksheets are free for individual and non-commercial use. Next. "Here are the first five
terms in a number sequence. (2) "(b) Write an expression, in terms of n, for the nth term of this number sequence… Sign up here. Must Practice 11 Plus (11+) Number Patterns and Sequences Past Paper
Questions. Number sequences in a grid, patterns and explanations, odd and even, extend number sequences. What you are expected to … Must Practice 11 Plus (11+) Patterns and Sequences Past Paper
Questions. Learn the Rest of the 6s; This worksheet will help your students practice their six times table and number sequences. However, the question that arises is whether these two terms are the
same or not. Updated: Jan 12, 2015. pptx, 265 KB . These number patterns are called sequences. Sequences. These number pattern worksheets deal with addition rules, and you'll find patterns and rules
involve smaller numbers as well as larger addends. This is a math PDF printable activity sheet with several exercises. Below, you will find a wide range of our printable worksheets in chapter
Patterns of section Geometry and Patterns. These past paper questions help you to master the 11+ Exam Maths Questions. Loading... Save for later. Patterns worksheet for 4th grade children. Along with
Detailed Answers, Timing, pdf download. Report a problem. Chapter 3: Number patterns. Patterns Workbook (all teacher worksheets - large PDF) Pattern and Number Sequence Challenge Workbook More
Difficult Pattern and Number Sequence Challenge Workbook We look at various strategies students can use to solve these in our lesson and worksheet series. Determine which pictures come next in each
pattern shown. Before looking at the worksheet, if you would like to know the stuff related to arithmetic sequences and series, please click here. Picture pattern worksheets contain repeating pattern,
growing pattern, size, shapes and color pattern, equivalent pattern, cut-paste activities and more. About this resource. With Number Pattern Worksheets, students will be adding and subtracting 1s,
2s, 5s, 10s, skip counting numbers. We hope you find them very useful and interesting. Determine the nth term of the sequence and find the sum of the sequence on Math-Exercises.com - Collection of
math exercises. Our number patterns worksheet is a great way to challenge children to think about items in the sequence which are not just ‘next’. Ability to place shapes in a grid, patterns of
[QSMS Monthly Seminar] The dimension of the kernel of an Eisenstein ideal
• March 2022 QSMS Monthly Seminar
• Date: Friday, 25 March 2022, 2:00 PM – 5:00 PM
• Place: 129-101 (SNU)
• Speaker: 유화종 Hwajong Yoo (2:00 PM – 3:00 PM)
• Title: The dimension of the kernel of an Eisenstein ideal
• Abstract: We introduce a notion of multiplicity one for modular Jacobian varieties, which is about the dimension of the kernel of a certain maximal ideal of the Hecke algebra. When such a maximal
ideal is non-Eisenstein (which will be explained), multiplicity one holds (which means that the dimension is 2). On the other hand, as first noticed by Calegari and Stein, multiplicity one
often fails for Eisenstein maximal ideals. We propose a conjecture about the dimension of the kernel of an Eisenstein ideal. If time permits, we sketch the proof of my work with Ken Ribet.
• Speaker: 조창연 Chang-Yeon Chough (4:00 PM – 5:00 PM)
• Title: Twisted equivalences in spectral algebraic geometry
• Abstract: Derived equivalence has been an interesting subject in relation to Fourier-Mukai transform, Hochschild homology, and algebraic K-theory, just to name a few. On the other hand, the
attempt to classify schemes by their derived categories twisted by elements of Brauer groups is very restrictive as we have a positive answer only for affines. I'll talk about how we can extend
this result to a broader class of algebro-geometric objects in the setting of derived/spectral algebraic geometry at the expense of a stronger notion of twisted equivalences than that of ordinary
twisted derived equivalences. I'll convince you that the new notion is not only reasonable but also indispensable from this point of view. The first half will be mainly devoted to giving brief
expository accounts of some background materials needed to understand the notion of twisted derived equivalence in the setting of derived/spectral algebraic geometry. The remaining half will
cover a derived/spectral analog of Rickard's theorem, which shows that derived equivalent associative rings have isomorphic centers. I'll try to avoid technicalities related to using the language
of derived/spectral algebraic geometry.
Assignment Paper
Growth Rates of Capital and Output
Consider the following production function:
$$Y_t = F(K_t, L_t) = K_t^{1/2} L_t^{1/2}$$
Assume that capital depreciates at rate $\delta$ and that savings is a constant proportion $s$ of output: $S_t = sY_t$.
Assume that investment is equal to savings: $I_t = S_t$.
Finally, assume that the population is constant: $L_t = L_{t+1} = L$.
1. The production function above expresses output as a function of capital and labor (workers). Derive a function that expresses output per worker as a function of capital per worker (i.e., find $y_t = f(k_t)$).
2. Write down the capital accumulation equation in terms of capital per worker (i.e., an equation with only $k_{t+1}$, $k_t$, $\delta$, and $s$).
3. Solve for the steady state level of capital per worker as a function of $\delta$ and $s$.
4. Solve for the steady state level of output per worker as a function of $\delta$ and $s$.
5. What is the steady state growth rate of output per worker?
6. What is the steady state growth rate of output?
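As a quick numerical check on the algebra (not part of the assignment itself), a minimal Python sketch can iterate the per-worker capital accumulation equation $k_{t+1} = (1-\delta)k_t + s\,k_t^{1/2}$ implied by the Cobb–Douglas form above; the parameter values below are illustrative assumptions, not values given in the problem.

```python
import numpy as np

# Illustrative parameter values (assumptions, not given in the assignment)
s, delta = 0.2, 0.05

# Per-worker production: y = f(k) = k**0.5 for Y = K^(1/2) L^(1/2)
k = 1.0  # arbitrary starting level of capital per worker
for _ in range(10_000):
    k = (1 - delta) * k + s * np.sqrt(k)  # per-worker capital accumulation

print(k)                 # iterate converges to the steady state
print((s / delta) ** 2)  # closed form k* = (s/delta)^2 = 16.0
```

In the steady state, capital and output per worker are constant, so their growth rates are zero; with a constant population, total output also grows at rate zero.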
Extra Credit!!
Getting the right answer will get you 5 extra credit points that will go towards your homework grade! To get full extra credit you MUST show all your work. Consider the following production function:

$$Y_t = F(K_t, L_t) = \left(K_t^{1/2} + L_t^{1/2}\right)^2$$
Assume that capital depreciates 5% each year and that households save 5% of their income. Assume that investment is equal to savings. Finally, assume that the population is growing 15% each year.
Solve for the steady state level of output per worker.
Systematic multi-scale decomposition of ocean variability using machine learning
Multi-scale systems, such as the climate system, the atmosphere, and the ocean, are hard to understand and predict due to their intrinsic nonlinearities and chaotic behavior. Here, we apply a
physics-consistent machine learning method, the multi-resolution dynamic mode decomposition (mrDMD), to oceanographic data. mrDMD allows a systematic decomposition of high-dimensional data sets into
time-scale dependent modes of variability. We find that mrDMD is able to systematically decompose sea surface temperature and sea surface height fields into dynamically meaningful patterns on
different time scales. In particular, we find that mrDMD is able to identify varying annual cycle modes and is able to extract El Niño–Southern Oscillation events as transient phenomena. mrDMD is
also able to extract propagating meanders related to the intensity and position of the Gulf Stream and Kuroshio currents. While mrDMD systematically identifies mean state changes similarly well
compared to other methods, such as empirical orthogonal function decomposition, it also provides information about the dynamically propagating eddy component of the flow. Furthermore, these dynamical
modes can also become progressively less important as time progresses in a specific time period, making them also state dependent.
The climate system exhibits variability on a multitude of temporal and spatial scales. Due to the nonlinearity of the equation of motions, all these scales interact with each other, thereby hampering
the understanding and predictability of the climate system. Here, we use multi-resolution dynamic mode decomposition (mrDMD), a physics-consistent machine learning approach, to systematically examine
ocean variability across different time and spatial scales. We show that this method is able to systematically extract dynamically meaningful patterns of ocean variability.
The ocean covers about 72% of Earth’s surface and is an integral part of the global climate system. It provides essential environmental services, such as food and transportation, and affects
atmospheric predictability and extreme events. Therefore, the ocean has a strong impact on vital aspects of our society, and it is of paramount importance to study and understand its underlying
physical phenomena, which can vary on a multitude of time and space scales. One of the most important modes of ocean variability is the El Niño–Southern Oscillation (ENSO) (e.g., Timmermann et al., 2018). ENSO describes variations in winds and sea surface temperature over the tropical eastern Pacific Ocean, which have widespread effects on surface weather and climate conditions due to
teleconnection patterns (Feldstein and Franzke, 2017). ENSO appears irregularly with enhanced frequency power in the range of 3–7 years.
Two other important modes of ocean variability are the western boundary currents: the Gulf Stream and the Kuroshio (Kang and Curchitser, 2013). The Gulf Stream varies on monthly through decadal time
scales (Seidov et al., 2019) and can shift between a northerly and southerly location (e.g., Pérez-Hernández and Joyce, 2014). The Kuroshio current tends to vary between a stable and an unstable
state. For the former, the current is relatively strong and zonal, while in the latter case, the Kuroshio Extension tends to meander on large scales with higher eddy kinetic energy levels (e.g. Qiu
and Chen, 2005 and Oka et al., 2015).
A fascinating aspect of climate and ocean variability is that it occurs on all time scales and that due to the underlying nonlinear equations of motion, the different time scales interact with each
other (Franzke et al., 2020). However, this property also makes it difficult to understand climate and ocean variability because it is not straightforward to disentangle variability on different time
scales. Moreover, another important aspect of understanding ocean variability is the identification of coherent structures with dynamical relevance, such as ENSO. A widely used method for the
identification of modes of variability is empirical orthogonal functions (EOFs) (von Storch and Zwiers, 2003 and Hannachi, 2021), also known as a principal component analysis or a proper orthogonal
decomposition. Global modes of sea surface temperature have been computed by Messié and Chavez, (2011). Global EOFs identify the well-known modes of ocean variability, such as the ENSO, the Pacific
decadal oscillation (PDO), and the Atlantic multidecadal oscillation (AMO). However, the dynamical relevance of some of these modes has been questioned (Clement et al., 2015 and Mann et al., 2020).
While EOFs are a powerful tool for multivariate data analysis, they have the drawback that the EOF patterns are mutually orthogonal. Thus, the EOF patterns lose physical interpretability since the
ocean modes, or any physical modes, need not to be mutually orthogonal (North, 1984).
Hence, there is a need for better methods, which are able to systematically identify dynamically relevant patterns. A promising method is dynamic mode decomposition (DMD) (Tu et al., 2014; Kutz et
al., 2016a; Rowley et al., 2009; Mezić, 2005; 2013; and Brunton et al., 2016), a machine learning method. DMD decomposes high-dimensional fields into complex patterns whose eigenvalues describe the
growth rates and oscillation frequencies of the modes. DMD is widely used in many areas (Kutz et al., 2016a; Tu et al., 2014; Rowley et al., 2009; Brunton et al., 2016; and Mezić, 2005; 2013) and
recently also in geophysical and climate research (Kutz et al., 2016b; Gottwald and Gugole, 2019; and Gugole and Franzke, 2019). DMD is a dimension reduction method, which has a strong theoretical
and dynamical underpinning. For a given high-dimensional time series, DMD computes a set of complex modes; each of these modes represents an oscillation with a fixed frequency and a growth rate. For
linear systems, these modes are analogous to normal modes. Furthermore, DMD is closely connected to principal oscillation patterns and linear inverse models (Hasselmann, 1988; Penland and Magorian,
1993; and Tu et al., 2014). However, DMD is more general. DMD approximates the modes and eigenvalues of the Koopman operator and, thus, can represent nonlinear dynamics (Tu et al., 2014 and Kutz et
al., 2016a). DMD is different from other popular dimension reduction methods, such as EOFs. EOFs are not directly associated with a temporal behavior, while DMDs are. However, in contrast to EOFs,
DMD modes are not orthogonal. Hence, DMDs might provide a less parsimonious description of the full data set than EOFs, but on the other hand, DMDs are dynamically more meaningful, which we will show below.
The multi-scale space-time structure of ocean variability calls also for multi-scale methods. The multi-resolution DMD is an attractive option for this problem since it provides a systematic
multi-scale decomposition into dynamical modes (Kutz et al., 2016a; 2016b). Here, we will demonstrate that multi-resolution DMD is able to identify dynamically meaningful patterns of ocean
variability, which are of practical concern. While our study does not advance theory, our aim is to demonstrate the ability of DMD to systematically extract multi-scale dynamics from a complex
real-world system, the ocean. Furthermore, our study shows how DMD can be used to deepen our understanding of ocean dynamics, and we also show how DMD can systematically extract multi-scale dynamics
of a component of the climate system, which not many methods can do. We also demonstrate by extracting the changing annual cycle of SST how DMD can potentially lead to better predictions of the
climate system. In Sec. II, we describe the ocean data sets we are using and the multi-resolution DMD method. In Sec. III, we present our results for global SST and sea surface height dynamics in the
Kuroshio and Gulf stream. In Sec. IV, we summarize our study results.
A. Data
To demonstrate the abilities of DMD and to examine ocean variability, we use two datasets. The first one is the extended reconstructed sea surface temperature (ERSST) version 5 data set. This is a
global monthly mean sea surface temperature (SST) data set on a $2°×2°$ regular horizontal grid (https://doi.org/10.7289/V5T72FNM) (Huang et al., 2017). The data set covers the period January
1854–December 2020. This data set allows us to examine variability on medium to long time scales.
The second data set is the Aviso satellite altimetry sea surface height (SSH). This is a global daily sea surface height data set on a $0.25°×0.25°$ regular horizontal grid (Saraceno et al., 2008) (
https://resources.marine.copernicus.eu/?option=com_cswtask=results?option=com_cswview=detailsproduct_id=SEALEVEL_GLO_PHY_L4_REP_OBSERVATIONS_008_047 and https://www.aviso.altimetry.fr/en/data/
products/auxiliary-products/mss.html). The data set covers the period 1993–2018. This data set allows us to examine ocean eddies and larger-scale flow structures on short to medium time scales.
Because of the higher temporal and spatial resolution for this data set, it is computationally challenging to consider the whole globe as a domain. Furthermore, scientifically, a better understanding
of the dynamics of smaller scale features, such as eddies and meanders, are also needed. Hence, we focus on two important ocean currents, the Gulf Stream and the Kuroshio, and apply mrDMD to these
two areas.
We define the Gulf Stream region as the area covering $280°$E–$340°$E, $30°$N–$60°$N and the Kuroshio region as the area covering $120°$E–$170°$E, $25°$N–$50°$N. For spatial pattern correlations, we
use bandpass filtering of the SSH data using a Fourier transformation based approach where the cut-off frequencies correspond to the respective mrDMD frequency bands. The bandpass filtering is
necessary for the SSH data because the eddy scale changes considerably for different time scales. Thus, pattern correlations between fast time-scale mrDMD patterns and the full flow fields would lead
to small correlation values. On the other hand, no filtering is necessary for SST data because on monthly time scales, the anomalies are still relatively large scale. For the SST data, we also tested
the impact of detrending the data. Detrending the data leads to qualitatively similar results to using non-detrended data. Hence, our results are robust.
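For concreteness, the Fourier-based bandpass step described above can be sketched in a few lines of Python; the array layout (time as the first axis) and the hard band mask are our assumptions for illustration, not necessarily the exact implementation used in this study.

```python
import numpy as np

def bandpass(x, dt, f_lo, f_hi):
    """Zero out Fourier coefficients outside [f_lo, f_hi] along the time axis.

    x : array with time as the first axis; dt : sampling interval.
    """
    f = np.fft.rfftfreq(x.shape[0], d=dt)          # non-negative frequencies
    X = np.fft.rfft(x, axis=0)                     # forward FFT in time
    X[(f < f_lo) | (f > f_hi)] = 0.0               # hard band-pass mask
    return np.fft.irfft(X, n=x.shape[0], axis=0)   # back to physical space

# e.g., keep periods between 874 and 986 days in the daily SSH fields:
# ssh_band = bandpass(ssh, dt=1.0, f_lo=1/986.0, f_hi=1/874.0)
```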
B. Dynamic mode decomposition
We consider the following dynamical system (Kutz et al., 2016a):

$$\frac{dx}{dt} = f(x, t; \mu), \qquad (1)$$

where $x$ denotes the state vector, $t$ time, $\mu$ the parameters of the system, and $f$ is the possibly nonlinear function representing the dynamics. Equation (1) also induces a discrete-time representation for time step $\Delta t$,

$$x_{k+1} = F(x_k). \qquad (2)$$
In general, it is impossible to derive a solution to the nonlinear system equation (1). DMD takes an equation-free, machine learning view where we have no knowledge of the dynamics of the system. DMD
only uses observed data from the system to approximate and forecast the system. Hence, DMD computes an approximate, locally linear, dynamical system,

$$\frac{dx}{dt} = A x, \qquad (3)$$

with discrete-time representation

$$x_{k+1} = A x_k, \qquad (4)$$

where the subscript $k$ denotes discrete time. The solution of this system can be represented in terms of the eigenvalues $\lambda_j$ and corresponding eigenvectors $\phi_j$ of the discrete-time matrix $A$,

$$x_k = \sum_j \phi_j \lambda_j^{k} b_j = \Phi \Lambda^{k} b, \qquad (5)$$
where $b$ is the vector of initial-condition coefficients $b_j$, $j$ is an index, $\Phi$ is the matrix consisting of the eigenvectors $\phi_j$, and $\Lambda$ is the diagonal matrix of the $\lambda_j$. DMD now derives a low-rank eigen-decomposition of $A$ that optimally captures the trajectory of the system in a least-squares sense so that the objective

$$\left\| x_{k+1} - A x_k \right\|_2 \qquad (6)$$

is minimized across all grid points, and this is achieved by an eigen-decomposition of $A$.
The DMD algorithm is as follows: The data can be described by two parameters.
• $n$: the number of spatial grid points per time step and
• $m$: the number of time steps.
We now have the following two sets of data:

$$X = [x_1 \;\; x_2 \;\; \cdots \;\; x_{m-1}], \qquad X' = [x'_1 \;\; x'_2 \;\; \cdots \;\; x'_{m-1}],$$

so that $x'_k = F(x_k)$ for time step $\Delta t$. The DMD modes now correspond to the eigen-decomposition of $A$. $A$ relates the data $X' \approx AX$, and thus, $A = X' X^{\dagger}$, where $\dagger$ denotes the Moore–Penrose pseudo-inverse (Kutz et al., 2016a).
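The algorithm above translates directly into a few lines of Python. The sketch below follows the standard exact-DMD formulation of Tu et al. (2014) via a truncated singular value decomposition; the truncation rank $r$ is a user choice that we assume here for illustration, not a value prescribed by the text.

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact DMD of snapshot pairs: columns of X are x_k, columns of Xp are x'_k = F(x_k).

    Returns eigenvalues lam, DMD modes Phi (as columns), and amplitudes b.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]              # rank-r truncation
    Atilde = U.conj().T @ Xp @ Vh.conj().T / s          # projection of A onto the POD basis
    lam, W = np.linalg.eig(Atilde)                      # eigen-decomposition of A-tilde
    Phi = (Xp @ Vh.conj().T / s) @ W                    # exact DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]    # amplitudes from the first snapshot
    return lam, Phi, b
```

Dividing by s on the right scales the columns by $1/s_j$, i.e., it applies $\Sigma^{-1}$.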
The DMD method has a strong theoretical underpinning, as it is connected to the Koopman operator (Tu et al., 2014 and Kutz et al., 2016a). DMD is a finite dimensional approximation of the modes of
the Koopman operator. The Koopman operator is an infinite dimensional linear operator describing the dynamics of nonlinear systems.
The Koopman operator is defined as follows (Kutz et al., 2016a):
Consider a continuous-time dynamical system,

$$\frac{dx}{dt} = f(x),$$

where $x \in M$ is a state on a smooth $n$-dimensional manifold $M$. The Koopman operator $K$ is an infinite-dimensional linear operator that acts on all observable functions $g: M \to \mathbb{C}$ so that

$$K g(x) = g(F(x)).$$

The Koopman operator thus propagates observables along with the flow $F$.
C. Multiresolution dynamic mode decomposition
Multiresolution dynamic mode decomposition (mrDMD) is an advanced DMD method for analyzing multi-scale systems, such as the ocean and the atmosphere (Kutz et al., 2016a; 2016b). Basically, it
performs DMD on different time scales, similar to a wavelet analysis (Lau and Weng, 1995 and Kutz et al., 2016a). Figure 1 shows a schematic of the mrDMD approach. mrDMD starts with analyzing the
full time series and by identifying the slowest modes of variability; then, this window is divided into two equally long windows as displayed in Fig. 1, and the DMD analysis is repeated. This is
recursively repeated until the fastest dynamics are reached in the data set. The frequency of the modes is given by the eigenvalues of the mrDMD modes. As a cut-off frequency, we choose that slow
modes can only perform a maximum of two oscillations in a window in order to eliminate faster modes from this level. See Kutz et al. (2016a; 2016b) for more details. The eigenvalues of the mrDMD
modes are related to frequencies as follows:

$$\omega = \frac{\left|\mathrm{Im}\,(\log \lambda)\right|}{2\pi\,\Delta t},$$

with the slow-mode threshold $\rho = 2/T$, where $T$ is the window length. Only mrDMD modes with frequencies $\omega$ smaller than $\rho$ are considered at a given level. The power of the mrDMDs is computed as in
Jovanović et al., (2014) and Kutz et al., (2016a) and is based on the dynamics. The mrDMD power is computed by separating the DMD amplitude into a product of the normalized DMD modes, a diagonal
matrix of mode amplitudes, and the Vandermonde eigenvalue matrix (Jovanović et al., 2014 and Kutz et al., 2016a). The Vandermonde matrix captures the exponentiation of the DMD eigenvalues. The
exponentiated eigenvalues determine the power of the DMD modes. Since DMDs are not normalized, the amplitude of the power spectra does not correspond to a physical unit.
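One simple reading of this Vandermonde construction (our reading, not necessarily the paper's exact normalization) is that each mode's power is its amplitude propagated through the window by its exponentiated eigenvalue:

```python
import numpy as np

def mode_power(lam, b, m):
    """Time-integrated amplitude of each DMD mode over m time steps.

    Row j of the Vandermonde matrix V contains lam[j]**k for k = 0..m-1.
    """
    V = np.vander(lam, N=m, increasing=True)        # Vandermonde of the eigenvalues
    return np.sum(np.abs(b[:, None] * V), axis=1)   # sum of |b_j * lam_j^k| over k
```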
We have the following notation: mrDMD(i,j,k) denotes the DMD from the ith level and the jth segment, while k denotes the number of the corresponding DMD mode. For instance, mrDMD starts at the first
level, i.e., the full time series. The second level denotes the two halves of the full time series. The first half is the first segment, while the second half is the second segment. The first DMD
mode of the third level and the second segment is denoted: mrDMD(3,2,1). As for EOFs, the sign of the DMDs is arbitrary.
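Putting the pieces together, the recursion can be sketched compactly in Python (reusing the dmd helper above). The rank bound, the halving of windows, and the minimum window size are simplifications we assume for illustration; this is not the reference implementation of Kutz et al. (2016a).

```python
import numpy as np

def mrdmd(X, dt, level=1, segment=1, max_level=6, r=8, out=None):
    """Recursive mrDMD: keep the slow modes of each window, then recurse on the halves.

    X : snapshots as columns, shape (space, time). Results are collected as
    (level, segment, eigenvalues, modes) tuples, mirroring the mrDMD(i, j, k) notation.
    """
    if out is None:
        out = []
    m = X.shape[1]
    T = m * dt                                       # window length
    rho = 2.0 / T                                    # at most two oscillations per window
    lam, Phi, b = dmd(X[:, :-1], X[:, 1:], min(r, m - 1))
    omega = np.abs(np.imag(np.log(lam))) / (2 * np.pi * dt)  # mode frequencies
    slow = omega < rho
    out.append((level, segment, lam[slow], Phi[:, slow]))
    if level < max_level and m >= 4:
        # subtract the slow-mode reconstruction and split the window in two
        k = np.arange(m)
        X_slow = Phi[:, slow] @ (b[slow, None] * lam[slow, None] ** k[None, :])
        R = X - X_slow.real
        half = m // 2
        mrdmd(R[:, :half], dt, level + 1, 2 * segment - 1, max_level, r, out)
        mrdmd(R[:, half:], dt, level + 1, 2 * segment, max_level, r, out)
    return out
```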
A. Global sea surface temperature
We start with the monthly global SST data set. The mrDMD power spectrum (Fig. 2) reveals that the maximum power is contained in the first level, while the second largest power is in the range
containing the annual cycle. The first mrDMD mode of the first level, the mode whose real component of the eigenvalue is closest to one and, thus, corresponds to an almost neutral mode, has a
geographical structure, which is very similar to the climatological mean state [Figs. 3(a) and 3(b)]. This level of the mrDMD decomposition has two more mrDMD modes, which have purely real
eigenvalues [Figs. 3(d) and 3(f)]. These are almost neutral modes, though their eigenvalues are somewhat smaller than 1 with values of 0.9712 and 0.8911 and, thus, are damped modes. mrDMD(1,1,2) is
likely representing low-frequency behavior of the sea ice edge since, in comparison with mrDMD(1,1,1), it represents an equatorward extension of negative anomalies in the polar regions and a reduction of the meridional temperature gradient.
Our mrDMD modes of the first level are also different from the global SST EOF modes of the study by Messié and Chavez, (2011) in that they do not correspond to the well-known modes of ocean
variability, such as the ENSO, PDO, or AMO. The comparison with EOFs is not straightforward since with mrDMD, we focus on certain time scales at each level, while EOFs do not systematically
distinguish between different time scales, though they tend to be ordered by an integrated auto-correlation time scale (Franzke et al., 2005 and Franzke and Majda, 2006).
We also computed pattern correlations by projecting the instantaneous SST fields onto the mrDMD modes [Figs. 3(f)–3(h)]. The pattern correlations also confirm the almost neutral behavior of these
three mrDMD modes, though mrDMD(1,1,2) and mrDMD(1,1,3) show a slight reduction of correlation strength, which could be an imprint of their damped nature.
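These pattern correlations can be computed as Pearson correlations over grid points between a mode (its real part) and each snapshot; a minimal sketch, with the field layout (time, space) assumed for illustration:

```python
import numpy as np

def pattern_correlation(mode, fields):
    """Pearson correlation over space between one pattern and every snapshot.

    mode : (space,) pattern; fields : (time, space) snapshots.
    """
    p = mode - mode.mean()
    F = fields - fields.mean(axis=1, keepdims=True)
    return (F @ p) / (np.linalg.norm(F, axis=1) * np.linalg.norm(p))

# e.g., corr = pattern_correlation(Phi[:, 0].real, sst_fields)
```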
A powerful feature of mrDMD is that it can identify the modes of the annual cycle, which are encoded at level 8. Figure 4 shows the periods of the mrDMD modes associated with the annual cycle, which
are all around 12 months, but also vary, indicating that mrDMD has the ability to capture year to year variations in the annual cycle. Note that our segments do not correspond to the calendar annual
cycle; the annual cycle dynamics are determined by the dynamics of the climate system. As an example of an annual cycle mrDMD mode, we choose the tenth segment of the eighth mrDMD level, i.e., mrDMD
(8,10,2) (Fig. 5). The other annual cycle related mrDMDs of these other segments look similar, suggesting that this is a robust feature (not shown). The corresponding first mrDMD mode corresponds to
the mean state over this segment and is an almost neutral mode. The second mrDMD mode, consisting of a real and imaginary component, corresponds to the annual cycle. Both components together make up
a propagating mode with temperature anomalies of opposite sign in both hemispheres where the imaginary part corresponds to the transition seasons, while the real part corresponds to the peak seasons
[Fig. 5(e)]. Our results are consistent with the analysis by Pezzulli et al., (2005), which found substantial interannual variability in the seasonal cycle of the univariate NINO3.4 index.
The length of the annual cycle seems to be determined by other large-scale processes. In Figs. 5(f) and 5(g), we display composites of SST over long [Fig. 5(f)] and short [Fig. 5(g)] annual cycle
events, respectively. The composites are averaged over those segments when the mrDMD annual cycle period is either one standard deviation above or below its long term mean period of about 12 months,
respectively. The composites indicate a hemispheric seesaw behavior of SST between both hemispheres.
The mrDMD method is also able to identify ENSO events, as demonstrated in Fig. 6. For instance, mrDMD(7,52,3) corresponds to the El Niño of 1987. This mrDMD mode has a period of about 14 months, which is in the range of the typical El Niño duration of between 7 and 24 months. Both parts of mrDMD(7,52,3) show the typical El Niño anomaly in the tropical Pacific. This shows that mrDMD is able to extract physically meaningful patterns from ocean data sets. This is consistent with the results of Kutz et al. (2016a). Since ENSO is a transient phenomenon, mrDMD represents this as real and
imaginary components of a DMD mode. This is in contrast to EOFs, in which ENSO would be represented by just one EOF pattern (Messié and Chavez, 2011). This illustrates that mrDMD provides patterns,
which are dynamically directly interpretable since DMD analysis provides eigenvalues determining the pattern’s oscillation frequency and growth rate.
Two widely recognized modes of SST variability are the Atlantic multidecadal oscillation (Knight et al., 2006 and Ting et al., 2011) and the Pacific decadal oscillation (Mantua and Hare, 2002). The
mrDMD power spectrum (Fig. 2) does not show enhanced power at decadal time scales; the power at those scales is actually rather low. This is consistent with recent studies, which questioned the
physical relevance of these modes, which are identified by a global EOF analysis (Messié and Chavez, 2011). Mann et al., (2020) provide evidence that both modes are not distinguishable from the noise
background. Also, Clement et al., (2015) provide evidence that the AMO is not a dynamical oceanographic mode of variability since their model experiments do not contain ocean dynamics but still show
AMO type variability. These studies are consistent with our mrDMD results that those modes are potentially not dynamically meaningful.
To further demonstrate the ability of mrDMD to identify physically meaningful patterns, we now examine variability at the fourth and fifth levels in more detail. As Fig. 2 shows, the fourth level has
one big amplitude event and the fifth level has five large amplitude events. First, we focus on the large amplitude event of the fourth level (Fig. 7). As can be seen from Fig. 2, this event occurred
between November 1874 and October 1895. In Fig. 7(a), we display the average over this period. This period is characterized by warm SST anomalies in the North Pacific along 40$°$N, in the Labrador
sea and the Fram strait, and in the Southern Ocean between South America and Antarctica. Most of the remaining ocean is anomalously cold, especially the Arctic Ocean. In contrast, the anomalies
averaged over all other times are much weaker [Fig. 7(b)]; whether this is just due to averaging over a longer period or whether this suggests that DMD picks systematically dynamically relevant and
active states needs further research, ideally with very long climate model data. mrDMD(4,2,2) has a period of about 20 years. Its real component is similar to the Pacific decadal oscillation (Mantua
and Hare, 2002), but this mrDMD describes more complex dynamics than just a standing pattern.
We now turn to the fifth level where we have five large amplitude events (Fig. 2). We now average over the periods of these five events [Fig. 8(a)] and average over all other times [Fig. 8(b)]. The
high amplitude composite shows increased SST over most of the ocean areas with cold anomalies only in the northern North Pacific and the south of Greenland. The composite of all other times displays
mainly cold anomalies. The modes mrDMD(5,3,2) and mrDMD(5,6,2) again demonstrate that those states are the result of dynamic processes. Both mrDMD modes also have similarities to the PDO. This
suggests that the PDO is an important mode of ocean variability on decadal time scales. Moreover, the fact that we identify multiple DMD modes resembling the PDO is consistent with the finding that
the PDO is not a single physical mode of variability, but rather is an aggregation of multiple processes, such as ENSO teleconnections, reemergence of SST, and stochastic atmospheric forcing (Newman
et al., 2003; 2016; Qiu et al., 2007; Schneider and Cornuelle, 2005; and Vimont, 2005).
B. Kuroshio sea surface height
In the following, we examine the Kuroshio current using mrDMD. For this purpose, we use daily Aviso SSH data. The mrDMD power frequency–time plot (Fig. 9) shows that again, the maximum power is
contained in the first level. However, the third, fifth, and sixth levels contain a sizable amount of power. The power of the first level is associated with the mean state (Fig. 10) as mrDMD(1,1,1)
corresponds to the mean state [compare Figs. 10(a) and 10(c)]. The mrDMD(1,1,2) mode projects onto the linear trend [Fig. 10(b)] for most of the area with the exception of the southern area of our
chosen box, though with the opposite sign. The pattern correlation is positive and has a trend toward zero [Fig. 10(f)]. This suggests that mrDMD(1,1,2) is a damped mode, which is also indicated by
its positive real eigenvalue with modulus smaller than 1. The pattern itself represents weakening of the central SSH gradient of the Kuroshio and has a positive anomaly at the location of the
Kuroshio large meander, southeast of Japan.
To focus on specific high amplitude events in the frequency plot of Fig. 9, we display the relevant mrDMDs of the third level in Fig. 11. The time scales of this level correspond to about 7.5 years.
By far the largest and most dominant segment at this level is the second segment (04.07.1999–03.01.2006), while all other segments are rather inconspicuous in the power spectrum. mrDMD
(3,2,1) corresponds to the local mean state for that time segment. While mrDMD(3,2,1) of this time period is very similar to the overall mrDMD(1,1,1) in Fig. 10, one noteworthy aspect is the
variation in the pattern correlations [Fig. 11(b)]. Most of the time, the correlations fluctuate between 0.955 and 0.96. There are two excursions to values around 0.94, corresponding to the
years 1999 and 2001. These years were characterized by an exceptionally meandering current with a pair of large persistent eddies off the coast of Japan (see Fig. 2 in Qiu and Chen, 2005), i.e., a
northern warm core eddy and a southern cold core eddy. If we now take higher modes into consideration, these exceptional dynamics are confirmed [Figs. 11(c)–11(h)]. The modes mrDMD(3,2,2) and mrDMD
(3,2,4) have complex eigenvalues and are, thus, propagating patterns with periods of about 986 and 874 days. Correspondingly, mrDMD(3,2,3) and mrDMD(3,2,5) are the complex conjugates of mrDMD(3,2,2)
and mrDMD(3,2,4), respectively. All these modes highlight a propagating large meander around 30–35$°$N, which is characteristic for this time period. Furthermore, modes 2 and 3 propagate the signal
of the eddy pair between 32–37$°$N and 140–145$°$E, which is such a dominating feature for both 1999 and 2001 (see Fig. 2 of Qiu and Chen, 2005). A part of this signal is also visible in modes 4 and
5, although the dominant role of these modes seems to lie in the general meandering of the current starting from this dipole eddy perturbation. To support this interpretation, the corresponding
pattern correlations have been computed by projecting the mrDMD patterns onto bandpass filtered SSH fields where the bandpass filter frequencies correspond to the mrDMD frequencies associated with
the respective third level modes, which correspond here to periods between 874 and 986 days. One important observation is that the correlations of the real and imaginary components remain shifted by 90$°$ relative to each other for the first 1.5 to two cycles between positive and negative correlations, which corresponds to the period up until the end of 2001. After that, they become increasingly mixed and
also damped with respect to correlation amplitude. This suggests that these specific dynamical modes only play an important propagating role during the first years of this time period, coinciding
with the years of exceptional large-scale and persistent eddy and meandering activity of the Kuroshio.
To conclude the mrDMD analysis of the Kuroshio SSH, we would like to point out that this diagnostic does not necessarily distinguish between the stable and unstable years of the Kuroshio (as
discussed by Qiu and Chen, 2005). Instead, our results suggest that the mrDMD analysis discriminates between years, which are dominated by propagating large-scale anomalies and those years where the
conditions are either more persistent (very long time scales) or shaped by (short lived) chaotic behavior on temporal and spatial scales that do not correspond to the respective mrDMD level. A
potential drawback of mrDMD here is the strict, non-overlapping decomposition of the total time period, which can lead to specific events being split between different segments. The consequence may
be that the mrDMD does not pick up those events at certain levels but may hint at them at higher levels when the data are further subdivided. One solution for this potential drawback would be to use
an overlapping windowing approach. However, this would be computationally much more expensive.
C. Gulf Stream sea surface height
Next, we examine the Gulf Stream SSH. The multiresolution DMD power spectrum is displayed in Fig. 12. Both the frequency-time and time averaged frequency plots show that the lowest frequencies
dominate the spectrum. This is mainly due to the mean state, which is captured by mrDMD(1,1,1) [compare Figs. 13(a) and 13(b)]. The climatological mean state and mrDMD(1,1,1) are very similar so that
most of the low-frequency information is associated with the mean. As the eigenvalue of mrDMD(1,1,1) is 1.0, it represents a temporally neutral mode.
In order to have a closer look at the structure of the associated mrDMD patterns, we also compute pattern correlations between the respective mrDMD pattern and the SSH fields. Figure 13 shows these
time lag correlations for mrDMD(1,1,1) of level 1. The pattern correlation remains between 0.94 and 0.98 over the whole period. For mrDMD(1,1,2), the pattern correlation shows, in absolute
terms, a decreasing trend for the first 20 years before it is increasing to large absolute values again. mrDMD(1,1,2) may show an imprint of long time-scale changes of the Gulf Stream due to climate
change signals and low-frequency changes in the Gulf Stream intensity and stability. Even though it has a robust correlation of around 0.3, the features in the pattern are rather small scale with
some slight imprint of an emphasized north–south gradient along the Gulf Stream path. It is, therefore, a mixture of eddy driven and large-scale changes in the structure, intensity, and position of
the Gulf Stream.
Similarly to the mrDMD of the Kuroshio region in Fig. 9, the Gulf Stream SSH also exhibits isolated large amplitude events at low frequencies (around 0.0008 cycles per day, i.e., periods of about 3.5 years), although
these occur at higher frequencies than the level 3 mrDMDs described in Sec. III B for the Kuroshio, which occurred at periods of about 7 years. These events are associated with a strong meandering of
the Gulf Stream (not shown). The next interesting mrDMD occurs at the sixth level during the period 20.04.2000–08.02.2001 (Fig. 14). mrDMD(6,9,1) again corresponds to the mean state for this time
window. mrDMD(6,9,2) is again a standing pattern since it has a purely real eigenvalue. The pattern correlation of this mode reveals that this mode shifts the Gulf Stream from south to north and back
to south over the period 20.04.2000–08.02.2001 with an emphasis between around 48$°$W and 72$°$W. The pattern has relatively close resemblance to the first EOF mode discussed by Pérez-Hernández and
Joyce, (2014) (their Fig. 3). The strengthening and weakening of the correlations with mrDMD(6,9,2) coincide with a noticeable northward shift of the Gulf Stream in October 2000, also discussed by
Pérez-Hernández and Joyce, (2014) (their Fig. 4). mrDMD(6,9,3) and mrDMD(6,9,5) correspond to eddy modes with periods of about 200 and 162 days, respectively. The pattern correlations confirm that
these mrDMDs are propagating eddy fields as the correlations have a periodic structure and are shifted by about 90$°$ between real and imaginary components (Fig. 14). The mrDMD of the sixth level,
therefore, highlights this dynamic shifting event by a peak in the DMD power (Fig. 12) for the ninth segment and then splits the event into a mean component, a Gulf Stream shift, and further eddy
dynamics with a propagating signature.
According to Pérez-Hernández and Joyce, (2014), other extreme northward shifts of the Gulf Stream occurred in July 1995 and April 2012. These events also show high DMD power. As already noted above,
the ability of DMD to capture such events depends on the length of the segments (i.e., the frequency), the dynamical nature of these shifts, which may be picked up by low DMD modes and whether these
events are fully captured within one segment or whether they may be split between two consecutive segments. One, therefore, needs to look at individual segments to investigate why the power of that
segment is comparatively large or low. Also, pattern correlations are a useful diagnostic for the interpretation of the DMDs.
In this study, we have demonstrated that the physics-consistent machine learning method multi-resolution dynamic mode decomposition is able to extract dynamically relevant patterns of ocean
variability. We applied mrDMD to sea surface temperatures and sea surface height fields. We find that mrDMD is able to systematically decompose SST and SSH fields into meaningful patterns on
different time scales. This allows for a systematic analysis of multi-scale systems and the climate system, in particular. We show that mrDMD is able to identify annual cycle modes, which can vary
from year to year, without supervision. This is an important aspect in the analysis of climate dynamics since a time-varying annual cycle can provide an alternative basic state for the study of
climate anomalies (Wu et al., 2008 and Pezzulli et al., 2005). When using a time fixed annual cycle, all changes, e.g., due to global warming, will be part of the anomalies. However, changes in the
annual cycle can have pronounced impacts on the climate system and, therefore, our understanding of it.
We also show that mrDMD is able to extract actual ENSO events from the SST data set without supervision. ENSO is one of the most important modes of climate variability and occurs on a broad range of
time scales (Timmermann et al., 2018). This makes mrDMD a very attractive method for the analysis of multi-scale systems since no a priori filtering is necessary. mrDMD also seamlessly provides a
decomposition into a local basic state and eddy fields allowing state-dependent eddy-mean flow interaction studies. Here, we used non-overlapping time windows for our mrDMD analysis, putting some
constraints on identifying specific events (such as ENSO events) if they are spread across two windows. However, this can be relaxed to an overlapping windows analysis. mrDMD is also relatively
computationally inexpensive, making it an attractive analysis method and potentially a prediction method. For instance, Gottwald and Gugole, (2019) use DMD to identify regime transitions in the North
Atlantic region and the Southern Hemisphere. DMD also has the potential for subgrid-scale modeling as shown by Gugole and Franzke, (2019).
Considering sea surface height fields for the Kuroshio and Gulf Stream, mrDMD is capable of identifying dynamically interesting and complex events related to changes in the position and intensity of
the currents. While it can highlight these mean state changes similarly well compared to other methods, such as EOF decomposition, it also provides information about the dynamically propagating eddy
component of the flow. Such dynamically evolving components consist of a real and an imaginary DMD mode, whose correlations with the original SSH data are cyclic and are shifted by 90$°$ with respect
to each other, signifying a propagating signal. As highlighted by the weakening of the correlations, these dynamical DMD modes can also become less important as time progresses throughout a specific
time window. The detailed mrDMD decomposition of the flow allows us to investigate isolated events and associate relevant drivers to the respective modes.
C.L.E.F. was supported by the Institute for Basic Science (IBS), Republic of Korea (No. IBS-R028-D1) and the Pusan National University research grant 2021. S.J. was supported by subprojects M3 and L4
of the Collaborative Research Center TRR181 Energy Transfer in Atmosphere and Ocean funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) (Project No. 274762653).
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
Christian L. E. Franzke: Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Software (equal);
Validation (equal); Writing – original draft (equal); Writing – review & editing (equal). Federica Gugole: Conceptualization (equal); Writing – review & editing (equal). Stephan Juricke:
Conceptualization (equal); Formal analysis (equal); Funding acquisition (equal); Writing – review & editing (equal).
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
References
Brunton, S. L., Brunton, B. W., Proctor, J. L., and Kutz, J. N., "Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control," PLoS One (2016).
Clement, A. et al., "The Atlantic multidecadal oscillation without a role for ocean circulation," Science (2015).
Feldstein, S. B. and Franzke, C. L. E., "Atmospheric teleconnection patterns," in Nonlinear and Stochastic Climate Dynamics, edited by C. L. E. Franzke and T. O'Kane (Cambridge University Press, 2017), pp. 54–104.
Franzke, C. L. E. et al., "The structure of climate variability across scales," Rev. Geophys., http://dx.doi.org/10.1029/2019RG000657 (2020).
Franzke, C. L. E. and Majda, A. J., "Low-order stochastic mode reduction for a prototype atmospheric GCM," J. Atmos. Sci. (2006).
Franzke, C. L. E., Majda, A. J., and Vanden-Eijnden, E., "Low-order stochastic mode reduction for a realistic barotropic model climate," J. Atmos. Sci. (2005).
Gottwald, G. A. and Gugole, F., "Detecting regime transitions in time series using dynamic mode decomposition," J. Stat. Phys. (2019).
Gugole, F. and Franzke, C. L. E., "Spatial covariance modeling for stochastic subgrid-scale parameterizations using dynamic mode decomposition," J. Adv. Model. Earth Syst. (2019).
Hannachi, A., Patterns Identification and Data Mining in Weather and Climate (Springer, 2021).
Hasselmann, K., "PIPs and POPs: The reduction of complex dynamical systems using principal interaction and oscillation patterns," J. Geophys. Res.: Atmos., https://doi.org/10.1029/JD093iD09p11015 (1988).
Huang, B. et al., "Extended reconstructed sea surface temperature, version 5 (ERSSTv5): Upgrades, validations, and intercomparisons," J. Clim. (2017).
Jovanović, M. R., Schmid, P. J., and Nichols, J. W., "Sparsity-promoting dynamic mode decomposition," Phys. Fluids (2014).
Kang, D. and Curchitser, E. N., "Gulf Stream eddy characteristics in a high-resolution ocean model," J. Geophys. Res.: Oceans, https://doi.org/10.1002/jgrc.20318 (2013).
Knight, J. R., Folland, C. K., and Scaife, A. A., "Climate impacts of the Atlantic multidecadal oscillation," Geophys. Res. Lett., https://doi.org/10.1029/2006GL026242 (2006).
Kutz, J. N., Brunton, S. L., Brunton, B. W., and Proctor, J. L., Dynamic Mode Decomposition (Society for Industrial and Applied Mathematics, 2016a).
Kutz, J. N., Fu, X., and Brunton, S. L., "Multiresolution dynamic mode decomposition," SIAM J. Appl. Dyn. Syst. (2016b).
Lau, K.-M. and Weng, H., "Climate signal detection using wavelet transform: How to make a time series sing," Bull. Am. Meteorol. Soc. (1995).
Mann, M. E., Steinman, B. A., and Miller, S. K., "Absence of internal multidecadal and interdecadal oscillations in climate model simulations," Nat. Commun. (2020).
Mantua, N. J. and Hare, S. R., "The Pacific decadal oscillation," J. Oceanogr. (2002).
Messié, M. and Chavez, F. P., "Global modes of sea surface temperature variability in relation to regional climate indices," J. Clim. (2011).
Mezić, I., "Spectral properties of dynamical systems, model reduction and decompositions," Nonlinear Dyn. (2005).
Mezić, I., "Analysis of fluid flows via spectral properties of the Koopman operator," Annu. Rev. Fluid Mech. (2013).
Newman, M. et al., "The Pacific decadal oscillation, revisited," J. Clim. (2016).
Newman, M., Compo, G. P., and Alexander, M. A., "ENSO-forced variability of the Pacific decadal oscillation," J. Clim. (2003).
North, G. R., "Empirical orthogonal functions and normal modes," J. Atmos. Sci. (1984).
Oka, E. et al., "Decadal variability of subtropical mode water subduction and its impact on biogeochemistry," J. Oceanogr. (2015).
Penland, C. and Magorian, T., "Prediction of Niño 3 sea surface temperatures using linear inverse modeling," J. Clim. (1993).
Pérez-Hernández, M. D. and Joyce, T. M., "Two modes of Gulf Stream variability revealed in the last two decades of satellite altimeter data," J. Phys. Oceanogr. (2014).
Pezzulli, S., Stephenson, D. B., and Hannachi, A., "The variability of seasonality," J. Clim. (2005).
Qiu, B. and Chen, S., "Variability of the Kuroshio extension jet, recirculation gyre, and mesoscale eddies on decadal time scales," J. Phys. Oceanogr. (2005).
Qiu, B., Schneider, N., and Chen, S., "Coupled decadal variability in the North Pacific: An observationally constrained idealized model," J. Clim. (2007).
Rowley, C. W., Mezić, I., Bagheri, S., Schlatter, P., and Henningson, D. S., "Spectral analysis of nonlinear flows," J. Fluid Mech. (2009).
Saraceno, M., Strub, P. T., and Kosro, P. M., "Estimates of sea surface height and near-surface alongshore coastal currents from combinations of altimeters and tide gauges," J. Geophys. Res.: Oceans, https://doi.org/10.1029/2008JC004756 (2008).
Schneider, N. and Cornuelle, B. D., "The forcing of the Pacific decadal oscillation," J. Clim. (2005).
Seidov, D. et al., "Resilience of the Gulf Stream path on decadal and longer timescales," Sci. Rep. (2019).
Timmermann, A. et al., "El Niño–Southern Oscillation complexity," Nature (2018).
Ting, M. et al., "Robust features of Atlantic multi-decadal variability and its climate impacts," Geophys. Res. Lett., https://doi.org/10.1029/2011GL048712 (2011).
Tu, J. H., Rowley, C. W., Luchtenburg, D. M., Brunton, S. L., and Kutz, J. N., "On dynamic mode decomposition: Theory and applications," J. Comput. Dyn. (2014).
Vimont, D. J., "The contribution of the interannual ENSO cycle to the spatial pattern of decadal ENSO-like variability," J. Clim. (2005).
von Storch, H. and Zwiers, F. W., Statistical Analysis in Climate Research (Cambridge University Press, 2003).
Wu, Z., Schneider, E. K., Kirtman, B. P., Sarachik, E. S., Huang, N. E., and Tucker, C. J., "The modulated annual cycle: An alternative reference frame for climate anomalies," Clim. Dyn. (2008).
moving at the speed of money
“Man cannot change a single law of nature, but can put himself into such relations to natural laws that he can profit by them.” – Edwin G. Conklin
Natural sciences have fundamental laws, those rules upon which hypotheses are built to form a framework of how nature fits together. Every so often new findings requiring new frameworks come along
which allow us to see the world more clearly for what it is. These frameworks come and go over time but their underlying laws tend to remain constant.
While the payments industry has plenty of generally accepted frameworks and knowledge, the only laws cited – as far as I can tell; please enlighten me if you know of any – are ones lifted from economics. That's not to say supply and demand curves aren't relevant, but payments as a field of its own glaringly lacks any theorems. This immediately raises several questions:
1. Are there any fundamental laws of payments?
2. If so, what are they?
3. What can these laws tell us about how payments are evolving?
seeking truth
Payments receive little attention from academia, though the field is clearly deserving of much more. Payments have existed since the birth of civilization and are a startling constant of human society and
behavior. They are the tangible exchange of intangibility – emotion, worth, trust, and time are bundled together in a single interaction between individuals, groups, and societies.
Their enduring structure suggests an underlying and consistent logic. From ancient bartering systems to modern trade finance, payments throughout history and across cultures have shared the same
characteristics – originators, recipients, amounts, trust, etc. Advances in technology and society have done little to eliminate these components. With this factually based understanding, it is safe
to declare there are underlying laws to payments, and it is worthwhile to attempt to solve for them.
a stab in the dark
We begin our derivation with defining the phenomena described. This appears simple for payments as there are so few obvious components:
• Velocity: The length of time between payment initiation and receipt
• Value: The amount transferred
• Risk: The uncertainty between the originator and recipient
With the pieces laid out, assembly is straightforward. To start, the velocity of a transaction corresponds directly with its value – the higher the speed, the lower the payment:
Velocity = 1 / Value
Why? High values drive slower transactions due to their increased risk. Factoring risk in directly, a formula emerges, revealing the speed of a transaction within a given channel of exchange:
Velocity = Risk / Value
The higher the risk or value, the longer the payment duration. This leads to the permutations:
Risk = Value * Velocity
Value = Risk / Velocity
These conceptually check out. Risk is influenced by value and speed – higher speeds and higher values translate into higher risk. For its part, higher value correlates with higher risk and slower payments.
Notwithstanding a deeper peer review, we have our first step towards establishing the laws of payments.
digging deeper
To further understand this proposed law it is worth dissecting its most nebulous variable – Risk. Risk represents a measurement of uncertainty – what is the likelihood something will go wrong with
the payment, causing it to be misdirected, appropriated, or halted? Simply put:
Risk = 1 - Certainty
Wherein a sure thing (a Certainty of 1) would result in no Risk. Although Risk is by definition wholly unknowable and unverifiable, some of its major components are apparent:
• Jurisdictions: The number of regulatory jurisdictions crossed
• Parties: The number of parties involved in a payment (e.g. intermediaries)
• Timing: The time between the payment and the transfer of the good / service (positive for pre, negative for post)
• Vulnerability: The inherent vulnerability of a payment’s medium to be compromised
• Relationship: The unfamiliarity between the originator and the recipient
• Frequency: How often, if at all, payments have occurred between the parties
Setting them equal to Risk and playing with the relationships reveals a potential formula. We start with Jurisdictions. The more regulatory jurisdictions, the higher the risk. (Note: I’m pretty
distant from my math lessons at this point, so forgive my sloppy notation here)
Risk = Jurisdictions
Now we add parties. While conceptually payments are made between an originator and recipient, in practice modern payments are facilitated by a number of brokers and intermediaries. For instance, a
typical credit card payment has at least five involved participants, if not more. The higher the number of parties involved, the higher the risk of something going wrong. We spread this risk over the
payment by adding it as a multiplier to the right-hand side:
Risk = Jurisdictions * Parties
Next is Timing. The greater the time by which a payment precedes the service or receipt of a good, the higher the risk; the longer it follows, the lower the risk. To capture this timing impact in both directions we add Timing to the right-hand side rather than insert it as a multiplier (i.e. as a multiplier, a negative value would result in negative risk every time).
Risk = Parties * Jurisdictions + Timing
Now the payment channel itself comes into play, represented by the variable Vulnerability. Vulnerability's influence over risk varies with the channel's type, input mode, current state and other
characteristics. For instance, an in-person payment over a card network may have greater risk than a bank-enabled wire. In aggregate these features total to a constant risk modifier spread across the
entire payment, and as such, we add Vulnerability as a multiplier to the entire right-hand side.
Risk = Vulnerability * (Parties * Jurisdictions + Timing)
Now comes the Relationship between the originator and recipient. This is measured in the unfamiliarity between the parties - a strong familiarity encourages good behavior and promises a known avenue for resolving any issues, while low familiarity suggests the opposite. Essentially a stand-in for the trust between the parties, we add this variable as an exponent to the Parties variable. Small changes in trust have a large influence on the overall risk (with lower familiarity resulting in higher Relationship values):
Risk = Vulnerability * ((Parties ^ Relationship) * Jurisdictions + Timing)
Finally comes Frequency, representing the history of payments between the originating and receiving parties. The greater the history of payments between parties, the less risk in something going
wrong with a subsequent payment. We add Frequency as a divisor to the Relationship variable, because relationships are made closer by interaction. With this, we have a full (for the time being)
equation for Risk:
Risk = Vulnerability * ((Parties ^ (Relationship / Frequency)) * Jurisdictions + Timing)
Substituting the right-hand side into the original equation allows us to solve for each variable in turn:
Velocity = Vulnerability * ((Parties ^ (Relationship / Frequency)) * Jurisdictions + Timing) / Value
Value = Vulnerability * ((Parties ^ (Relationship / Frequency)) * Jurisdictions + Timing) / Velocity
Jurisdictions = ((Velocity * Value / Vulnerability) – Timing) / (Parties ^ (Relationship / Frequency))
Parties = (((Velocity * Value / Vulnerability) – Timing) / Jurisdictions) ^ (Frequency / Relationship)
Timing = (Velocity * Value / Vulnerability) – (Parties ^ (Relationship / Frequency)) * Jurisdictions
Frequency = Relationship * Log(Parties) / Log(((Velocity * Value / Vulnerability) – Timing) / Jurisdictions)
Relationship = Frequency * Log(((Velocity * Value / Vulnerability) – Timing) / Jurisdictions) / Log(Parties)
Vulnerability = (Velocity * Value) / ((Parties ^ (Relationship / Frequency)) * Jurisdictions + Timing)
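For readers who prefer code, the relationships can be sketched in a few lines of Python. Everything below is a hypothetical illustration of the proposed formulas – the input values are invented, not calibrated measures from any real payments channel.

```python
def payment_risk(vulnerability, parties, relationship, frequency,
                 jurisdictions, timing):
    # Risk = Vulnerability * ((Parties ^ (Relationship / Frequency)) * Jurisdictions + Timing)
    return vulnerability * ((parties ** (relationship / frequency)) * jurisdictions + timing)

def payment_velocity(risk, value):
    # First proposed law: Velocity = Risk / Value
    return risk / value

# Hypothetical cross-border card payment: five parties, two jurisdictions,
# paid one unit of time before the goods arrive.
risk = payment_risk(vulnerability=1.5, parties=5, relationship=2.0,
                    frequency=4.0, jurisdictions=2, timing=1.0)
print(round(risk, 2))                                  # risk score for this channel
print(round(payment_velocity(risk, value=100.0), 4))   # transaction duration per the model
```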
For the most part, these proposed formulas are irrelevant – nobody will ever need to solve for these variables. Yet their fundamental relationships with one another are guiding: the more parties to a
transaction suggest less history, a pre-service transaction is more likely when the value is lower, etc. Their worth is in that they tell us yes – there is a logical framework which underpins the
structure of a payment, and this framework requires balance in order to function.
breaking the model
The trouble with proofs outside of mathematics is none of them can ever be proved factually correct. Mathematic laws only require logical consistency, whereas natural science laws must instead be
constantly tested against observations to prove their continued relevance. Proving, refining, and likely rewriting the above formulas must involve applying ongoing observations against them and
seeing what happens.
An admitted lack of time necessitates postponing an in-depth check for this essay. Instead, let us quickly gut-check the logic: the more parties and jurisdictions involved, the more risk. Advance
payments increase risk, while post-payments reduce it. The relationship has profound influence – unfamiliarity between the originator and recipient can exponentially increase risk. Conversely, the
relationship’s uncertainty is mitigated by repeated payments between the parties.
Generally, the logic makes sense at this time and we look forward to revisiting and revising this in the future.
holding steady
It is easy to assume this equilibrium will soon receive a rebalance. A globalized, digital economy demands payments at their extremes: transactions that can be made near instantaneously, involve
dozens of parties, and cross multiple jurisdictions. New enabling technologies and payment models continue to emerge at a rapid clip, powered by the likes of blockchain, novel messaging standards
(e.g. ISO20022) and cloud computing. Could these fundamental relationships break when pushed to such limits?
A look back at the proposed laws in light of these and other modern changes makes it clear the logical underpinnings of payments will not be upset anytime soon. Velocity has increased, as has risk;
value has diminished while velocity has increased; vulnerability spikes for new channels and is mitigated over time with refinements; and so on. Much like how the laws of motion apply the same to a sparrow as to a fighter jet, the inherent relationships between payment characteristics appear constant.
a payment in motion…
The constancy of the framework provides a powerful insight – payments will continue to evolve in pushing these variables to their extremes, and mitigations will emerge in retaining the overall
balance of their relationships. A strange, fabulous future begins to emerge through these proposed laws:
• Liquid Money: As the velocity of transactions continues to approach real-time, values will continue to decrease. Smaller and smaller payments will eventually force a ‘phase change’ from discrete,
individual transactions to a continuous flow. Like the transition of ice to water, payments will grow more ‘fluid-like’ in order to mitigate the growing risk driven by the sophistication of the
digital realm.
• Low Value Crime: Instant, irrevocable payments will be highly vulnerable to malevolent actors. This will be mitigated by lower values paired with higher frequency. It is easy to imagine
larger payments being preceded by a series of low value, authentication transactions.
• Point to Point: Interconnectedness will enable direct payments in different jurisdictions, eliminating the need for go-betweens and reducing the number of involved parties.
• Just in Time Payments: Risk will decrease as advances shrink the absolute time between provision of a service or good and the payment itself. It is possible to imagine “per use” or “per service”
payments instead of the more encompassing payments we are familiar with today.
Science is so fascinating not because of the natural laws themselves, but in how the universe wraps itself around them. Understanding the nature of payments by codifying their fundamental rules
allows us to contextualize their characteristics and better derive innovations in the field. What new laws are yet to be discovered in the land of payments? 
I welcome your feedback. Please don’t hesitate to reach out through the contact section of the website if you would like to discuss, comment, or have any suggested edits.
A point of the complex projective plane may be described in terms of homogeneous coordinates, being a triple of complex numbers (x : y : z), where two triples describe the same point of the plane
when the coordinates of one triple are the same as those of the other aside from being multiplied by the same nonzero factor. In this system, the points at infinity may be chosen as those whose z
-coordinate is zero. The two circular points at infinity are two of these, usually taken to be those with homogeneous coordinates
(1 : i : 0) and (1 : −i : 0).
Trilinear coordinates
Let A, B, C be the measures of the vertex angles of the reference triangle ABC. Then the trilinear coordinates of the circular points at infinity in the plane of the reference triangle are as given below:
$-1 : \cos C - i\sin C : \cos B + i\sin B, \qquad -1 : \cos C + i\sin C : \cos B - i\sin B$
or, equivalently,
$\cos C + i\sin C : -1 : \cos A - i\sin A, \qquad \cos C - i\sin C : -1 : \cos A + i\sin A$
or, again equivalently,
$\cos B + i\sin B : \cos A - i\sin A : -1, \qquad \cos B - i\sin B : \cos A + i\sin A : -1,$
where $i = \sqrt{-1}$.[1]
Complexified circles
A real circle, defined by its center point $(x_0, y_0)$ and radius $r$ (all three of which are real numbers) may be described as the set of real solutions to the equation
$(x - x_0)^2 + (y - y_0)^2 = r^2.$
Converting this into a homogeneous equation and taking the set of all complex-number solutions gives the complexification of the circle. The two circular points have their name because they lie on
the complexification of every real circle. More generally, both points satisfy the homogeneous equations of the type
$Ax^2 + Ay^2 + 2B_1xz + 2B_2yz - Cz^2 = 0.$
The case where the coefficients are all real gives the equation of a general circle (of the real projective plane). In general, an algebraic curve that passes through these two points is called circular.
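As a quick sanity check, here is a small sketch using SymPy (with arbitrary symbolic coefficients, assuming the general conic form above) that verifies the circular point (1 : i : 0) satisfies the equation identically; the computation for (1 : −i : 0) is analogous.

```python
import sympy as sp

A, B1, B2, C = sp.symbols('A B1 B2 C')
x, y, z = 1, sp.I, 0  # homogeneous coordinates of a circular point at infinity

conic = A*x**2 + A*y**2 + 2*B1*x*z + 2*B2*y*z - C*z**2
print(sp.simplify(conic))  # prints 0: A*1 + A*(i^2) = A - A, for any coefficients
```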
Additional properties
The circular points at infinity are the points at infinity of the isotropic lines.[2] They are invariant under translations and rotations of the plane.
The concept of angle can be defined using the circular points, natural logarithm and cross-ratio:[3]
The angle between two lines is a certain multiple of the logarithm of the cross-ratio of the pencil formed by the two lines and the lines joining their intersection to the circular points.
Sommerville configures two lines on the origin as $u : y = x\tan\theta, \quad u' : y = x\tan\theta'.$ Denoting the circular points as ω and ω′, he obtains the cross ratio
$(uu', \omega\omega') = \frac{\tan\theta - i}{\tan\theta + i} \div \frac{\tan\theta' - i}{\tan\theta' + i},$ so that
$\phi = \theta' - \theta = \tfrac{i}{2}\log(uu', \omega\omega').$
Imaginary transformation
The transformation $x' = x, \quad y' = iy, \quad z' = z$ is called the imaginary transformation by Felix Klein. He notes that the equation $x^2 + y^2 = 0$ becomes $x'^2 - y'^2 = 0$ under the transformation, which
changes the imaginary circular points x : y = ± i, z = 0, into the real infinitely distant points x’ : y’ = ± 1, z = 0, which are the points at infinity in the two directions that make an angle of 45° with the axes. Thus all circles are transformed into conics which go through these two real infinitely distant points, i.e. into equilateral hyperbolas whose asymptotes make an angle of 45° with the axes.[4]
• Pierre Samuel (1988) Projective Geometry, Springer, section 1.6;
• Semple and Kneebone (1952) Algebraic projective geometry, Oxford, section II-8.
Efficient reductions among lattice problems
We give various deterministic polynomial time reductions among approximation problems on point lattices. Our reductions are both efficient and robust, in the sense that they preserve the rank of the lattice and the approximation factor achieved. Our main result shows that for any g ≥ 1, approximating all the successive minima of a lattice (and, in particular, approximately solving the Shortest Independent Vectors Problem, SIVP_g) within a factor g reduces under deterministic polynomial time rank-preserving reductions to approximating the Closest Vector Problem (CVP) within the same factor g. This solves an open problem posed by Blömer (ICALP 2000). As an application, we obtain faster algorithms for the exact solution of SIVP that run in time n! · s^O(1) (where n is the rank of the lattice, and s the size of the input), improving on the best previously known solution of Blömer (ICALP 2000) by a factor 3^n. We also show that SIVP, CVP and many other lattice problems are equivalent in their exact version under deterministic polynomial time rank-preserving reductions.
Map from ??????? ?????? II to ????????
To find the map for the driving distance from ??????? ?????? II to ????????, please enter the source and destination and then select the driving mode. Depending on the vehicle you choose, you can also calculate the amount of CO2 emissions from your vehicle and assess the environmental impact. Along with it, estimate your trip cost with our Fuel Price Calculator!
20+ Questions to Test your Skills on Logistic Regression
This article was published as a part of the Data Science Blogathon
Logistic Regression, a statistical model, is a very popular and easy-to-understand algorithm that is mainly used to find out the probability of an outcome.
Therefore it becomes necessary for every aspiring Data Scientist and Machine Learning Engineer to have a good knowledge of Logistic Regression.
In this article, we will discuss the most important questions on Logistic Regression which is helpful to get you a clear understanding of the techniques, and also for Data Science Interviews, which
covers its very fundamental level to complex concepts.
Let’s get started,
1. What do you mean by the Logistic Regression?
It’s a classification algorithm that is used where the target variable is of categorical nature. The main objective behind Logistic Regression is to determine the relationship between features and
the probability of a particular outcome.
For Example, when we need to predict whether a student passes or fails in an exam given the number of hours spent studying as a feature, the target variable comprises two values i.e. pass and fail.
Therefore, we can solve classification problem statements which is a supervised machine learning technique using Logistic Regression.
2. What are the different types of Logistic Regression?
Three different types of Logistic Regression are as follows:
1. Binary Logistic Regression: In this, the target variable has only two possible outcomes.
For Example, 0 and 1, or pass and fail or true and false.
2. Multinomial Logistic Regression: In this, the target variable can have three or more possible values without any order.
For Example, Predicting preference of food i.e. Veg, Non-Veg, Vegan.
3. Ordinal Logistic Regression: In this, the target variable can have three or more values with ordering.
For Example, Movie rating from 1 to 5.
3. Explain the intuition behind Logistic Regression in detail.
By using the training dataset, we can identify the independent variables (x) and the dependent variable (y), so if we can determine the parameters w (the normal vector) and b (the y-intercept), then we can easily find a decision boundary that can almost separate both the classes in a linear fashion.
In order to train a Logistic Regression model, we just need w and b to find a line (in 2-D), plane (in 3-D), or hyperplane (in more than 3 dimensions) that can separate both classes of points as cleanly as possible, so that when the model encounters any new unseen data point, it can easily classify which class the point belongs to.
For Example, let us consider that we have only two features, x_1 and x_2.
Let's take any of the +ve class points and find the shortest distance from that point to the plane. Here, the shortest distance is computed using:
d_i = w^T * x_i / ||w||
If the weight vector is a unit vector, i.e., ||w|| = 1, then:
d_i = w^T * x_i
Since w and x_i are on the same side of the decision boundary, the distance will be +ve. Now for a negative point, we have to compute d_j = w^T * x_j. For point x_j, the distance will be -ve since this point is on the opposite side of w.
Thus we can conclude that points in the same direction as w are considered +ve points and points in the opposite direction of w are considered -ve points.
Now, we can easily classify the unseen data points as -ve and +ve points. If the value of w^T * x_i > 0, then y = +1, and if the value of w^T * x_i < 0, then y = -1.
• If y_i = +1 and w^T * x_i > 0, then the classifier classifies it as a +ve point. This implies that if y_i * w^T * x_i > 0, then it is a correctly classified point, because multiplying two +ve numbers always gives a result greater than 0.
• If y_i = -1 and w^T * x_i < 0, then the classifier classifies it as a -ve point. This implies that if y_i * w^T * x_i > 0, then it is a correctly classified point, because multiplying two -ve numbers always gives a result greater than zero. So, for both +ve and -ve points the value of y_i * w^T * x_i is greater than 0. Therefore, the model classifies the point x_i correctly.
• If y_i = +1 and w^T * x_i < 0, i.e., y_i is a +ve point but the classifier says that it is -ve, then we will get a -ve value. This means that the point is classified as -ve but the actual class label is +ve, so it is a misclassified point.
• If y_i = -1 and w^T * x_i > 0, this means the actual class label is -ve but the point is classified as +ve, so it is a misclassified point (y_i * w^T * x_i < 0).
Now, by observing all the cases above, our objective is that the classifier minimizes the misclassification error, i.e., we want the values of y_i * w^T * x_i to be greater than 0.
In our problem, x_i and y_i are fixed because they come from the dataset.
As we change the values of the parameters w and b, the sum below will change, and we want to find the w and b that maximize it. To calculate the parameters w and b, we can use the Gradient Descent optimizer. Therefore, the optimization function for logistic regression is:
(w*, b*) = argmax over w, b of Σ_i y_i * (w^T * x_i + b)
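To make this concrete, here is a minimal NumPy sketch of fitting w and b by gradient descent. Note that it optimizes the standard cross-entropy loss with 0/1 labels rather than the raw signed sum above, and the toy data, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, n_iters=1000):
    """Gradient descent on the mean cross-entropy loss; y holds 0/1 labels."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(n_iters):
        p = sigmoid(X @ w + b)         # predicted probabilities
        w -= lr * (X.T @ (p - y) / n)  # gradient of the loss w.r.t. w
        b -= lr * np.mean(p - y)       # gradient w.r.t. b
    return w, b

# Toy, linearly separable data with two features
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = train_logistic(X, y)
print(sigmoid(X @ w + b).round(2))  # probabilities close to the true labels
```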
4. What are the odds?
Odds are defined as the ratio of the probability of an event occurring to the probability of the event not occurring.
For Example, let’s assume that the probability of winning a game is 0.02. Then, the probability of not winning is 1- 0.02 = 0.98.
• The odds of winning the game = (Probability of winning) / (Probability of not winning)
• The odds of winning the game = 0.02 / 0.98
• The odds of winning the game are 1 to 49, and the odds of not winning the game are 49 to 1.
5. What factors can attribute to the popularity of Logistic Regression?
Logistic Regression is a popular algorithm as it converts the values of the log of odds which can range from -inf to +inf to a range between 0 and 1.
Since logistic functions output the probability of occurrence of an event, they can be applied to many real-life scenarios therefore these models are very popular.
6. Is the decision boundary Linear or Non-linear in the case of a Logistic Regression model?
The decision boundary is a line or a plane that separates the target variables into different classes that can be either linear or nonlinear. In the case of a Logistic Regression model, the decision
boundary is a straight line.
Logistic Regression model formula = α + β1X1 + β2X2 + … + βkXk. This clearly represents a straight line.
It is suitable in cases where a straight line is able to separate the different classes. However, in cases where a straight line does not suffice then nonlinear algorithms are used to achieve better
7. What is the Impact of Outliers on Logistic Regression?
The estimates of Logistic Regression are sensitive to unusual observations such as outliers, high-leverage points, and influential observations. Therefore, to reduce the influence of outliers, a sigmoid function is used in Logistic Regression.
8. What is the difference between the outputs of the Logistic model and the Logistic function?
The Logistic model outputs the logits, i.e. log-odds; whereas the Logistic function outputs the probabilities.
Logistic model = α + β1X1 + β2X2 + … + βkXk. Therefore, the output of the Logistic model will be logits.
Logistic function = f(z) = 1 / (1 + e^-(α + β1X1 + β2X2 + … + βkXk)). Therefore, the output of the Logistic function will be the probabilities.
9. How do we handle categorical variables in Logistic Regression?
The inputs given to a Logistic Regression model need to be numeric. The algorithm cannot handle categorical variables directly. So, we need to convert the categorical data into a numerical format
that is suitable for the algorithm to process.
Each level of the categorical variable will be assigned a unique numeric value also known as a dummy variable. These dummy variables are handled by the Logistic Regression model in the same manner as
any other numeric value.
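As an illustration, here is a small pandas sketch of dummy-variable creation; the column names and values are invented for the example.

```python
import pandas as pd

df = pd.DataFrame({"city": ["Delhi", "Mumbai", "Delhi", "Chennai"],
                   "salary": [40, 55, 48, 60]})

# One 0/1 dummy column per level; drop_first avoids perfect multicollinearity
X = pd.get_dummies(df, columns=["city"], drop_first=True)
print(X)  # salary plus numeric city_Delhi and city_Mumbai columns
```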
10. Which algorithm is better in the case of outliers present in the dataset i.e., Logistic Regression or SVM?
SVM (Support Vector Machines) handles the outliers in a better manner than the Logistic Regression.
Logistic Regression: Logistic Regression will identify a linear boundary if it exists to accommodate the outliers. To accommodate the outliers, it will shift the linear boundary.
SVM: SVM is insensitive to individual samples. So, to accommodate an outlier, there will not be a major shift in the linear boundary. SVM comes with inbuilt complexity controls that take care of overfitting; this is not true in the case of Logistic Regression.
11. What are the assumptions made in Logistic Regression?
Some of the assumptions of Logistic Regression are as follows:
1. It assumes that there is minimal or no multicollinearity among the independent variables, i.e., the predictors are not correlated.
2. There should be a linear relationship between the logit of the outcome and each predictor variable. The logit function is described as logit(p) = log(p/(1-p)), where p is the probability of the
target outcome.
3. To predict properly, it usually requires a large sample size.
4. Binary Logistic Regression (i.e., with two classes) assumes that the target variable is binary, and ordered Logistic Regression requires the target variable to be ordered.
For example, Too Little, About Right, Too Much.
5. It assumes there is no dependency between the observations.
12. Can we solve the multiclass classification problems using Logistic Regression? If Yes then How?
Yes, in order to deal with multiclass classification using Logistic Regression, the most famous method is known as the one-vs-all approach. In this approach, a number of models are trained, which is equal to the number of classes. These models work in a specific way.
For Example, the first model classifies the datapoint depending on whether it belongs to class 1 or some other class(not class 1); the second model classifies the datapoint into class 2 or some other
class(not class 2) and so-on for all other classes.
So, in this manner, each data point can be checked over all the classes.
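Here is a minimal sketch of the one-vs-all idea; fit_binary and predict_proba are hypothetical placeholders for any binary logistic fit and scoring routine (for instance, the gradient-descent sketch shown earlier).

```python
import numpy as np

def one_vs_rest_fit(X, y, fit_binary):
    # Train one binary classifier per class: class k vs. everything else
    return {k: fit_binary(X, (y == k).astype(float)) for k in np.unique(y)}

def one_vs_rest_predict(X, models, predict_proba):
    # Score every class's model and pick the class with the highest probability
    classes = sorted(models)
    scores = np.column_stack([predict_proba(X, models[k]) for k in classes])
    return np.array(classes)[np.argmax(scores, axis=1)]
```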
13. How can we express the probability of a Logistic Regression model as conditional probability?
We define probability
P(Discrete value of Target variable | X1, X2, X3, …, Xk)
as the probability of the target variable that takes up a discrete value (either 0 or 1 in the case of binary classification problems) when the values of independent variables are given.
For Example, the probability that an employee will leave the company (attrition, the target variable) given his attributes such as his age, salary, etc.
14. Discuss the space complexity of Logistic Regression.
During training:
We need to store four things in memory: x, y, w, and b while training a Logistic Regression model.
• Storing b is just 1 step, i.e., an O(1) operation, since b is a constant.
• x and y are two matrices of dimension (n x d) and (n x 1) respectively. So, storing these two matrices takes O(nd + n) steps.
• Lastly, w is a vector of size d. Storing it in memory takes O(d) steps.
Therefore, the space complexity of Logistic Regression while training is O(nd + n + d).
During Runtime or Testing: After training the model, all we need to keep in memory is w. We just need to perform w^T * x_i to classify the points.
Hence, the space complexity during runtime is in the order of d, i.e, O(d).
15. Discuss the Test or Runtime complexity of Logistic Regression.
At the end of training, we test our model on unseen data and calculate its accuracy. At that point, knowing the runtime complexity is very important. After training Logistic Regression, we get the parameters w and b.
To classify any new point, we just have to perform the operation w^T * x_i. If w^T * x_i > 0, the point is +ve, and if w^T * x_i < 0, the point is -ve. As w is a vector of size d, performing the operation w^T * x_i takes O(d) steps, as discussed earlier.
Therefore, the testing complexity of the Logistic Regression is O(d).
Hence, Logistic Regression is very good for low latency applications, i.e, for applications where the dimension of the data is small.
16. Why is Logistic Regression termed as Regression and not classification?
The major difference between Regression and classification problem statements is that the target variable in the Regression is numerical (or continuous) whereas in classification it is categorical
(or discrete).
Logistic Regression is basically a supervised classification algorithm. However, the Logistic Regression builds a model just like linear regression in order to predict the probability that a given
data point belongs to the category numbered as “1”.
For Example, Let’s have a binary classification problem, and ‘x’ be some feature and ‘y’ be the target outcome which can be either 0 or 1.
The probability that the target outcome is 1 given its input can be represented as p(x) = P(y = 1 | x).
If we predict this probability by using linear Regression, we can describe it as:
p(x) = β0 + β1x, where p(x) = P(y = 1 | x)
Linear regression models generate predicted values as any number ranging from negative to positive infinity, while the probability of an outcome can only lie in the range 0 < p(x) < 1.
However, to solve the problem of outliers and unbounded outputs, a sigmoid function is used in Logistic Regression. The linear equation is put into the sigmoid function:
p(x) = 1 / (1 + e^-(β0 + β1x))
17. Discuss the Train complexity of Logistic Regression.
In order to train a Logistic Regression model, we just need w and b to find a line(in 2-D), plane(in 3-D), or hyperplane(in more than 3-D dimension) that can separate both the classes point as
perfect as possible so that when it encounters with any new point, it can easily classify, from which class the unseen data point belongs to.
The values of w and b should be such that they maximize the sum of y_i * w^T * x_i, which should be greater than 0.
Now, let’s calculate its time complexity in terms of Big O notation:
• Performing the operation y_i * w^T * x_i takes O(d) steps, since w is a vector of size d.
• Iterating the above step over n data points and finding the maximum sum takes n steps.
Therefore, the overall time complexity of Logistic Regression during training is n · O(d) = O(nd).
18. Why can’t we use Mean Square Error (MSE) as a cost function for Logistic Regression?
In Logistic Regression, we use the sigmoid function to perform a non-linear transformation to obtain the probabilities. If we apply the squared-error loss to this nonlinear transformation, it will lead to a non-convex cost function with local minima, and by using gradient descent in such cases, it is not possible to guarantee finding the global minimum. As a result, MSE is not suitable for Logistic Regression.
So, in the Logistic Regression algorithm, we used Cross-entropy or log loss as a cost function. The property of the cost function for Logistic Regression is that:
• The confident wrong predictions are penalized heavily
• The confident right predictions are rewarded less
By optimizing this cost function, convergence is achieved.
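A short sketch of the cross-entropy (log loss) cost illustrating both properties; the labels and probabilities below are invented for the example.

```python
import numpy as np

def log_loss(y_true, p_pred, eps=1e-15):
    """Cross-entropy cost for binary labels y_true in {0, 1}."""
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1.0, 1.0, 0.0])
print(log_loss(y, np.array([0.9, 0.8, 0.1])))  # confident and right: small loss
print(log_loss(y, np.array([0.1, 0.2, 0.9])))  # confident and wrong: large loss
```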
19. Why can’t we use Linear Regression in place of Logistic Regression for Binary classification?
Linear Regression cannot be used in the case of binary classification due to the following reasons:
1. Distribution of error terms: The distribution of the data in the case of Linear and Logistic Regression is different. Linear Regression assumes that error terms are normally distributed, but this assumption does not hold true in the case of binary classification.
2. Model output: In Linear Regression, the output is continuous (or numeric), while in the case of binary classification, an output of a continuous value does not make sense. For binary classification problems, Linear Regression may predict values that go beyond the range of 0 to 1. In order to interpret the output as probabilities of two different classes, its range must be restricted to 0 and 1. As the Logistic Regression model can output probabilities with the Logistic or sigmoid function, it is preferred over Linear Regression.
3. The variance of Residual errors: Linear Regression assumes that the variance of random errors is constant. This assumption is also not held in the case of Logistic Regression.
20. What are the advantages of Logistic Regression?
The advantages of the logistic regression are as follows:
1. Logistic Regression is very easy to understand.
2. It requires less training.
3. It performs well for simple datasets as well as when the data set is linearly separable.
4. It doesn’t make any assumptions about the distributions of classes in feature space.
5. A Logistic Regression model is less likely to be over-fitted, but it can overfit in high-dimensional datasets. To avoid overfitting in these scenarios, one may consider regularization.
6. They are easier to implement, interpret, and very efficient to train.
21. What are the disadvantages of Logistic Regression?
The disadvantages of the logistic regression are as follows:
1. Sometimes a lot of Feature Engineering is required.
2. If the independent features are correlated with each other, it may affect the performance of the classifier.
3. It is quite sensitive to noise and overfitting.
4. Logistic Regression should not be used if the number of observations is lesser than the number of features, otherwise, it may lead to overfitting.
5. By using Logistic Regression, non-linear problems can't be solved because it has a linear decision surface. But in real-world scenarios, linearly separable data is rarely found.
6. By using Logistic Regression, it is tough to capture complex relationships. More powerful and compact algorithms, such as neural networks, can easily outperform Logistic Regression.
7. In Linear Regression, there is a linear relationship between the independent and dependent variables, but in Logistic Regression, the independent variables are linearly related to the log odds (log(p/(1-p))).
Thanks for reading!
I hope you enjoyed the questions and were able to test your knowledge about Logistic Regression.
If you liked this and want to know more, go visit my other articles on Data Science and Machine Learning by clicking on the Link
Please feel free to contact me on Linkedin, Email.
Something not mentioned or want to share your thoughts? Feel free to comment below, and I’ll get back to you.
About the author
Chirag Goyal
Currently, I am pursuing my Bachelor of Technology (B.Tech) in Computer Science and Engineering from the Indian Institute of Technology Jodhpur(IITJ). I am very enthusiastic about Machine learning,
Deep Learning, and Artificial Intelligence.
The media shown in this article are not owned by Analytics Vidhya and is used at the Author’s discretion.
Degree Course in
Academic Year 2017/2018
- 2° Year - Curriculum FISICA DELLA MATERIA and Curriculum FISICA NUCLEARE E SUB-NUCLEARE
Teaching Staff: Francesco CAPPUZZELLO Credit Value:
Scientific field:
FIS/04 - Nuclear and subnuclear physics
Taught classes:
42 hours
Term / Semester:
Learning Objectives
Learning ability
The course aims at two specific objectives concerning knowledge achievements and learning abilities:
1. To deepen some basic questions concerning the structure of the atomic nucleus, as an object of fundamental research. In this sense, the nucleus is represented, during the classes, as a particular
form of aggregation of matter that we are presently not in condition to describe, starting from the elementary constituents of matter (quarks, gluons, etc.). Rather, the nucleons represent the
effective "elementary" constituents through which a description, albeit partial, of the rich known phenomenology is possible. Students learn that the relatively large number of nucleons and the
non-existence of a "universal" analytical form of the nucleon-nucleon potential in the nuclear medium limit the possibility of a microscopic description of the nuclear structure. For the opposite
reason, the techniques of statistical mechanics find a difficult place in this research landscape. Therefore specific treatments of the many-body system with hypothetical-deductive approaches, based
on models, represent the most important cultural figure of this discipline, and as such they are discussed in class. Particular emphasis is also given to collective models, particularly effective in
the description of rotations and vibrations of the nuclei. The cultural project of the course focuses in particular on:
Critical acquisition of the model concept in the problem of nuclear structure
Concept of mean field and physical significance of single nucleon orbital
Concept of residual interaction for the study of excited states
Concept of collective motion and collective degrees of freedom
2. Provide the necessary tools and updates for possible subsequent in-depth work on issues related to both experimentation and the theory of modern research in the field of nuclear structure. This aspect is taken care of by proposing various links and similarities between the concepts developed in class and the most modern research themes in this field. In particular, the course includes:
A panoramic description of the main ideas characterizing the historical evolution of the physics of the nuclear structure
A description of the most effective experimental techniques, with guided visits to the "INFN - Laboratori Nazionali del Sud" and contextual familiarization with the complex instrumentation present
Indication of the main unresolved problems and current research trends
Ability to apply the achieved knowledge
The description of the structure of the atomic nucleus as a highly complex system of strongly interacting nucleons is particularly adequate to stimulate the student's ability to identify the most
relevant aspects of a problem. Furthermore, the peculiar relevance of approximations in this field of physics, the reduction of theories in models, the need for increasingly complex experimental
approaches, allows the student a deeper understanding of the sense of approximation. The continuous comparison between the concept of nuclear and atomic nuclear fields and the relevance of certain
nuclear quantities in the processes of stellar evolution allow a fruitful connection between apparently distant fields of research. The required preparatory mathematical tools are essential for the demonstrations that are proposed in class and that are required for the exams.
Communication skills
Most of the texts and articles proposed as educational material are in English and this provides a useful stimulus for the student's understanding of scientific language. Moreover, the powerful
graphical representation of correlations between physical quantities present in the didactic material increases the student's ability to search for the best possible form in the description of a
Detailed Course Content
Structure of nuclei
The use of models in nuclear physics. Degrees of freedom and nuclear structure. Fermi gas model. Magic numbers. Hypothesis of mean field and concept of orbits of nucleons. The nuclear spin-orbit
interaction. Model of Meyer, Haxel, Jensen. Magnetic dipole moment. Schmidt lines. Electric quadrupole moment. Static deformations. Excited states. Microscopic foundation of the shell model.
Hartree-Fock theory. Direct and exchange forces. Particle-hole interaction. Mixing of configurations. Spectroscopic factor. Deep-hole states. Shell model with many particles. Shell model calculation
methods. Cavedon experiment. Deformed potential model. Nilsson diagrams. Rotational motions. Rotational bands. Backbending. Vibrational motions. Bohr Hamiltonian. Dynamic deformations of the nuclear
surface. Giant resonances. Sum rules. Macroscopic models. Hydrodynamic model of Steinwedel-Jensen. Microscopic models. Tamm-Dancoff theory. RPA approximation. Pairing vibrations. Giant pairing
resonance. Nuclear response to isospin operators. Isobaric Analogue State and Gamow-Teller giant resonance. Clustered structures. Clusters in self-contained light nuclei. Hoyle state. Branching ratio
α of nuclear states. Non-autoconjugated nuclei. Model of Hafstad and Teller. Validity of the clusters model.
Textbook Information
Recommended texts:
Suitable material prepared by the professor will be available for the students.
1. K.S. Krane, Introductory Nuclear Physics, Wiley and Sons Ltd.
2. W.S.C. Williams, Nuclear and Particle Physics, Oxford University Press.
3. K.L.G. Heyde, Basic Ideas and Concepts in Nuclear Physics, Institute Of Physics Publishing, series Editor D.F. Brewer.
4. W. Greiner, J.A. Maruhn, Nuclear Models, Springer Verlag.
5. A. Bohr, B.R. Mottelson, Nuclear Structure, World Scientific.
6. P. Ring, P. Schuck, The Nuclear Many-Body Problem, Springer.
7. M.A. Preston, R.K. Bhaduri, Structure of the Nucleus, Westview Press.
Description of column-types used to define regressions
REGRESSOR (formerly X): regression value
REGRESSOR columns define variables (possibly time-varying) that will be available for calculations in the structural model after regressor definition. Regressors can for instance be used to take into
account time-varying covariates (example here), or tag the columns corresponding to individual PK parameters in a sequential PKPD modeling approach.
Allowed values in the REGRESSOR column are doubles and dot ‘.’ (to indicate missing values). For the first record (observation or dose line) of each subject (or subject-occasion if occasions are
present), the regressor value cannot be missing (no dot ‘.’ allowed). For the following missing values, the interpolation will be done using the setting chosen in the GUI, which can be “Last Carried
Forward” or “linear interpolation”. Regressor values on observation or dose lines are used the same way, as well as regressor values on lines with no observation and no dose.
• last carried forward: if we have defined in the dataset two times for each individual, with values reg_A at time t_A and reg_B at time t_B (see the sketch after this list):
□ for t ≤ t_A, reg(t) = reg_A [first defined value is used]
□ for t_A ≤ t < t_B, reg(t) = reg_A [previous value is used]
□ for t > t_B, reg(t) = reg_B [previous value is used]
• linear interpolation: the interpolation is:
□ for t ≤ t_A, reg(t) = reg_A [first defined value is used]
□ for t_A ≤ t < t_B, reg(t) = reg_A + (t - t_A) * (reg_B - reg_A) / (t_B - t_A) [linear interpolation is used]
□ for t > t_B, reg(t) = reg_B [previous value is used]
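As an illustration only (a sketch of the rules above, not Monolix's actual implementation), the two settings can be expressed as:

```python
def interpolate_regressor(t, times, values, mode="locf"):
    """Sketch of the two regressor settings, given (time, value) pairs sorted by time."""
    if t <= times[0]:
        return values[0]                       # first defined value is used
    if t >= times[-1]:
        return values[-1]                      # previous (last) value is used
    for (tA, regA), (tB, regB) in zip(zip(times, values),
                                      zip(times[1:], values[1:])):
        if tA <= t < tB:
            if mode == "locf":                 # last carried forward
                return regA
            return regA + (t - tA) * (regB - regA) / (tB - tA)  # linear

times, values = [0.0, 1.0, 3.0], [6.2, 4.1, 2.9]
print(interpolate_regressor(2.0, times, values, mode="locf"))    # 4.1
print(interpolate_regressor(2.0, times, values, mode="linear"))  # 3.5
```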
Several columns can be tagged as REGRESSOR. In that case, the mapping with the regressors defined in the model is done by name if possible, otherwise by order: the first column tagged as REGRESSOR in
the data set is mapped to the first element in the model input list defined as regressor.
If within a subject (or subject-occasion if occasions are present) two events are defined at the same time on two different lines, the regressor value must be the same on both lines. The regressor
value is used even if the dose or observation is ignored (for instance using the EVENT ID and IGNORED OBSERVATION columns).
Lines added due to a STEADY-STATE column get the same regressor value as the line with the STEADY-STATE statement. Lines added due to an ADDITIONAL DOSES column get a dot ‘.’ and are then
interpolated based on the previous values.
• Example with one regressor: the regressor corresponds to the drug concentration, which will be used in a direct effect PD model. With the following data set:
ID TIME Y REG
1 0 3.3 6.2
1 1 5.6 4.1
1 2 6.8 .
1 3 7.0 2.9
and the following model:
input = {E0, IC50 , Cc}
Cc = { use = regressor }
E = E0 * (1 - Cc/(Cc+IC50))
output = {E}
The regressor variable Cc in the model will take the values defined in the REG column and be used to calculate the effect E. For time points not defined in the data set, interpolation will be done
depending on the chosen Regressor Setting. If “Last Observation Carried Forward” is selected: during the time interval [0, 1[, the regressor value is that defined on time 0. Note that the column
header and the model regressor variable name can differ.
• Example with two regressors: the regressors correspond to the individual PK parameters used to calculate the drug concentration, itself impacting the effect E. With the following data set:
ID TIME AMT Y V_mode k_mode
1 0 10 . 6.2 1.2
1 0 . 3.3 6.2 1.2
1 1 . 5.6 6.2 1.2
1 2 . 6.8 6.2 1.2
1 3 . 7.0 6.2 1.2
and the following model:
input = {E0, EC50 , V, k}
V = { use = regressor }
k = { use = regressor }
Cc = pkmodel(V,k)
E = E0 * (1 - Cc/(Cc+EC50))
output = {E}
The first column tagged as REGRESSOR (V_mode) is mapped to the first regressor in the input list (V), and the REGRESSOR column of the data set (k_mode) is mapped to the second regressor of the model
• Example with STEADY-STATE and ADDITIONAL DOSES:
Format restrictions:
• The regression-columns (i.e. columns with column-type REGRESSOR) shall contain either doubles or “.” (which will be interpolated).
• The first record for each subject (or subject-occasion) cannot be dot ‘.’ .
• When there are several lines with the same time, same id and same occasion, the value of the regressor column must be the same.
Reply To: LM Curve - Liberty Classroom
February 8, 2013 at 11:21 am #17607
The IS-LM model has been a staple of mainstream, undergraduate macroeconomics for several decades.
The IS curve purports to plot all points of the rate of interest (i) and the level of income (y) which are brought about by equilibrium between saving and investment. Investment is inversely related
to i and saving is directly related to y. At a larger y, saving will be larger and therefore equal to investment only at a lower i. The IS curve, therefore, slopes downward to the right when plotted
on a graph with i on the vertical axis and y on the horizontal axis.
The LM curve purports to plot all points of equilibrium between i and y which are brought about by equilibrium between the money stock and money demand (so-called liquidity preference). Although the
money stock is independent of either i or y, liquidity preference varies directly with each. With a given money stock, the increasing effect of a larger y on money demand would have to be offset by a
decreasing effect of a higher i to maintain equilibrium. The graph of LM slopes upward to the right when superimposed on the IS graph.
The intersection between IS and LM illustrates the only combination of i and y that has the entire economy, both the “real” and “money” aspects, in equilibrium.
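To make the mechanics concrete, here is a small sketch with invented linear forms for the IS and LM relations; the coefficients are purely illustrative and carry no theoretical weight.

```python
# Illustrative linear IS-LM system (all coefficients invented):
#   IS:  y = a - b*i   (income consistent with saving-investment equilibrium)
#   LM:  y = c + d*i   (income consistent with money-market equilibrium)
a, b = 100.0, 5.0
c, d = 40.0, 10.0

i_star = (a - c) / (b + d)   # set the two expressions for y equal and solve
y_star = a - b * i_star
print(i_star, y_star)        # the unique (i, y) that clears both markets
```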
The fundamental problem with IS-LM is the erroneous theories underlying it. Time preference determines both the interest rate and the extent of saving-investing. Money demand relative to the stock of
money determines the purchasing power of money. Furthermore, it has no theory of production, no capital theory.
Roger Garrison compares and contrasts the Austrian, Keynesian, and Monetarist approaches to macroeconomics in his book, Time and Money. Here is an overview by Garrison:
Math Tutor DVD || South Africa
Contact Us
Herman de Bruyn
P.O Box 30888
Durban, 4058
Benefits of DVD
Permanent copy of the course.
Re-watch as needed.
Guaranteed results.
Less expensive than any tutor.
Master Statistics - Volume 3
2 DVD Set! - 5 Hour Course!
List Price: R619.85
Our Price: R495.88
You Save: 20%
Total DVD Running Time: 5 Hours
DVD Chapter Index
Disc 1
Sect 1: Sampling Distributions
Sect 2: Central Limit Theorem, Part 1
Sect 3: Central Limit Theorem, Part 2
Sect 4: Apply Central Limit Theorem to Population Means, Part 1
Sect 5: Apply Central Limit Theorem to Population Means, Part 2
Sect 6: Apply Central Limit Theorem to Population Means, Part 3
Sect 7: Apply Central Limit Theorem to Population Proportions, Part 1
Sect 8: Apply Central Limit Theorem to Population Proportions, Part 2
Disc 2
Sect 9: Confidence Intervals for Population Means, Part 1
Sect 10: Confidence Intervals for Population means, Part 2
Sect 11: Estimating Population Means (Large Samples), Part 1
Sect 12: Estimating Population Means (Large Samples), Part 2
Sect 13: Estimating Population Means (Large Samples), Part 3
Sect 14: The Student t-Distribution
Sect 15: Using Student t-Distribution Statistical Tables
Sect 16: Estimating Population Means (Small Samples), Part 1
Sect 17: Estimating Population Means (Small Samples), Part 2
Sect 18: Estimating Population Means (Small Samples), Part 3
Statistics is one of the most important areas of Math to understand. It has applications in science, engineering, business, economics, political science, and more.
In this 5 Hour Course, Jason Gibson teaches the fundamental concepts needed to truly Master Statistics with step-by-step video tutorials.
The lessons begin by studying the concept of a sampling distribution. We give several examples so that the student has a solid concept of sampling.
Next, we explore the central limit theorem of statistics in detail and solve several problems where the central limit theorem is needed to arrive at the answer.
Finally, we will spend considerable time on the concept of a confidence interval and how to apply it to estimating the mean of a population parameter. We examine two central cases that depend on the number of samples in our study and present how to calculate the confidence interval in either case.
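As a taste of the material, here is a small sketch (not taken from the DVD itself) of the two cases: a z-based interval when the sample is large and a Student t-based interval when it is small.

```python
import numpy as np
from scipy import stats

def mean_confidence_interval(sample, confidence=0.95):
    """z-interval for large samples, Student t-interval for small ones."""
    sample = np.asarray(sample, dtype=float)
    n, mean = len(sample), sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)       # standard error of the mean
    if n >= 30:
        crit = stats.norm.ppf(0.5 + confidence / 2)
    else:
        crit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    return mean - crit * se, mean + crit * se

print(mean_confidence_interval([12.1, 11.8, 12.5, 12.0, 11.9]))
```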
Statistics is a difficult subject for most students, but anyone can Master Statistics with our step-by-step teaching style!
How are the MathTutorDVD Tutorial's different?
The answer is simple. Most instruction involves a lengthy discussion of the theory before instructing the student in how to solve problems.
In the vast majority of the cases the student quickly gets bored and frustrated by the time he or she starts to solve the problems. This course, in contrast, teaches all of the concepts by working
fully worked problems step-by-step, which is a much more engaging way to learn.
Exceptional value and affordability!
MathTutorDVD believes in providing value for our customers. All MathTutor DVD Tutorials cover between 3 and 14 hours of lectures, and considering the cost, each MathTutor DVD set therefore costs
LESS than additional private tuition, and you may rewatch the lessons as many times as needed to master the material!
What is our teaching style like?
All topics on this DVD are taught by working example problems. There are no traditional lectures of background material that won't help you solve problems and improve your skills. We believe in
teaching-by-doing and that is what you will receive by watching this DVD.
Each topic is explained, for example, by working many problems in step-by-step detail. We begin with the easier problems and work our way up to the harder problems. The student
immediately gains confidence, does not get bored, and quickly feels like he or she can conquer the material. This method is extremely powerful and has proven itself time and again. Perhaps most
importantly, problem solving skills are honed early on that will help with homework and taking exams even after watching the very first lesson.
All Courses Offered.
Below is a list of courses, click on one for more information.
Cohomology of toric diagrams
For a topological group G, a nice class of G-spaces is given by those G-CW spaces that are obtained by attaching finitely many equivariant cells with strictly increasing dimensions.
Another way of describing such spaces is in terms of homotopy colimits of G-orbit diagrams in the category of compactly generated topological spaces.
By restricting to the case of a compact torus G, one obtains the class of toric diagrams consisting of quotient tori and homomorphisms between them, indexed by a finite category.
Toric diagrams encode (up to a T-equivariant homotopy) any toric variety, quotients of moment-angle complexes and other interesting spaces studied in toric geometry and toric topology.
In the talk, new formulas for singular cohomology groups of such spaces (for G=T and rational coefficients) will be presented in terms of sheaf cohomology groups over Alexandrov spaces.
These groups are known by the name of cohomology for the respective (toric) diagram, and appear at the second page of the Bousfield-Kan cohomological spectral sequence.
The spectral sequence in question converges to the cohomology of the homotopy colimit of the diagram.
Our main tool is the collapse of this spectral sequence at its second page, implied by formality of any toric diagram.
If time permits, a possible approach to integral coefficients and relation to open conjectures from toric geometry will be presented.
Can someone provide guidance on MATLAB programming assignments in sustainable energy simulations? | Pay Someone To Do My Matlab Assignment
Since simulation is my main personal interest, I was wondering whether there are
related topics worth following. Also, could an article like this serve as part of an ongoing group discussion for a MATLAB class? (I found this quite interesting.) A:
It was a privilege for me to present this evening at the Grid-Q-Systech Meeting in San Francisco. We planned out multiple datasets. I realize I hadn't laid out
precisely what the data should look like, and it might not have been phrased as a description of what is used, but the presentation was very much about a simulation study, and given that it was
more nuanced than the previous one, it had to be interpreted for the purpose it is intended to serve. Since you've been working with MATLAB for a while, I keep coming back
to a summary of the material I wanted to use. To begin the discussion, I want to note that the paper we came up with did a good job of this (we've done it ourselves). I wanted to
understand the project structure not as a general problem but as a practical application on which to build our concepts on top of another problem (for instance, to understand how the term
"hybrid" might not even be a solution for a subprocess happening inside a solver process), i.e., why the definition of functions needs to be rather specific, or why there has to be a way to
divide the project into several parts, roughly one per subproblem (see the sketch after this answer). We are now ready to present the question. There is much more in the paper, plus an auxiliary section you can incorporate, so that we
know what the project looks like first. As we can tell from the above, the starting point is the problem of constructing a method to identify the project structure.
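A minimal, hypothetical sketch of that decomposition (the file and function names below are ours, invented for illustration, and not taken from any course or paper mentioned here): one entry script per study, one function per subproblem.

    % run_study.m -- entry script that wires the subproblems together.
    params = load_parameters();              % subproblem 1: configuration
    data   = generate_dataset(params);       % subproblem 2: input data
    result = simulate_system(params, data);  % subproblem 3: core solver
    report_results(result);                  % subproblem 4: output

    % In a larger project each function below would live in its own .m file.
    function p = load_parameters()
        p.n    = 100;              % number of samples (made-up value)
        p.gain = 0.8;              % a made-up model parameter
    end
    function d = generate_dataset(p)
        d = rand(p.n, 1);          % placeholder input data
    end
    function r = simulate_system(p, d)
        r = p.gain * cumsum(d);    % placeholder "simulation"
    end
    function report_results(r)
        fprintf('final value: %.3f\n', r(end));
    end

The point of the layout is that each subproblem can then be tested and revised on its own.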
A: Hi. I make ODE homework materials and do research in space-software development, and I have also been looking for the best MATLAB modules to learn from while working on sustainability projects.
I built a simulation and training process for a space project that took many years to complete. The course has two main sections, program
simulation and program analysis, plus two other sections, functional simulation and functional implementation and evaluation of software, all of which I teach
professionally. I need help selecting which module to learn and apply.
The SPS tutorial guide: a good tutorial guide was provided to me at a workshop, together with a good copy of the "SPS guide" tutorial by a workshop colleague. The workshop was a big success, considering the materials discussed. I'll let you know which module suits you best.
What is MATLAB? MATLAB is a well-established numerical computing package; it runs on Windows, macOS, and Linux.
It supports data structures such as strings, cell arrays, structs, and function handles for a wide variety of uses, along with many different programming and automation approaches;
the MATLAB documentation covers these in detail. Learning MATLAB: the basic model is the matrix. A matrix is a rectangular array whose elements are real (or complex) numbers, and these
elements can be accessed individually as scalars or collectively as vectors. MATLAB handles a wide range of inputs and outputs, including multiple arrays or blocks of elements, and many useful
features are built in, including data-centric features and advanced interactive functions, as in the sketch below.
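As a minimal illustration of these basics (a hypothetical snippet of ours, not taken from any course mentioned above):

    % Matrices are the fundamental MATLAB data type.
    A   = [1 2 3; 4 5 6];      % a 2-by-3 matrix of doubles
    row = A(2, :);             % the second row, a 1-by-3 vector
    s   = sum(A(:));           % sum over all elements, a scalar

    % Other common containers mentioned above:
    name = "solar_model";      % string
    c    = {1, 'two', [3 4]};  % cell array holding mixed types
    cfg.capacity_kWh = 13.5;   % struct with a named field (made-up value)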
A: Background: after much debate over the definition of a sustainable energy simulation, the literature does not provide firm guidance. It may be that in every energy simulation of EITCA in which the dynamic system has a fixed point there are
two effects: the system is constrained to move between two fixed points, and those points are fixed points in energy space for all practical purposes. In this case the dynamic system has only one site,
in the continuum of the system, and therefore does not extend to position zero; nor is it constrained to a different fixed point at position zero.
Here are some of the most common definitions, in this setting, for reproducing an energy simulation in a sustainable configuration. A simulation of a potential system: the potential
is given a state variable, and that state variable determines the probability that the system has been put into a stable state. This is a physically meaningful definition and is used to calculate
potential energy. A potential system is one that changes instantaneously, rather than simply switching between different fixed points from its starting configuration. Functionalism and
other methods for generating the behavior of different structures and constraints (see the referenced paper "New Model for Efficient Simulation of a Nonlinear Potential" by D. Saldanha et al.) are
described in the references. The sketch below makes the fixed-point picture concrete.
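Here is a minimal MATLAB sketch of that picture (ours, with made-up parameter values; it is not the model from the cited paper): we integrate the gradient flow of a double-well potential and watch trajectories settle into its stable fixed points, i.e. the "stable states" referred to above.

    % Gradient flow dx/dt = -V'(x) for V(x) = (x^2 - 1)^2 / 4.
    % Stable fixed points at x = +1 and x = -1; x = 0 is unstable.
    dVdx = @(x) x .* (x.^2 - 1);     % derivative of the potential
    rhs  = @(t, x) -dVdx(x);         % autonomous ODE right-hand side

    x0_list = [-2, -0.1, 0.1, 2];    % several starting configurations
    for k = 1:numel(x0_list)
        [~, x] = ode45(rhs, [0 20], x0_list(k));
        fprintf('x0 = %5.2f  ->  x(T) = %5.2f\n', x0_list(k), x(end));
    end
    % Each run converges to the nearest stable fixed point (+1 or -1).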
The definitions and associated concepts: A time-dependent potential. Given a potential system with state variable x_i(t), each x_i takes values in an interval [x_i - X, x_i + X], where X is bounded by the total time the system spends
in the state (0 <= X). A function f(x_1, ..., x_n) of the state variables is called a function of x, and the functional of a function x is written f'(x), covering potentials and other functional properties.
Since most potentials sit at fixed points in real time, they must be bounded by a number d on a given time domain; the physical laws govern this.
Problem: say k is a discrete value, and suppose k always lies in a periodic array. A potential is an order-disjoint family of potentials if, for each family of discrete values, we can find a family of
potential energies with the same properties. For brevity, in the introduction, we call such a potential a deterministic function: T = (1/2) T0 + …