# Exceptions
Myokit tries to raise errors in a sensible manner. The following classes are used.
## Base classes
class myokit.MyokitError(message)
Base class for all exceptions specific to Myokit.
Note that Myokit classes and functions may raise any type of exception, for example a KeyError or a ValueError. Only new classes of exception defined by Myokit will extend this base class.
Extends: Exception
class myokit.IntegrityError(message, token=None)
Raised if an integrity error is found in a model.
The error message is stored in the property message. An optional parser token may be obtained with token().
Extends: myokit.MyokitError
token()
Returns a parser token associated with this error, or None if no such token was given.
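Since every Myokit-specific exception extends MyokitError, a single except clause can catch all of them, while more specific handlers can still come first. A minimal sketch (the classes below are local stand-ins mirroring the documented hierarchy, so the snippet runs without Myokit installed; in real code you would `import myokit` and catch `myokit.MyokitError`):

```python
# Local stand-ins mirroring the documented hierarchy; in real code,
# `import myokit` and use myokit.MyokitError / myokit.IntegrityError.
class MyokitError(Exception):
    """Base class for all Myokit-specific exceptions."""
    def __init__(self, message):
        super().__init__(message)
        self.message = message

class IntegrityError(MyokitError):
    """Raised if an integrity error is found in a model."""
    def __init__(self, message, token=None):
        super().__init__(message)
        self._token = token

    def token(self):
        """Return the parser token for this error, or None if none was given."""
        return self._token

# Catching the base class also catches every subclass:
try:
    raise IntegrityError('Variable has no defining equation')
except MyokitError as e:
    print(e.message)  # -> Variable has no defining equation
```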
## Inheriting classes
class myokit.CompilationError(message)
Raised if an auto-compiling class fails to compile. Catching one of these is usually a good excuse to email the developers ;-)
Extends: myokit.MyokitError
class myokit.CyclicalDependencyError(cycle)
Raised when variables depend on each other in a cyclical manner.
The first argument cycle must be a sequence containing the Variable objects in the cycle.
Extends: myokit.IntegrityError
class myokit.DataBlockReadError(message)
Raised when an error is encountered while reading a myokit.DataBlock1d or myokit.DataBlock2d.
Extends: myokit.MyokitError.
class myokit.DataLogReadError(message)
Raised when an error is encountered while reading a myokit.DataLog.
Extends: myokit.MyokitError.
class myokit.DuplicateName(message)
Raised when an attempt is made to add a component or variable with a name that is already in use within the relevant scope.
Extends: myokit.MyokitError.
class myokit.DuplicateFunctionName(message)
Raised when an attempt is made to add a user function to a model when a function with the same name and number of arguments already exists.
Extends: myokit.MyokitError.
class myokit.DuplicateFunctionArgument(message)
Raised when an attempt is made to define a user function with duplicate argument names.
Extends: myokit.MyokitError.
class myokit.ExportError(message)
Raised when an export to another format fails.
Extends: myokit.MyokitError.
class myokit.FindNanError(message)
Raised by some simulations when a search for the origins of a numerical error has failed.
Extends: myokit.MyokitError
class myokit.GenerationError(message)
Raised by simulation engines and other auto-compiled classes if code generation fails.
Extends: myokit.MyokitError
class myokit.IllegalAliasError(message)
Raised when an attempt is made to add an alias in an invalid manner.
Extends: myokit.MyokitError
class myokit.IllegalReferenceError(reference, owner)
Raised when a variable reference reference is found that is not accessible from the scope of the owning variable owner.
Extends: myokit.IntegrityError
class myokit.ImportError(message)
Raised when an import from another format fails.
Extends: myokit.MyokitError.
class myokit.IncompatibleModelError(name, message)
Raised if a model is not compatible with some requirement.
Extends: myokit.MyokitError.
class myokit.IncompatibleUnitError(message)
Raised when a unit incompatibility is detected.
Extends: myokit.MyokitError.
class myokit.InvalidBindingError(message, token=None)
Raised when an invalid binding is made.
Extends: myokit.IntegrityError
class myokit.InvalidDataLogError(message)
Raised during validation of a myokit.DataLog if a violation is found.
Extends: myokit.MyokitError.
class myokit.InvalidFunction(message)
Raised when a function is declared with invalid arguments or an invalid expression.
Extends: myokit.MyokitError.
class myokit.InvalidLabelError(message, token=None)
Raised when an invalid label is set.
Extends: myokit.IntegrityError
class myokit.InvalidMetaDataNameError(message)
Raised when an attempt is made to add a meta data property with a name that violates the myokit naming rules for meta data properties.
Extends: myokit.MyokitError
class myokit.InvalidNameError(message)
Raised when an attempt is made to add a component or variable with a name that violates the myokit naming rules.
Extends: myokit.MyokitError
class myokit.MissingRhsError(var)
Raised when a variable was declared without a defining right-hand side equation.
The first argument var should be the invalid variable.
Extends: myokit.IntegrityError
class myokit.MissingTimeVariableError
Raised when no variable was bound to time.
Extends: myokit.IntegrityError
class myokit.NonLiteralValueError(message, token=None)
Raised when a literal value is required but not given.
Extends: myokit.IntegrityError
class myokit.NumericalError(message)
Raised when a numerical error occurs during the evaluation of a myokit Expression.
Extends: myokit.MyokitError
class myokit.ParseError(name, line, char, desc, cause=None)
Raised if an error is encountered during a parsing operation.
A ParseError has five attributes:
name
A short name describing the error
line
The line the error occurred on (integer, first line is one)
char
The character the error occurred on (integer, first char is zero)
desc
A more detailed description of the error.
cause
Another exception that triggered this exception (or None).
Extends: myokit.MyokitError
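The five attributes make it easy to assemble a readable error report. A hypothetical sketch (again using a local stand-in class, since the goal is only to illustrate the attribute layout):

```python
# Local stand-in with the same five attributes as myokit.ParseError
class ParseError(Exception):
    def __init__(self, name, line, char, desc, cause=None):
        super().__init__(desc)
        self.name = name    # short name describing the error
        self.line = line    # 1-based line number
        self.char = char    # 0-based character number
        self.desc = desc    # detailed description
        self.cause = cause  # triggering exception, or None

def format_parse_error(e):
    msg = f'{e.name} at line {e.line}, char {e.char}: {e.desc}'
    if e.cause is not None:
        msg += f' (caused by: {e.cause!r})'
    return msg

err = ParseError('Syntax error', 12, 4, 'Unexpected token "]"')
print(format_parse_error(err))
# -> Syntax error at line 12, char 4: Unexpected token "]"
```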
class myokit.ProtocolEventError(message)
Raised when a ProtocolEvent is created with invalid parameters.
Extends: myokit.MyokitError
class myokit.ProtocolParseError(name, line, char, desc, cause=None)
Raised when protocol parsing fails.
Extends: ParseError
class myokit.SectionNotFoundError(message)
Raised if a section should be present in a file but is not.
Extends: myokit.MyokitError
class myokit.SimulationError(message)
Raised when a numerical error occurred during a simulation. Contains a detailed error message.
Extends: myokit.MyokitError
class myokit.SimulationCancelledError(message='Operation cancelled by user.')
Raised when a user terminates a simulation.
Extends: myokit.MyokitError
class myokit.SimultaneousProtocolEventError(message)
Raised if two events in a protocol happen at the same time. Raised when creating a protocol or when running one.
Extends: myokit.MyokitError
class myokit.UnresolvedReferenceError(reference, extra_message=None)
Raised when a reference to a variable cannot be resolved.
Extends: myokit.IntegrityError
class myokit.UnusedVariableError(var)
Raised when an unused variable is found.
The unused variable must be passed in as the first argument var.
Extends: myokit.IntegrityError
# Tension in a charged ring.
A ring is positively charged with uniform linear charge density $$\lambda$$ and radius $$r$$. Now a positive charge $$q_0$$ is placed at the center of the ring, in the same plane.
Find the total tension after $$q_0$$ is placed.
Permittivity of free space is $$\epsilon _0$$.
Hint: There will be tension due to two sources.
Note by Kushal Patankar
3 years, 3 months ago
yes , tanishq's answer is absolutely correct ! ( i guess so )
- 1 year, 8 months ago
I guess this is the answer: T = kQq / (2 pi R^2). But can you solve it and show?
- 2 years, 10 months ago
Is it correct
$$T=\frac{\lambda q_{0}}{4\pi \epsilon_{o} r}$$
- 3 years, 3 months ago
It is not as simple as you think. If it were, I'm sure Kushal wouldn't have posted it, since that would be a very easy and standard problem. :D
- 3 years, 3 months ago
Total tension will be developed due to electrostatic force by $$q_0$$ and the ring.
If the question was to find the increase in tension when $$q_o$$ was placed then your answer is correct.
- 3 years, 3 months ago
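For reference, the force-balance step for the contribution of the central charge alone can be sketched as follows (the ring's electrostatic self-repulsion adds a second, separate term, which is the harder part of the problem). A ring element subtending angle $$d\theta$$ carries charge $$dq = \lambda r \, d\theta$$ and is pushed radially outward by $$q_0$$ with force
$$dF = \frac{q_0 \, dq}{4\pi \epsilon_0 r^2} = \frac{q_0 \lambda \, d\theta}{4\pi \epsilon_0 r}.$$
The tension forces at the two ends of the element supply the inward resultant $$2T\sin(d\theta /2) \approx T \, d\theta$$, so balancing the two gives
$$T = \frac{\lambda q_0}{4\pi \epsilon_0 r},$$
the same expression proposed in one of the comments above.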
## What is the Corsi statistic? (And why is there a Fenwick number?)
November 16, 2015
Posted by tomflesher in Hockey, Sports.
Growing up in Buffalo, I was surrounded by hockey, whether it was watching the Sabres or heading to the rink to watch my brother play defense as a bantam or high schooler. During those years, my father, who could barely skate, often served as a volunteer coach for my brother’s teams. Like Malcolm Gladwell’s story of Vivek Ranadivé leading his “little blonde girls” to success using out-of-the-box basketball coaching, my father felt he was bringing an outsider’s perspective to the game by emphasizing a simple philosophy: own the puck.
This is easier said than done, of course, and when a group of squirts, peewees, or bantams head out onto the ice they need to apply some serious skill in order to “own the puck.” Overall, though, the point of owning that puck is to put it into the net. So, logically, the more a team controls the puck, the more likely it is to control the game.
It’s possible, of course, for a team to take many more shots and still lose, but the Corsi stat is meant to measure overall control. It therefore includes all attempted shots, so Corsi is defined as Shots + Attempted Shots – Shots Against – Attempted Shots Against. This gives you a simple differential in shots.
You’ll also see the following stats:
• Corsi For: Shots + Attempted Shots by the team, making it possible to isolate whether a team is making too few shots or allowing too many
• Corsi Against: Shots + Attempted Shots by the opposing team
• Corsi For Percentage (CF%): 100*Corsi For/(Corsi For + Corsi Against), giving a ratio rather than a simple differential. This measures what percentage of shots and shot attempts a team makes compared to its opponents. A CF% above 50% means a team attempts more shots than its opponent.
• Corsi On: A team’s Corsi while a particular player is on the ice scaled up to 60 minutes of ice time, effectively measuring whether the player’s Corsi is as good as, better than, or worse than the team’s as a whole. A Corsi ON greater than the team’s means the player contributes proportionally more to the team than ice time would indicate.
• Corsi Relative (Corsi REL): Corsi On – Corsi Off, showing whether a team performs better or worse with a player on the ice. If Corsi REL is positive, the team does a better job with the player on the ice.
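As a quick illustration, the definitions above translate directly into code (a sketch; the function and variable names are mine, not from any official source):

```python
def corsi_for(shots, attempts):
    """Corsi For: shots plus other shot attempts by the team."""
    return shots + attempts

def corsi_against(shots, attempts):
    """Corsi Against: shots plus other shot attempts by the opponent."""
    return shots + attempts

def cf_percent(cf, ca):
    """Corsi For Percentage: 100 * CF / (CF + CA)."""
    return 100.0 * cf / (cf + ca)

# Hypothetical game: 30 shots + 25 other attempts for,
# 25 shots + 20 other attempts against.
cf = corsi_for(30, 25)
ca = corsi_against(25, 20)
print(cf - ca)                       # Corsi differential -> 10
print(round(cf_percent(cf, ca), 1))  # CF% -> 55.0
```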
Corsi was named after a Buffalo Sabres goaltending coach. Bob McKenzie of TSN shared the story of the Corsi number in 2014. Financial analyst Tim Barnes, writing under the pseudonym Vic Ferrari, heard Sabres GM Darcy Regier discussing shot attempts and save percentage as a goalie metric, but Ferrari didn’t care for the name “Regier Number” or “Ruff Number” (for Sabres coach Lindy Ruff). After browsing photos of the Sabres staff, Ferrari settled on Jim Corsi as the eponym for the statistic. Interestingly, Corsi actually did come up with the idea and planted it in Regier’s head.
A similar stat, the Fenwick, simply discounts blocked shots, since blocking shots is a skill.
## What is BAbip?
March 16, 2015
Posted by tomflesher in Baseball.
The first stat we all learned about as kids was the batting average, where you calculate what proportion of at-bats end with getting a hit. Then, of course, we start thinking about why there are weird exceptions – why doesn’t getting hit by a pitch count? Why don’t walks count? Why doesn’t advancing to first on catcher’s interference count? OBP, or on-base percentage, fixes that. (Well, maybe not the catcher’s interference part…)
Batting average has some interesting properties, though. It captures events that have unpredictable outcomes – when you walk, it’s basically impossible to be put out on your way to first. Ditto being hit by a pitch. Of course, BA does have some of those determined outcomes, too – home runs and strikeouts don’t have much dynamic nature to them, although you’ll occasionally see brilliant defense save a sure homer (a la Carl Crawford’s MVP performance) or a sloppy catcher mishandle a third strike and forget to tag the batter. (I’m looking at you, Josh Paul.) Nonetheless, balls in play – balls that the batter makes contact with, forcing the defense to try to make a play – are a major source of variation in the game.
BAbip is measured as $\frac{H - HR}{AB - SO - SH + SF}$, meaning it takes the strikeouts and home runs out of the equation and (like all sane measures should!) includes sacrifice flies.
Since the ball is out of the pitcher’s control as soon as it leaves his hand, BAbip measures things that the pitcher isn’t responsible for – that is, it’s handy as a measure of pitching luck, or, teamwide, as a measure of defensive effectiveness. The NL team BAbip average was .299, and AL average BAbip was about .298.
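A quick sketch of the formula as given above (note that some sources subtract home runs rather than sacrifice hits in the denominator; here I follow the text, and the season line is hypothetical):

```python
def babip(h, hr, ab, so, sh, sf):
    """BAbip = (H - HR) / (AB - SO - SH + SF), per the formula above."""
    return (h - hr) / (ab - so - sh + sf)

# Hypothetical season line: 150 H, 20 HR, 550 AB, 100 SO, 2 SH, 5 SF
print(round(babip(150, 20, 550, 100, 2, 5), 3))  # -> 0.287
```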
Use Cases for BAbip:
– Evaluating hitting development. If a batter has had a stable BAbip for a while and his BAbip increases significantly, be suspicious! Particularly if his walk rate hasn’t increased, his home run rate hasn’t increased, and his strikeout rate hasn’t decreased, this might be a function of lucky hitting against bad or inefficient defenses. If the biggest part of an increase in production has been on balls in play, your hitter may not have actually improved. On the other hand, if you can see physical changes, or you have an explanation (e.g., went to AAA to work on his swing), you may see a more balanced improvement in OBP.
– Evaluating pitching luck. Most of the time, all the pitchers for the same team pitch in front of the same defense. Even with a personal catcher in the mix, expect most pitchers on a team to have similar batting averages on balls in play. If you have one pitcher whose BAbip is much higher than the rest of the pitchers, he may be pitching against bad luck. With that in mind, you can expect that pitcher to improve going forward.
– Comparing defenses. In 2014, Oakland had a .274 BAbip and allowed 572 runs – the best in the American league in BAbip and 18 runs behind Seattle – while Minnesota had a .317 BAbip and allowed 777 runs, the worst in both categories in the league. Defensive efficiency (a measure of 1 – BAbip) tracks closely with runs allowed. BAbip can operate as a quick and dirty check on how well a defense is performing behind a pitcher.
## What is OPS?
January 12, 2015
Posted by tomflesher in Baseball.
Sabermetricians (which is what baseball stat-heads call ourselves to feel important) disregard batting average in favor of on-base percentage for a few reasons. The main one is that it really doesn’t matter to us whether a batter gets to first base through a gutsy drag bunt, an excuse-me grounder, a bloop single, a liner into the outfield, or a walk. In fact, we don’t even care if the batter got there through a judicious lean-in to take one for the team by accepting a hit-by-pitch. Batting average counts some of these trips to first, but not a base on balls or a hit batsman. It’s evident that plate discipline is a skill that results in higher returns for the team, and there’s a colorable argument that ability to be hit by a pitch is a skill. OBP is $\frac{H+BB+HBP}{AB+BB+HBP+SF}$.
We also care a lot about how productive a batter is, and a productive batter is one who can clear the bases or advance without trouble. Sure, a plucky baserunner will swipe second base and score from second, or go first to third on a deep single. In an emergency, a light-hitting pitcher will just bunt him over. However, all of these involve an increased probability of an out, while a guy who can just hit a double, or a speedster who takes that double and turns it into a triple, will save his team a lot of trouble. Obviously, a guy who snags four bases by hitting a home run makes life a lot easier for his teammates. Slugging percentage measures how many bases, on average, a player is worth every time he steps up to the plate and doesn’t walk or get hit by a pitch. Slugging percentage is $\frac{(\mathit{1B}) + (2 \times \mathit{2B}) + (3 \times \mathit{3B}) + (4 \times \mathit{HR})}{AB} = \frac{\text{Total Bases}}{AB}$. If a player hits a home run in every at-bat, he’ll have an OBP of 1.000 and a SLG of 4.000.
OPS is just On-Base Percentage plus Slugging Percentage. It doesn’t lend itself to a useful interpretation – OPS isn’t, for example, the average number of bases per hit, or anything useful like that. It does, however, provide a quick and dirty way to compare different sorts of hitters. A runner who moves quickly may have a low OBP but a high SLG due to his ability to leg out an extra base and turn a single into a double or a double into a triple. A slow-moving runner who can only move station to station but who walks reliably will have a low SLG (unless he’s a home-run hitter) but a high OBP. An OPS of 1.000 or more is a difficult measure to meet, but it’s a reliable indicator of quality.
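The three definitions combine into a few lines of code (a sketch; the limiting case at the end matches the home-run-every-at-bat example from the text):

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

def slg(singles, doubles, triples, hr, ab):
    """Slugging percentage: total bases per at-bat."""
    return (singles + 2 * doubles + 3 * triples + 4 * hr) / ab

def ops(obp_value, slg_value):
    """OPS is simply the sum of the two."""
    return obp_value + slg_value

# Limiting case from the text: a home run in every at-bat.
print(obp(h=1, bb=0, hbp=0, ab=1, sf=0))                  # -> 1.0
print(slg(singles=0, doubles=0, triples=0, hr=1, ab=1))   # -> 4.0
```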
## The Hall of Fame Black Ink Test
January 11, 2015
Posted by tomflesher in Baseball.
The Baseball Hall of Fame’s mission is “Preserving History, Honoring Excellence, Connecting Generations.” An important measure of the excellence honored in Cooperstown is called the Black Ink Test. “Black ink” refers to the boldface type used to show the league’s leader in an important category.
The categories used for the Black Ink Test are, of course, different for pitchers and batters, but they also vary depending on the importance of the stat. A batter who excels in hitting home runs is more valuable to a team than one who takes the most at-bats regardless of outcome. For batters, points are awarded as follows:
1. One point for games, at-bats, or triples
2. Two points for doubles, walks, or stolen bases
3. Three points for runs scored, hits, or slugging percentage
4. Four points for home runs, RBIs, or batting average
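The batters' scale above can be sketched as a lookup table (a hypothetical illustration; the category abbreviations are mine, not official):

```python
# Point values for the batting categories listed above
# (category abbreviations are mine, not official).
BLACK_INK_POINTS = {
    'G': 1, 'AB': 1, '3B': 1,     # games, at-bats, triples
    '2B': 2, 'BB': 2, 'SB': 2,    # doubles, walks, stolen bases
    'R': 3, 'H': 3, 'SLG': 3,     # runs scored, hits, slugging percentage
    'HR': 4, 'RBI': 4, 'AVG': 4,  # home runs, RBIs, batting average
}

def black_ink_score(categories_led):
    """Sum points over the categories in which the batter led the league."""
    return sum(BLACK_INK_POINTS[c] for c in categories_led)

# A hypothetical batter leading the league in HR, RBI and SLG:
print(black_ink_score(['HR', 'RBI', 'SLG']))  # -> 11
```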
# Section and subsection in two languages with Legrand Orange Book
Hi to all,
I am using The Legrand Orange Book template for writing a textbook for my students.
In Italy it is strongly recommended to insert the titles of all paragraphs in two languages: Italian and English. For the moment I have used this code, which works:
\section{Introduzione}
\textit{\textbf{\Large Overview}}
with this result:
The question is this: I would like to create a new command like \section{text it}{text en} and obtain the same results.
This is for \section and \subsection and - of course - only the Italian title must appear in the index.
Thanks to all!
I don't really know where the problem is. The following code gives me the result shown in the image.
\newcommand{\itensec}[2]{\section{#1}{\noindent\large\bfseries\sffamily\textit{#2}\medbreak}}
\newcommand{\itensubsec}[2]{\subsection{#1}{\noindent\bfseries\sffamily\textit{#2}\smallbreak}}
\itensec{Vombatidi}{Wombats}
\blindtext
\itensubsec{Carpincho}{Capybaras}
\blindtext
Note: The spacing after the section title has been amended in structure.tex.
The solution presented in the other answer works as well.
• +1 Thank you for your time. I'll try to insert your code into the template, which I have heavily customized. – Giacomo Alessandroni Jul 24 '17 at 7:13
• Ok. Your code works fine without any kind of errors. Thank you again. – Giacomo Alessandroni Jul 24 '17 at 8:03
• @Giacomo So you are not using the template anymore. In that case you have to present more information to get a reliable answer. – Johannes_B Jul 24 '17 at 9:47
• Yes and no. I am using the template, but I have translated it into Italian (an easy operation), added some other environments and modified others. Eventually, I added several packages that I use to generate, e.g., Karnaugh maps, and removed others that generated some conflicts. But your code works very well. The template is the same, now in Italian and with a strong orientation towards technical books. – Giacomo Alessandroni Jul 24 '17 at 10:00
• @Giacomo If you replace the motor of your car with something different and ask people on the internet how to fix something, they assume the car is unchanged. I of course get your point, but you using the template is completely irrelevant here. The other answer works as well with the template, but seemingly not with your modifications. – Johannes_B Jul 24 '17 at 11:19
\documentclass{book}
\let\oldsection\section
\renewcommand{\section}[2]{\oldsection[#1]{#1\\ \textit{#2}}}
\begin{document}
\tableofcontents
\chapter{A}
\section{Italian title}{English title}
\end{document}
• Sorry, your commands are simple and pretty but do not work (I have inserted them into the structure.tex file). If I use them, I get a fatal error: the file main.tex fails at the command \input{structure}, where all the parameters, packages, environments and so on are defined. – Giacomo Alessandroni Jul 20 '17 at 10:35
• Looks like it's interfering with something. I've expanded the code into a minimal working example. Try using that and copy all packages and macros you're using into that example to see where it starts to fail. – gablin Jul 20 '17 at 14:11
• +1 I have tried to put the code into main.tex as well, but got different errors. I see that your code works by itself, but I need something that works with The Legrand Orange Book template. – Giacomo Alessandroni Jul 21 '17 at 15:55
# Poker deck class /w generator function and list comprehensions
There are multiple aspects of the code I don't really like. [card for card in ...] looks really lame, as does [x.pop() for i in range(y)]. I'm looking forward to any recommendations.
from random import shuffle
class Deck(object):
suits = range(4)
ranks = range(13)
@classmethod
def generator(cls, suits, ranks):
for suit in suits:
for rank in ranks:
yield({'suit': suit, 'rank': rank})
def __init__(self):
self.cards = [card for card in Deck.generator(Deck.suits, Deck.ranks)]
shuffle(self.cards)
def deal(self, amount):
return [self.cards.pop() for i in range(amount)]
print Deck().deal(5)
[card for card in Deck.generator(…)] could be written as list(Deck.generator(…)).
Better yet, use itertools.product():
self.cards = [
{'suit': suit, 'rank': rank}
for suit, rank in itertools.product(xrange(4), xrange(13))
]
In Python 2, you should be using xrange() rather than range().
It's probably worth defining a Card class. At some point, you'll want to have suits that are named rather than numbered, and ranks A, J, Q, K rather than 0..12.
To deal multiple cards, you can slice the list instead of popping one card at a time:
def deal(self, n):
hand = self.cards[-1 : -n-1 : -1]
self.cards = self.cards[: -n]
return hand
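Putting these suggestions together, a Python 3 sketch might look like this (the Card class here is a minimal stand-in, since the asker's own version, with its comparison methods, was omitted from the question):

```python
import itertools
from random import shuffle

class Card(object):
    """Minimal stand-in; the asker's real class also defines comparisons."""
    def __init__(self, suit, rank):
        self.suit = suit
        self.rank = rank

    def __repr__(self):
        return 'Card(suit=%r, rank=%r)' % (self.suit, self.rank)

class Deck(object):
    suits = range(4)
    ranks = range(13)

    def __init__(self):
        self.cards = [Card(s, r)
                      for s, r in itertools.product(self.suits, self.ranks)]
        shuffle(self.cards)

    def deal(self, n):
        # Reversed slice: same cards, in the same order, as popping n times.
        hand = self.cards[-1:-n - 1:-1]
        del self.cards[len(self.cards) - n:]  # avoids the [:-0] pitfall at n == 0
        return hand

deck = Deck()
hand = deck.deal(5)
print(len(hand), len(deck.cards))  # -> 5 47
```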
• 1, itertools.product() and the proposed list comprehension looks great, thank you for that. 2, I will look into that xrange() topic - I already saw it, but did not pay attention yet. 3, I do have a Card class with overridden __lt__, __gt__, __eq__ functions, but I did not want to clutter the question with it. 4, I decided not to use list slicing for the sake of simplicity, I will reconsider. Thank you :) – Lorinc Nyitrai Aug 7 '15 at 11:03
• about the xrange() vs. range() subject: stackoverflow.com/a/97530/1486768 – Lorinc Nyitrai Aug 7 '15 at 12:13
• Unless your lists are going in the millions of items, which they clearly are not here, I'd stick with range, mostly for portability. I similarly don't use any of the iterxxx methods of Python 2 dicts for the exact same reason. – Jaime Aug 7 '15 at 13:16
• I can't resist noting that although the slicing is probably more efficient, it sort of violates the poker rule that cards must be dealt off the top of the deck, one at a time. The .pop() route is arguably the best simulation of this poker rule. You don't want any virtual card sharks accusing you of cheating... :-) – Curt F. Aug 8 '15 at 0:41
• @CurtF. The way I wrote it, the slice is reversed, so the result should be the same. – 200_success Aug 8 '15 at 1:19
# Formula for a square
Gold Member
[(x3-x2)+(x2-x1)]*[(x3+x2)+(x2-x1)]
the book expands it to:
(x3-x2)^2+2(x3-x2)*(x2-x1)+(x2-x1)^2
I didn't get it, so can someone please help me with this? I think there is a mistake in the book.
Last edited by a moderator:
## Answers and Replies
KLscilevothma
Originally posted by loop quantum gravity
[(x3-x2)+(x2-x1)]*[(x3+x2)+(x2-x1)]
the book expands it to:
(x3-x2)^2+2(x3-x2)*(x2-x1)+(x2-x1)^2
[(x3-x2)+(x2-x1)]*[(x3+x2)+(x2-x1)] does not equal (x3-x2)^2+2(x3-x2)*(x2-x1)+(x2-x1)^2
However,
[(x3-x2)+(x2-x1)]*[(x3-x2)+(x2-x1)] = (x3-x2)^2+2(x3-x2)*(x2-x1)+(x2-x1)^2
Gold Member
Originally posted by KL Kam
[(x3-x2)+(x2-x1)]*[(x3+x2)+(x2-x1)] does not equal (x3-x2)^2+2(x3-x2)*(x2-x1)+(x2-x1)^2
However,
[(x3-x2)+(x2-x1)]*[(x3-x2)+(x2-x1)] = (x3-x2)^2+2(x3-x2)*(x2-x1)+(x2-x1)^2
Have you noticed the expressions on the right are the same?
Last edited:
KLscilevothma
Please read the expressions on the left-hand sides carefully. I changed a "+" sign to a "-" sign in the third small bracket.
Gold Member
Yes, you are right. I guess it was a typo )-:
Homework Helper
With the negative, it is simply the formula for a square:
(a+b)*(a+b) = a^2 + 2ab + b^2
with a = x3-x2 and b = x2-x1
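Incidentally, with a = x3-x2 and b = x2-x1 the sum telescopes: a+b = (x3-x2)+(x2-x1) = x3-x1. So the whole expansion collapses neatly:
(x3-x2)^2 + 2(x3-x2)*(x2-x1) + (x2-x1)^2 = (a+b)^2 = (x3-x1)^2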
Stimulsoft Reports.WinRT is a powerful tool designed for creating report layouts with just a few clicks. The package includes a collection of tools that enable you to create a report and to import reports from other applications.
The Designer tool features a flexible interface and is able to edit the report components in order to get the desired result. If you decide to import the reports you can use the wizards for opening models from Crystal Reports, Fast Reports, Report Sharp-Shooter or Telerik Reports.
## Stimulsoft Reports.WinRT Crack [Win/Mac]
Report Designer version 14.0 is the application that provides a user-friendly interface that makes it very easy to create, edit, import and export reports. You can use the report as a template or create a new one from scratch.
The tool features a wizard interface for the creation of a new report from scratch. By default, it creates a complete report that can be used as a template. It also has a robust editor for the editing of the existing ones. The interface features a toolbar for the common tasks and a panel with the built-in wizards for import and export.
Other features include:
Reporting on data contained in a model form from other applications
Editing the rendering of groups and charts
Editing the basic properties of a report
Easy navigation between reports by category, group, chart, data source or section
Reporting on data contained in a model form from other applications
The Reporting Designer tool allows you to export data from the form used to store your report and makes it possible to provide report layout directly in a data model. It is also possible to use a query to load data for the report.
Editing the rendering of groups and charts
Report Designer features various tools for the editing of the report’s common elements: groups, charts, columns, titles, footer, chart, section and report grid. For example, you can change the rendering of chart or group. Also, you can decide whether to display a chart or a table, and where to place a chart or group on the page.
The Reporting Designer supports the creation of report layouts with the use of images. To make it possible, the report designer is equipped with the ImportPicture tool, which allows you to take a picture from the user’s computer and import it in the report as an image.
Note: if you load an image from a file, you can only use either JPEG or PNG images.
Editing the basic properties of a report
The Reporting Designer provides a group of tools for the modification of basic properties of a report. By default, it displays all of the reports created by the application, but there are also options that limit the displayed reports. Some of the other properties are:
Page size
Layout method
Easy navigation between reports by category, group, chart, data source or section
The Reporting Designer is equipped with a navigation panel that allows you to quickly navigate the list of report folders. You can also use
## Stimulsoft Reports.WinRT Keygen Free
Q:
Show that $Y_1=2X_1$, $Y_2=X_2$, and $Y_3=Y_1+Y_2$ are linearly independent.
This is a section on Independent and dependent sets of random variables.
$$\begin{array}{l|l|l|l} & \text{Events} & \text{Sup Events} & \text{Residuals} \\ \hline Y_1 & X_1 & X_1 & X_1 \\ Y_2 & X_2 & X_2 & X_2 \\ Y_3 & Y_1 & Y_1+Y_2 & Y_1+Y_2 \\ \hline \end{array}$$
Let $X$ be some events.
a) Show that $Y_1=2X_1$, $Y_2=X_2$, and $Y_3=Y_1+Y_2$ are linearly independent.
b) Show that $\{X,Y_1,Y_2\}$ is a minimal space for $\{Y_1,Y_2,Y_3\}$
Thanks for help.
A:
Hint:
\begin{align*}
Y_1 &= 2X_1 \\
Y_2 &= X_2 \\
Y_3 &= Y_1 + Y_2 = 2X_1 + X_2
\end{align*}
Try and see how the following hold true:
$Y_1, Y_2, Y_3$ is a linearly independent set (in the comments above, I presume your question is: is $Y_3$ linearly dependent on $Y_1, Y_2$?)
$\{Y_1,Y_2\}$ is a minimal space for
## Stimulsoft Reports.WinRT Crack + With Keygen Free [Updated] 2022
“A powerful tool that lets you create professional looking reports in minutes with just a few clicks. The entire collection of report layouts is built on top of the new report class, and it comes with a standalone Report Designer that easily lets you customize almost any report layout.”
The book “stalked” me for a long time and I finally got the time to read it. It is actually a very good book and I really enjoy it. I understand it is not a novel, but it has many interesting facts. I was a bit surprised at the end, but I think it was well worth reading. It is also well organized and has easy-to-read facts.
I had some trouble understanding some of the information because I do not know how our system works. You are always going to find people to complain about the system but I think you are very close to the reality of the military. I would suggest that you are a good candidate for a trip to the continental US to do an overview of how the military really works. If you can find someone to go with you that might be even better.
This book can be found on many library shelves, but I would suggest trying to find it at a bookstore.
Fascinating read. I am reading it cover to cover and find it interesting. I was trying to figure out what all the machines are for and then I saw a slide show about the different types and what it is for. Very interesting. If you are interested in the internal workings of the military I would recommend you borrow this book.
I highly recommend this book. I find it very interesting and have read it in three days. I know there are many books about the military, but this one has a lot of info and it is well written and edited.
Klapstedt, Roxanne, Maureen, and Prentice. (2011). Naïve feminist subjectivities. Sociological Inquiry 81(1), 119-142.
We study women’s experiences at a large corporate computing company, using grounded theory and feminist-informed methodologies to understand how they experienced the organizational socialization of mathematics and science. Findings from interviews and ethnographic observations were analyzed using the constant comparative method. Naïve feminist subjectivity emerged as an interpretive framework that explained women’s mathematics and science socialization processes in this organization. Naïve feminist subjectivities begin with the belief that all women possess the same capacities and experience oppression in the same way, although women do experience it in different ways and degrees.
## What’s New in the Stimulsoft Reports.WinRT?
Altera Soft developed the LiteLITEPC product suite, which includes the following applications:
Product Family: LiteLITEPC
LiteLITEPC is an effort to create and launch a set of portable, lightweight applications that will enable students and other information technology users to conduct their studies anywhere. LiteLITEPC consists of five applications that provide a platform for self-paced learning:
A collection of tools that enable you to create a report and to import reports from other applications.
This is a massive package that provides a collection of tools for creating reports with just a few clicks. The Designer tool features a flexible interface and is able to edit the report components in order to get the desired result. If you decide to import the reports you can use the wizards for opening models from Crystal Reports, Fast Reports, Report Sharp-Shooter or Telerik Reports.
The Designer tool includes the following features:
The IDE Designer is a tool that allows you to design Microsoft reports. The tool provides a very easy way to create a report layout.
The IDE is a standard report design tool. It has several predefined report layouts, ranging from a classical bar chart to a radial chart, a pie chart and many others.
The IDE has a friendly user interface. The most important features are:
The exact design of any report layout (as in classic reports)
An easy-to-use wizard to help with report design
Detailed analysis of code in the background
The ability to edit the report layout by dragging and dropping elements directly onto it
New! LiteLITEPC is equipped with a new way of working with applications. It is based on “plugins” which enable you to create the reports needed for the applications you are currently using. There are different types of plugins:
– Template plugins: The plugins that create and store a part of the report layout.
– Report plugins: The plugins that allow you to draw a report from an existing report or from another application.
– Document plugins: The plugins that allow you to open another application. For this purpose, LiteLITEPC creates a system plugin that uses the OpenConnection method from the CRDocument class.
New! The LiteLITEPC SDK is now available. It allows developers to work with LiteLITEPC from any programming language, providing a set of classes, methods and objects for interacting with LiteLITEPC directly.
## System Requirements:
OS: Windows XP, Vista or 7
Processor: 2.6GHz
Memory: 2 GB RAM
DirectX: Version 9.0
|
|
# The ratio of aluminium and iron oxide in Thermit welding is
This question was previously asked in
JKSSB JE ME 2015 Official Paper
1. 1.5 : 1
2. 2 : 1
3. 2.5 : 1
4. 3 : 1
Option 4 : 3 : 1
## Detailed Solution
Explanation:
Thermit Welding:
It is a welding process that uses heat generated by an exothermic chemical reaction between the components of the thermit (a mixture of metal oxide and aluminium powder). In this process, fine aluminium particles and metal oxide are mixed and then ignited by an external heat source.
The reaction will proceed according to the following equation:
Metal Oxide + Aluminum → Aluminum Oxide + Metal + Heat
Thermit welding is mainly used for joining steel parts: for the repair of steel castings and forgings, for joining railroad rails, steel wires and steel pipes, and for joining large cast and forged parts. For this purpose, aluminium is mixed with the iron oxide in the ratio of 1 : 3 by weight.
$$3Fe_3{O_4} + 8Al \to 9Fe + 4A{l_2}{O_3} + \left( {Heat} \right)$$
The thermit reaction produces three products:
• Iron - Used as filler rod
• Al2O3 - Used as a slag
• Heat - Used for the melting of the parent material
Aluminium is mixed with iron oxide in the ratio of 1 : 3 by weight, i.e. iron oxide to aluminium is roughly 3 : 1. By moles the ratio is Al : Fe3O4 = 8 : 3; converting to mass (8 × 27 g of aluminium against 3 × 232 g of iron oxide) gives a weight ratio of iron oxide to aluminium of about 3.2 : 1, which is approximated as 3 : 1.
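The weight ratio can be checked directly from approximate molar masses (Fe ≈ 55.85, O ≈ 16.00, Al ≈ 26.98 g/mol); the variable names below are just for illustration:

```python
# Weight ratio of iron oxide to aluminium in the thermit mixture,
# from the reaction 3 Fe3O4 + 8 Al -> 9 Fe + 4 Al2O3.
M_Fe, M_O, M_Al = 55.85, 16.00, 26.98  # g/mol, rounded

M_Fe3O4 = 3 * M_Fe + 4 * M_O           # molar mass of Fe3O4
mass_oxide = 3 * M_Fe3O4               # mass of 3 mol of Fe3O4
mass_al = 8 * M_Al                     # mass of 8 mol of Al

ratio = mass_oxide / mass_al           # iron oxide : aluminium by weight
print(round(ratio, 2))                 # roughly 3.2, i.e. about 3 : 1
```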
|
|
## 7.12 Representable sheaves
Let $\mathcal{C}$ be a category. The canonical topology is the finest topology such that all representable presheaves are sheaves (it is formally defined in Definition 7.47.12 but we will not need this). This topology is not always the topology associated to the structure of a site on $\mathcal{C}$. We will give a collection of coverings that generates this topology in case $\mathcal{C}$ has fibered products. First we give the following general definition.
Definition 7.12.1. Let $\mathcal{C}$ be a category. We say that a family $\{ U_ i \to U\} _{i \in I}$ is an effective epimorphism if all the morphisms $U_ i \to U$ are representable (see Categories, Definition 4.6.4), and for any $X\in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ the sequence
$\xymatrix{ \mathop{Mor}\nolimits _\mathcal {C}(U, X) \ar[r] & \prod \nolimits _{i \in I} \mathop{Mor}\nolimits _\mathcal {C}(U_ i, X) \ar@<1ex>[r] \ar@<-1ex>[r] & \prod \nolimits _{(i, j) \in I^2} \mathop{Mor}\nolimits _\mathcal {C}(U_ i \times _ U U_ j, X) }$
is an equalizer diagram. We say that a family $\{ U_ i \to U\}$ is a universal effective epimorphism if for any morphism $V \to U$ the base change $\{ U_ i \times _ U V \to V\}$ is an effective epimorphism.
The class of families which are universal effective epimorphisms satisfies the axioms of Definition 7.6.2. If $\mathcal{C}$ has fibre products, then the associated topology is the canonical topology. (In this case, to get a site argue as in Sets, Lemma 3.11.1.)
Conversely, suppose that $\mathcal{C}$ is a site such that all representable presheaves are sheaves. Then clearly, all coverings are universal effective epimorphisms. Thus the following definition is the “correct” one in the setting of sites.
Definition 7.12.2. We say that the topology on a site $\mathcal{C}$ is weaker than the canonical topology, or that the topology is subcanonical if all the coverings of $\mathcal{C}$ are universal effective epimorphisms.
A representable sheaf is a representable presheaf which is also a sheaf. Since it is perhaps better to avoid this terminology when the topology is not subcanonical, we only define it formally in that case.
Definition 7.12.3. Let $\mathcal{C}$ be a site whose topology is subcanonical. The Yoneda embedding $h$ (see Categories, Section 4.3) presents $\mathcal{C}$ as a full subcategory of the category of sheaves of $\mathcal{C}$. In this case we call sheaves of the form $h_ U$ with $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ representable sheaves on $\mathcal{C}$. Notation: Sometimes, the representable sheaf $h_ U$ associated to $U$ is denoted $\underline{U}$.
Note that we have in the situation of the definition
$\mathop{Mor}\nolimits _{\mathop{\mathit{Sh}}\nolimits (\mathcal{C})}(h_ U, \mathcal{F}) = \mathcal{F}(U)$
for every sheaf $\mathcal{F}$, since it holds for presheaves, see (7.2.1.1). In general the presheaves $h_ U$ are not sheaves and to get a sheaf you have to sheafify them. In this case we still have
7.12.3.1
$$\label{sites-equation-map-representable-into-sheaf} \mathop{Mor}\nolimits _{\mathop{\mathit{Sh}}\nolimits (\mathcal{C})}(h_ U^\# , \mathcal{F}) = \mathop{Mor}\nolimits _{\textit{PSh}(\mathcal{C})}(h_ U, \mathcal{F}) = \mathcal{F}(U)$$
for every sheaf $\mathcal{F}$. Namely, the first equality holds by the adjointness property of $\#$ and the second is (7.2.1.1).
Lemma 7.12.4. Let $\mathcal{C}$ be a site. If $\{ U_ i \to U\} _{i \in I}$ is a covering of the site $\mathcal{C}$, then the morphism of presheaves of sets
$\coprod \nolimits _{i \in I} h_{U_ i} \to h_ U$
becomes surjective after sheafification.
Proof. By Lemma 7.11.2 above we have to show that $\coprod \nolimits _{i \in I} h_{U_ i}^\# \to h_ U^\#$ is an epimorphism. Let $\mathcal{F}$ be a sheaf of sets. A morphism $h_ U^\# \to \mathcal{F}$ corresponds to a section $s \in \mathcal{F}(U)$. Hence the injectivity of $\mathop{Mor}\nolimits (h_ U^\# , \mathcal{F}) \to \prod _ i \mathop{Mor}\nolimits (h_{U_ i}^\# , \mathcal{F})$ follows directly from the sheaf property of $\mathcal{F}$. $\square$
The next lemma says that, in the case the topology is weaker than the canonical topology, every sheaf is in a sense built up out of representable sheaves.
Lemma 7.12.5. Let $\mathcal{C}$ be a site. Let $E \subset \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$ be a subset such that every object of $\mathcal{C}$ has a covering by elements of $E$. Let $\mathcal{F}$ be a sheaf of sets. There exists a diagram of sheaves of sets
$\xymatrix{ \mathcal{F}_1 \ar@<1ex>[r] \ar@<-1ex>[r] & \mathcal{F}_0 \ar[r] & \mathcal{F} }$
which represents $\mathcal{F}$ as a coequalizer, such that $\mathcal{F}_ i$, $i = 0, 1$ are coproducts of sheaves of the form $h_ U^\#$ with $U \in E$.
Proof. First we show there is an epimorphism $\mathcal{F}_0 \to \mathcal{F}$ of the desired type. Namely, just take
$\mathcal{F}_0 = \coprod \nolimits _{U \in E, s \in \mathcal{F}(U)} (h_ U)^\# \longrightarrow \mathcal{F}$
Here the arrow restricted to the component corresponding to $(U, s)$ maps the element $\text{id}_ U \in h_ U^\# (U)$ to the section $s \in \mathcal{F}(U)$. This is an epimorphism according to Lemma 7.11.2 and our condition on $E$. To construct $\mathcal{F}_1$ first set $\mathcal{G} = \mathcal{F}_0 \times _\mathcal {F} \mathcal{F}_0$ and then construct an epimorphism $\mathcal{F}_1 \to \mathcal{G}$ as above. See Lemma 7.11.3. $\square$
|
|
Nehe arcball error?
Recommended Posts
I've recently been looking into quaternions and how they're used with respect to rotations. As part of this I've been looking at the arcball implementation shared on the Nehe site.
There's one part of the impl. I just can't seem to understand and I wanted to make sure there wasn't an error in the implementation. Funny thing is that it works, obviously :)
The part where the rotation axis and angle are converted into a quaternion:
```cpp
// We're ok, so return the perpendicular vector as the transform after all
NewRot->s.X = Perp.s.X;
NewRot->s.Y = Perp.s.Y;
NewRot->s.Z = Perp.s.Z;
// In the quaternion values, w is cosine (theta / 2), where theta is rotation angle
NewRot->s.W = Vector3fDot(&this->StVec, &this->EnVec);
```
Now this part really confuses me. Shouldn't the axis / angle be converted using sine/cos as you would normally create a rotation quat from an rot axis / angle?
Hope someone can help me shed some light on this.
Share on other sites
There are algorithms that build a unit-length rotation quaternion 'indirectly', where the resulting terms of the quaternion end up being equivalent to the 'axis*sin(theta/2), cos(theta/2)' that we usually see. I'd imagine that's what's going on here (although without seeing the code in context I can't say for sure).
Share on other sites
First of all thanks Jyk ;)
The axis is simply the crossproduct (rotation axis) between two unitlength vectors you want to rotate from / to, and the angle is as you can tell the angle between said vectors.
Share on other sites
Quote:
Original post by elurahuThe axis is simply the crossproduct (rotation axis) between two unitlength vectors you want to rotate from / to, and the angle is as you can tell the angle between said vectors.
That doesn't really cast any extra light on the code you posted. (Again, it would be easier to comment on that code if it were presented in context.)
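For reference, the 'indirect' construction can be checked numerically. For unit vectors, |v1×v2|² + (v1·v2)² = sin²θ + cos²θ = 1, so the quaternion (v1×v2, v1·v2) is already unit length; but its components are axis·sin(θ) and cos(θ) rather than the half-angle forms, so it represents a rotation by 2θ (whether that doubling is intended in the NeHe code is impossible to say without the full context). A small check in plain Python:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def quat_from_vectors(v1, v2):
    # NeHe-style: vector part = v1 x v2, scalar part = v1 . v2
    x, y, z = cross(v1, v2)
    return (x, y, z, dot(v1, v2))

def quat_mul(p, q):
    px, py, pz, pw = p
    qx, qy, qz, qw = q
    return (pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
            pw*qw - px*qx - py*qy - pz*qz)

def rotate(q, v):
    # q * v * conj(q) for a unit quaternion q
    qc = (-q[0], -q[1], -q[2], q[3])
    return quat_mul(quat_mul(q, (v[0], v[1], v[2], 0.0)), qc)[:3]

# Two unit vectors 30 degrees apart in the xy-plane.
theta = math.radians(30)
v1 = (1.0, 0.0, 0.0)
v2 = (math.cos(theta), math.sin(theta), 0.0)

q = quat_from_vectors(v1, v2)
r = rotate(q, v1)
# r is v1 rotated by 2*theta (60 degrees), not theta.
expected = (math.cos(2*theta), math.sin(2*theta), 0.0)
```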
|
|
# Proof Contest Day 2 (Follow Up problem)
The previous problem dealt with proving that the sum of heights remains constant. However, in this follow-up problem we will find the exact value of the sum of heights for an even-sided polygon.
PROBLEM:
Let $$P$$ be any point in the interior of a regular polygon of $$2n$$ sides. Perpendiculars $$PA_1 \ , \ PA_2 \ , \ PA_3 \ , \ \dots \ , \ PA_{2n}$$ are drawn to the sides of the polygon. Show that: $$\displaystyle\sum_{i=1}^{2n} PA_{i} = 2nr$$ where $$r$$ is the radius of inscribed circle of polygon.
Also show that $$\displaystyle\sum_{i=1}^{n} PA_{2i-1} =\sum_{j=1}^{n} PA_{2j} =nr$$
###### This problem is not original
Note by Nihar Mahajan
2 years, 1 month ago
The first one can be proved pretty easily using the fact that the radius of the inscribed circle is the height of each triangle. The second one can be proved using the fact that, since the polygon has an even number of sides, the two perpendiculars $$PA_{i}, PA_{i+n}$$ lie on a straight line. From here we easily get that $PA_{i}+PA_{i+n}=PA_{k}+PA_{k+n}$; now just put the values $$1,3,5,\dots,(n-1)$$ for i and the rest for k, then add to get the result. I am writing this on a phone so I can't explain very clearly.
- 2 years, 1 month ago
Exactly! Nice use of the fact that since the polygon has an even number of sides the two perpendiculars, $$PA_i , PA_{i+n}$$ form a straight line.
- 2 years, 1 month ago
We know that the sum of the distances of the perpendiculars from the interior point is always constant. (As proved earlier). So we know that $PA_1+...+PA_{2n}=k$ a constant.
Now the incentre of the polygon is also such a point so the sum of the perpendiculars from it will be the sum of $$2n$$ radii. So we get that $PA_1+PA_2+...+PA_{2n}=2nr=k$
- 2 years, 1 month ago
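Both identities can be sanity-checked numerically (this is a check, not a proof). Place the 2n sides on the lines n_i · x = r, where n_i is the outward unit normal of side i; the distance from an interior point p to side i is then r − n_i · p. The function name below is made up for this sketch:

```python
import math

def perpendicular_sums(n, r=1.0, p=(0.1, 0.05)):
    """Distances from interior point p to the 2n sides of a regular
    2n-gon with inradius r; side i lies on the line n_i . x = r,
    where n_i is the outward unit normal of side i."""
    dists = []
    for i in range(2 * n):
        phi = 2 * math.pi * i / (2 * n)
        nx, ny = math.cos(phi), math.sin(phi)
        dists.append(r - (nx * p[0] + ny * p[1]))
    return dists

n, r = 5, 1.0
d = perpendicular_sums(n, r)
total = sum(d)        # should be 2*n*r
odd = sum(d[0::2])    # PA_1 + PA_3 + ... should be n*r
even = sum(d[1::2])   # PA_2 + PA_4 + ... should be n*r
```

The sums come out exact because the outward normals (and each alternate subset of them) sum to the zero vector.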
Comment deleted Jan 08, 2016
What do you mean?
- 2 years, 1 month ago
Oh wait , I got your point. Can you think of some other way to prove it?
- 2 years, 1 month ago
Is there an even simpler one? I believe my method wasn't the solution you had in mind?
- 2 years, 1 month ago
I had solved this question about a year ago. I don't have a solution ready, but I know the basic theme.
- 2 years, 1 month ago
Try proving $$\displaystyle\sum_{i=1}^{n} PA_{2i-1} =\sum_{j=1}^{n} PA_{2j} =nr$$ .
- 2 years, 1 month ago
Comment deleted Jan 08, 2016
See the edit. You misunderstood my solution.
- 2 years, 1 month ago
Can we directly prove the last thing?
- 2 years, 1 month ago
Yes. Then the main problem becomes obvious from there.
- 2 years, 1 month ago
|
|
# Math and statistics » Trigonometric operators module

#include "diplib/math.h"
• Reference
## Functions
void dip::Atan2(dip::Image const& y, dip::Image const& x, dip::Image& out)
Computes the four-quadrant arc tangent of y/x.
void dip::Hypot(dip::Image const& a, dip::Image const& b, dip::Image& out)
Computes the square root of the sum of the squares of corresponding samples in a and b.
## Function documentation
### void dip::Atan2(dip::Image const& y, dip::Image const& x, dip::Image& out)
Computes the four-quadrant arc tangent of y/x.
The operation can be understood as the angle of the vector formed by the two input images. The result is always in the range [-π, π]. The inputs must be a real type.
### void dip::Hypot(dip::Image const& a, dip::Image const& b, dip::Image& out)
Computes the square root of the sum of the squares of corresponding samples in a and b.
The computation is performed carefully, so there is no undue overflow or underflow at intermediate stages of the computation. The inputs must be a real type.
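The per-sample behaviour corresponds to the standard atan2 and hypot functions. A rough Python sketch of what each operation computes for corresponding samples (DIPlib itself applies this over whole images; here plain lists stand in, and the helper names are made up):

```python
import math

def atan2_samples(ys, xs):
    # Four-quadrant arc tangent, one sample at a time.
    return [math.atan2(y, x) for y, x in zip(ys, xs)]

def hypot_samples(av, bv):
    # sqrt(a*a + b*b), computed without undue overflow or underflow.
    return [math.hypot(a, b) for a, b in zip(av, bv)]

angles = atan2_samples([1.0, -1.0], [1.0, -1.0])   # pi/4 and -3*pi/4
mags = hypot_samples([3.0], [4.0])                 # [5.0]
```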
|
|
# the ratio of the circumference of a circle to its diameter (Q10152)
No description defined
Language Label Description Also known as
English
the ratio of the circumference of a circle to its diameter
## Statements
${\displaystyle{\displaystyle\pi}}$
0 references
DLMF:3.12.E1
0 references
|
|
# F# Math (IV.) - Writing generic numeric code
Generic numeric code is some calculation that can be used for working with multiple different numeric types including types such as int, decimal and float or even our own numeric types (such as the type for clock arithmetic from the previous article of the series). Generic numeric code differs from ordinary generic F# code such as the 'a list type or List.map function, because numeric code uses numeric operators such as + or >= that are defined differently for each numeric type.
When writing simple generic code that has some type parameter 'T, we don’t know anything about the type parameter and there is no way to restrict it to a numeric type that provides all the operators that we may need to use in our code. This is a limitation of the .NET runtime and F# provides two ways for overcoming it.
• Static member constraints can be used to write generic code where the actual numeric operations are resolved at compile-time (and a generic function is specialized for all required numeric types). This approach makes resulting code very efficient and is generally very easy to use when writing a function such as List.sum.
• Global numeric associations (available in F# PowerPack) give us a way to obtain an interface implementing required numeric operations dynamically at runtime. This approach has some runtime overhead, but can be used for complex numeric types (such as Matrix<'T>).
• Combination of both techniques can be used to implement complex numeric type that is generic over the contained numeric values and has only a minimal runtime overhead.
Static member constraints are a unique feature of F# that is not available in other .NET languages, so if you're interested in writing numeric code for .NET, this may be a good reason for choosing F#. In C# or Visual Basic, you would be limited to the second option (which can be implemented in C#). In dynamic languages (like IronPython), everything is dynamic, so numeric computations can work with any numeric type, but will be significantly less efficient. In the rest of the article, we look at the three options summarized above.
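For comparison, here is what the fully dynamic end of that spectrum looks like in Python, where operators are resolved at runtime by duck typing: one sum function works for ints, floats and rationals without any constraints, at the cost of per-call dispatch (the function name is made up for this sketch):

```python
from fractions import Fraction

def generic_sum(xs, zero=0):
    # '+' is dispatched at runtime, so any type with __add__ works.
    total = zero
    for x in xs:
        total = total + x
    return total

print(generic_sum([1, 2, 3]))                          # 6
print(generic_sum([1.5, 2.5]))                         # 4.0
print(generic_sum([Fraction(1, 3), Fraction(1, 6)]))   # 1/2
```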
This article is a part of a series that covers some F# and F# PowerPack features for numerical computing. Other articles in this series discuss matrices, defining custom numeric types and writing generic code. For links to other parts, see F# Math - Overview of F# PowerPack.
Published: Sunday, 27 November 2011, 5:19 PM
Tags: c#, functional, f#, math and numerics
# F# Math (III.) - Defining custom numeric types
In this article, we define an F# numeric type for calculating in the modular arithmetic (also called clock arithmetic) [1]. Modular arithmetic is used for calculations where we want to keep a value within a specified range by counting in cycles. For example, a maximal value on clock is 12 hours. When we add 11 hours and 3 hours, the value overflows and the result is 2 hours. Aside from clocks, this numeric system is also essential in cryptography or, for example, in music.
This tutorial shows several techniques that are essential when defining any new numeric type in F#. Most importantly, you’ll learn how to:
• Define a numeric type with overloaded operators
• Define a numeric literal for constructing numbers of our new type
• Enable calculating with our type in F# lists and matrices
• Hide implementation details of a numeric type
We define type IntegerZ5 that implements modular arithmetic with modulus 5, meaning that valid values are in the range from 0 to 4 and we equip the type with operations such as addition and multiplication. When an operation produces a value that would be outside of the range, we adjust it by adding or subtracting the modulus (in our case 5). Here are some examples of calculations that we’ll be able to write:
2 + 1 = 3 (mod 5)
4 * 2 = 3 (mod 5)
List.sum [ 0; 1; 2; 3 ] = 1 (mod 5)
In the first case, we can perform the operation without any adjustments. In the second case, we multiply 4 by 2 and get 8 as the result, which is out of the required range. To correct it, we calculate the remainder after a division by 5 (written as 8 % 5 in F#), which gives us 3. Finally, the last example shows that we’d also like to be able to use our type with lists. If we add values 0, 1, 2 and 3, we get 6 which is adjusted to 1.
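The same modular rules can be sketched in Python for comparison (the article's IntegerZ5 is an F# type; this class of the same name is just an illustration of the arithmetic):

```python
class IntegerZ5:
    """Integers modulo 5: values are kept in the range 0..4."""
    def __init__(self, value):
        self.value = value % 5

    def __add__(self, other):
        return IntegerZ5(self.value + other.value)

    def __mul__(self, other):
        return IntegerZ5(self.value * other.value)

    def __eq__(self, other):
        return self.value == other.value

    def __repr__(self):
        return f"{self.value} (mod 5)"

two, one, four = IntegerZ5(2), IntegerZ5(1), IntegerZ5(4)
print(two + one)    # 3 (mod 5)
print(four * two)   # 3 (mod 5)

# Summing a list works because __add__ keeps adjusting the result.
total = sum([IntegerZ5(i) for i in range(4)], IntegerZ5(0))
print(total)        # 1 (mod 5)
```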
This article is a part of a series that covers some F# and F# PowerPack features for numerical computing. Other articles in this series discuss matrices, defining custom numeric types and writing generic code. For links to other parts, see F# Math - Overview of F# PowerPack.
Published: Thursday, 24 November 2011, 7:21 PM
Tags: functional, f#, math and numerics
# F# Math (II.) - Using matrices for graph algorithms
In the previous article of this series, we looked at complex and BigRational, which are two numeric types that are available in F# PowerPack. Aside from these two, the PowerPack library also contains a type matrix representing a two-dimensional matrix of floating-point values.
In this article, you'll learn how to work with matrices in F#, using some of the functions provided by F# PowerPack. I'll demonstrate the library using an example that represents graphs using a so-called adjacency matrix. If you're not familiar with this concept, you don't need to worry. It is quite simple and it will be clear once we look at an example. The matrix represents which vertices of a graph are connected with other vertices by an edge. Many of the standard operations on matrices are useful when working with an adjacency matrix, so this tutorial will cover the following:
• Creating matrices from lists and using functions from the Matrix module
• Using slices to read or modify a part of matrix
• Performing standard operations with matrices such as transposition and matrix multiplication
• Using higher order functions for working with matrices
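As a taste of the adjacency-matrix idea (independent of F# PowerPack), here is a minimal Python version: entry (i, j) is 1 when there is an edge from vertex i to vertex j, and squaring the matrix counts paths of length two:

```python
def mat_mul(a, b):
    # Naive matrix product for square matrices given as lists of rows.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of a directed triangle: 0 -> 1 -> 2 -> 0.
adj = [
    [0, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
]

adj2 = mat_mul(adj, adj)
# adj2[i][j] counts paths of length 2 from i to j;
# e.g. 0 -> 1 -> 2 is the single length-2 path from 0 to 2.
print(adj2[0][2])   # 1
```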
This article is a part of a series that covers some F# and F# PowerPack features for numerical computing. Other articles in this series discuss matrices, defining custom numeric types and writing generic code. For links to other parts, see F# Math - Overview of F# PowerPack.
Published: Wednesday, 9 November 2011, 1:46 AM
Tags: functional, f#, math and numerics
# F# Math (I.) - Numeric types in PowerPack
In this article, we'll briefly look at two numeric types that are available in F# PowerPack. The type complex represents complex numbers consisting of real and imaginary parts. Both parts are stored as floating-point numbers. The type BigRational represents rational numbers consisting of a numerator and denominator of arbitrary sizes. Integers of arbitrary size are represented using the BigInteger type that is available in .NET 4.0 (in the System.Numerics.dll assembly). On .NET 2.0, the BigInteger type is also a part of F# PowerPack.
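Python happens to have close built-in analogues of both types (complex and fractions.Fraction), which is a handy way to see the semantics the PowerPack types provide:

```python
from fractions import Fraction

# Complex numbers: real and imaginary parts stored as floats.
c = complex(3.0, 4.0)
print(abs(c))                    # 5.0

# Rational numbers: numerator and denominator of arbitrary size,
# always kept in lowest terms.
third = Fraction(1, 3)
print(third + Fraction(1, 6))    # 1/2
print(Fraction(10**30, 4))       # arbitrary-size integers are fine
```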
This article is a part of a series that covers some F# and F# PowerPack features for numerical computing. Other articles in this series discuss matrices, defining custom numeric types and writing generic code. For links to other parts, see F# Math - Overview of F# PowerPack.
Published: Wednesday, 2 November 2011, 2:34 AM
Tags: c#, functional, f#, math and numerics
# F# Math - Numerical computing and F# PowerPack
This article is the first article of a series where I'll explain some of the F# features that are useful for numeric computing as well as some functionality from the F# PowerPack library. Most of the content was originally written for the Numerical Computing in F# chapter on MSDN (that I announced earlier), but then we decided to focus on using F# with third party libraries that provide more efficient implementation and richer set of standard numeric functionality that's needed when implementing machine learning and probabilistic algorithms or performing statistical analysis. If you're interested in these topics, then the last section (below) gives links to the important MSDN articles.
However, F# PowerPack still contains some useful functionality. It includes two additional numeric types and an implementation of matrix that integrates nicely with F#. The series also demonstrates how to use features of the F# language (and core libraries) to write numeric code elegantly. In particular, we'll use the following aspects:
• Overloaded operators. Any type can provide overloaded versions of standard numeric operators such as +, -, * and / as well as other non-standard operators (such as .*). As a result, libraries can implement their own numeric types which are indistinguishable from built-in types such as int.
• Numeric literals. F# math libraries can enable using new numeric literals in the code. For example, you can write a BigRational value representing one third as 1N/3N. The N suffix used in the notation is not hardcoded in the F# language and we'll see how to define similar numeric type.
• Static constraints. F# supports static member constraints, which can be used for writing functions that work with any numeric type. For example, the List.sum function uses this feature. It can sum elements of any list containing numbers.
These are just a few of the F# language features that are useful when writing numeric code, but there are many others. The usual F# development style using interactive tools, type safety that prevents common errors, units of measure, as well as the expressivity of F# make it a great tool for writing numeric code. For more information, take a look at the MSDN overview article Writing Succinct and Correct Numerical Computations with F#.
Published: Wednesday, 2 November 2011, 2:30 AM
Tags: functional, f#, writing, math and numerics
|
|
# 11.5E: Exercises for Equations of Lines and Planes in Space - Mathematics
In exercises 1 - 4, points \(P\) and \(Q\) are given. Let \(L\) be the line passing through points \(P\) and \(Q\).
a. Find the vector equation of line \(L\).
b. Find parametric equations of line \(L\).
c. Find symmetric equations of line \(L\).
d. Find parametric equations of the line segment determined by \(P\) and \(Q\).
1) \(P(−3,5,9),\ Q(4,−7,2)\)
a. \(\vecs{r}=⟨−3,5,9⟩+t⟨7,−12,−7⟩,\ t∈\mathbb{R}\);
b. \(x=−3+7t,\ y=5−12t,\ z=9−7t,\ t∈\mathbb{R}\);
c. \(\frac{x+3}{7}=\frac{y−5}{−12}=\frac{z−9}{−7}\);
d. \(x=−3+7t,\ y=5−12t,\ z=9−7t,\ 0 \le t \le 1\)
2) \(P(4,0,5),\ Q(2,3,1)\)
3) \(P(−1,0,5),\ Q(4,0,3)\)
a. \(\vecs{r}=⟨−1,0,5⟩+t⟨5,0,−2⟩,\ t∈\mathbb{R}\);
b. \(x=−1+5t,\ y=0,\ z=5−2t,\ t∈\mathbb{R}\);
c. \(\frac{x+1}{5}=\frac{z−5}{−2},\ y=0\);
d. \(x=−1+5t,\ y=0,\ z=5−2t,\ t∈[0,1]\)
4) \(P(7,−2,6),\ Q(−3,0,6)\)
For exercises 5 - 8, point \(P\) and vector \(\vecs{v}\) are given. Let \(L\) be the line passing through point \(P\) with direction \(\vecs{v}\).
a. Find parametric equations of line \(L\).
b. Find symmetric equations of line \(L\).
c. Find the intersection of the line with the \(xy\)-plane.
5) \(P(1,−2,3),\ \vecs{v}=⟨1,2,3⟩\)
a. \(x=1+t,\ y=−2+2t,\ z=3+3t,\ t∈\mathbb{R}\);
b. \(\frac{x−1}{1}=\frac{y+2}{2}=\frac{z−3}{3}\);
c. \((0,−4,0)\)
6) \(P(3,1,5),\ \vecs{v}=⟨1,1,1⟩\)
7) \(P(3,1,5),\ \vecs{v}=\vecd{QR}\), where \(Q(2,2,3)\) and \(R(3,2,3)\)
a. \(x=3+t,\ y=1,\ z=5,\ t∈\mathbb{R}\);
b. \(y=1,\ z=5\);
c. The line does not intersect the \(xy\)-plane.
8) \(P(2,3,0),\ \vecs{v}=\vecd{QR}\), where \(Q(0,4,5)\) and \(R(0,4,6)\)
For exercises 9 and 10, line \(L\) is given.
a. Find a point \(P\) that belongs to the line and a direction vector \(\vecs{v}\) of the line. Express \(\vecs{v}\) in component form.
b. Find the distance from the origin to line \(L\).
9) \(x=1+t,\ y=3+t,\ z=5+4t,\ t∈\mathbb{R}\)
a. A possible point and direction vector are \(P(1,3,5)\) and \(\vecs{v}=⟨1,1,4⟩\), but these answers are not unique.
b. \(\sqrt{3}\) units
10) \(−x=y+1,\ z=2\)
11) Find the distance between point \(A(−3,1,1)\) and the line of symmetric equations \(x=−y=−z.\)
\(\frac{2\sqrt{2}}{\sqrt{3}} = \frac{2\sqrt{6}}{3}\) units
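The answer to exercise 11 follows from the distance formula d = ‖AP × v‖ / ‖v‖, where P is any point on the line and v its direction vector; a numeric check in Python:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(a):
    return math.sqrt(sum(x*x for x in a))

# Exercise 11: A(-3,1,1); the line x = -y = -z passes through the
# origin with direction vector v = (1,-1,-1).
A = (-3.0, 1.0, 1.0)
v = (1.0, -1.0, -1.0)
w = A  # vector from the point (0,0,0) on the line to A

d = norm(cross(w, v)) / norm(v)
print(d)   # 1.632..., which equals 2*sqrt(6)/3
```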
12) Find the distance between point \(A(4,2,5)\) and the line of parametric equations \(x=−1−t,\ y=−t,\ z=2,\ t∈\mathbb{R}.\)
For exercises 13 - 14, lines \(L_1\) and \(L_2\) are given.
a. Verify whether lines \(L_1\) and \(L_2\) are parallel.
b. If the lines \(L_1\) and \(L_2\) are parallel, then find the distance between them.
13) \(L_1: x=1+t,\ y=t,\ z=2+t,\ t∈\mathbb{R};\quad L_2: x−3=y−1=z−3\)
a. Parallel;
b. \(\frac{\sqrt{2}}{\sqrt{3}} = \frac{\sqrt{6}}{3}\) units
14) \(L_1: x=2,\ y=1,\ z=t;\quad L_2: x=1,\ y=1,\ z=2−3t,\ t∈\mathbb{R}\)
15) Show that the line passing through points \(P(3,1,0)\) and \(Q(1,4,−3)\) is perpendicular to the line with equation \(x=3t,\ y=−32+8t,\ z=−9+6t,\ t∈\mathbb{R}.\)
\(\vecd{PQ} = \langle −2, 3, −3 \rangle\) is the direction vector of the line through points \(P\) and \(Q\), and the direction vector of the line defined by the parametric equations above is \(\vecs{v} = \langle 3, 8, 6 \rangle.\)
Since \(\vecs{v} \cdot \vecd{PQ} = −6 + 24 − 18 = 0\), the two direction vectors are orthogonal.
Now all we need to show is that the two lines intersect.
The line through points \(P(3,1,0)\) and \(Q(1,4,−3)\) has parametric equations: \(x = 3 − 2u\), \(y = 1 + 3u\), and \(z = −3u\).
Setting the \(x\)- and \(z\)-coordinates of the two lines equal, we obtain the system of equations:
\[3t = 3 − 2u \quad \text{and} \quad −9 + 6t = −3u\]
Solving this system using substitution gives us \(u = −3\) and \(t = 3\). Plugging these values of \(t\) and \(u\) back into the parametric equations of these two lines gives us the intersection point with coordinates \(\left(9, −8, 9\right)\) on both lines.
Therefore the lines intersect, and the line through points \(P\) and \(Q\) with direction vector \(\vecd{PQ}\) is perpendicular to the other line.
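The solution above can be verified mechanically: plug u = −3 and t = 3 into both parametrizations and confirm they give the same point, and check the dot product of the direction vectors:

```python
def point_on_pq(u):
    # Line through P(3,1,0) and Q(1,4,-3).
    return (3 - 2*u, 1 + 3*u, -3*u)

def point_on_other(t):
    # Line x = 3t, y = -32 + 8t, z = -9 + 6t.
    return (3*t, -32 + 8*t, -9 + 6*t)

pq_dir = (-2, 3, -3)
other_dir = (3, 8, 6)
dot = sum(a*b for a, b in zip(pq_dir, other_dir))
print(dot)                 # 0, so the directions are orthogonal
print(point_on_pq(-3))     # (9, -8, 9)
print(point_on_other(3))   # (9, -8, 9)
```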
16) Are the lines of equations \(x=−2+2t,\ y=−6,\ z=2+6t\) and \(x=−1+t,\ y=1+t,\ z=t,\ t∈\mathbb{R},\) perpendicular to each other?
17) Find the point of intersection of the lines of equations \(x=−2y=3z\) and \(x=−5−t,\ y=−1+t,\ z=t−11,\ t∈\mathbb{R}.\)
\((−12,6,−4)\)
18) Find the intersection point of the \(x\)-axis with the line of parametric equations \(x=10+t,\ y=2−2t,\ z=−3+3t,\ t∈\mathbb{R}.\)
For exercises 19 - 22, lines \(L_1\) and \(L_2\) are given. Determine whether the lines are equal, parallel but not equal, skew, or intersecting.
19) \(L_1: x=y−1=−z\) and \(L_2: x−2=−y=\frac{z}{2}\)
The lines are skew.
20) \(L_1: x=2t,\ y=0,\ z=3,\ t∈\mathbb{R}\) and \(L_2: x=0,\ y=8+s,\ z=7+s,\ s∈\mathbb{R}\)
21) \(L_1: x=−1+2t,\ y=1+3t,\ z=7t,\ t∈\mathbb{R}\) and \(L_2: x−1=\frac{2}{3}(y−4)=\frac{2}{7}z−2\)
The lines are equal.
22) \(L_1: 3x=y+1=2z\) and \(L_2: x=6+2t,\ y=17+6t,\ z=9+3t,\ t∈\mathbb{R}\)
23) Consider line \(L\) of symmetric equations \(x−2=−y=\frac{z}{2}\) and point \(A(1,1,1).\)
a. Find parametric equations for a line parallel to \(L\) that passes through point \(A\).
b. Find symmetric equations of a line skew to \(L\) that passes through point \(A\).
c. Find symmetric equations of a line that intersects \(L\) and passes through point \(A\).
a. \(x=1+t,\ y=1−t,\ z=1+2t,\ t∈\mathbb{R}\)
b. For instance, the line passing through \(A\) with direction vector \(\hat{\jmath}\): \(x=1,\ z=1\)
c. For instance, the line passing through \(A\) and point \((2,0,0)\), which belongs to \(L\), is a line that intersects \(L\): \(\frac{x−1}{−1}=y−1=z−1\)
24) Consider line \(L\) of parametric equations \(x=t,\ y=2t,\ z=3,\ t∈\mathbb{R}.\)
a. Find parametric equations for a line parallel to \(L\) that passes through the origin.
b. Find parametric equations of a line skew to \(L\) that passes through the origin.
c. Find symmetric equations of a line that intersects \(L\) and passes through the origin.
For exercises 25 - 28, point ( P) and vector (vecs n) are given.
a. Find the scalar equation of the plane that passes through ( P) and has normal vector (vecs n).
b. Find the general form of the equation of the plane that passes through ( P) and has normal vector (vecs n).
25) ( P(0,0,0), vecs n=3hat{imath}−2hat{jmath}+4hat{k})
a. (3x−2y+4z=0)
b. (3x−2y+4z=0)
26) ( P(3,2,2), vecs n=2hat{imath}+3hat{jmath}−hat{k})
27) ( P(1,2,3), vecs n=⟨1,2,3⟩)
a. ((x−1)+2(y−2)+3(z−3)=0)
b. (x+2y+3z−14=0)
28) ( P(0,0,0), vecs n=⟨−3,2,−1⟩)
For exercises 29 - 32, the equation of a plane is given.
a. Find normal vector (vecs n) to the plane. Express (vecs n) using standard unit vectors.
b. Find the intersections of the plane with each of the coordinate axes (its intercepts).
c. Sketch the plane.
29) [T] ( 4x+5y+10z−20=0)
a. (vecs n=4hat{imath}+5hat{jmath}+10hat{k})
b. ((5,0,0), (0,4,0),) and ( (0,0,2))
c. (sketch not shown)
30) ( 3x+4y−12=0)
31) ( 3x−2y+4z=0)
a. (vecs n=3hat{imath}−2hat{jmath}+4hat{k})
b. ((0,0,0))
c. (sketch not shown)
32) ( x+z=0)
33) Given point ( P(1,2,3)) and vector (vecs n=hat{imath}+hat{jmath}), find point ( Q) on the (x)-axis such that ( vecd{PQ}) and (vecs n) are orthogonal.
( (3,0,0))
34) Show there is no plane perpendicular to (vecs n=hat{imath}+hat{jmath}) that passes through points ( P(1,2,3)) and ( Q(2,3,4)).
35) Find parametric equations of the line passing through point ( P(−2,1,3)) that is perpendicular to the plane of equation ( 2x−3y+z=7.)
( x=−2+2t,y=1−3t,z=3+t, t∈R)
36) Find symmetric equations of the line passing through point ( P(2,5,4)) that is perpendicular to the plane of equation ( 2x+3y−5z=0.)
37) Show that line ( frac{x−1}{2}=frac{y+1}{3}=frac{z−2}{4}) is parallel to plane ( x−2y+z=6).
38) Find the real number ( α) such that the line of parametric equations ( x=t,y=2−t,z=3+t, t∈R) is parallel to the plane of equation ( αx+5y+z−10=0.)
For exercises 39 - 42, the equations of two planes are given.
a. Determine whether the planes are parallel, orthogonal, or neither.
b. If the planes are neither parallel nor orthogonal, then find the measure of the angle between the planes. Express the answer in degrees rounded to the nearest integer.
c. If the planes intersect, find the line of intersection of the planes, providing the parametric equations of this line.
39) [T] ( x+y+z=0, 2x−y+z−7=0)
a. The planes are neither parallel nor orthogonal.
b. (62°)
c. (x = -1 + 2t)
(y = -4 + t)
(z = 5 - 3t)
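The answer above can be verified numerically: the direction of the line of intersection is the cross product of the two normal vectors, and the point ((-1, -4, 5)) should satisfy both plane equations (a sketch using numpy):

```python
import numpy as np

n1 = np.array([1.0, 1.0, 1.0])   # normal of x + y + z = 0
n2 = np.array([2.0, -1.0, 1.0])  # normal of 2x - y + z - 7 = 0

# Angle between the planes, rounded to the nearest degree:
cosang = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
print(round(float(np.degrees(np.arccos(cosang)))))  # 62

# Direction of the line of intersection:
print(np.cross(n1, n2))  # [ 2.  1. -3.]

# The point (-1, -4, 5) lies on both planes:
p = np.array([-1.0, -4.0, 5.0])
print(n1 @ p, 2 * p[0] - p[1] + p[2] - 7)  # 0.0 0.0
```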
40) ( 5x−3y+z=4, x+4y+7z=1)
41) ( x−5y−z=1, 5x−25y−5z=−3)
a. The planes are parallel.
42) [T] ( x−3y+6z=4, 5x+y−z=4)
For exercises 43 - 46, determine whether the given line intersects with the given plane. If they do intersect, state the point of intersection.
43) Plane: (2x + y - z = 11) Line: (x = 1 + t, , y = 3 - 2t, , z = 2 +4t)
They intersect at point ( (-1, 7, -6) ).
44) Plane: (-x + 2y + z = 2) Line: (x = 1 + 2t, , y = -2 + t, , z = 5 - 3t)
They intersect at point ( (-frac{1}{3}, -frac{8}{3}, 7) ).
45) Plane: (x - 3y + 2z = 4) Line: (x = 2 - t, , y = t, , z = 4 +2t)
The line does not intersect with this plane.
46) Plane: (x - 3y + 2z = 10) Line: (x = 2 - t, , y = t, , z = 4 +2t)
The line is fully contained in this plane, so every point on the line lies on the plane. For example, when (t = 0) we have the point ((2, 0, 4)).
47) Show that the lines of equations ( x=t,y=1+t,z=2+t, t∈R,) and ( frac{x}{2}=frac{y−1}{3}=z−3) are skew, and find the distance between them.
( frac{1}{sqrt{6}} = frac{sqrt{6}}{6}) units
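The distance can be checked with the standard skew-line formula, distance = |(P₂ − P₁) · (d₁ × d₂)| / |d₁ × d₂| (a numpy sketch):

```python
import numpy as np

p1, d1 = np.array([0., 1., 2.]), np.array([1., 1., 1.])  # x = t, y = 1 + t, z = 2 + t
p2, d2 = np.array([0., 1., 3.]), np.array([2., 3., 1.])  # x/2 = (y - 1)/3 = z - 3

n = np.cross(d1, d2)  # vector perpendicular to both lines
dist = abs((p2 - p1) @ n) / np.linalg.norm(n)
print(abs(dist - 1 / np.sqrt(6)) < 1e-12)  # True
```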
48) Show that the lines of equations ( x=−1+t,y=−2+t,z=3t, t∈R,) and ( x=5+s,y=−8+2s,z=7s, s∈R) are skew, and find the distance between them.
49) Consider point ( C(−3,2,4)) and the plane of equation ( 2x+4y−3z=8).
a. Find the radius of the sphere with center ( C) tangent to the given plane.
b. Find point P of tangency.
a. (r = frac{18}{sqrt{29}} = frac{18sqrt{29}}{29})
b. (P(−frac{51}{29},frac{130}{29},frac{62}{29}))
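Both parts can be verified numerically: the radius is the point-plane distance, and the tangency point is the foot of the perpendicular from (C) to the plane (a numpy sketch):

```python
import numpy as np

C = np.array([-3.0, 2.0, 4.0])
n = np.array([2.0, 4.0, -3.0])  # normal of 2x + 4y - 3z = 8

# a. Radius = distance from C to the plane:
r = abs(n @ C - 8) / np.linalg.norm(n)
print(abs(r - 18 / np.sqrt(29)) < 1e-12)  # True

# b. Point of tangency: foot of the perpendicular from C to the plane.
t = (8 - n @ C) / (n @ n)
P = C + t * n
print(P * 29)  # ≈ [-51. 130.  62.], i.e. P = (-51/29, 130/29, 62/29)
```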
50) Consider the plane of equation ( x−y−z−8=0.)
a. Find the equation of the sphere with center ( C) at the origin that is tangent to the given plane.
b. Find parametric equations of the line passing through the origin and the point of tangency.
51) Two children are playing with a ball. The girl throws the ball to the boy. The ball travels in the air, curves ( 3) ft to the right, and falls ( 5) ft away from the girl (see the following figure). If the plane that contains the trajectory of the ball is perpendicular to the ground, find its equation.
( 4x−3y=0)
52) [T] John allocates ( d) dollars to consume monthly three goods of prices ( a,b), and ( c). In this context, the budget equation is defined as ( ax+by+cz=d,) where ( x≥0,y≥0), and ( z≥0) represent the number of items bought from each of the goods. The budget set is given by ( {(x,y,z)|ax+by+cz≤d,x≥0,y≥0,z≥0},) and the budget plane is the part of the plane of equation ( ax+by+cz=d) for which ( x≥0,y≥0), and ( z≥0). Consider ( a=$8, b=$5, c=$10,) and ( d=$500.)
a. Use a CAS to graph the budget set and budget plane.
b. For ( z=25,) find the new budget equation and graph the budget set in the same system of coordinates.
53) [T] Consider (vecs r(t)=⟨sin t,cos t,2t⟩) the position vector of a particle at time ( t∈[0,3]), where the components of (vecs r) are expressed in centimeters and time is measured in seconds. Let ( vecd{OP}) be the position vector of the particle after ( 1) sec.
a. Determine the velocity vector (vecs v(1)) of the particle after ( 1) sec.
b. Find the scalar equation of the plane that is perpendicular to ( v(1)) and passes through point ( P). This plane is called the normal plane to the path of the particle at point ( P).
c. Use a CAS to visualize the path of the particle along with the velocity vector and normal plane at point ( P).
a. (vecs v(1)=⟨cos 1,−sin 1, 2⟩)
b. ( (cos 1)(x−sin 1)−(sin 1)(y−cos 1)+2(z−2)=0)
c. (visualization not shown)
54) [T] A solar panel is mounted on the roof of a house. The panel may be regarded as positioned at the points of coordinates (in meters) ( A(8,0,0), B(8,18,0), C(0,18,8),) and ( D(0,0,8)) (see the following figure).
a. Find the general form of the equation of the plane that contains the solar panel by using points ( A,B,) and ( C), and show that its normal vector is equivalent to ( vecd{AB}×vecd{AD}.)
b. Find parametric equations of line ( L_1) that passes through the center of the solar panel and has direction vector (vecs s=frac{1}{sqrt{3}}hat{imath}+frac{1}{sqrt{3}}hat{jmath}+frac{1}{sqrt{3}}hat{k},) which points toward the position of the Sun at a particular time of day.
c. Find symmetric equations of line ( L_2) that passes through the center of the solar panel and is perpendicular to it.
d. Determine the angle of elevation of the Sun above the solar panel by using the angle between lines ( L_1) and ( L_2).
|
|
# A Electromagnetic strength tensor
1. Sep 14, 2016
### spaghetti3451
The antisymmetric 2-tensor $F_{ij}$ is given by $F_{ij}\equiv \partial_{i}A_{j}-\partial_{j}A_{i}$
so that $F_{ij}={\epsilon_{ij}}^{k}B_{k}$ and $B_{i}=\frac{1}{2}{\epsilon_{i}}^{jk}F_{jk}$.
I was wondering if the permutation tensor with indices upstairs is different from the permutation tensor with indices downstairs.
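In Cartesian coordinates, where index placement does not matter, the two relations quoted above can be checked numerically (a sketch, assuming numpy):

```python
import numpy as np

# Levi-Civita symbol in three dimensions:
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

B = np.array([0.3, -1.2, 2.5])                  # arbitrary field values
F = np.einsum('ijk,k->ij', eps, B)              # F_ij = eps_ijk B_k
B_back = 0.5 * np.einsum('ijk,jk->i', eps, F)   # B_i = (1/2) eps_ijk F_jk
print(np.allclose(B, B_back))                   # True
print(np.allclose(F, -F.T))                     # True: F is antisymmetric
```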
2. Sep 14, 2016
### Orodruin
Staff Emeritus
As long as you are doing rotations only (i.e., changing between different Cartesian coordinate systems and no boosts), it does not matter whether the indices are covariant or contravariant.
If you include general coordinate transformations, it does matter, but there is also an ambiguity in how you define the permutation symbol with indices in different places (it is not a tensor, it is a tensor density) and if you want B and F to transform as a pseudo-vector and tensor, respectively, you need to use a tensor instead of the permutation symbol.
3. Sep 14, 2016
### spaghetti3451
So, is the basic idea that it does not really matter if the indices on the permutation symbol are upstairs or downstairs?
4. Sep 14, 2016
### Orodruin
Staff Emeritus
If you restrict yourself to Cartesian coordinate systems and proper rotations, it never does.
5. Sep 14, 2016
### spaghetti3451
I see!
I guess this means that, for general coordinate transformations, it is not recommended to use the permutation symbol (but rather some tensor constructed out of the permutation symbol) to transform the magnetic field and field strength tensor in a tensor equation.
Is that so?
6. Sep 14, 2016
### Orodruin
Staff Emeritus
Right. This tensor would be the tensor $\eta_{ijk} = \sqrt{g} \epsilon_{ijk}$, where $g$ is the metric determinant (which happens to be a scalar density of the appropriate weight to make that thing a tensor). I have here defined the (covariant) permutation symbol $\epsilon_{ijk}$ to have components 1, -1, or 0 in any coordinate system, depending on the indices. Note that the corresponding definition using the contravariant permutation symbol (i.e., the permutation symbol with components 1, -1, or 0 and contravariant indices) would be $\eta^{ijk} = \sqrt{g}^{-1} \epsilon^{ijk}$.
7. Sep 14, 2016
### spaghetti3451
Does this mean that ${\eta_{ij}}^{k}=\sqrt{g}{\epsilon_{ij}}^{k}=-\eta_{ijk}=-\sqrt{g}\epsilon_{ijk}$ in Minkowski space (with mostly negative signature), which means that ${\epsilon_{ij}}^{k}=-\epsilon_{ijk}$ in Minkowski space (with mostly negative signature)?
I'm trying to understand if the raising and lowering of indices (using the Minkowski metric) on $\eta$ follow the usual rules.
8. Sep 14, 2016
### Orodruin
Staff Emeritus
In Minkowski space you are not working with a rank three anti-symmetric tensor and/or permutation symbol. You need to work with a rank 4 tensor instead. The corresponding tensor $\eta$ has three spatial indices for all of its non-zero components and it will therefore matter if you define the tensor using the covariant or contravariant permutation symbol. I strongly suggest not writing upper and lower indices on the same permutation symbol because of the ambiguity in whether you are referring to the covariant or contravariant permutation symbol. Within a single text this can be made well defined (for example, when the symbol is first introduced you could state that it is the only one you are going to use, and then obtain all raised and lowered indices from that definition as the basic object). However, this causes problems when comparing texts that adopt different definitions, and you will then be dealing with signs appearing and disappearing.
9. Sep 14, 2016
### spaghetti3451
I understand that covariant and contravariant permutation symbols are distinct objects in Minkowski space and also in general curvilinear coordinates in Euclidean space.
But aren't permutation symbols with mixed indices also well-defined objects, and so do we not have to know how each index on the permutation symbol (with mixed indices) transforms independently of the other indices?
10. Sep 14, 2016
### Orodruin
Staff Emeritus
Introducing permutation symbols with mixed indices as basic objects is not very well defined. The main reason for this is that the property you want from your permutation symbols is complete anti-symmetry. This becomes a problem if you want to mix covariant and contravariant indices (and are not restricting yourself to Cartesian coordinates). An anti-symmetry between two covariant or between two contravariant indices is well defined; an anti-symmetry between a covariant and a contravariant index is not. If you see a permutation symbol with mixed indices, it is most likely defined as either the contravariant or covariant permutation symbol with the lowered/raised indices contracted with the metric/inverse metric.
11. Sep 14, 2016
### spaghetti3451
Thanks for the explanation.
I was wondering if the electric field vector $\vec{E}$, when written in index notation becomes, $E_{i}$ or $E^{i}$.
Or does it not matter in either Cartesian coordinates or Minkowski space because the object is not a four-vector?
Does a similar conclusion apply for the magnetic field pseudo-vector as well?
12. Sep 14, 2016
### Orodruin
Staff Emeritus
In Cartesian coordinates it does not matter. In Minkowski space with general Lorentz transformations, the object mixes with the magnetic field vector.
The conclusion is similar for the magnetic field, yes. As long as you restrict yourself to rotations (no general coordinates in space or boosts), it does not matter.
13. Sep 14, 2016
### spaghetti3451
The index-free equation $\vec{E}=-\nabla\phi-\frac{\partial\vec{A}}{\partial t}$ becomes the index-full equation $E_{i}=-\partial_{i}\phi-\partial_{t}A_{i}$.
Similarly, the index-free equation $\vec{B}=\nabla\times\vec{A}$ becomes the index-full equation $B_{i}=\epsilon_{ijk}\partial_{j}A_{k}$.
Since this is in Cartesian coordinates, I presume that the position (upstairs or downstairs) of the indices does not matter, so that $E_{i}=-\partial_{i}\phi-\partial_{t}A_{i}$ is the same as $E^{i}=-\partial^{i}\phi-\partial_{t}A_{i}$, where the indices are raised and lowered without care.
Is this true?
Now, in Minkowski space, $F^{i0}=\partial^{i}A^{0}-\partial^{0}A^{i}=-\partial_{i}\phi-\partial_{t}A^{i}$. Now, I am not sure if this expression is equal to $E_{i}$, since in Minkowski space $\partial^{i}=-\partial_{i}$ and $A^{i}=-A_{i}$ but in Euclidean space in Cartesian coordinates $\partial^{i}=\partial_{i}$ and $A^{i}=A_{i}$.
Last edited: Sep 14, 2016
14. Sep 14, 2016
### Orodruin
Staff Emeritus
If you are using the Cartesian metric, yes. (I.e., if you are considering space and time separately.) Beware that if you use the Minkowski metric, raising the index changes the sign of the derivative. You just need to be careful with what you mean when writing down your equations.
Alternatively you can use a metric with -+++ signature and not have this problem for raising and lowering the spatial indices. Some minus signs will pop up in other places though.
15. Sep 14, 2016
### vanhees71
Another point of view is that neither $\vec{E}$ nor $\vec{B}$ alone has a well-determined behaviour under Lorentz transformations, but only when brought together in $F_{\mu \nu}$. In particular, neither $\vec{E}$ nor $\vec{B}$ is the spatial part of a Minkowski four-vector; they are components of the anti-symmetric Faraday tensor.
Of course, from the transformation law of the Faraday tensor, $\vec{E}$ and $\vec{B}$ inherit a well-defined transformation behavior under Lorentz transformations.
$$F^{\prime \mu \nu}(x')={\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} F^{\rho \sigma}(x) = {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma} F^{\rho \sigma}(\Lambda^{-1}x). \qquad (*)$$
As it turns out, the only two scalars (under proper orthochronous Lorentz transformations) are $F_{\mu \nu} F^{\mu \nu}$ and $\epsilon_{\mu \nu \rho \sigma} F^{\mu \nu} F^{\rho \sigma}$, which are $\propto \vec{E}^2-\vec{B}^2$ and $\propto \vec{E} \cdot \vec{B}$. This suggests further that the complex vector (the Riemann–Silberstein vector)
$$\vec{F}=\vec{E} + \mathrm{i} \vec{B}$$
is an object with a proper transformation behavior under proper orthochronous Lorentz transformations (LTs), because with the normal bilinear scalar product (not the sesquilinear product of a unitary space!) you get
$$\vec{F} \cdot \vec{F}=\vec{E}^2-\vec{B}^2 + 2 \mathrm{i} \vec{E} \cdot \vec{B},$$
which is invariant under LTs. Indeed the transformation properties from (*) lead to the transformation
$$\vec{F'}(x')=D(\Lambda) \vec{F}(x),$$
where $D(\Lambda) \in \mathrm{SO}(3,\mathbb{C})$ builds a proper representation of the LTs. The subgroup $\mathrm{SO}(3,\mathbb{R})$ corresponds to the rotations of course, because indeed $\vec{E}$ and $\vec{B}$ are transforming as vector fields under rotations. A rotation matrix with a purely imaginary rotation angle $\mathrm{i} \eta$ is also in $\mathrm{SO}(3,\mathbb{C})$, and these refer to the pure boosts in the specified direction. The parameter $\eta$ is the rapidity, related to the velocity of the boost by $\beta=v/c=\tanh \eta$.
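The identification of a boost with a complex rotation can be illustrated numerically: a rotation about the $x$-axis by the imaginary angle $\mathrm{i}\eta$ is complex orthogonal, so it preserves the bilinear product $\vec F \cdot \vec F$ (a numpy sketch with arbitrary field values):

```python
import numpy as np

eta = 0.7                       # rapidity of the boost
c, s = np.cosh(eta), np.sinh(eta)
# Rotation about the x-axis by the imaginary angle i*eta:
# cos(i*eta) = cosh(eta), sin(i*eta) = i*sinh(eta)
D = np.array([[1, 0, 0],
              [0, c, 1j * s],
              [0, -1j * s, c]])
print(np.allclose(D.T @ D, np.eye(3)))  # True: D is complex orthogonal

E = np.array([0.2, -1.0, 0.5])
B = np.array([1.1, 0.3, -0.4])
F = E + 1j * B
F2 = D @ F
# The bilinear product F.F = E^2 - B^2 + 2i E.B is invariant:
print(np.allclose(F @ F, F2 @ F2))  # True
```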
16. Sep 14, 2016
### Orodruin
Staff Emeritus
While this is true, I would like to remark that the information contained in the E field of an observer is contained in a 4-vector orthogonal to its 4-velocity, namely $F_{\mu\nu} V^\nu$. Similarly, $\tilde F_{\mu\nu} V^\nu$ contains exactly the same information as the B field. These vectors are both space-like; in the observer's rest frame they have vanishing time component and the E and B fields, respectively, as their spatial components.
|
|
# Public Release 7: Wind blocks and Detailed Ending (v1.4.0)
2021-03-17
## Changelog
With update v1.4.0, an error will occur when opening a saved game that uses an older map. To fix this, install the new earthquake_settings.xml file and place it inside mods/gui. The “Sample custom level” on the workshop page has already been updated.
Mods:
Check out more about it in the documentation.
Tweaks:
• Added the Detailed Ending that displays if the player has won with:
• Giant Boots
• Snake Ring
• Both
• None of the above
The Detailed Ending is still a bit unstable and may not always work.
Fixes:
• Fixed the hard-coded “earthquake” effect (thanks to IntroCar and Abaddon!).
It is usually used for the towers. Check out how to implement it in your level here.
|
|
# Gröbner Bases of Ideals Invariant under a Commutative Group: the Non-Modular Case
1 PolSys - Polynomial Systems
LIP6 - Laboratoire d'Informatique de Paris 6, Inria Paris-Rocquencourt
Abstract : We propose efficient algorithms to compute the Gröbner basis of an ideal $I\subset k[x_1,\dots,x_n]$ globally invariant under the action of a commutative matrix group $G$, in the non-modular case (where $char(k)$ does not divide $|G|$). The idea is to simultaneously diagonalize the matrices in $G$, and apply a linear change of variables on $I$ corresponding to the base-change matrix of this diagonalization. We can now suppose that the matrices acting on $I$ are diagonal. This action induces a grading on the ring $R=k[x_1,\dots,x_n]$, compatible with the degree, indexed by a group related to $G$, that we call $G$-degree. The next step is the observation that this grading is maintained during a Gröbner basis computation or even a change of ordering, which allows us to split the Macaulay matrices into $|G|$ submatrices of roughly the same size. In the same way, we are able to split the canonical basis of $R/I$ (the staircase) if $I$ is a zero-dimensional ideal. Therefore, we derive \emph{abelian} versions of the classical algorithms $F_4$, $F_5$ or FGLM. Moreover, this new variant of $F_4/F_5$ allows complete parallelization of the linear algebra steps, which has been successfully implemented. On instances coming from applications (NTRU crypto-system or the Cyclic-n problem), a speed-up of more than 400 can be obtained. For example, a Gröbner basis of the Cyclic-11 problem can be computed in less than 8 hours with this variant of $F_4$. Moreover, using this method, we can identify new classes of polynomial systems that can be solved in polynomial time.
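As an illustration of the kind of input the paper targets, a plain (non-symmetry-aware) Gröbner basis of the small Cyclic-4 system can be computed with sympy; this ideal is invariant under cyclic permutation of the variables, which is exactly the kind of commutative group action exploited by the abelian variants described above (a sketch, not the authors' implementation):

```python
from sympy import groebner, symbols

# Cyclic-4: a small instance of the Cyclic-n family mentioned in the abstract.
x0, x1, x2, x3 = symbols('x0 x1 x2 x3')
polys = [
    x0 + x1 + x2 + x3,
    x0*x1 + x1*x2 + x2*x3 + x3*x0,
    x0*x1*x2 + x1*x2*x3 + x2*x3*x0 + x3*x0*x1,
    x0*x1*x2*x3 - 1,
]
G = groebner(polys, x0, x1, x2, x3, order='grevlex')
print(len(G.exprs))  # number of basis elements
# Every generator reduces to zero modulo the basis:
print(all(G.reduce(p)[1] == 0 for p in polys))  # True
```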
Keywords :
Document type:
Conference paper
The 38th International Symposium on Symbolic and Algebraic Computation, ISSAC '13, Jun 2013, Boston, United States. ACM, Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, ISSAC '13, pp.347-354, 2013, 〈10.1145/2465506.2465944〉
Cited literature [19 references]
https://hal.inria.fr/hal-00819337
Contributor: Jules Svartz <>
Submitted on: Tuesday, April 30, 2013 - 18:15:59
Last modified on: Tuesday, April 17, 2018 - 11:30:08
Long-term archived on: Wednesday, July 31, 2013 - 05:00:08
### Files
FS13.pdf
Files produced by the author(s)
### Citation
Jean-Charles Faugère, Jules Svartz. Gröbner Bases of Ideals Invariant under a Commutative Group: the Non-Modular Case. The 38th International Symposium on Symbolic and Algebraic Computation, ISSAC '13, Jun 2013, Boston, United States. ACM, Proceedings of the 38th International Symposium on Symbolic and Algebraic Computation, ISSAC '13, pp.347-354, 2013, 〈10.1145/2465506.2465944〉. 〈hal-00819337〉
### Metrics
Record views
## 397
File downloads
|
|
# Learn About Counting
Let’s learn about counting and numbers up to 20 in numerals and words.
Counting to twenty with numbers or numerals:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20.
Counting to twenty with words:
one, two, three, four, five, six, seven, eight, nine, ten, eleven, twelve, thirteen, fourteen, fifteen, sixteen, seventeen, eighteen, nineteen, twenty.
### We write 20 as Twenty.
Math Only Math is a fun math website where learning math becomes play and play becomes learning.
Free kindergarten math printable lessons are available for kids, and parents and teachers can encourage the child to practice the math sheets to prepare for a kindergarten math test. Kids' math homework help is also available here; if you have any doubts, you can contact us by mail.
However, suggestions for further improvement, from all quarters, would be greatly appreciated.
|
|
# ANOVA claims that model is singular
I have a problem with a 3 way ANOVA with repeated measures. The design is the following: "localization" is the dependent variable, than I have the following within subjects factors "material" (4 levels), "diffusion technique" (6 levels) and "typology" (2 levels).
This is the formula I used:
aov_Material = aov(abs(Localization) ~ Diffusion_Type*Material*Typology +
Error(Subject/(Diffusion_Type*Material*Typology)), data=scrd)
summary(aov_Material)
And this is the output I get as result:
Warning message:
In aov(abs(Localization) ~ Diffusion_Type * Material * Typology + :
Error() model is singular
> summary(aov_Material)
Error: Subject
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 11 2117 192.5
Error: Subject:Diffusion_Type
Df Sum Sq Mean Sq F value Pr(>F)
Diffusion_Type 5 20.2 4.032 0.423 0.83
Residuals 55 523.7 9.522
Error: Subject:Material
Df Sum Sq Mean Sq F value Pr(>F)
Material 3 1967 655.6 21.4 7.03e-08 ***
Residuals 33 1011 30.6
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Error: Subject:Diffusion_Type:Material
Df Sum Sq Mean Sq F value Pr(>F)
Diffusion_Type:Material 15 170.1 11.338 1.308 0.202
Residuals 165 1430.3 8.668
Error: Within
Df Sum Sq Mean Sq F value Pr(>F)
Residuals 288 2888 10.03
My questions are:
1. Why is there a warning (and, BTW, what does it mean that the model is singular)?
2. Why is the effect of typology not listed in the results? How can I tell whether there is a difference between the two levels of typology?
• Sorry for migrating it; it is on-topic here. – user88 Feb 3 '13 at 0:12
• hello...can anyone help me please? – L_T Feb 3 '13 at 10:46
|
|
# Why it is rational to predict according to your true beliefs
Earlier today, I was making a prediction on a question whose median prediction seemed way too high. Because I want those informed by these predictions to have accurate beliefs, for a brief moment I felt inclined to predict far below the median prediction (and even below my true belief) in order to have the largest impact on the median prediction. But, as some of you might know, this does not make sense.
Here I examine the case of binary questions where you input a prediction on the [1%,99%] range, but similar reasoning applies also to numeric-range predictions and date-range predictions.
Suppose my true belief about question $Q$ is that it has a probability $p_i \in [1\%,99\%]$ of a positive resolution. Suppose that my preferences are such that I prefer the median to be closest to my true belief $p_i$. After reporting my prediction $r\in [1\%,99\%]$, all predictions are given by the vector $\mathbf{p}$, and the median prediction is $median(\mathbf{p})$. My preferences imply that I want to report a prediction $r$ that minimises the distance between the median and my true belief, which is given by $d(r) = \lvert median(\mathbf{p}) - p_i \rvert$.
Suppose I report my true belief as my prediction ($r=p_i$); then there are three possible outcomes: (1) $median(\mathbf{p}) = p_i$, (2) $median(\mathbf{p}) > p_i$, or (3) $median(\mathbf{p}) < p_i$.
In case 1, the median is equal to my true belief $p_i$, and I have no reason to deviate. In case 2, the median is larger than my true belief, but since no prediction $r\lt p_i$ will decrease the median below its current value, I have no reason to deviate. Similarly, in case 3, I cannot increase the median with any prediction $r\gt p_i$, which again gives me no reason to deviate. Hence, predicting your true belief is at least as good as any other prediction (and possibly even better when you're interested in maximising your points!)
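The argument can be illustrated with a small simulation: with the other players' predictions fixed (hypothetical values below), no report achieves a smaller distance between the median and the true belief than reporting the true belief itself:

```python
import numpy as np

others = np.array([0.30, 0.45, 0.50, 0.60, 0.70])  # hypothetical other predictions
p_true = 0.35                                      # my true belief

def loss(r):
    """Distance between the median (with my report r included) and my belief."""
    return abs(float(np.median(np.append(others, r))) - p_true)

reports = np.round(np.arange(0.01, 1.00, 0.01), 2)  # the [1%, 99%] grid
best_loss = min(loss(r) for r in reports)
print(loss(p_true) <= best_loss + 1e-12)  # True: truthful reporting is optimal
```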
I recognise that there might be exceptions to this principle, especially given that predictions on Metaculus are made in sequence, and not simultaneously. For instance, suppose you expect that the Metaculus community is likely to be positively biased about the chance of $Q$ resolving positively. Moreover, you know that the anchoring of some initial low median will update future predictions downward, closer toward your true belief. This may plausibly affect the median prediction over the long run. However, I hope that my pointing this out will void this strategy, as you now know to update much less on early predictions, given that these might reflect other players' strategic behaviour and not their true beliefs.
|
|
Solve the equation $4x - 7 = 2x + 13$
Collect like terms on one side of the equation:
4x-2x=13+7
2x=20
x=10
by Diamond (39,577 points)
4x -2x=20
2x=20
x=10
by Wooden (3,935 points)
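A quick check of the answer in code (both sides of the original equation evaluate to the same number at x = 10):

```python
# Verify that x = 10 satisfies 4x - 7 = 2x + 13:
x = 10
print(4 * x - 7, 2 * x + 13)  # 33 33
```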
|
|
# How to explain the results from this kmeans?
I got the following results using the k-means algorithm: there are $$10$$ elements in Cluster $$0$$ and $$3$$ elements in Cluster $$1$$.
Does this make sense, and might it be an acceptable result? How could I explain it?
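Since the data and code are not shown, here is a minimal numpy-only sketch of a comparable situation: 13 synthetic points, of which 3 form a small, well-separated group. An unbalanced 10/3 split is a perfectly acceptable k-means result when the data genuinely contains a small cluster:

```python
import numpy as np

rng = np.random.default_rng(0)
# 13 synthetic points: a tight group of 10 near the origin, 3 near (5, 5)
data = np.vstack([rng.normal(0.0, 0.5, size=(10, 2)),
                  rng.normal(5.0, 0.5, size=(3, 2))])

def kmeans(X, k=2, iters=20):
    centers = X[[0, -1]].astype(float)  # deterministic init for this 2-cluster sketch
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(data)
print(np.bincount(labels))  # [10  3]: the small cluster reflects real structure
```

If the sizes reflect real structure (for example a small, distinct subgroup), the result is meaningful; you can support that interpretation with a measure such as the silhouette score, or by inspecting the distance between the cluster centers relative to the within-cluster spread.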
|
|
# Ascii art from image to characters
### Background
I just got interested in ASCII art, so I searched on the Internet and came up with the following Python code for it.
### Code
from PIL import Image, ImageDraw, ImageFont
import operator, bisect


def getChar(val):
    """
    Return a char for a given gray value.
    """
    index = bisect.bisect_left(scores, val)  # find index of val in scores
    # check and choose the nearer one between current index and former index
    if index > 0 and sorted_weights[index][1] + sorted_weights[index - 1][1] > 2 * val:
        index -= 1
    return sorted_weights[index][0]


def transform(image_file):
    """
    Return a string containing characters representing each pixel.
    """
    image_file = image_file.convert("L")  # transform image to grayscale
    codePic = ''
    for h in range(image_file.size[1]):
        for w in range(image_file.size[0]):
            gray = image_file.getpixel((w, h))
            codePic += getChar(maximum * (1 - gray / 255))  # append characters
        codePic += '\r\n'  # change lines
    return codePic


readinFilePath = 'ycy.jpg'  # path to the input image; adjust as needed
base = Image.open(readinFilePath)
outputTextFile = 'ycy_ascii.txt'
outputImageFile = 'ycy_ascii.jpg'

fnt = ImageFont.truetype('Courier New.ttf', 10)
chrx, chry = fnt.getsize(chr(32))  # note: getsize was removed in Pillow 10; use getbbox there
normalization = chrx * chry * 255
weights = {}
# get gray density for characters in range [32, 126]
for i in range(32, 127):
    chrImage = Image.new('L', (chrx, chry), 0)  # black background
    ImageDraw.Draw(chrImage).text((0, 0), chr(i), font=fnt, fill=255)  # white glyph
    sizex, sizey = chrImage.size
    ctr = sum(
        chrImage.getpixel((x, y)) for y in range(sizey) for x in range(sizex))
    weights[chr(i)] = ctr / normalization
weights[chr(32)] = 0.01  # increase it to make blank space ' ' more available
weights.pop('_', None)  # remove '_' since it is too directional
weights.pop('-', None)  # remove '-' since it is too directional

sorted_weights = sorted(weights.items(), key=operator.itemgetter(1))
scores = [y for (x, y) in sorted_weights]
maximum = scores[-1]

resolution = 0.3  # resolution of result ascii image, the higher the better
sizes = [resolution * i for i in (0.665, 0.3122, 4)]
imagefile = base.resize((int(base.size[0] * sizes[0]),
                         int(base.size[1] * sizes[1])))
result = transform(imagefile)

# output to text file
with open(outputTextFile, 'w') as asc_text:
    asc_text.write(result)

# output to image file and show it
asc_image = Image.new(
    'L', (int(base.size[0] * sizes[2]), int(base.size[1] * sizes[2])), 255)
d = ImageDraw.Draw(asc_image)
d.text((0, 0), result, font=fnt, fill=0)
asc_image.save(outputImageFile)
asc_image.show()
asc_image.close()
### Usage
I run the above code in a Jupyter Notebook, but it should be easy to add arguments and turn it into a .py file.
### Side Notes
1. readinFilePath is the path to the input image file.
2. outputTextFile is the output text file containing ascii characters.
3. outputImageFile is the output image containing ascii characters. I added this because text files always wrap long lines, which results in a distorted image.
4. I use font Courier New, you can use other monospace fonts.
5. (0.665, 0.3122, 4) sets the ratio between width and height of the result image. If you use a different font or a different font size, these values should be changed a little.
|
|
The notion of complex numbers was introduced into mathematics from the need to take square roots of negative numbers. A complex number has the form a + bi, where a and b are real numbers and i, the imaginary unit, is the square root of −1; the combination of a real part and an imaginary part is a complex number. Given a number x, a square root of x is a number a such that a² = x. Every positive real number has two square roots, one positive and one negative: 4 and −4 are both square roots of 16, because 4² = (−4)² = 16. Likewise, every nonzero complex number has two square roots, which differ only in sign.
An online complex number calculator supports addition, subtraction, multiplication and division — for example, typing (2−3i)·(1+i) gives the answer 5−i — as well as the modulus (absolute value), the conjugate, conversion to polar form, and n-th roots via de Moivre's formula, with steps shown. To extract a square root by hand, write the number as a + bi and let r = √(a² + b²) be its modulus. The two square roots are then
r1 = x + yi and r2 = −x − yi, where y = √((r − a)/2) and x = b/(2y).
In polar form the recipe is even simpler: take the square root of the modulus and halve the argument, so a square root of r(cos φ + i sin φ) is √r·(cos(φ/2) + i·sin(φ/2)). You can verify this by converting results to polar coordinates. Choosing one of the two signs defines a branch of the square root function, which makes it single-valued; multiplying the principal square root by −1 yields the other root. In Python, the cmath library computes square roots of complex numbers directly. (MATLAB's rms(vector), by contrast, computes the root-mean-square of a vector: square the entries, average them, then take the square root.)
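The root-extraction formulas above are easy to check numerically. Here is a minimal Python sketch (the helper name `complex_sqrt` is my own), compared against the standard library's `cmath.sqrt`:

```python
import cmath
import math

def complex_sqrt(a, b):
    """One square root of a + bi via the formulas above (assumes b != 0)."""
    r = math.sqrt(a * a + b * b)  # modulus of a + bi
    y = math.sqrt((r - a) / 2)    # imaginary part of the root
    x = b / (2 * y)               # real part of the root
    return complex(x, y)          # the other root is its negation

root = complex_sqrt(9, 4)
print(root * root)         # approximately (9+4j)
print(cmath.sqrt(9 + 4j))  # library result for comparison
```

Squaring the result recovers 9 + 4i, and for positive b the formula agrees with the principal root returned by `cmath.sqrt`.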
|
|
Have you ever heard of Caesar's Cipher? Allegedly, Julius Caesar used this very simple encryption scheme to protect his private messages.
The algorithm is very easy: It changes every letter c in a message by substituting it with the character c' 3 places to the right of c in the alphabet. At the end of the alphabet it wraps around and starts at the beginning.
For example A -> D, H -> K, .., X -> A, Y -> B, Z -> C, and the message HELLO is turned into KHOOR.
To decrypt a message, all you have to do is count backwards in the alphabet so D -> A, K -> H, ..., C -> Z.
This algorithm can easily be changed by changing the number of steps you move to the left or right in the alphabet for each character. If you do 13 steps instead of just 3 you end up with ROT13 - you might have heard of that too - which has the very nice property that you can use the same method to encrypt and decrypt your message (in our common 26-character alphabet).
So you can parametrize this little algorithm by this step size and get a whole family of algorithms - nice, huh?
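To make the parametrization concrete, here is a minimal sketch in Python (my own helper, not part of the article's later implementations):

```python
def shift(key, text):
    """Encrypt by moving each letter `key` places to the right in the alphabet, wrapping around."""
    out = []
    for c in text.upper():
        if "A" <= c <= "Z":
            # number the letters A..Z with 0..25, add the key, wrap with mod 26
            out.append(chr(ord("A") + (ord(c) - ord("A") + key) % 26))
        else:
            out.append(c)  # leave everything else untouched
    return "".join(out)

print(shift(3, "HELLO"))   # KHOOR  (Caesar's original key)
print(shift(-3, "KHOOR"))  # HELLO  (counting backwards decrypts)
```

Every choice of `key` gives one member of the family; key 3 is Caesar's cipher and key 13 is ROT13.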
## Demo
If you want you can try out the algorithm here:
## some math
So why does ROT13 have this nice property? The magic is in the way the algorithm wraps its inputs. If you number the letters A to Z with 0 to 25 you might notice that the operation here is really just an addition: A=0 -> 3=D, B=1 -> 4=E, ...
But at the end there is something curious happening: X=23 -> 26=A?, Y=24 -> 27=B and Z=25 -> 28=C?
It's almost as if we want 26 and 0 to be the same?
And indeed mathematicians have a way to express exactly that idea by working with quotient sets/groups.
The idea behind that is: Given an equivalence relation $$\sim$$ on some set $$S$$ you can define a new set $$S/\sim$$ that consists of all the equivalence classes of $$\sim$$.
These are the subsets of $$S$$ in which any two elements $$a, b$$ satisfy $$a \sim b$$.
Here we are interested in integers and the relation is
$$a \sim b \Leftrightarrow 26 \mid (a-b)$$
It's easy to extend addition to that set, which is usually written as $$\mathbb{Z}_{26}$$.
But don't fear: You probably already know this - all you have to do is mod 26 everything:
• 0 mod 26 == 0
• 1 mod 26 == 1
...
• 25 mod 26 == 25
• 26 mod 26 == 0
• 27 mod 26 == 1
And that is why there are really only 26 different algorithms (think about it). It even gets better: decrypting with key k is the same as encrypting with key -k. The nice property of ROT13 is a small corollary of the fact that
$$26 - k \equiv -k \pmod{26}$$
which shows that $$-13 \equiv 13 \pmod{26}$$.
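You can confirm this congruence numerically; in Python, `%` already reduces into the range 0..25:

```python
# 26 - k is congruent to -k modulo 26, for every key k
for k in range(26):
    assert (26 - k) % 26 == (-k) % 26

# the ROT13 special case: -13 and 13 fall into the same class
print((-13) % 26)  # 13
```

So undoing a shift of 13 is itself a shift of 13, which is exactly why ROT13 is its own inverse.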
## first approach
It's not too hard to use this exact idea - mapping a character to a number, adding the key to this number modulo 26 and then translating the resulting number back into a character - in an imperative language.
Here is a basic version in C#:
static char ShiftChar(int key, char c)
{
    var upperC = Char.ToUpper(c);
    if (upperC < 'A' || upperC > 'Z')
        return c; // leave everything outside A..Z untouched
    var number = upperC - 'A';
    // the extra "+ 26" keeps the result non-negative for negative keys,
    // because % in C# can return negative values
    var shiftedNumber = ((number + key) % 26 + 26) % 26;
    return (char)('A' + shiftedNumber);
}
static string Encrypt(int key, string input)
{
return new String(input
.Select(c => ShiftChar(key, c))
.ToArray());
}
static string Decrypt(int key, string input)
{
return Encrypt(-key, input);
}
Please note that this maps lower-case letters to their upper-case equivalents and will not change characters outside the A..Z range.
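A subtlety worth noting: Decrypt passes a negative key, and in C# the % operator can return a negative value for a negative left operand, so the shifted index must be normalized back into 0..25. Python's own % never goes negative for a positive modulus, but `math.fmod` mimics the C-style remainder and makes the pitfall visible:

```python
import math

n = -3  # e.g. number + key for c = 'A' and key = -3
c_style = int(math.fmod(n, 26))   # C#-style remainder keeps the sign of n
print(c_style)                    # -3: would index before 'A'
print((c_style % 26 + 26) % 26)   # 23: ((x % 26) + 26) % 26 is safe in both languages
```

The double-mod form `((x % 26) + 26) % 26` yields the same non-negative result regardless of which remainder convention the language uses.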
## a more functional solution
Of course you could do the same in pretty much every language out there, but for my taste that all is just a bit too tightly bound to low-level implementation details.
For example the solution above would not work if the character values for A to Z were not direct successors of each other.
In the end ShiftChar is just a function with a very limited domain (if you don't count all the characters it keeps constant).
On the letters A - Z it's indeed a bijection and a very simple one.
So simple that there used to be toys that helped kids encrypt their messages just like Caesar did, but without knowing anything about quotient groups or modular arithmetic - all you had to do was find the character on a toy like this:
and look below the character to find its encrypted form.
To decrypt a character back you could either flip the ring or - if it was a fancy one - rotate it in the other direction.
So instead of that arithmetic imagine an approach like that toy.
First let's simplify things a bit: instead of being able to move left and right on a ring, it's enough to only move right, because sooner or later (see the math section) moving right reaches the same state that moving left would have.
As Haskell is lazy, the top section of the ring - if we start at A - is just
alphabet :: [Char]
alphabet = ['A'..'Z']
ringTop :: [Char]
ringTop = cycle alphabet
which is the same for the bottom section if it was not twisted (key = 0):
data Ring = Ring
{ top :: [Char]
, bottom :: [Char]
}
initial :: Ring
initial = Ring ringTop ringTop
Rotating the bottom ring left is easy too:
rotateLeft :: Int -> Ring -> Ring
rotateLeft steps ring =
ring { bottom = drop steps (bottom ring) }
Now once the ring is set up correctly, encoding a character works just like it would with that toy - you look up the character on the top and give back the character on the bottom.
The naive version of this would be
encryptUsing :: Ring -> String -> String
encryptUsing ring =
map encryptChar
where
encryptChar c = fromMaybe c $
  c `lookup` zip (top ring) (bottom ring)
But caution is needed: lookup will keep looking forever if it cannot find the character (try it! and ask yourself: why?). To fix that it's enough to look at only as many pairs as there are characters in the alphabet:
encryptChar c = fromMaybe c $
  c `lookup` take (length alphabet) (zip (top ring) (bottom ring))
Composing all these parts together we already have a working algorithm:
import Data.Maybe (fromMaybe)
encrypt :: Int -> String -> String
encrypt key =
  encryptUsing $ rotateLeft key' initial
  where key' = key `mod` length alphabet

decrypt :: Int -> String -> String
decrypt key = encrypt (negate key)

data Ring = Ring
  { top :: [Char]
  , bottom :: [Char]
  }

encryptUsing :: Ring -> String -> String
encryptUsing ring =
  map encryptChar
  where
    encryptChar c = fromMaybe c $
      c `lookup` take (length alphabet) (zip (top ring) (bottom ring))
initial :: Ring
initial = Ring ringTop ringTop
rotateLeft :: Int -> Ring -> Ring
rotateLeft steps ring =
ring { bottom = drop steps (bottom ring) }
ringTop :: [Char]
ringTop = cycle alphabet
alphabet :: [Char]
alphabet = ['A'..'Z']
### performance or don't use in production
Of course the performance is a bit shitty. The reason is that lookup is no match for the arithmetic solution: it stupidly walks along the list and will take 13 comparisons on average with this alphabet. So it's $$O(n)$$ for an alphabet of length $$n$$.
You can improve this by translating the zipped pairs into a Map/Dict of some sorts.
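In Python the same improvement is only a few lines: build a dict from the zipped pairs once, then every character is an O(1) average lookup (a sketch with my own names, not code from the article):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encrypt(key, text):
    """Build the rotated bottom ring once, then do O(1) dict lookups per character."""
    k = key % len(ALPHABET)
    bottom = ALPHABET[k:] + ALPHABET[:k]          # the twisted bottom ring
    table = dict(zip(ALPHABET, bottom))           # top character -> bottom character
    return "".join(table.get(c, c) for c in text) # unknown characters pass through

print(encrypt(3, "HELLO"))               # KHOOR
print(encrypt(-3, encrypt(3, "HELLO")))  # HELLO
```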
This is the version I used for the small demo above in Elm:
ringTop : List Char
ringTop =
String.toList "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
encrypt : Int -> String -> String
encrypt key =
let
ringBottom =
rotate key ringTop
rotate k xs =
let
n =
k % List.length xs
in
List.drop n xs ++ List.take n xs
dict =
Dict.fromList (List.map2 (,) ringTop ringBottom)
mapChar c =
Dict.get c dict |> Maybe.withDefault c
in
String.map mapChar
This should be fast enough for an algorithm you should never use in a real scenario ;)
|
|
# discovr: a package of interactive tutorials
## What is discovr?
The discovr package will contain tutorials associated with my textbook Discovering Statistics using R and RStudio, due out in early 2021. It will include all datasets, but most importantly it will contain a series of interactive tutorials that teach R alongside the chapters of the book. The tutorials are written using a package called learnr. Once a tutorial is running it’s a bit like reading excerpts of the book, but with places where you can practice the R code that you have just been taught. The discovr package is free (as are all things R-related) and offered to support tutors and students using my textbook.
## What are R and RStudio?
If you’re using a textbook about R then you probably already know what it is. If not, R is a free software environment for statistical analysis and graphics. RStudio is a user interface through which to use R. RStudio adds functionality that makes working with R easier, more efficient, and generally more pleasant than working in R alone.
You can get started with R and RStudio by completing this tutorial (includes videos):
## Contents of discovr
The tutorials are named to correspond (roughly) to the relevant chapter of the book. For example, discovr_04 would be a good tutorial to run alongside teaching related to chapter 4, and so on. Some longer chapters have several tutorials that break the content into more manageable chunks. Given the current global situation and the fact that lots of instructors are needing to teach remotely I’m making what I have available before the book is finished and will update as and when new tutorials are written.
• discovr_01: Key concepts in R (functions and objects, packages and functions, style, data types, tidyverse, tibbles)
• discovr_02: Summarizing data (frequency distributions, grouped frequency distributions, relative frequencies, histograms, mean, median, variance, standard deviation, interquartile range)
• discovr_03: Confidence intervals: interactive app demonstrating what a confidence interval is, computing normal and bootstrap confidence intervals using R, adding confidence intervals to data summaries.
• discovr_05: Visualizing data. The ggplot2 package, boxplots, plotting means, violin plots, scatterplots, grouping by colour, grouping using facets, adjusting scales, adjusting positions.
• discovr_06: The beast of bias. Restructuring data from messy to tidy format (and back). Spotting outliers using histograms and boxplots. Calculating z-scores (standardizing scores). Writing your own function. Using z-scores to detect outliers. Q-Q plots. Calculating skewness, kurtosis and the number of valid cases. Grouping summary statistics by multiple categorical/grouping variables.
• discovr_07: Associations. Plotting data with GGally. Pearson’s r, Spearman’s Rho, Kendall’s tau, robust correlations.
• discovr_08: The general linear model (GLM). Visualizing the data, fitting GLMs with one and two predictors. Viewing model parameters with broom, model parameters, standard errors, confidence intervals, fit statistics, significance.
• discovr_09: Categorical predictors with two categories (comparing two means). Comparing two independent means, comparing two related means, effect sizes.
• discovr_10: Moderation and mediation. Centring variables (grand mean centring), specifying interaction terms, moderation analysis, simple slopes analysis, Johnson-Neyman intervals, mediation with one predictor, direct and indirect effects, mediation using lavaan.
• discovr_11: Comparing several means. Essentially ‘One-way independent ANOVA’ but taught using a general linear model framework. Covers setting contrasts (dummy coding, contrast coding, and linear and quadratic trends), the F-statistic and Welch’s robust F, robust parameter estimation, heteroscedasticity-consistent tests of parameters, robust tests of means based on trimmed data, post hoc tests, Bayes factors.
• discovr_12: Comparing means adjusted for other variables. Essentially ‘Analysis of Covariance (ANCOVA)’ designs but taught using a general linear model framework. Covers setting contrasts, Type III sums of squares, the F-statistic, robust parameter estimation, heteroscedasticity-consistent tests of parameters, robust tests of adjusted means, post hoc tests, Bayes factors.
• discovr_13: Factorial designs. Fitting models for two-way factorial designs (independent measures) using both lm() and the afex package. This tutorial builds on previous ones to show how models can be fit with two categorical predictors to look at the interaction between them. We look at fitting the models, setting contrasts for the two categorical predictors, obtaining estimated marginal means, interaction plots, simple effects analysis, diagnostic plots, partial eta-squared and partial omega-squared, robust models and Bayes factors.
• discovr_14: Repeated measures designs. Fitting models for one- and two-way repeated measures designs using the afex package. This tutorial builds on previous ones to show how models can be fit with one or two categorical predictors when these variables have been manipulated within the same entities. We look at fitting the models, setting contrasts for the categorical predictors, obtaining estimated marginal means, interaction plots, simple effects analysis, diagnostic plots, robust models and Bayes factors.
• discovr_15: Mixed designs. Fitting models for mixed designs using the afex package. This tutorial builds on previous ones to show how models can be fit with one or two categorical predictors when at least one of these variables has been manipulated within the same entities and at least one other has been manipulated using different entities. We look at fitting the models, setting contrasts for the categorical predictors, obtaining estimated marginal means, interaction plots, simple effects analysis, diagnostic plots, robust models and Bayes factors.
• discovr_18: Exploratory Factor Analysis (EFA). Applying factor analysis using the psych package. This tutorial uses a fictitious questionnaire (the R Anxiety Questionnaire, RAQ) with 23 items to show how EFA can be used to identify clusters of items that may, or may not, represent constructs associated with anxiety about using R. We look at inspecting the correlation matrix, obtaining the Bartlett test and KMO statistics, using parallel analysis to determine the number of factors to extract, extracting factors, rotating the solution and interpreting the factors. We also learn to obtain Cronbach’s alpha on each of the subscales.
## Installing discovr
This package is incomplete but under active development. I have released it early in case it is useful for instructors needing to move rapidly to remote learning because of the current global pandemic. Check the GitHub page for updates/new tutorials.
To use discovr you first need to install R and RStudio. To learn how to do this and to get oriented with R and RStudio, complete my interactive tutorial, getting started with R and RStudio.
You can get the development version of the package from github.com/profandyfield/discovr.
## Running a tutorial
In RStudio Version 1.3 onwards there is a tutorial pane. Having executed
library(discovr)
A list of tutorials appears in this pane. Scroll through them and click on the button to run the tutorial:
Alternatively, to run a particular tutorial from the console execute:
library(discovr)
learnr::run_tutorial("name_of_tutorial", package = "discovr")
and replace "name_of_tutorial" with the name of the tutorial you want to run. For example, to run tutorial 3 (for Chapter 3) execute:
learnr::run_tutorial("discovr_03", package = "discovr")
The name of each tutorial is in bold in the list above. Once the command to run the tutorial is executed it will spring to life in the tutorial pane.
## Suggested workflow
The tutorials are self-contained (you practice code in code boxes) so you don’t need to use RStudio at the same time. However, to get the most from them I would recommend that you create an RStudio project and within that open (and save) a new RMarkdown file each time to work through a tutorial. Within that Markdown file, replicate parts of the code from the tutorial (in code chunks) and use Markdown to write notes about what you have done, and to reflect on things that you have struggled with, or note useful tips to help you remember things. Basically, write a learning journal. This workflow has the advantage of not just teaching you the code that you need to do certain things, but also provides practice in using RStudio itself.
Here’s a video explaining how I suggest using the tutorials.
|
|
# GridPP
GridPP is a collaboration of particle physicists and computer scientists from the United Kingdom and CERN. They manage and maintain a distributed…
Wikipedia
## Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
• Highly Cited, 2015: The determination of track reconstruction efficiencies at LHCb using J/psi -> mu(+)mu(-) decays is presented. Efficiencies above…
• 2015: The decay B-0 -> psi(2S)K+pi(-) is analyzed using 3 fb(-1) of pp collision data collected with the LHCb detector. A model…
• 2015: The differential branching fraction with respect to the dimuon invariant mass squared, and the CP asymmetry of the B…
• Highly Cited, 2015: An angular analysis of the B0 → K*0e+e− decay is performed using a data sample, corresponding to an integrated…
• Highly Cited, 2014: The isospin asymmetries of B → Kμ+μ− and B → K*μ+μ− decays and the partial branching fractions of the B0 → K0μ+μ−, B…
• Highly Cited, 2014: The differential cross-section as a function of rapidity has been measured for the exclusive production of J/ψ and ψ(2S…
• Highly Cited, 2013: The production of J/ψ and Υ mesons in pp collisions at √s = 8 TeV is studied with the LHCb…
• Highly Cited, 2013: A measurement of the cross-section for pp → Z → e+e− is presented using data at √s = 7 TeV corresponding to an…
• Highly Cited, 2013: The time-dependent CP asymmetry in Bs0 → J/ψK+K− decays is measured using pp collision data at √s = 7 TeV…
• Review, 2005: R-GMA (Relational Grid Monitoring Architecture) [1] is a grid monitoring and information system that provides a global view of…
|
|
# How to Set up a Local Ethereum Network on Windows 11 and Connect MetaMask
Everyone is talking about blockchain, web3, smart contracts, etc., and you probably want to check out how it works and what you can do with it. In this article, you will learn how to set up your own local Ethereum network on Windows 10/11 to test all the possibilities that the Ethereum blockchain offers without spending any money on Ether.
## Install Geth on Windows 11
First, download the Geth installer for Windows from the official Geth downloads page. When the download has finished, open your Downloads folder and double-click on the installer. Follow the installer instructions and make sure to remember the install directory. The default install directory is C:\Program Files\Geth.
## Add Geth to the Path Environment Variable
When you finish the installation successfully, you can close the Geth installer. To use the geth command in the Windows Command Prompt/PowerShell, you must add the Geth install folder to the Path environment variable. To edit the Path environment variable, press Win+R to start Run and in the field Open enter: sysdm.cpl and click on OK.
This will open the System Properties dialog. In the System Properties dialog, switch to the tab Advanced and click on Environment Variables … at the bottom:
In the Environment Variables dialog choose the row Path and click on Edit …:
Now you can see all the paths set in the Path variable. Here you have to add the path to the Geth install folder. First, click on New and insert the path you remembered from the installer:
Now you can click on OK in all three dialogs to close them.
## Setting up your local Ethereum Network
Open the PowerShell and create a new folder in which you want to store all the files for your local Ethereum network and enter the folder with the PowerShell. For the remainder of this article, this folder is under C:\Users\YOUR_USERNAME\my_ethereum:
mkdir C:\Users\YOUR_USERNAME\my_ethereum
cd C:\Users\YOUR_USERNAME\my_ethereum
Inside this folder, you have to create a genesis.json file that holds all the necessary settings for the genesis block, the first block of your Ethereum blockchain. The following genesis.json is already set up to let you mine Ether quickly (which is, of course, only valid in your local Ethereum network) and applies the latest patches of the Ethereum Yellow Paper:
{
  "nonce": "0x0000000000000042",
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "timestamp": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "extraData": "0x00",
  "gasLimit": "0x8000000",
  "difficulty": "0x400",
  "coinbase": "0x3333333333333333333333333333333333333333",
  "alloc": {},
  "config": {
    "chainId": 2342,
    "homesteadBlock": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0
  }
}
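A simple way to catch typos before handing the file to Geth is to run it through a JSON parser. This little Python check (my addition, not part of the original walkthrough) embeds the file content as a string:

```python
import json

# the genesis.json content from above, embedded as a string for a quick syntax check
GENESIS = """
{
  "nonce": "0x0000000000000042",
  "mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "timestamp": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "extraData": "0x00",
  "gasLimit": "0x8000000",
  "difficulty": "0x400",
  "coinbase": "0x3333333333333333333333333333333333333333",
  "alloc": {},
  "config": {
    "chainId": 2342,
    "homesteadBlock": 0,
    "byzantiumBlock": 0,
    "constantinopleBlock": 0,
    "eip150Block": 0,
    "eip155Block": 0,
    "eip158Block": 0
  }
}
"""

genesis = json.loads(GENESIS)  # raises json.JSONDecodeError on any syntax error
print(genesis["config"]["chainId"])  # 2342
```

If `json.loads` succeeds, the file is at least syntactically valid; Geth will still validate the field values itself.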
### genesis.json fields explained
Here is an explanation for the different fields in the genesis.json (adapted from niksmac on StackExchange):
nonce A 64-bit hash, which proves, combined with the mix-hash, that a sufficient amount of computation has been carried out on this block: the Proof-of-Work (PoW). The combination of nonce and mixhash must satisfy a mathematical condition described in the Yellowpaper, 4.3.4. Block Header Validity, (44), and allows one to verify that the Block has really been cryptographically mined and thus, from this aspect, is valid. The nonce is the cryptographically secure mining proof-of-work that proves beyond reasonable doubt that a particular amount of computation has been expended in the determination of this token value. (Yellowpaper, 11.5. Mining Proof-of-Work).
mixhash A 256-bit hash which proves, combined with the nonce, that a sufficient amount of computation has been carried out on this block: the Proof-of-Work (PoW). The combination of nonce and mixhash must satisfy a mathematical condition described in the Yellowpaper, 4.3.4. Block Header Validity, (44). It allows to verify that the Block has really been cryptographically mined, thus, from this aspect, is valid.
timestamp A scalar value equal to the reasonable output of Unix time() function at this block inception. This mechanism enforces a homeostasis in terms of the time between blocks. A smaller period between the last two blocks results in an increase in the difficulty level and thus additional computation required to find the next valid block. If the period is too large, the difficulty, and expected time to the next block, is reduced. The timestamp also allows verifying the order of block within the chain (Yellowpaper, 4.3.4. (43)).
parentHash The Keccak 256-bit hash of the entire parent block header (including its nonce and mixhash). Pointer to the parent block, thus effectively building the chain of blocks. In the case of the Genesis block, and only in this case, it’s 0.
extraData An optional free field of at most 32 bytes, to preserve smart things for eternity.
gasLimit A scalar value equal to the current chain-wide limit of Gas expenditure per block. High in our case to avoid being limited by this threshold during tests. Note: this does not indicate that you should not pay attention to the Gas consumption of our Contracts.
difficulty A scalar value corresponding to the difficulty level applied during the nonce discovering of this block. It defines the mining Target, which can be calculated from the previous block’s difficulty level and the timestamp. The higher the difficulty, the statistically more calculations a Miner must perform to discover a valid block. This value is used to control the Block generation time of a Blockchain, keeping the Block generation frequency within a target range. On the test network, keep this value low to avoid waiting during tests, since the discovery of a valid Block is required to execute a transaction on the Blockchain.
coinbase The 160-bit address to which all rewards (in Ether) collected from the successful mining of this block have been transferred. They are a sum of the mining reward itself and the Contract transaction execution refunds. Often named “beneficiary” in the specifications, sometimes “etherbase” in the online documentation. This can be anything in the Genesis Block since the value is set by the setting of the Miner when a new Block is created.
alloc Allows defining a list of pre-filled wallets. This is an Ethereum-specific functionality to handle the "Ether pre-sale" period. Since you can mine local Ether quickly, you don't need this option.
chainId Identifier for your local chain. It should be set to a unique value.
homesteadBlock / byzantiumBlock / constantinopleBlock Set to 0 so that the rules of these protocol upgrades are active from the very first block of your local network.
eip150Block / eip155Block / eip158Block Likewise set to 0 so that these EIP hard-fork rules apply from the genesis block onward.
After creating the genesis.json you need a folder that stores the blockchain. For that, create another folder inside my_ethereum called chain:
```shell
mkdir chain
```
Now you can initialize your local Ethereum network with the following command:
```shell
geth --datadir chain init genesis.json
```
This should report that the genesis state was written successfully.
And to start your Ethereum network, execute:
```shell
geth --http --http.port="8545" --http.corsdomain="*" --http.api="personal,eth,net,web3" --allow-insecure-unlock --datadir chain --nodiscover
```
This also starts a local webserver on port 8545 and allows connections from different applications indicated by --http.corsdomain="*". Additionally, the webserver allows you to use the APIs personal, eth, net, and web3, which will be accessed by different applications such as MetaMask. --allow-insecure-unlock makes it possible to unlock an account through the command line (necessary for this local test network). With --datadir, you tell geth that the blockchain is stored in the chain folder. And lastly --nodiscover makes sure your local Ethereum network can only be manually added to applications.
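As an aside on what this HTTP endpoint actually speaks: clients such as MetaMask talk to it via JSON-RPC, and numeric results (balances, block numbers) come back as hex-encoded strings. Here is a minimal decoding sketch in plain Node.js; the response value below is made up for the example:

```javascript
// JSON-RPC responses from geth encode numeric results as hex strings;
// eth_getBalance, for example, returns a hex-encoded balance in Wei.
function decodeHexWei(hex) {
  return BigInt(hex); // BigInt handles 256-bit values without precision loss
}

// Illustrative response shape (the result value is invented for this
// example): 0xde0b6b3a7640000 Wei equals 10^18 Wei, i.e. 1 Ether.
const response = { jsonrpc: "2.0", id: 1, result: "0xde0b6b3a7640000" };
console.log(decodeHexWei(response.result)); // 1000000000000000000n
```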
## Connect to your Local Ethereum Network
Geth launched a local webserver and an interprocess communication API (IPC), which allows you to access it. The URL for the IPC can be found in the log of geth under IPC endpoint opened. You will now use this IPC endpoint to connect to your local Ethereum network. Open a new PowerShell terminal and navigate to your my_ethereum folder and execute the following command:
```shell
geth attach \\.\pipe\geth.ipc
```
Now you are connected to your local Ethereum network. The first thing you should do is to create an account/wallet with the following command:
```javascript
personal.newAccount()
```
You have to enter a password two times. Make sure to write down the password; otherwise, you won't be able to work with this account later on. Afterward, you will be presented with the address of your new account/wallet. Now you can use this address to check how much Ether is in your account:
```javascript
eth.getBalance("LOCAL_GETH_ADDRESS")
```
Currently, there is 0 Ether in your account because the network does not have any Ether yet. To get your first Ether, you have to mine it. To start a miner on your local Ethereum network, enter:
```javascript
miner.start()
```
Now have a look in the other terminal where you started the network. Here you can see the mining taking place. After approximately a minute, you can stop the mining process with:
```javascript
miner.stop()
```
And check your account balance once again with:
```javascript
eth.getBalance("LOCAL_GETH_ADDRESS")
```
Now you should have some Ether in your account. The amount depends on how long you let the miner run.
The balance is a very long number because it is represented in Wei, the smallest possible unit of Ethereum funds. One Ether is $$10^{18}$$ Wei. To show your balance in Ether use the following command, which uses a web3 conversion from Wei to Ether:
```javascript
web3.fromWei(eth.getBalance("LOCAL_GETH_ADDRESS"), "ether")
```
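Outside of the Geth console you can do the same conversion yourself, since it is just a division by 10^18. A minimal sketch in plain Node.js (no web3 needed), using BigInt to avoid floating-point loss on large balances:

```javascript
// Convert a balance in Wei to an Ether string: 1 Ether = 10^18 Wei.
const WEI_PER_ETHER = 10n ** 18n;

function weiToEther(wei) {
  const whole = wei / WEI_PER_ETHER; // integer Ether part
  const frac = wei % WEI_PER_ETHER;  // remaining Wei, padded to 18 digits
  return `${whole}.${frac.toString().padStart(18, "0")}`;
}

console.log(weiToEther(1000000000000000000n)); // "1.000000000000000000"
console.log(weiToEther(2500000000000000000n)); // "2.500000000000000000"
```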
So that you don't have to type your account address all the time, you can access the string containing the address with (where 0 is the first account you created):
```javascript
web3.personal.listAccounts[0]
```
Next, connect MetaMask to this network. In MetaMask, open the network selection dropdown and click on Show/hide test networks:
This opens the settings page, where you can enable the display of test networks:
Now you should see a bunch of test networks in the dropdown menu:
Select the network Localhost 8545, which is your local Ethereum network. Next, create an account in your local Ethereum network through MetaMask: click on your account picture and select Create Account. To distinguish it from your real Ethereum account, give it the name Local Account:
## Transfer Funds in Geth
To do something useful with this account you need some Ether. Luckily, you already mined a bunch of Ether on the account you created through Geth, which you can now transfer to the account in MetaMask. Before you can transfer funds, you need to unlock the Geth account. For that, enter the following:
```javascript
personal.unlockAccount("LOCAL_GETH_ADDRESS", undefined, 6000)
```
Instead of undefined you can enter your password. However, this is not good practice as the password is then stored in plain text in the Geth console log. The number 6000 specifies that the account should be unlocked for 6000 seconds.
To send 1 Ether to your MetaMask account, enter the following command in the Geth console. Make sure to copy the address from MetaMask and paste it, as a string in quotes, after to::
```javascript
eth.sendTransaction({
  from: "LOCAL_GETH_ADDRESS",
  to: "LOCAL_METAMASK_ADDRESS",
  value: web3.toWei(1, "ether")
})
```
The return value of this command is the transaction hash TX_HASH. You can have a look at this transaction with:
```javascript
eth.getTransaction("TX_HASH")
```
This returns a JSON object containing all properties of the transaction.
If you look at your MetaMask account you won’t find any Ether yet, because the transaction has not been processed yet. After all, no miner is running. You can also see in the JSON that there is no blockHash and no blockNumber. To fix that, start the miner again and stop it after a couple of seconds:
```javascript
miner.start()
miner.stop()
```
When you now look at the transaction with eth.getTransaction("TX_HASH") you will see that the transaction has a blockHash and a blockNumber now:
And when you check the balance in your MetaMask account you can see that you now got one Ether:
# A wish. Looking for a couple of suitable post-docs. Covid-19 in India.
1. A wish…
I wish there were a neat, scholarly, account of the early development regarding the relativity theory. …
… There are tons of material on the topic, but not a single one seems to be to my liking…. I mean, even while rapidly browsing through so many of them, they all seem to fall short—rather, so awfully short. Reason: Most, if not all of them, seem intent on deifying Einstein, and / or, the primacy of maths over physics. [Did I cover all the eigenbases? May be not. So, let me add.] … The worst of them spend themselves out on promoting the idea that coming up with good but radical ideas in physics is all about getting lucky in some day-dreaming involving some mathematical ideas.
OTOH, the “model” for the book which I have in mind here is something like what Prof. Malcolm Longair has done for QM; see his book: “Quantum concepts in physics: an alternative approach to the understanding of quantum mechanics.” [^].
… High time someone should have undertaken a similar effort. But unfortunately, it’s entirely lacking.
… The wish isn’t without purpose. The more I study the quantum mechanical spin, the more I realize the handicap which I have of not having already studied the relativity theory.
I can always postpone the fully consistent description, following my new approach, for the QM spin. [No one / no organization has ever sponsored my research. [Though, they all are hell bent on “following up” on me.]]
However, now that I have developed (what I believe to be) a good, basic, ontology for the QM phenomena, I have begun to see a promising pathway, at least from the viewpoint of a basic ontology, from a non-relativistic description of QM to a relativistic one—I mean the special relativistic one.
2. Looking for a couple of suitable post-docs…
Another possibility I am toying with, currently, is this:
Over a considerable period of time, say over a year or so, to build a series of Python/C++ scripts/programs that illustrate the classical EM in action, but following my new ontological ideas. These ideas are for the Maxwell-Lorentz EM, but I do anticipate that these would provide the easiest pathway to integrating the Special Relativity with the non-relativistic QM.
The trouble is: I will have to get into the FDTD algorithmics too, and I don’t have the time to do it. (In case you didn’t know, when it comes to EM, the best technique is FDTD.)
Wish I had a competent post-doc—actually two—working simultaneously with me! Right now!!
One could build the above-mentioned FDTD applets, but following the way I want them to be built.
The other one could work on “FEM-ization” of my FDM code (i.e., for the He atom, done with my new approach, and yet to be published). Once he is done, he could explore doing the same with FDTD (yes of course!), and compare and contrast the two. The FEM-ization of my FDM code won’t be very highly demanding, in the sense, people have done the finite elements formulation for the helium atom, and also have implemented it in code—decades ago… But of course, they did so following the mainstream QM. It would be a fun for the post-doc to implement it using the ideas I will be proposing—shortly.
Then, both could work on the ideas for the relativistic QM. … The pace of the work would depend on what they bring to the table, and how they perform.
Fallout? If you are a smart PhD in the concerned areas, I need not provide even a hint about it…
3. Status update on my QM research:
Currently, I am typing a set of notes on the topic of the quantum mechanical angular momentum, including the spin. For the time being, I am mostly following Dan Schroeder’s notes (which I mentioned in the post before the last, here [^]). Once done, I don’t mind uploading these notes—for proofreading by you the potential post-docs. [Who else?]
While typing these notes, it has become once again very clear to me—crystal clear, in fact—as to how my “theory” for the QM spin (following my new approach) falls short. … So short, in fact. … My “theory” doesn’t just look awfully arbitrary; it is so.
All in all, don’t expect the same kind of confidence from me for the spin-related aspects as for the spin-less ones. I mean, in the upcoming document on my new approach.
4. Back to the potential post-docs:
Exciting enough?
If yes, drop me a line. Especially, if you are working with Google / similar company. I gather they officially allow you some fraction of your official time for your own hobby projects too…
5. If you are someone young and enthusiastic, say from the Pune city and all (and in general, from India):
They have relaxed the curbs. However, I have a word of advice for you.
Don’t step out unless absolutely necessary, and if so doing, take all the precautions.
It’s just a matter of a few months now…
…BTW, I am also thinking whether the government shouldn’t relax the enforced gap of three months in between the first and the second dose for the jabs. … There are circumstantial matters which indicate that a gap in between two to three months might be ideal; that three months might be too long a period. (Actually, this matter had struck me right on the day that the Central Government increased the gap from 6 weeks to 12 weeks in one, single, move. …However, at that time, I had thought it prudent to wait and watch. Now, I think I can—nay, should—share my opinion. … I also have some other points about these matters, but these are not so important. I will sure mention these as and when it becomes necessary to do so.)
In the meanwhile, you all take care, and bye for now…
A song I like:
(Hindi) ज़िंदगी आ रहा हूँ मैं… (“zindagi aa rahaa hoon main…”)
Lyrics: Javed Akhtar
Music: Hridaynath Mangeshkar
Singer: Kishore Kumar
[
Credits happily listed in a random order. A good quality audio is here [^]. … Although I haven’t seen this movie, recently I watched the video for this song, and found that I enjoyed it too. A good quality video is here [^].
… I always loved this song, esp. the tune and the arrangement / orchestration. … And of course, Javed Akhtar’s awesome lyrics. … Well, yes, Kishore does sound, at places in this song, just a slight bit… and how shall I put it?… He doesn’t sound as if he were in his best frame of singing, here. His voice sounds a bit too “broad”, and perhaps heavy, and even a bit “tired” perhaps? as if he were straining a bit?… Even then, of course, being Kishore, he does manage to pull a great job. [It’s just that, knowing Kishore, one wants to note this aside… I know, hair-splitting, it is. … Can’t help. … Sometimes.]
… [BTW, if you are young and dynamic and result-oriented etc.: The guy in this video is Sonam Kapoor’s dad. He used to be young. Once upon a time. Me too. [Though I never ever had the hair-style he displays here. A lot of my class-mates did, mostly following The “Bachchan”. Not me. […Yeah, I know.]]
… All the same, all that you’ve to do now is to wait for just a few more months, that’s all… 2021 isn’t a meme on Twitter the way 2020 was. Nevertheless, in India, we have to wait. So, just listen to songs like this for just a wee bit more. … I can tell you, from experience: The scenery, esp. the Sahyaadri’s, does stay great also well until January / February next year. (And if you really love Sahyaadri’s, well, they are there, forever…)
…So there.]
…And if you are new to this song, see if you too like it…
Take care and bye for now…
]
# Loitering around…
Update:
OK. I am getting back to working on the remaining topics, in particular, taking down detailed notes on the QM spin. I plan to begin this activity starting this evening. Also, I can now receive queries, from “any” one, regarding my work on QM, including the bit mentioned in the post below. [The meaning of “any” one is explained below.]
[2021.03.24 13:17 IST]
Am just about completing one full day of plain loitering around, doing nothing.
No, of course, it couldn’t possibly have been literally nothing—whether of the शून्य (“shoonya”) variety, or the शुन्य (“shunya”) one. (Go consult a good Sanskrit dictionary for the subtle differences in the meaning of these two terms.)
So, what I mean to say by “doing nothing” is this:
The last entry in my research journal has the time-stamp of 2021.03.18 21:40:34 IST. So, by now, it’s almost like a full day of doing “nothing” for me.
It’s actually worse than that…
In fact, I started loitering around, including on the ‘net, even earlier, i.e., a few days ago. May be from 16th March, may be earlier. However, my journal pages (still loose, still not filed into the plastic folder) do show some entries, which get shorter and shorter, well until the above time-stamp. …The entry before the afore-mentioned has the time-stamp: 2021.03.18 19:12:52 IST.
But not a single entry over the past whole day.
So, what did I do over the last one day? and also, over a few days before it?
Well, the list is long.
I browsed. (Yes, including Twitter, Instagram, and FaceBook—others’ accounts, of course!)
I also downloaded a couple of research papers, and one short history-like book. I generally tried to read through them. Unsurprisingly, I found that I could not. The reason is: I just don’t have any mental energy left to do anything meaningful.
Apparently, I have exhausted all my energy in thinking about the linear momentum operator.
I think that by now I have thought about this one topic in most every way any human being possibly could. At least in parts (i.e., one part taken at a time). I have analyzed and re-analyzed, and re-re-analyzed. I kept on jotting down my “thoughts” (also in a way that would be mostly undecipherable to any one).
I kept getting exhausted, and still, I kept pushing myself. I kept on analyzing. Going back to my earlier thoughts, refining them and/or current thoughts. Et cetera.
In the end, finally, I reached the point where I couldn’t even push myself any longer—in spite of all my stamina to keep pursuing threads in “thinking”. I’ve some stories to share here, but some other time. …To cut all of them long stories short:
Some 12 hours after I thus fully crashed out of all my mental energies, at some moment, I somehow realized that:
I had already built a way to visualize a path in between my approach and the mainstream QM, regarding the linear momentum operator.
I made a briefest possible entry (consisting of exactly one small sketch over some 2″ by 5″ space). Which was at 2021.03.18 21:40:34 IST.
Then, I stopped pursuing it too.
Why bother? Especially when I can now visualize “it” any time I want?
But how good is it?
I think, it should work. But it also appears to be too shaky and too tenuous a connection to me—between the mainstream QM and my new approach.
Of course I’ve noted down a bit of maths to go with it too, and also the physical units for the quantities involved. Yet, two points remain:
As a relatively minor point: I haven’t had the energy to work out (let alone to do even the quick and dirty simulations for) all possible permutations and combinations of the kind of elements I am dealing with. So, there is a slim possibility that terms may cancel each other and so the formulation may not turn out to be general enough. (I’ve been fighting with such a thing for a long time by now.)
But as a relatively much more important point: As I said, this whole way of thinking about it seems too tenuous to me. Even if it works out OK (i.e., after considering all the permutations and combinations involved), this very way of looking at the things would still look at best tenuous to any one.
The only consolation I have is this idea (which had already become absolutely banal even decades ago):
Every thing about QM is different from the pre-quantum theories.
That’s the only thin thread of logic by which my ideas hang. … Not as good as I wanted it. But not as bad as hanging all loose either…
And, yes, I’ve thought through the ontological aspects as well. … The QM ontology is radically different from the ontologies of all the pre-quantum theories. Especially, that of NM (Newtonian mechanics of particles and rigid bodies). But it is not so radically different from the ontology already required for EM (the Maxwell-Lorentz electrodynamics)—though there is a lot of difference between the EM and the QM ontologies.
And that’s what the current status looks like.
“So, when do you plan to publish it?”
Ummm… Not a good question. A better question, for me, is this:
What do I propose to do with my time, now?
The answer is simple. I will go in for what I know is going to be the most productive route.
Which is: I am going to continue loitering around.
Then, I will begin with taking detailed notes on the QM spin—the next topic from the mainstream QM—as soon as my mental energy returns.
That’s right. I won’t be even considering writing down my thoughts about that goddamn linear momentum operator. Not for any time in the near future. That’s the only way to optimize productivity. My productivity, that is.
So, sorry, I won’t be writing anything on the linear momentum any time soon, even if it precisely was the topic that kept me pre-occupied for such a long time—and also formed the topic of blogging for quite some over the recent past. So, sorry, this entire blog-post (the present one) is going to remain quite vague to you, for quite some time. You even might feel cheated at this point.
Well, but I do have a strong defence from my side: I’ve always said, time and once again, that I was always ready to share all my thoughts to “any” one. I mean, any one who (i) knows the theory of the mainstream QM (including its foundational issues), and (ii) also has looked into the experimental aspects of it (at least in the schematic form.)
So, any such a person can always drop a line to me.
Oh wait!
Don’t write anything to me right away. Hold on for a few days. I just want to kill my time around for now. That’s why.
I’ll let you know (may be via an update here), once I begin actually taking down my notes on the QM spin. That’s the time you—“you” the “any” one—may get in touch with me. That is, if “you” want to know what I’ve thought about that goddamn linear momentum operator. [OK. As the update at the top of the post indicates, now I’m ready.]
OK, bye for now, take care in the meanwhile, and don’t be surprised if I also visit your blog and all…
Many songs I like:
[I also listened to a lot of songs over the past few days. I couldn’t find a single song that went very well with any one of my overall moods over the past few days… So, don’t try to read too much into this choice. And, I’ve got bored, so I won’t offer any further comment on this song either. (And, one way or the other, I actually don’t know why I like this song or the extent to which I actually like it. Not as of now, any way!)
(Hindi) जनम जनम का साथ है निभाने को (“janam janam kaa saath hai nibhaane ko”)
Music: Shankar-Jaikishan
Lyrics: Hasrat Jaipuri
I could not find a good quality original audio track. The “revival” version is here: [^]. It was this version which I first listened to, and used to listen to, while taking leisurely evening drives (for up to, say, 50 miles almost every day) in the area around Santa Rosa. Which was in California. But it didn’t feel that way. (It also was the home town of the “Peanuts” comics creator.) …
…OK, I will throw in one more:
(Marathi) तूं तेव्हा तशी (“too, tevhaa tashee”)
Music and Singer: Pt. Hridayanath Mangeshkar
Lyrics: Aaratee Prabhu
Which is yet another poem by Aaratee Prabhu, converted into a song by Hridayanath. But I won’t be able to talk about it. Not as of today anyway. Listening is good. A good quality audio is here [^].
…And, since I have been listening to songs a lot over the past few days, one more, just for this time around…
(Western, Pop) “How deep is your love”
Band: Bee Gees
I don’t know what the “Official Video” means, but it is here: [^]. I also don’t know what the “Deluxe Edition” of the audio means, but it’s here [^]. … I always happened to listen to the audio, which was, you know, at many places in Pune like in the H’ club (of the student-run mess at COEP hostels); at the movie theatres running English movies in Pune (like Rahul, the old West-End, and Alka); most all restaurants from the Pune Camp area (and also a few from the Deccan area); also in the IIT Madras hostels; etc. All of this was during the ’80s, only. I don’t know why, but seems like I never came across this song, even at any of these places, once it was ’90s. … As usual, I didn’t even know the words, and so, couldn’t have searched for it. A few days ago, I was just going through a compilation of songs of ’70s when I spotted this one, and then searched on its lyrics and credits and all. I had remembered—and actually known—only the music… But yes, now that I know them, the words too seem pretty good…
Anyway, enough is enough. I already wrote a lot! High time to go back to doing nothing…
]
History:
2021.03.19 22:27 IST: Originally published.
2021.03.24 13:25 IST: Update noted at the top of the post and also inline. Some minor corrections/editing.
# Still if-ish…
1. Progress has slowed down:
Yep. … Rather, progress has been coming in the sputters.
I had never anticipated that translating my FDM code (for my new approach to QM) into a coherent set of theoretical statements is going to be so demanding or the progress so uneven. But that’s what has actually occurred.
To be able to focus better on the task at hand, I took this blog and my Twitter account off the ‘net from 26th February through 09th March.[* See the footnote below]
Yes, going off the ‘net did help.
Still, gone is that more or less smooth (or “linear”) flow of progress which I experienced in, say, mid-December 2020 through mid-January 2021 times or so, especially in January. Indeed, looking back at the past couple of weeks or so, I can say that a new pattern seems to have emerged. This pattern goes like this:
• On day 1, I get some good idea about how to capture / encapsulate / present something, or put it in a precise mathematical form. So, I get excited. (I even feel like coming back on the ‘net and saying something.)
• But right on day 2, I begin realizing that it doesn’t capture the truth in sufficient generality, i.e., that the insight is only partial. Or, may be, the idea even has loopholes in it, which come to the light only when I do a quick and dirty simulation about it.
• By the time it’s day 2-end, day 3 or at most day 4, I have become discouraged, and even begin thinking of postponing everything to a June-July 2021-based schedule.
• However, soon enough, I get some idea, hurriedly write it down…
• …But only for the whole cycle to repeat once again!
This kind of a cycle has repeated some 3–4 times within the past 15–20 days alone.
“Tiring” isn’t the right word. “Fatigue” is.
But there is no way out. I don’t have any one to even discuss anything (though I am ready, as always, from my side.)
And, it still isn’t mid-March yet. So, I keep going back to the “drawing board.” Somehow.
[* Footnote: Curiously though, both WordPress and RevolverMaps have reported hits to this blog right in this period—even when it was not available for public viewing! … What’s going on?]
2. Current status:
In a way, persistence does seem to have yielded something on the positive side, though it has not been good enough (and, any progress that did come, has been coming haltingly).
In particular, with persistence, I kept on finding certain loop-holes in my thinking (though not in the special cases which I have implemented in code). These are not major conceptual errors. But errors, they still are. Some of these can be traced back to the June-July times last year. Funny enough, as I flip through my thoughts (and at times through my journal pages), some bits of some ideas regarding how I could possibly get out of these loop-holes, seem to have occurred, in some seed form (or half-baked form), right back to those times. …
Anyway, the current status is that I think that I am nearing completing a correct description, for the new approach, for the linear momentum operator.
This is the most important operator, because in QM, you use this operator, together with the position operators, in order to derive the operators for so many other dynamical quantities, e.g. the total energy, the angular momentum, etc. (See Shankar’s treatment, which was reproduced in the postulates document here [^].)
The biggest source of trouble for the linear momentum operator has been in establishing a mathematically precise pathway (and not just a conceptual one) between my approach and the mainstream QM. What I mean to say is this:
I could have simply postulated an equation (which I used in my code), and presented it as simply coming out of the blue, and be done with it. It would work; many people in QM have followed precisely this path. But I didn’t want to do that.
I also wanted to see if I can make the connections between my new approach and the MSQM as easy to grasp as possible (i.e., for an expert of MSQM). Making people understand wasn’t the only motive, however. I also wanted to anticipate as many objections as I could—apart from spotting errors, that is. Another thing: Given my convictions, I also have to make sure that whatever I propose, there has to be a consistent ontological “picture” which goes with it. I don’t theorize with ontology as an after-thought.
But troubles kept coming up right in the first consideration—in clearly spelling out the precise differences of the basic ideas between my approach and the MSQM.
And yes, MSQM does have a way of suddenly throwing up issues that are quite tricky to handle.
Just for this topic of linear momentum, check out, for instance, this thread at the Physics StackExchange [^] (especially, Dr. Luboš Motl’s answer), and this thread [^] (especially, Dr. Arnold Neumaier’s answer). The more advanced parts of both these threads are, frankly, beyond my capacity. Currently, I only aim for that level of rigour which is at, say, exactly and precisely the first three sentences from Motl’s answer!…
…We the engineers can happily ignore any unpleasant effects that might occur at the singular and boundary points. We simply try and see if we can get away ejecting such isolated domain points from any theoretical consideration! If something workable can still be obtained even after removing such points out of consideration, we go for it. So, that’s the first thing we check. Usually, it turns out we can isolate them out, and so we proceed to do precisely that! And that is precisely the level at which I am operating…
Even then, issues are tricky. And, at least IMO, a good part of the blame must lie with the confusions wrought by the Instrumentalist’s dogma.
… What the hell, if $\Psi(x,t)$ isn’t an observable itself, then why does it find a place in their theory (even if only indirectly, as in Heisenberg’s formulation)? … Why can’t I just talk of a property that exists at each infinitesimal CV (control volume) $\text{d}x$? why must I instead take something of interest, then throw in the middle an operator (say a suitable Dirac’s delta), and then bury it all behind an integral sign? why can’t those guys (I mean the mathematical terms) break the cage of the integral sign, and come out in the open, just to feel some neat fresh air?
… Little wonder these MSQM folks live with an in-principle oscillatory universe. It’s a weird universe they have.
In their universe, Schrodinger’s cat is initially in a superposition of being alive and dead. But that’s not actually the most surprising part. Schrodinger’s cat then momentarily (or for a long but finite time) becomes full dead; but then, immediately, it “returns” from that state (of being actually dead) to once again be in a superposition of dead + alive; it spends some time in that superposition; it then momentarily (or for a long but finite time) becomes fully alive too; but only to return back into that surreal superposition…
And it is this whole cycle which goes on repeating ad infinitum.
… No one tells you. But that’s precisely what the framework of MS QM actually predicts.
MSQM doesn’t predict that once a cat does somehow become dead, it remains dead forever. And that’s because, in the MSQM, the only available mathematical machinery (which has any explanation for the quantum phenomena), in principle, predicts only infinite cycles of superposition–life–superposition–death–superposition–….
The postulates of the MS QM necessarily lead to a forever oscillatory universe! Little wonder they can’t solve the measurement problem!
One consequence of such a state of the MS QM theory is that thinking through any aspect becomes that much harder. It isn’t impossible. But hard, yes, it certainly is, where hard means: “tricky”.
Anyway, since the day before yesterday, it has begun looking like this topic (of linear momentum operator), and to the depth I mentioned above, might get over in a few days’ time. At least, that day 1–day 2–etc. pattern seems to have broken—at least for now!
If things go right at least this time round, then I might be able to finish the linear momentum operator by, say, 15th of March. Or 18th. Or 20th.
Addendum especially for Indians reading this post: No, the oscillatory universe of the MSQM people is not your usual birth-life-death-rebirth cycle as mentioned in the ancient Indian literature. The MSQM kind of “oscillations” aren’t about reincarnations of the same soul in different bodies. In MSQM, the cat “return”s from being dead with exactly the same physical body. So, it’s not a soul temporarily acquiring one body for a brief while, and then discarding it upon its degeneration, only to get another body eventually (due to “karma” or whatever).
So, the main point is: In MSQM, Schrodinger’s cat not just manages to keep the same body, the physical laws mandate that it be exactly the same body (the same material) too! … And, the MS QM doesn’t talk of a soul anyway; it concerns itself purely with the physical aspects—which is a good thing if you ask me. (Just check the postulates document, and pick up a text book to see their typical implications.)
3. Other major tasks to be done (after the linear momentum operator):
• Write down a brief but sufficiently accurate description of the measurement process following my new approach. This is the easiest task among all the remaining ones, because much of such a description can only be qualitative.
• Translate my ideas for the orbital angular momentum into precise mathematical terms—something still to be done, but here I guess that with almost all possible troubles having already shown up right in the linear momentum stage, the angular momentum should proceed relatively smoothly (though it too is going to take quite some time).
• And then, the biggest remaining task. Actually, many sub-tasks:
• Study and take notes on the QM spin.
• Think through and integrate my new approach to it.
• Write down as much using quantitative terms as possible.
At this stage, I don’t know how long it’s going to take. However, for now, I’ve decided on the following plan…
4. Plan for now:
If there remain some issues with the linear momentum operator (actually, in respect of its multi-faceted usages in the MSQM, and in explaining these from the PoV of my approach including ontology), and if these still remain not satisfactorily resolved even by 15th or 18th of March (roughly, one week from now), then I will take a temporary (but long) break from QM, and instead turn my attention to Data Science.
However, if my description for $\hat{p}()$ (i.e. the linear momentum operator) does go through smoothly during the next week, then I will immediately proceed with the remaining QM-related tasks too (i.e., only those which are listed above).
5. Bottom-line:
Expect a blog post in a week’s time or so, concerning an update with respect to the linear momentum operator and all. (I will try to keep this blog open for the upcoming week, but I guess my Twitter account is best kept closed for now—I just don’t have the time to keep posting updates there.)
In the meanwhile, take care and bye for now.
A song I like:
(Marathi) ती येते आणिक जाते (“tee yete aaNik jaate…”)
Lyrics: Aaratee Prabhu
Music: Pt. Hridaynath Mangeshkar
Singer: Mahendra Kapoor
[ Mahendra Kapoor has sung this song very well (even if he wasn’t a native Marathi speaker). Hridaynath Mangeshkar’s music, as usual, pays real good attention to the words, while also managing to impart an ingenious melodic quality to the tune—something that’s very rare for pop music in any language.
But still, frankly, this song is almost as nothing if you don’t get the lyrics of it.
And, to get the lyrics here, it’s not enough to know Marathi (the language) alone. You also have to “get” what precisely the poet must have meant when he used some word; for instance, the word “ती” (“she”). [Hint: Well, the hint has already been given. …Notice, I said “what”, and not “who”, in the preceding sentence!]
But yes, once you begin to get the subtle shades of the poetry here, then you can also begin to appreciate Hridaynath’s composition even better—you begin to see the more subtle musical phrases, the twists and turns and twirls in the tune which you had missed earlier. So, there’s a kind of a virtuous feedback circle going on here, between poetry and music… And yes, you also appreciate Mahendra Kapoor’s singing better as you go through the circle.
This song originally appeared as a part of a compilation of Aaratee Prabhu’s poems. If I mistake not (speaking purely from memory, and from a distance of several decades), the book in question was जोगवा (“jogawaa”). I had bought a copy of it during my UG days at COEP, out of my pocket-money.
We in fact had used another poem from this book as a part of our dramatics for the Firodiya Karandak. It was included on my insistence; I was a co-author of the script. As to the competition, we did win the first prize, but not so much because of the script. We won mainly because our singing and music team had such a fantastic, outstanding class to them. Several of them later on went on to make full-time careers in music…. The main judge was the late music composer Anand Modak, who later went on to win National awards too, but back then, he was at a fledgling stage of his career. But yes, talking of the script itself, in the informal chat after the prize announcement ceremony, he did mention, unprompted and on his own, that our script was good too! (Yaaaay!!) …Back then, there was no separate prize for the best script, but if there were one, then we would’ve probably won it. During that informal chat, the judges hadn’t bothered to even passingly mention any script by any other team!)
…Coming back to the book of poetry (Aaratee Prabhu’s), I think I still have my copy lying somewhere deep in one of the boxes, though by now, due to too many moves and all (I had also taken it to USA the first time I went there), its cover already had got dislodged from the book itself. Then, a couple of weeks ago, I saw only the title page peeping out of some bunch of unrelated and loose papers, and so, looks like, the book by now has reached a more advanced stage of disrepair! … Doesn’t matter; no one else is going to read it anyway!
A good quality audio is here [^].
]
History:
2021.03.10 20:57 IST: Originally published.
2021.03.10 22.45 IST: Added links to the Physics StackExchange threads and the subsequent comments up to the mention of the measurement problem. Other minor editing. Done with this post now!
2021.03.12 18.43 IST: Some further additions, especially in section 2, including the Addendum written for Indian readers. Also, some further additions in the songs section. Some more editing. Now, am really done with this post!
|
|
# Math Help - Indefinite Integration Problem
1. ## Indefinite Integration Problem
I'm not sure how to approach this...
$\int e^{2x}sin(3x)$
I don't know where to start. This is on an assignment largely based on integration by parts, but I can't figure out what to do. If I do integration by parts, I still have to integrate
$\int e^{2x}cos(3x)$
so I'm not getting any closer to an answer.
Using online solving tools either gets me no answer, or something like this;
$\dfrac{e^{2x}(2sin(3x) - 3cos(3x))}{13}$
But it isn't much of a hint. Can someone point me in the right direction?
2. Originally Posted by Sucker Punch
I'm not sure how to approach this...
$\int e^{2x}sin(3x)$
I don't know where to start. This is on an assignment largely based on integration by parts, but I can't figure out what to do. If I do integration by parts, I still have to integrate
$\int e^{2x}cos(3x)$
so I'm not getting any closer to an answer.
Using online solving tools either gets me no answer, or something like this;
$\dfrac{e^{2x}(2sin(3x) - 3cos(3x))}{13}$
But it isn't much of a hint. Can someone point me in the right direction?
Do integration by parts TWICE, choosing $v'=e^{2x}$ in both cases, and then pass a summand from the RHS to the LHS...and voila!
Tonio
3. Originally Posted by tonio
Do integration by parts TWICE, choosing $v'=e^{2x}$ in both cases, and then pass a summand from the RHS to the LHS...and voila!
Tonio
Ah, of course. Thank you very much, I got it.
4. Then there is the formula:
$\displaystyle \int e^{ax}\sin(bx) dx = e^{ax}\left\{\frac{a\sin{bx}-b\cos{bx}}{a^2+b^2}\right\}+k$
Similarly:
$\displaystyle \int e^{ax}\cos(bx) dx = e^{ax}\left\{\frac{a\cos{bx}+b\sin{bx}}{a^2+b^2}\right\}+k$
They can be derived using by-parts twice -- same as the problem.
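Worked out for the integral in this thread (with $v'=e^{2x}$ each time, as Tonio suggested), the by-parts-twice trick goes like this:

```latex
\text{Let } I = \int e^{2x}\sin(3x)\,dx. \text{ Integrating by parts once:}\\
I = \tfrac{1}{2}e^{2x}\sin(3x) - \tfrac{3}{2}\int e^{2x}\cos(3x)\,dx,\\
\text{and by parts again on the remaining integral:}\\
\int e^{2x}\cos(3x)\,dx = \tfrac{1}{2}e^{2x}\cos(3x) + \tfrac{3}{2}I.\\
\text{Substituting back and passing the } I \text{ term to the left:}\\
\tfrac{13}{4}I = \tfrac{1}{4}e^{2x}\bigl(2\sin(3x) - 3\cos(3x)\bigr)
\;\Longrightarrow\;
I = \frac{e^{2x}\bigl(2\sin(3x) - 3\cos(3x)\bigr)}{13} + k
```

which matches the closed form the online solver returned.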
|
|
# Which sodium halide has the highest melting point?
$\begin{array}{ll}(a)\;NaCl&(b)\;NaI\\(c)\;NaBr&(d)\;NaF\end{array}$
The greater the ionic character, the higher the melting point. Among the sodium halides, NaF has the smallest anion, and hence the highest lattice energy and melting point.
Hence (d) is the correct answer.
answered Jan 27, 2014
|
|
# Schulz type iterations
## On approximating the eigenvalues and eigenvectors of linear continuous operators
Abstract We consider the computation of an eigenpair (an eigenvector $$v=(v^{(i)})_{i=1,n}$$ and an eigenvalue $$\lambda$$) of a matrix $$A\in\mathbb{R}^{n\times n}$$, by…
## Remarks on some Newton and Chebyshev-type methods for approximation eigenvalues and eigenvectors of matrices
Abstract We consider a square matrix $$A$$ with real or complex elements. We denote $$\mathbb{K}=\mathbb{R}$$ or $$\mathbb{C}$$ and we are…
|
|
# Write an Equation Given Two Points
## Compose linear functions given two points
Growing Kids
Teacher Contributed
## Real World Applications – Algebra I
### Topic
How can we represent a kid’s growing height as a linear relationship?
### Student Exploration
Most doctors agree that the “normal” growth rate for children after the age of 2 is about \begin{align*}2 \ \frac{1}{2} \ inches\end{align*} or 6 centimeters per year until adolescence. Let’s represent this as a linear relationship using inches.
Let’s say a kid at 2 years old is 3 feet tall, or 36 inches. Using the information given, this kid will be 38.5 inches tall when 3 years old. Let’s write an equation representing this relationship using these two data points.
Given the information about the heights, we’d first have to calculate the slope (even though that was given to us). The slope would be \begin{align*}m = \frac{38.5 - 36}{3 - 2} = \frac{2.5}{1} = 2.5\end{align*} inches per year.
Now let’s use one of our data points and the slope to find the equation to represent this relationship. We’re going to use the slope-intercept form, \begin{align*}y = mx + b\end{align*}, and substitute what we know so far. Using the point (2, 36): \begin{align*}36 = 2.5(2) + b\end{align*}, so \begin{align*}b = 36 - 5 = 31\end{align*}.
Our equation is: \begin{align*}y = 2.5x + 31\end{align*}
Now, this equation represents the linear relationship of a growing child after the age of 2. Looking at the equation, 31 is \begin{align*}b\end{align*}. This means that the \begin{align*}y-\end{align*}intercept is 31 inches. This can’t make sense, because then this would mean that a child was born at 31 inches! This also wouldn’t make sense when a kid hits puberty, because his/her growth spurt would be a lot faster!
Now let’s look at this linear relationship as a function. As you read from the concept, the \begin{align*}f(x)\end{align*} is the output. We can rewrite this relationship as \begin{align*}f(x) = 2.5x + 31\end{align*}. We can use this function to determine height at different ages.
If we were to find \begin{align*}f(5)\end{align*}, this means that we need to find the height of the child at 5 years old. Let’s figure it out: \begin{align*}f(5) = 2.5(5) + 31 = 12.5 + 31 = 43.5\end{align*}
This means that at 5 years old, the child will be 43.5 inches tall.
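If you want to double-check the model numerically, here is a short Python sketch (not part of the lesson itself):

```python
# The lesson's height model: f(x) = 2.5x + 31, where x is age in years.
# Only meaningful between age 2 and adolescence, as discussed above.
def height(age_years):
    return 2.5 * age_years + 31

print(height(2))  # 36.0 inches -- our first data point
print(height(3))  # 38.5 inches -- our second data point
print(height(5))  # 43.5 inches at age 5
```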
What’s \begin{align*}f(7)\end{align*} and what does it mean?
### Extension Investigation
Try asking a family member how tall you were at two different ages in your life, and practice finding the rate of change, or the slope between these two points. Would this equation make sense? Why or why not? Would this equation apply when you’re over 30 years old? Would you be getting taller at that age?
|
|
Stefan Scholl wrote:
> On 2001-10-19 13:31:41Z, Pierre-Charles David <Pierre-Charles.David / emn.fr> wrote:
>
>>For example, the (La)TeX source fragment \texttt{some text} can be
>>represented in TeXML as <cmd name="texttt">some text</cmd>. The DTD for
>>
> This seems to be pretty useless.
No it's not. :-)
> <tt>sometext</tt>
>
> is easier and would be more XML like. Representing different
> LaTeX commands with one element and different values for an
> attribute looks like the work of a XML beginner.
I don't think you understood the purpose of TeXML (probably because of
my explanations). TeXML is NOT designed to be coded by hand. It is
generally the result of an XSLT stylesheet. As an author, you write
<tt>sometext</tt>, and your stylesheet outputs the TeXML equivalent,
which is then transformed into LaTeX source to be typeset. The goal is
to be able to leverage the power of LaTeX, both the typesetting engine
and the numerous extensions. It would not be possible to define an XML
DTD for TeXML using your approach (<tt>sometext</tt>) because the set of
possible LaTeX commands is not known (or even bounded). Of course, a DTD
is not always necessary, and well-formedness is often enough, but not
here: how would you know given '<foo>stg</foo>' if foo is supposed to be
a LaTeX command (translated as \foo{stg}) or an environment
(\begin{foo}stg\end{foo}) for example?
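To make the point concrete, here is a rough sketch (mine, not from the TeXML DTD — in particular the <env> element and its handling are assumptions for illustration) of the kind of TeXML-to-LaTeX transformation a stylesheet's output would then undergo:

```python
import xml.etree.ElementTree as ET

def texml_to_latex(elem):
    """Recursively convert a simplified TeXML-like tree into LaTeX source."""
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == "cmd":
            # <cmd name="texttt">...</cmd>  ->  \texttt{...}
            parts.append("\\%s{%s}" % (child.get("name"), texml_to_latex(child)))
        elif child.tag == "env":
            # Hypothetical environment element: <env name="foo">...</env>
            name = child.get("name")
            parts.append("\\begin{%s}%s\\end{%s}" % (name, texml_to_latex(child), name))
        parts.append(child.tail or "")
    return "".join(parts)

doc = ET.fromstring('<TeXML>Some <cmd name="texttt">typewriter</cmd> text.</TeXML>')
print(texml_to_latex(doc))  # Some \texttt{typewriter} text.
```

Note how the element-plus-name encoding means the dispatch never needs to enumerate the full (unbounded) set of LaTeX commands.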
BTW, I did not design TeXML. Douglas Lovell did, and I don't think he
can be considered an XML beginner
(http://www.unicode.org/iuc/iuc19/b083.html).
--
Pierre-Charles David (pcdavid <at> emn <dot> fr)
Computer Science PhD Student, École des Mines de Nantes, France
Homepage: http://purl.org/net/home/pcdavid
|
|
# Top Five Challenges Facing the Practice of Fault-tolerance
Ram Chillarege
IBM Thomas J. Watson Research Center, 1994
Abstract -- This paper identifies key problem areas for the fault-tolerant community to address. Changes in technology, expectation of society, and needs of the market pressure the design point for fault-tolerance in their own special manner. A developer, who has only a finite set of resources and limited time, responds to these pressures with a set of priorities. I believe that the top five challenges, which ultimately drive the exploitation of fault-tolerant technology are:
(1) Shipping a product on schedule
(2) Reducing unavailability
(3) Non-disruptive change management
(4) Human fault-tolerance
(5) All over again in the distributed world.
Each of these is discussed to explore its influence on the choice for fault-tolerance. Understanding them is key to guiding research investment and maximizing its derivatives.
Lecture Notes in Computer Science 774, "Hardware and Software Architectures for Fault Tolerance," Springer-Verlag 1994, ISBN 3-540-57767-X
# 1. The Area of Fault-tolerance
The area of fault-tolerance is never clearly defined; however, in some quarters it is assumed that fault tolerant computing appears in a box. This is misleading given that the ideas of fault-tolerance permeate the entire industry, into hardware, software, and applications. Yet, it is not uncommon for industry segmentation efforts to divvy up the market and identify one of them as the fault tolerant market. This market, when quantified by adding up the revenue from fault-tolerant boxes, is only in the range of two billion dollars [1], in an industry that is estimated at more than two hundred billion dollars. However, as most engineers would agree, the perception that fault tolerant computing comes in a box, either hardware or software, is only a very narrow view of the area.
A larger view, one I believe to be more accurate, is that the ideas and concepts of fault-tolerance permeate every segment of the industry -- starting with the device and the machine, and following through with systems software, sub-system software, application software, and including the end user. However, this larger vision is confounded by the fact that there are several different forces and expectations on what is considered fault-tolerance and what is not. The single hardest problem that continues to persist in this community is the definition of the faults that need to be tolerated [2] [3]. An engineering effort to design fault-tolerance is effective only when there is a clear picture of what faults need to be tolerated. These questions need to be answered at every level of the system. There are trade-offs in cost, manufacturability, design time and capability in arriving at a design point. Ultimately, like any decision, there is a substantial amount of subjective judgement used for that purpose.
Independent of the technical challenges that face the designer of fault tolerant machines, there is another dimension which is based on society and its expectations placed on computers. In the long run this has a more significant impact on what needs to be designed into machines than is ordinarily given credit. Let us for a moment go back in time and revisit the Apollo 1 disaster that took place more than two decades ago. At that time the tragedy brought about a grave sadness in our society. We let it pass, hoped it would not happen again, and continued the pursuit of scientific accomplishments for mankind. Contrasted with the Challenger disaster that took place only a few years ago, there was a very different perception in society. It was considered unacceptable that such a disaster could take place. The expectations on technology had changed in the minds of people. Technology had significantly advanced, and the average person trusted it a lot more, whether or not that trust was rightfully placed.
These changes in expectation place an enormous pressure on the designers of equipment which is used in everyday life. People expect them to work and expect them to be reliable, whether or not the product has such specifications. In cases where safety is critical, there may exist an elaborate process and specification to insure safety. However, there are computers imbedded in consumer devices which may or may not have gone through the design and scrutiny to insure reliability, safety and dependability.
## 1.1 Operating Just Below the Threshold of Pain
The question of engineering fault-tolerance is a very critical one. Just how much fault-tolerance is needed for a system or an application is a hard question. Ideally, this question should have an engineering answer, but in reality that is rare. It is one that has to be answered bearing in mind the expectations of the customer, the capability of the technology, and what is considered competitive in the marketplace. I propose that a realistic answer to this question is to recognize that a system need be fault tolerant only to the extent that it operates just below the threshold of pain.
Fault-tolerance does not come free. Developing a system which is fault tolerant beyond customer expectation is excessive in cost and cannot be competitive. On the other hand, if the system fails too often causing customer dissatisfaction, one will lose market share. The trick is to understand what that threshold of pain is and insure that the system operates just below that threshold. This would then be the perfect engineering solution.
Understanding the threshold of pain, knowing the limits of the technology and the capability of alternate offerings is critical. One approach is to dollarize lost business opportunity to provide a quantitative mechanism to arrive at a reasonable specification. However, when the impact is customer satisfaction and lost market share, it is more complicated. When the impact is perception, it is truly difficult. When safety is in question, all bets are off. Nevertheless, the bottom line is that one has to maintain a careful balance in designing fault-tolerance capability.
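As a sketch of what such dollarizing might look like (all figures below are hypothetical, invented purely for illustration and not taken from the paper):

```python
# Hypothetical annual figures for one system -- not from any real product.
downtime_cost_per_hour = 10_000   # dollarized lost business per outage hour
outage_hours_without_ft = 20      # expected outage without the extra fault tolerance
outage_hours_with_ft = 5          # expected outage with it
ft_annual_cost = 80_000           # amortized cost of the added redundancy and design

savings = (outage_hours_without_ft - outage_hours_with_ft) * downtime_cost_per_hour
net_benefit = savings - ft_annual_cost
print(net_benefit)  # 70000: here the investment pays for itself
```

When the net benefit goes negative, the extra fault-tolerance is, by this crude measure, beyond the threshold of pain the customer will fund.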
# 2. Forces Driving the Prioritization
The design of fault-tolerant computing is influenced by several changes in the current industrial environment. These changes come from different directions and pressure the fault tolerant community in their own special manner. For the purposes of this paper we will discuss a few of the pressures that influence the current environment.
Component reliability and speed have both made dramatic improvements. The hard failure rate for IBM mainframes (measured in failures per year per MIPS) decreased by almost two orders of magnitude in 10 years [4]. The dramatic improvement in reliability decreases the sense of criticality on fault-tolerance, although it does not completely go away. However, this trend coupled with other effects does tend to change the focus. One of the other significant effects is standardization and commoditization, yielding a very competitive market. Standardization increases competition, reduces profit margins, and puts a very strong focus on cost. This competitiveness is experienced in almost every segment, more so in the lower price segments than the higher price segments. The net result is a tremendous focus on cost.
The belief that fault-tolerance increases cost is quite pervasive, although it is not clear that it does so from a life-cycle-cost perspective. Fundamentally, fault-tolerance in hardware is achieved through the use of redundant resources. Redundant resources cost money (or performance) and no matter how small come under perpetual debate.
The drive towards standardization has decreased product differentiation. As a result, in a market with standardized offerings every vendor is looking for product differentiators. Fault-tolerance is certainly a key differentiator amongst equivalent function, and this should help drive the need for greater fault-tolerance.
The customer's perspective on these problems is driven from a different set of forces. There is a much greater dependency on information technology as time progresses, and this dependency is not as well recognized until things go wrong. This subliminal nature of computing in the work place has resulted in much higher expectations on delivery of the service. Customers do not write down specifications on dependability, reliability, availability or serviceability. They have expectations which have grown to the point that dependability is expected as a given. As a result, the focus of the customer is on the solution and not so much on how we got to the solution. Unless a vendor recognizes these expectations and designs appropriately, the resulting solution may be far from the expectations.
Given the downturn in the economy in the last few years, there has been a trend to reduce expenditure on information systems. One of the tendencies is to move off equipment which is considered higher priced and on to lower priced alternatives. These moves are usually not matched with a corresponding shift in expectations as far as reliability and availability are concerned. This is particularly true where applications on centralized mainframe computers are moved to networks of distributed personal computers. While the mainframe systems had experienced staff who understood systems management, dependencies, and recovery strategies, the equivalent function or skill may not exist for the distributed setup. However, with computing in the hands of non-experts, the expectations continue to carry through while the underlying enablers may not exist, and the risks are not adequately comprehended.
Given these various forces one can see that the design point for fault tolerant equipment is indeed a very nebulous issue. Although it is good to "ask the customer," on a realistic front, questions on expectations of reliability and availability, with the corresponding cost and risk, are extremely hard to quantify.
# 3. The Top 5 Challenges
There is only a finite amount of time and resource that can be juggled to produce product and profit. Fault-tolerance, as a technology or a methodology to enhance a product, plays a key role in it. Without the techniques of fault-tolerance it is unlikely that any of the products would function at levels of acceptability. Yet, since the use of fault-tolerance takes a finite amount of resource (parts and development), its application is always debated. Thus, there are times when its use, though arguably wise and appropriate, has to be traded off due to its impact on the practicality of producing product.
There is no magic answer to the tradeoffs that are debated. Sometimes, there are costing procedures and data to understand the tradeoffs. Most times, these decisions have to be made with less than perfect information. Thus judgment, experience, and vision drive much of the decision making. What results is a set of priorities. The priorities change with time and are likely to be different for different product lines. However, stepping back and observing the trends, some generalizations become apparent. The following sections list five items, in order, that I believe drive most of the priorities. There are numerous data sources and facts from technical, trade and news articles that provide bits and pieces of information. Although some are cited, the arguments are developed over a much larger body of information. The compilation is purely subjective and the prioritization is a further refinement of it.
## 3.1 Shipping a Product on Schedule
This is by far the single largest force that drives what gets built or not, and for the most part, rightly so. What has that to do with fault-tolerance? It is worthwhile noting that producing product is always king - the source of revenue and sustenance. The current development process is under extreme pressure to reduce cycle time. Getting to market first provides advantage, and the price of not being first is significant. This is driven by the need in a competitive market to introduce products faster, and a technological environment where product life times are shrinking. The reduction in cycle time impacts market opportunity realised and life-cycle costs.
The development cycle time is proportional to the amount of function being designed. The shrinking of development cycle time brings under scrutiny any function that could be considered additional. Unfortunately, this pressure does not spare function used to provide fault-tolerance.
Fault-tolerance requires additional resource not only in hardware but also in design and verification, which add to development cycle time. Any extra function that doesn't directly correspond to a marketable feature comes under scrutiny. Thus, the pressure of reducing cycle time can indirectly work against functionality such as fault-tolerance, which is usually a support type of function in the background. Until fault-tolerance becomes a feature which directly translates to customer gain, the cycle time pressures do not work in its favor.
Systems have become very complex. The complexity exists at almost every level of the system -- the hardware, the software, the applications and the user interface. Designing complex systems increases development cycle time, and also creates correspondingly complex failure mechanisms. Although automation and computer aided design techniques have helped reduce the burden, especially in hardware, they cause a new kind of problem. The errors that get inserted, and the ones that dominate the development process, are the higher level specification and design errors. These design errors have a large impact on the overall development cycle time, significantly impacting cost. There are no easy solutions to these problems and an understanding of the fault models is only emerging.
The classical positioning of dependable computing and fault-tolerance has been not to address the faults that escape the development process but to address the random errors attributed to nature. Unfortunately, this is probably a fairly major oversight in this industry. One of the critical paths in a business is the development cycle time. The compressed schedules can result in a greater number of errors that actually escape into the field. Unfortunately, this has never been the focus and is not easy to make the focus. It would make a difference if these error escapes were also the focus of the fault-tolerant computing research community.
## 3.2 Reducing Unavailability
From the perspective of a commercial customer, it is the loss of availability that causes the largest impact. The causes of outage need to be carefully understood before one can develop a strategy for where fault tolerance needs to be applied. There have been quite a few studies that identify the various causes of outage and their impact. A widely recognized conclusion is that the Pareto is dominated by software and procedural issues, such as operator errors or user errors. Next to these errors are hardware and environmental problems. Studies show that a decade ago, hardware outage dominated the Pareto, but improvements in technology and manufacturing have decreased that contribution. However, there have not been similar improvements in software, which is why it now dominates the causes of outage [5]. It is commonplace in the industry to separate outage causes into scheduled and unscheduled outage. Given the Pareto, this split is more relevant in software than in hardware.
An unscheduled outage is an act of technology or nature and is the kind of fault that is commonly the target of fault-tolerant design. Typically, these faults are due to manufacturing defects or marginal performance which result in transient or intermittent errors. Unscheduled outage can also occur due to software bugs or defects. Although there is considerable effort expended on de-bugging software prior to release, there is no such thing as the last bug. Bugs cause failures, but they do not always result in a complete outage [6]. In fact, severity 1 defects (on a scale of 1-4), implying a complete loss of function, are less than 10%, and severity 2 defects, which require some circumvention to restore operation, are typically between 20%-40%. The severity 3's and 4's correspond to an annoyance and are usually the bulk of the problems. Not every software defect hits every customer. However, it is common practice to upgrade a release of software with a maintenance release. A maintenance release includes recent bug-fixes, and the time required for periodic maintenance is usually accounted under scheduled outage.
The largest part of outage due to software is what may be called planned or scheduled outage. Primarily, these are for maintenance, reconfiguration, upgrade etc. Over the past few years we have seen that the proportion of scheduled outage, especially in software, has greatly increased. The mean scheduled outage in the commercial data processing center is at least twice that of an unscheduled outage. It is also the case that the total amount of outage caused, due to scheduled down time, far exceeds unscheduled outage, particularly for software. Typically commercial systems have scheduled down time to reorganize data bases, accommodate new configurations or tune for performance. This is an aspect of outage that has not been adequately studied in the academic community. Although, it may sound like a topic for systems management it impacts availability most directly.
As the industry places greater emphasis on reducing software defects and their impact, the proportion of scheduled outage will rise. Reducing scheduled outage down to zero is becoming a requirement in some commercial applications that call for 24x7 operation, i.e., 24 hours a day, 7 days a week [7]. To reach this design point one has to reduce outage from all sources. The difficulty is in reducing scheduled outage, since most old designs assume the availability of a window for repair and maintenance. To reach the goal of 24x7 operation one has to broaden the vision and scope of fault-tolerance to include all sources of outage. This calls for rethinking the design point. The task is much simpler for hardware, where each machine design starts with almost a clean slate. Designing software, in contrast, is much more constrained, building on a base of code whose design might not all be understood or documented.
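The arithmetic behind this point can be sketched as follows (the outage figures are hypothetical, chosen only to illustrate how scheduled outage can dominate the total):

```python
# Hypothetical annual outage budget for a commercial system.
hours_per_year = 24 * 365
unscheduled_hours = 4.0    # transient hardware/software failures
scheduled_hours = 8.0      # maintenance, reorganization, upgrades

availability = 1 - (unscheduled_hours + scheduled_hours) / hours_per_year
print(round(availability, 5))  # 0.99863 -- and most of the loss is scheduled
```

With numbers like these, eliminating every unscheduled failure still leaves two-thirds of the downtime in place, which is why 24x7 operation forces the scheduled sources into scope.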
## 3.3 Non-Disruptive Change Management
The earlier discussion on scheduled outage brings into focus a very important aspect of software maintenance. Software will always need to be maintained: the installation of patches, upgrade to a newer release, establishing of new interfaces, etc. All of these cause disruption and, more often than not, demand an outage. Unless software has been designed to be maintained non-disruptively, it is unlikely this capability can be retrofitted. Increasingly, networked applications create situations where products communicate across different releases and functionality, requiring N to N+1 compatibility. This requirement has serious implications for how software is designed, control structures are maintained, and data is shared. Architecting this from the very beginning makes the task of designing upgradability much easier. Trying to do this in a legacy system is invariably a hard exercise and sometimes infeasible.
There are some techniques that can be adopted towards non-disruptive change management. Broadly, they fall into a couple of major categories: one being a hot standby and the other the mythical modular construction that can be maintained on-line. With legacy systems the choices are more limited, given a base architecture which is inherited, and a hot standby approach is easier to conceive [8], [9]. In a hot standby, a second version of the application is brought up and users are migrated from one application to the other while the first version is taken down for rework. To do this one has to maintain communication between the applications, consistency of data, and a failover capability. Alternatively, applications can be built so they are more modular and the shared resources managed to permit online maintenance.
A related problem that impacts non-disruptive change management is the very first step, namely, problem isolation and diagnosis. Unlike hardware, software failures do not always result in adequate information to identify the fault or the cause of failure. In IBM parlance, this is commonly called first failure data capture. Studies have shown that first failure data capture is usually quite poor, barring some of the mainframe software which has traditionally had a lot of instrumentation [9]. Most software does not trap, trace, or log adequate information to help diagnose the failure the first time it occurs. Furthermore, error propagation and latency make it hard to identify the root cause. The problem then needs to be re-created, which, at the customer site, causes further disruption and outage.
In a network environment, an application can be spread across the network in a client-server relationship, with data drawn from distributed databases. Providing a non-disruptive solution for change management becomes more complicated. To reduce outage, change management has to be carefully architected. Current trends in this area are mostly ad hoc, and a unifying theme and architecture is certainly an opportune area for research. It would also provide for better inter-operability across multi-vendor networks.
## 3.4 Human Fault-Tolerance
With the current focus on the defect problem and unscheduled outage, their impact will eventually be decreased. The scheduled down time will also decrease with improved systems management. However, a new problem will then start to dominate. This problem has to do with the human comprehension of tasks being performed. In IBM parlance, we call these the non-defect oriented problems. As the name suggests, a non-defect is one that does not require a code change to fix the problem. Non-defect problems also include tasks such as installation and migration, provided they are problems related to comprehensibility of instructions and tasks, as opposed to defects in the code.
A non-defect can cause work to be stopped by the human, resulting in an eventual loss of availability. This disruption in the work can also result in calls to the vendor, increasing service costs. More importantly, these problems can eventually impact the perception of the product. Increasingly, information on a product is integrated with the application, making documentation more accessible and available. New graphical user interfaces have paradigms that make the execution of a task far more intuitive. Additionally, there can evolve a culture of "the user is always right". In this environment, the concept of availability needs to be re-thought, and correspondingly the concepts of fault-tolerance. The classical user error will quickly become passé. Nevertheless, designing systems to tolerate human error is only part of the story. Designing systems to ensure a certain degree of usability, as perceived by a user, is certainly a new challenge for the fault-tolerant community.
## 3.5 All Over Again in the Distributed World
One of the philosophies in fault-tolerance goes back to John von Neumann: "the synthesis of reliable organisms from unreliable components". Stretching this to the present day, we often think of designing distributed systems using parts from the single-system era. If these were single systems with no fault-tolerance being used to build distributed systems, we might be luckier. However, single high-end commercial systems are amazingly fault-tolerant. When we lash a few of them together, one has to be careful in understanding the failure and recovery semantics before designing a higher-level protocol, for we are no longer synthesizing a reliable system from unreliable components.
The problems emanate because there are several layers of recovery management, each one optimized locally, which may not prove to be a good global optimum. For example, assume there are two paths to a disk via two different fault-tolerant controllers. If an error condition presented on a request is retried repeatedly by the controller, that would be a poor choice given the configuration. Failing the request reported with an error and re-issuing it on a different path would be preferred. However, this implies understanding the recovery semantics and disabling them to develop yet another higher-level policy. The above situation only illustrates the tip of the iceberg; there are several nuances that need to be dealt with.
In essence, one has to think of the design point and the strategy all over again in the distributed world. There are several benefits, one of them being the availability of a substantial number of spares. With plenty of spares, shoot and restart might be a better policy than trying to go through an elaborate recovery process. Assuming that error detection is available, sparing provides a nice repair policy. Contrast this with a high-end commercial processor such as the IBM ES/9000 Model 900 [11], which has extensive checking but limited spares, whereas a network of workstations in today's technology, with minimal checking, can provide a lot of spares. On the other hand, the ES/9000 provides some of the highest integrity in computing. Solutions to the integrity problem in a network of workstations, when designed at the granularity of a machine, have questionable performance. This leaves open the very important question of integrity. The design for fault-tolerance in the distributed world needs to look carefully at integrity, detection, recovery, and reconfiguration at an appropriate level of granularity.
## 4. Summary
The goal of this paper is to bring to the fault-tolerant community a perspective of, what I believe are, the top five priorities for a developer in today's environment. The issues identify the factors that help or hinder the exploitation of fault-tolerant technology. Understanding the issues and placing a focus on them could eventually lead to innovation and research that will benefit the industry.
1. Shipping a product on schedule dominates the list and is further accentuated by compressed development cycle times. In an intensely competitive market with very short product lifetimes, any extra function that might stretch the cycle time can be argued as non-critical and end up on the chopping block. Fault-tolerance function is no exception, unless the resulting reliability is essential to the survival of the product line or is a feature that clearly adds value. The dramatic improvements in component reliability probably do not help it either. On the other hand, a crisp articulation of the life-cycle-cost reduction due to fault-tolerance and the overall improvement in customer satisfaction are driving forces, when applicable.
2. Reducing Unavailability is critical as more segments of the market bet their business on data processing and information technology. Today, given the consolidations in commercial computing and the globalization of the economy, the window for outage is quickly disappearing, driving towards the requirements of 24x7 operations. The outage due to software dominates the causes of unavailability, and is commonly separated into scheduled and unscheduled outage. In the commercial area, scheduled outage dominates the two. Research in fault-tolerant computing does not directly address some of these issues, but it is a relevant topic for investigation.
3. Non-disruptive change management will be a key to achieving continuous availability and dealing with the largest fraction of problems associated with software. Given that most software in the industry is legacy code, there is an important question of how one retrofits such capability. It is likely that a networked environment, with several spares, could effectively employ a *shoot and restart* policy to reduce unavailability and provide change management.
4. Human Fault-tolerance will eventually start dominating the list of causes of unavailability and the consequent loss of productivity. Currently there is a significant focus in the industry on the defect problem and the associated unavailability due to scheduled downtime. Eventually these will be reduced, in that order, leaving the non-defect oriented problems to dominate. This problem is accentuated by the fact that there is a significant component of graphical user interface in today's applications meant for the non-computer person. Usability will be synonymous with availability, creating this new dimension for fault-tolerance research to focus on.
5. All over again in the distributed world summarizes the problems we face in distributed computing environment. The difficulty is that the paradigms of providing fault-tolerance do not naturally map over from the single system to the distributed system. The design point, cost structure, failure modes, resources for sparing, checking and recovery are all different. So long as that is recognized, hopefully, gross errors in design will not be committed.
## References
[1] J. Bozman, "Identifies the sources as International Data Corporation", Computerworld, Mar. 30, 1992, pp. 75-78.
[2] J. J. Stiffler, "Panel: On establishing fault tolerance objectives," The 21st International Symposium on Fault-Tolerant Computing, June 1991.
[3] IEEE International Workshop on Fault and Error Models, Palm Beach, FL, January 1993.
[4] D. Siewiorek and R. Swarz, Reliable Computer Systems, Digital Press, 1992.
[5] J. Gray, "A census of Tandem System availability between 1985 and 1990," IEEE Transactions on Reliability, Vol 39, October 1990.
[6] M. Sullivan and R. Chillarege, "Software defects and their impact on system availability - a study of field failures in operating systems," The 21st International Symposium on Fault-Tolerant Computing, June 1991.
[7] J.F. Isenberg, "Panel: Evolving systems for continuous availability," The 21st International Symposium on Fault-Tolerant Computing, June 1991.
[8] IMS/VS Extended Recovery Facility: Technical Reference. IBM GC24-3153, 1987.
[9] D. Gupta and P. Jalote, "Increasing system availability through on-line software version change," The 23rd International Symposium on Software Reliability Engineering, 1993.
[10] R. Chillarege, B.K. Ray, A.W. Garrigan and D. Ruth, "Estimating the recreate problem in software failures," The 4th International Symposium on Software Reliability Engineering, 1993.
[11] L. Spainhower, J. Isenberg, R. Chillarege, and J. Berding, "Design for fault-tolerance in systems ES/9000 Model 900," The 22nd International Symposium on Fault-Tolerant Computing, 1992.
|
|
# Problem: For the reaction below, draw the structure of the appropriate compound. Indicate stereochemistry where it is pertinent.
|
|
Lemma 29.25.7. Let $f : X \to S$ be a morphism of schemes. Let $\mathcal{F}$ be a quasi-coherent sheaf of $\mathcal{O}_ X$-modules. Let $g : S' \to S$ be a morphism of schemes. Denote $g' : X' = X_{S'} \to X$ the projection. Let $x' \in X'$ be a point with image $x = g'(x') \in X$. If $\mathcal{F}$ is flat over $S$ at $x$, then $(g')^*\mathcal{F}$ is flat over $S'$ at $x'$. In particular, if $\mathcal{F}$ is flat over $S$, then $(g')^*\mathcal{F}$ is flat over $S'$.
Proof. See Algebra, Lemma 10.38.7. $\square$
|
|
Population Balances
# Introduction
Population balances belong to a branch in the sciences that deals with any particulate flow. It is a statement about the conservation of the population or number of particles present in a system. The word balance is used instead of conservation to stress the fact that particles undergo chemical and physiological changes. For example, particles can react with other particles leading to new chemical compounds. Also, particles undergo nucleation, growth, aggregation, and breakage.
The analysis of systems with particles aims at addressing the behavior of the population of particles and its environment. The population is usually described by the density of an extensive particle property such as the number of particles. Sometimes, it is convenient to use other extensive properties such as mass or volume of particles.
Population balances are an essential ingredient in a variety of disciplines such as physics and chemistry. Biology also employs population balances to study the behavior of cells of various kinds.
A system that contains particles is usually referred to as a disperse phase system or particulate system regardless of the density or role of particles in them.
In the framework of population balances, we are mainly concerned with systems consisting of particles dispersed in an environmental phase. We refer to the environmental phase as the continuous phase. For example, consider the transport of sand particles by air on a windy day. In this case, the sand particles constitute the disperse phase, while the air constitutes the continuous phase.
# Mathematical Formalism
In this section, we present the mathematical formulation for deriving the population balance equation.
## Internal and External Coordinates
In assessing a disperse phase system, one is often concerned with the properties of the particles in that system. Such properties include their position vector $\mathbf{r} = x \mathbf{i} + y \mathbf{j} + z \mathbf{k}$ as well as intrinsic properties such as characteristic length $l$ and other quantities associated with a given particle. Therefore, if a particle has $n$ intrinsic quantities associated with it, it is customary to refer to those as $x_1, x_2, x_3, \cdots, x_n$ or more conveniently by the vector $\mathbf{x} = \left(x_1, x_2, x_3, \cdots, x_n \right)$. In general, a given particle will vary in both $\mathbf{r}$ and $\mathbf{x}$. To make the analysis clear, we split this variation and distinguish between external and internal coordinates. External coordinates will refer to the position vector $\mathbf{r}$ of a particle while internal coordinates will refer to the state quantities $\mathbf{x}$ of a particle. Note that while the external coordinates form an orthogonal basis, internal coordinates do not necessarily form one.
The combined external-internal coordinate system is conveniently referred to as the state space. Particles convect through this state space as their internal properties change and as the external environment conditions vary.
## Number Density Function
The description of a population of particles is best achieved by introducing the number density function $\mathcal N$. The number density function is defined as the average number of particles per unit volume of state space. For example, when tracking particle sizes in a plug flow reactor, the number density function depends on the axial position in the reactor $x$ as well as the particle size $r$. Then, $\mathcal N \equiv \mathcal N(x,r)$ and its units are number per unit state-space volume.
## One Dimensional Population Balance Equation
Consider the growth of a population of particles uniformly distributed in physical space. This is the case for example in a stirred tank crystallizer with a highly supersaturated solution. The internal coordinate of interest in this case is the particle size, denoted by $r$. As the particles grow, they can be thought of as moving along the particle size dimension. In essence, growth is equivalent to convection in internal coordinates: as particles grow, they move from one particle size to another at a rate equal to the growth rate $G \equiv \dot{r}$.
Consider now an arbitrary interval $[a,b]$ on the internal coordinate dimension. The rate of change of the TOTAL number of particles inside this interval is given by
(1)
\begin{align} \frac{\partial }{\partial t}\int_{a}^{b} \mathcal N(r,t) \text{ d} r= G(a,t) \mathcal{N}(a,t) - G(b,t)\mathcal{N}(b,t) \end{align}
Remember that the rate of change of a quantity inside a specified region of space is equal to the difference between the incoming and outgoing flowrates into this region.
To make further headway, we cast the right-hand side of (1) in integral form and rewrite (1) as
(2)
\begin{align} \int_{a}^{b} \left[ \frac{\partial \mathcal N(r,t)}{\partial t} + \frac{\partial }{\partial r}G(r,t)\mathcal{N}(r,t) \right] \text{ d} r=0 \end{align}
Now, since the interval $[a,b]$ is arbitrary, the above integral holds for an arbitrary region of internal coordinate space. Therefore, for the integral to vanish identically on an arbitrary region, the integrand must itself vanish identically. This leads to
(3)
\begin{align} \frac{\partial \mathcal N(r,t)}{\partial t} + \frac{\partial }{\partial r}G(r,t)\mathcal{N}(r,t) =0 \end{align}
Equation 3 is a continuity equation for the number density function.
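To make the equation concrete, here is a minimal numerical sketch of Equation 3 (all values here, the grid, the initial distribution, and the constant growth rate, are illustrative assumptions) using a first-order upwind scheme for the growth term. Since pure growth only transports particles to larger sizes, the total number of particles should be conserved.

```python
import numpy as np

# Minimal upwind discretization of dN/dt + d(G N)/dr = 0 with constant
# growth rate G. Grid, initial distribution, and G are illustrative.
nr, dr, G = 200, 0.01, 0.05
r = (np.arange(nr) + 0.5) * dr                 # cell centers on [0, 2]
N = np.exp(-0.5 * ((r - 0.5) / 0.05) ** 2)     # initial number density
dt = 0.5 * dr / G                              # CFL-stable time step
total0 = N.sum() * dr                          # initial total number

for _ in range(100):
    flux = G * N                                   # growth flux at cell centers
    inflow = np.concatenate(([0.0], flux[:-1]))    # upwind flux entering each cell
    N += dt / dr * (inflow - flux)                 # conservative update

# Growth only moves particles along r; with no nucleation, aggregation,
# or breakage, the total number is conserved (up to negligible outflow
# at the largest size).
print(f"conservation error: {abs(N.sum() * dr - total0):.2e}")
```

The same finite-volume structure extends to a size-dependent growth rate $G(r,t)$ by evaluating the flux at cell faces.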
Alternate Derivation:
For an alternate derivation, consider a differential element of size $\mathrm{d}r$ in the internal coordinate space. The quantity of interest in this control volume is $\mathcal N(r,t)$.
|
|
# SEC Alignment Tool
The Supreme Education Council Commons includes an alignment tool for tagging resources that are aligned to the Qatari Supreme Educational Council (SEC) curriculum standards.
Educators may use this tool to align OER to the SEC curriculum standards in Mathematics, Science, Arabic, and English.
In the future, users will be able to search for resources by individual SEC standard.
|
|
# Tight Kernels for Covering with Points and Polynomials
1 DATASHAPE - Understanding the Shape of Data
CRISAM - Inria Sophia Antipolis - Méditerranée , Inria Saclay - Ile de France
Abstract : The Point Hyperplane Cover problem in $\mathbb{R}^d$ takes as input a set of $n$ points in $\mathbb{R}^d$ and a positive integer $k$. The objective is to cover all the given points with a set of at most $k$ hyperplanes. The D-Polynomial Points Cover problem in $\mathbb{R}^d$ takes as input a family $F$ of polynomials of degree at most $D$ on $\mathbb{R}^d$, and determines whether there is a set of at most $k$ points in $\mathbb{R}^d$ that hit all the polynomials in $F$. Here, a point $p$ is said to hit a polynomial $f$ if $f(p) = 0$. For both problems, we exhibit tight kernels where $k$ is the parameter. We also exhibit a tight kernel for the Projective Point Hyperplane Cover problem, where the hyperplanes that are allowed to cover the points must all contain a fixed point, and the fixed point cannot be included in the solution set of points.
Document type: Preprint / working paper, 2017
Cited literature: [6 references]
https://hal.archives-ouvertes.fr/hal-01518562
Contributor: Kunal Dutta
Submitted on: Thursday, May 4, 2017 - 20:20:58
Last modified on: Tuesday, April 17, 2018 - 09:04:15
Long-term archiving on: Saturday, August 5, 2017 - 13:57:14
### Citation
Jean-Daniel Boissonnat, Kunal Dutta, Arijit Ghosh, Sudeshna Kolay. Tight Kernels for Covering with Points and Polynomials. 2017. 〈hal-01518562〉
|
|
Scientists observe potential signs of dark matter
1. Apr 18, 2015
wolram
http://phys.org/news/2015-04-potential-interacting-dark.html
An international team of scientists, led by researchers at Durham University, UK, made the discovery using the Hubble Space Telescope and the European Southern Observatory's Very Large Telescope to view the simultaneous collision of four distant galaxies at the centre of a galaxy cluster 1.3 billion light years away from Earth.
2. Apr 18, 2015
marcus
http://arxiv.org/abs/1504.03388
The behaviour of dark matter associated with 4 bright cluster galaxies in the 10kpc core of Abell 3827
Richard Massey (Durham), Liliya Williams (Minnesota), Renske Smit (Durham), Mark Swinbank (Durham), Thomas Kitching (MSSL), David Harvey (EPFL), Mathilde Jauzac (Durham), Holger Israel (Durham), Douglas Clowe (Ohio), Alastair Edge (Durham), Matt Hilton (ACRU), Eric Jullo (LAM), Adrienne Leonard (UCL), Jori Liesenborgs (Hasselt), Julian Merten (JPL), Irshad Mohammed (Zurich), Daisuke Nagai (Yale), Johan Richard (Lyon), Andrew Robertson (Durham), Prasenjit Saha (Zurich), Rebecca Santana (Ohio), John Stott (Durham), Eric Tittley (Edinburgh)
(Submitted on 13 Apr 2015)
Galaxy cluster Abell 3827 hosts the stellar remnants of four almost equally bright elliptical galaxies within a core of radius 10kpc. Such corrugation of the stellar distribution is very rare, and suggests recent formation by several simultaneous mergers. We map the distribution of associated dark matter, using new Hubble Space Telescope imaging and VLT/MUSE integral field spectroscopy of a gravitationally lensed system threaded through the cluster core. We find that each of the central galaxies retains a dark matter halo, but that (at least) one of these is spatially offset from its stars. The best-constrained offset is 1.62+/-0.48kpc, where the 68% confidence limit includes both statistical error and systematic biases in mass modelling. Such offsets are not seen in field galaxies, but are predicted during the long infall to a cluster, if dark matter self-interactions generate an extra drag force. With such a small physical separation, it is difficult to definitively rule out astrophysical effects operating exclusively in dense cluster core environments - but if interpreted solely as evidence for self-interacting dark matter, this offset implies a cross-section sigma/m=(1.7+/-0.7)x10^{-4}cm^2/g x (t/10^9yrs)^{-2}, where t is the infall duration.
15 pages, 9 figures
An earlier short paper that reported no positive results from inspecting other collision data:
http://arxiv.org/abs/1503.07675
The non-gravitational interactions of dark matter in colliding galaxy clusters
David Harvey, Richard Massey, Thomas Kitching, Andy Taylor, Eric Tittley
(Submitted on 26 Mar 2015 (v1), last revised 13 Apr 2015 (this version, v2))
Collisions between galaxy clusters provide a test of the non-gravitational forces acting on dark matter. Dark matter's lack of deceleration in the 'bullet cluster' collision constrained its self-interaction cross-section \sigma_DM/m < 1.25 cm2/g (68% confidence limit) for long-ranged forces. Using the Chandra and Hubble Space Telescopes we have now observed 72 collisions, including both 'major' and 'minor' mergers. Combining these measurements statistically, we detect the existence of dark mass at 7.6\sigma significance. The position of the dark mass has remained closely aligned within 5.8+/-8.2 kpc of associated stars: implying a self-interaction cross-section \sigma_DM/m < 0.47 cm2/g (95% CL) and disfavoring some proposed extensions to the standard model.
5 Pages, 4 Figures and 18 pages supplementary information
Last edited: Apr 18, 2015
3. Apr 18, 2015
wolram
Is this another hint that dark matter is not cold?
4. Apr 18, 2015
Chalnoth
I believe these studies say more about the self-interaction of dark matter than that. The area most sensitive to the temperature of dark matter is structure formation in the early universe.
5. Apr 18, 2015
wabbit
http://arxiv.org/abs/1504.03388
The behaviour of dark matter associated with 4 bright cluster galaxies in the 10kpc core of Abell 3827, Richard Massey & al.
They qualify this in several ways, but this is much more precise than their recent upper bound in
http://arxiv.org/abs/1503.07675
The non-gravitational interactions of dark matter in colliding galaxy clusters, David Harvey, Richard Massey, Thomas Kitching, Andy Taylor, Eric Tittley
What kind of candidates would that ~2×10^-4 cm2/g figure, or something of similar magnitude, suggest if confirmed?
Last edited: Apr 18, 2015
6. Apr 18, 2015
wabbit
Hah, how cold is cold? I have no idea, but for some reason 2×10^-4 sounded pretty cold to me. Maybe not.
7. Apr 18, 2015
Chalnoth
It's not a temperature. It's a self-interaction cross-section. Basically this is related to the probability of two dark matter particles colliding with one another.
8. Apr 19, 2015
wabbit
Thanks - I wasn't in doubt about that, but for some reason I thought the "cold" in CDM meant "weakly interacting". Nope, no reason really, just silliness on my part.
Last edited: Apr 19, 2015
|
|
# An ellipse with major axis $4$ and minor axis $2$ touches both the coordinate axes. Locus of its Center and Focus is?
An ellipse with major axis $$4$$ and minor axis $$2$$ touches both the coordinate axes. Locus of its Center and Focus is?
My Approach: For locus of Center.
Since the ellipse touches the coordinate axes, the axes act as tangents meeting at an angle of $$90^{\circ}$$, so the origin lies on the Director Circle.
The center of the director circle is the same as the center of the ellipse.
Let the center of the ellipse be $$(h,k)$$, so the equation of the director circle is $$(x-h)^2+(y-k)^2=(\text{semi-major axis})^2+(\text{semi-minor axis})^2$$
that is, $$(x-h)^2+(y-k)^2=(2)^2+(1)^2$$
Because it passes through the origin, $$(0-h)^2+(0-k)^2=(2)^2+(1)^2$$
$$\implies$$ $$h^2+k^2=5$$
$$\implies$$ the locus of the center of the ellipse is $$x^2+y^2=5$$
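A quick numerical sanity check of this locus (a sketch, assuming the ellipse is rotated by an angle $u$ and kept tangent to both axes; for semi-axes 2 and 1 the tangency conditions place the center at $(h,k) = (\sqrt{4\cos^2 u + \sin^2 u}, \sqrt{\cos^2 u + 4\sin^2 u})$):

```python
import math

# Check h^2 + k^2 = 5 for every rotation angle u, where (h, k) is the
# center of a rotated ellipse with semi-axes 2 and 1 tangent to both
# coordinate axes. The (h, k) formulas are assumed from the tangency
# conditions for this slanted configuration.
for i in range(1, 50):
    u = i * math.pi / 100
    h = math.sqrt(4 * math.cos(u) ** 2 + math.sin(u) ** 2)
    k = math.sqrt(math.cos(u) ** 2 + 4 * math.sin(u) ** 2)
    assert abs(h * h + k * k - 5) < 1e-12

print("center stays on x^2 + y^2 = 5 for all rotation angles")
```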
For Locus of Focus:
I assumed the focus as $$(x_1,y_1)$$ but I cannot proceed further. I know one property: the product of the perpendicular distances from the two foci to a tangent is constant and equal to the square of the semi-minor axis, and the feet of the perpendiculars lie on the auxiliary circle, but that leads me nowhere.
Note: there are many solutions available on the internet, but they all use the same method, taking the axes of the ellipse parallel to the coordinate axes.
I don't want to use that method; I want it for a slanted ellipse, as shown in the attached image.
First, we try to determine the general form of a rotated ellipse in the first quadrant such that it remains tangent to the axes. We suppose it has the parametric equation
$$(x(t),y(t)) = (4 \cos u \cos t - 2 \sin u \sin t + h, 2 \cos u \sin t + 4 \sin u \cos t + k), \quad t \in [0,2\pi)$$ where $$(h,k)$$ is the center, and $$u$$ is the counterclockwise rotation angle of the ellipse relative to the coordinate axes. (Note I have modified the direction of rotation compared to the linked answer.)
Such an ellipse has horizontal tangent lines satisfying $$0 = \frac{dy}{dt} = 2 \cos u \cos t - 4 \sin u \sin t,$$ or $$t_{\text{crit}} = \arctan \frac{\cot u}{2}.$$ For these values of $$t_{\text{crit}}$$, we need to find $$k$$ such that $$y(t_{\text{crit}}) = 0$$, placing this ellipse so it is tangent to the $$x$$-axis; i.e., $$k = 2 \sqrt{\cos^2 u + 4 \sin^2 u}.$$ This gives, as a function of the angle of rotation $$u$$, the necessary vertical translation to make the ellipse tangent. A similar process using $$dx/dt$$ gives the necessary horizontal translation, which we show without proof: $$h = 2 \sqrt{4 \cos^2 u + \sin^2 u}.$$ Thus our ellipse is fully parametrized.
The locus of the center is simply $$(h,k)$$ as a function of $$u$$:
$$(h(u), k(u)) = \left(2 \sqrt{4 \cos^2 u + \sin^2 u}, 2 \sqrt{\cos^2 u + 4 \sin^2 u}\right).$$
A short computation gives $$h^2 + k^2 = 20$$, so this locus is an arc of a circle, not the complete circle.
Where are the foci? We can first observe that they are located at some point along the line joining $$(x(0), y(0))$$ and $$(x(\pi), y(\pi))$$, so they have coordinates of the form $$(1-\lambda)(x(0), y(0)) + \lambda (x(\pi), y(\pi))$$ for $$\lambda = \frac{4 + 2 \sqrt{3}}{8} = \frac{2 + \sqrt{3}}{4}, \quad \text{and} \quad \lambda = \frac{2 - \sqrt{3}}{4}.$$ We again skip the calculation and show the result: $$(x_f(u), y_f(u)) = 2 \left(\sqrt{3} \cos u + \sqrt{4 \cos^2 u + \sin^2 u}, \sqrt{3} \sin u + \sqrt{\cos^2 u + 4 \sin^2 u} \right). \tag{1}$$ Note that this curve gives the locus of both foci, where $$u$$ and $$u + \pi$$ represent the location of each focus for a given rotation angle $$u$$.
All put together, we can visualize these loci in the following animation:
The conversion of the parametric formula to an implicit curve is tedious but not intractable; one would start with computing the square, then show that the square of the locus satisfies $$(x + y)(16 + xy) = 64 xy.$$
One approach to converting the locus is to note that we can write $$(x_f(u), y_f(u)) = \left(2 \sqrt{3} \cos u + \sqrt{(2 \sqrt{3} \cos u)^2 + 4}, 2 \sqrt{3} \sin u + \sqrt{(2 \sqrt{3} \sin u)^2 + 4} \right),$$ therefore $$x_f(u)$$ and $$y_f(u)$$ are roots of the quadratics $$x^2 - (4 \sqrt{3} \cos u) x - 4 = 0, \\ y^2 - (4 \sqrt{3} \sin u) y - 4 = 0,$$ or equivalently, $$48 \cos^2 u = \frac{(4-x^2)^2}{x^2}, \quad 48 \sin^2 u = \frac{(4-y^2)^2}{y^2}.$$ Thus $$48 = \frac{(4-x^2)^2}{x^2} + \frac{(4-y^2)^2}{y^2},$$ and the rest is an exercise in algebra.
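As a sketch of that algebra, the parametric locus $(1)$ can be checked numerically against the implicit equation (the answer's convention of semi-axes 4 and 2 is assumed):

```python
import math

# Verify that the focus locus (1) satisfies
#   (4 - x^2)^2 / x^2 + (4 - y^2)^2 / y^2 = 48
# for a sample of rotation angles u in (0, pi/2).
def focus(u):
    x = 2 * (math.sqrt(3) * math.cos(u)
             + math.sqrt(4 * math.cos(u) ** 2 + math.sin(u) ** 2))
    y = 2 * (math.sqrt(3) * math.sin(u)
             + math.sqrt(math.cos(u) ** 2 + 4 * math.sin(u) ** 2))
    return x, y

for i in range(1, 20):
    u = i * math.pi / 40
    x, y = focus(u)
    lhs = (4 - x * x) ** 2 / (x * x) + (4 - y * y) ** 2 / (y * y)
    assert abs(lhs - 48) < 1e-9

print("focus locus satisfies the implicit equation")
```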
• In which software did you do animation? Jun 17 at 1:35
• @mathophile The animation was created in Mathematica version 12. Jun 17 at 3:01
|
|
# Deriving the exponential distribution from the time between event in a Poisson distribution
## Notes
Let [ilmath]X\sim[/ilmath][ilmath]\text{Poi}(\lambda)[/ilmath] for some [ilmath]\lambda\in\mathbb{R}_{>0} [/ilmath]
• Here it is supposed that [ilmath]X[/ilmath] models the number of events per unit time - although, as with the Poisson distribution, any continuum will do
Then:
• Suppose an event happens at [ilmath]t\eq t_0[/ilmath]
• Let [ilmath]T[/ilmath] be the random variable which is the time until the next event.
• Let [ilmath]d\in\mathbb{R}_{>0} [/ilmath] be given, so we can investigate [ilmath]\P{T>d} [/ilmath]
• We are interested in [ilmath]\mathbb{P}\big[\text{no events happening for time in }(t_0,t_0+d)\big]\eq\P{T>d} [/ilmath]
• Let [ilmath]X'\sim\text{Poi}(\lambda d)[/ilmath] be used to model this interval
• as if [ilmath]\lambda[/ilmath] events are expected to occur per unit time, then [ilmath]\lambda d[/ilmath] are expected to occur per unit [ilmath]d[/ilmath] of time
• It is easy to see that [ilmath]\mathbb{P}\big[\text{no events happening for time in }(t_0,t_0+d)\big]\eq\mathbb{P}\big[X'\eq 0\big]\eq e^{-\lambda d} [/ilmath]
• Thus [ilmath]\P{T>d}\eq e^{-\lambda d} [/ilmath]
• Or: [ilmath]\P{T\le d}\eq 1-\P{T>d}\eq 1-e^{-\lambda d} [/ilmath]
• But this is what we'd see if [ilmath]T[/ilmath] followed the exponential distribution with parameter [ilmath]\lambda[/ilmath]
• [ilmath]\P{T\le d}\eq 1-e^{-\lambda d} [/ilmath]
Thus we see that the time between occurrences of events in a Poisson process is exponentially distributed (and hence memoryless).
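The result can be checked with a crude simulation: approximate a Poisson process of rate [ilmath]\lambda[/ilmath] by an event occurring with probability [ilmath]\lambda\Delta t[/ilmath] in each small time slice, then compare the gaps between events against the claimed exponential law (a sketch; the rate, step size, and horizon below are arbitrary choices):

```python
import math
import random


def simulated_gaps(lam=2.0, dt=1e-3, t_max=2000.0, seed=1):
    # Crude Poisson process: in each slice of length dt an event occurs
    # with probability lam*dt; record the waiting times between events.
    rng = random.Random(seed)
    gaps, last = [], 0.0
    for i in range(int(t_max / dt)):
        if rng.random() < lam * dt:
            t = i * dt
            gaps.append(t - last)
            last = t
    return gaps


gaps = simulated_gaps()
mean_gap = sum(gaps) / len(gaps)
tail = sum(g > 0.5 for g in gaps) / len(gaps)
# For T ~ Exp(lam) with lam = 2: mean 1/lam = 0.5, and P[T > 0.5] = e^{-1}.
```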
## Modifications
Suppose instead [ilmath]t_0[/ilmath] is the start time of the process rather than the last event time, how does this change things?
|
|
Is the fraction of radioactive isotopes on the near side of the moon higher than on the far side?
As time passes more slowly in a region of space close to the source of a gravitational field, shouldn't the moon, which always has one side facing towards the earth, have a higher fraction of radioactive isotopes on that side than on the far side? Could the fraction be used to determine the time tidal locking occurred?
Also, shouldn't the fraction of radioactive isotopes be different in the earth depending on how deep you dig? Gravitation drops off with $r^2$, whereas the gravitationally relevant mass increases with $r^3$ (only the mass of the earth's sphere lying below the radioactive atom results in gravitation due to Gauss' law) , so outer layers should have more time dilation and therefore a higher fraction of radioactive isotopes. Can this be observed?
-
In the limit of weak-field gravity, time dilates according to (relative to an observer with zero gravitational potential)${}^{1}$:
$t = \frac{t_{0}}{\sqrt{1-\phi}}$
Taking the accepted values for the masses of the moon and the earth, the radius of the moon, and the earth-moon distance from google, you can calculate that time on the near side of the moon progresses at $1 - 1.15\times 10^{-11}$ the rate that time on the far side of the moon does. Therefore, if you look at the relative abundances of even something like Uranium, which has a half life of ~4 billion years, you're still stuck with a time difference of the order of milliseconds, which is likely undetectable, but given the energy that we've devoted as a society to assaying and enriching uranium, perhaps I'm wrong.
${}^{1}$: note that $\phi$ is unitless in units of $G=c=1$
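As a rough sketch, the near-side/far-side difference in the Earth's potential alone can be estimated as follows (rounded textbook constants; this deliberately ignores the Moon's own field and tidal terms, so it is only an order-of-magnitude bound, not the answer's exact figure):

```python
# Weak-field clock rate: rate ~ 1 - GM/(r c^2); take the near/far difference
# across the Moon's diameter, using Earth's potential only.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_earth = 5.972e24     # kg
d = 3.844e8            # mean Earth-Moon distance, m
R_moon = 1.737e6       # m

phi_near = G * M_earth / ((d - R_moon) * c ** 2)
phi_far = G * M_earth / ((d + R_moon) * c ** 2)
delta = phi_near - phi_far
# delta is a fractional rate difference many orders of magnitude below
# anything isotope-assay precision could detect.
```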
-
The situation on the surface of the moon is presumably the same as on the surface of the earth: it's approximately an equipotential, so the effect vanishes completely. – Ben Crowell Aug 7 '13 at 21:52
@BenCrowell: was the moon phase locked when it hardened? I would assume the answer is no. – Jerry Schirmer Aug 7 '13 at 22:07
I don't think it matters. Only for very small bodies such as asteroids do you get shapes that aren't equipotentials. Rocks can roll downhill, the body can exhibit plasticity, etc. – Ben Crowell Aug 7 '13 at 22:14
@BenCrowell: the question is an equipotential relative to what. If the moon was not phase locked when it hardened, then it's initial configuration would have been an equipotential relative to the Earth's potential, but then that shape would move relative to the Earth. Although maybe that would also determine which part of the moon faces the Earth in the end. I don't think the answer is clear or trivial, though. – Jerry Schirmer Aug 7 '13 at 22:55
The moon is guaranteed to be an equipotential in its current state. Bodies that big conform themselves to equipotentials, and if the equipotential changes, the body's shape changes on a relatively short timescale. It doesn't matter if it hardened, because hardening isn't absolute. Only very small bodies are rigid enough to avoid conforming to the equipotential on relatively short time-scales. – Ben Crowell Aug 7 '13 at 23:34
|
|
# GATE2013-53
A DNA fragment of $5000bp$ needs to be isolated from an E. coli (genome size $4\times10^3kb$) genomic library.
The number of clones required to represent this fragment in the genomic library with a probability of $95\%$ is
1. $5.9\times10^3$
2. $4.5\times10^3$
3. $3.6\times10^3$
4. $2.4\times10^3$
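Assuming the intended model is the standard Clarke-Carbon formula for library coverage, $N = \ln(1-P)/\ln(1-f)$, where $f$ is the fraction of the genome carried by one clone, the answer can be checked directly:

```python
import math

P = 0.95                   # desired probability of coverage
insert_bp = 5000           # fragment (insert) size in bp
genome_bp = 4e3 * 1e3      # genome size: 4 x 10^3 kb = 4 x 10^6 bp
f = insert_bp / genome_bp  # fraction of genome per clone

N = math.log(1 - P) / math.log(1 - f)
# N comes out near 2.4 x 10^3, matching option 4.
```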
|
|
# Mean Absolute Deviation Is the mean absolute deviation of a sample a good statistic for estimating the mean absolute deviation of the population? Why or why not?
|
|
# zbMATH — the first resource for mathematics
Hybrid cluster ensemble framework based on the random combination of data transformation operators. (English) Zbl 1233.68198
Summary: Given a dataset $$P$$ represented by an $$n\times m$$ matrix (where $$n$$ is the number of data points and $$m$$ is the number of attributes), we study the effect of applying transformations to $$P$$ and how this affects the performance of different ensemble algorithms. Specifically, a dataset $$P$$ can be transformed into a new dataset $$P^{\prime}$$ by a set of transformation operators $$\Phi$$ in the instance dimension, such as sub-sampling, super-sampling, noise injection, and so on, and a corresponding set of transformation operators $$\Psi$$ in the attribute dimension. Based on these conventional transformation operators $$\Phi$$ and $$\Psi$$, a general form $$\Omega$$ of the transformation operator is proposed to represent different kinds of transformation operators. Then, two new data transformation operators, known respectively as probabilistic based data sampling operator and probabilistic based attribute sampling operator, are designed to generate new datasets in the ensemble. Next, three new random transformation operators are proposed, which include the random combination of transformation operators in the data dimension, in the attribute dimension, and in both dimensions respectively. Finally, a new cluster ensemble approach is proposed, which integrates the random combination of data transformation operators across different dimensions, a hybrid clustering technique, a confidence measure, and the normalized cut algorithm into the ensemble framework. The experiments show that (i) random combination of transformation operators across different dimensions outperforms most of the conventional data transformation operators for different kinds of datasets. (ii) The proposed cluster ensemble framework performs well on different datasets such as gene expression datasets and datasets in the UCI machine learning repository.
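The summary's core ingredient, a co-association (evidence-accumulation) consensus built from members that each cluster a transformed dataset, can be sketched generically (this is an illustrative toy, not the authors' algorithm; all names and parameters are made up): each member clusters a random sub-sample, one instance-dimension transformation operator, and pairwise cluster agreement is averaged into a consensus matrix.

```python
import random


def kmeans(points, k, iters=20, rng=None):
    # Plain Lloyd's algorithm on 2-D points; returns a cluster label per point.
    rng = rng or random.Random(0)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda j: (p[0] - centers[j][0]) ** 2
                                          + (p[1] - centers[j][1]) ** 2)
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels


def coassociation(points, k=2, n_members=8, frac=0.8, seed=0):
    # Each ensemble member clusters a random sub-sample (a data-dimension
    # transformation operator); "same cluster" votes accumulate pairwise
    # and are normalized into a consensus matrix with entries in [0, 1].
    rng = random.Random(seed)
    n = len(points)
    votes = [[0.0] * n for _ in range(n)]
    seen = [[0] * n for _ in range(n)]
    for _ in range(n_members):
        idx = rng.sample(range(n), int(frac * n))
        labels = kmeans([points[i] for i in idx], k, rng=rng)
        for a, la in zip(idx, labels):
            for b, lb in zip(idx, labels):
                votes[a][b] += la == lb
                seen[a][b] += 1
    return [[votes[a][b] / seen[a][b] if seen[a][b] else 0.0 for b in range(n)]
            for a in range(n)]
```

In the full framework the consensus matrix would then be cut (e.g. by the normalized cut algorithm) to produce the final clustering.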
##### MSC:
68T05 Learning and adaptive systems in artificial intelligence
|
|
# Improving speed of data clean process in vba
My code does exactly what I want it to. However, being relatively new to VBA, I feel it could be a lot more efficient - namely I think I have overused loops and worksheet functions which are slowing it down. At the moment it takes around 3 minutes for ~15k rows of data.
Currently it's more of a combination of separate steps joined together, so it doesn't flow nicely; rather, for each step it iterates through every row which, while it gets the job done, is frustratingly inefficient.
At the moment I am trying to remove the loops perhaps using Range objects instead, but I would really appreciate any pointers in the right direction.
Sub RunDataClean_Click()
With Sheets("Data")
'ensures code only loops through rows with data and not full worksheet
If Application.WorksheetFunction.CountA(.Cells) <> 0 Then
endrow = .Cells.Find(What:="*", _
After:=.Range("A4"), _
Lookat:=xlPart, _
LookIn:=xlFormulas, _
SearchOrder:=xlByRows, _
SearchDirection:=xlPrevious, _
MatchCase:=False).Row
Else
endrow = 4
End If
End With
Application.ScreenUpdating = False
Dim i As Long
'Checks another sheet to see if we have the cleaned customer name on file
For i = 5 To endrow
'does a vlookup in CDM file
Acc = Application.Cells(i, 5)
Cname = Application.Cells(i, 4)
Acname = Application.VLookup(Acc, Sheet3.Range("D2:F315104"), 3, False)
If IsError(Acname) Then
Cells(i, 32).Value = ""
Else
Cells(i, 32).Value = Acname
End If
Map = Application.VLookup(Acc, Sheet3.Range("C2:F315104"), 4, False)
If IsEmpty(Cells(i, 32)) Then
If IsError(Map) Then
Cells(i, 32).Value = ""
Else
Cells(i, 32).Value = Map
End If
End If
FXid = Application.VLookup(Acc, Sheet3.Range("B2:F315104"), 5, False)
If IsEmpty(Cells(i, 32)) Then
If IsError(FXid) Then
Cells(i, 32).Value = ""
Else
Cells(i, 32).Value = FXid
End If
End If
FXP = Application.VLookup(Cname, Sheet3.Range("A2:F315104"), 6, False)
If IsEmpty(Cells(i, 32)) Then
If IsError(FXP) Then
Cells(i, 32).Value = ""
Else
Cells(i, 32).Value = FXP
End If
End If
LkpName = Application.VLookup(Cname, Sheet3.Range("F2:F315104"), 1, False)
If IsEmpty(Cells(i, 32)) Then
If IsError(LkpName) Then
Cells(i, 32).Value = ""
Else
Cells(i, 32).Value = LkpName
End If
End If
If IsEmpty(Cells(i, 32)) Then
Cells(i, 32).Value = Cells(i, 4).Value
End If
Next i
For i = 5 To endrow
Cells(i, 28).Value = Cells(i, 3).Value & Cells(i, 5).Value
Length = Len(Cells(i, 28))
Cells(i, 29).Value = Length
Cells(i, 31).Value = Cells(i, 4).Value
'does a vlookup in CDM file (CDM)
Acc = Application.Cells(i, 28)
BP = Application.VLookup(Acc, Sheet3.Range("E2:G315104"), 3, False)
If IsError(BP) Then
Cells(i, 30).Value = ""
Else
Cells(i, 30).Value = BP
End If
'assigns B or P based on payment details (Business_Personal)
If Cells(i, 12).Value = "N" Then
Cells(i, 24).Value = "B"
ElseIf Cells(i, 30).Value = "Business" Then
Cells(i, 24).Value = "B"
ElseIf Cells(i, 30).Value = "Personal" Then
Cells(i, 24).Value = "P"
ElseIf Cells(i, 12).Value = "Y" Then
Cells(i, 24).Value = "P"
ElseIf InStr(1, Cells(i, 32), "LTD") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "LIMITED") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "MANAGE") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "BUSINESS") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "CONSULT") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "INTERNATIONAL") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "T/A") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "TECH") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "CLUB") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "OIL") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "SERVICE") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "SOLICITOR") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf InStr(1, Cells(i, 32), "CORP") <> 0 Then
Cells(i, 24).Value = "B"
ElseIf Left(Cells(i, 5).Value, 3) = "999" Then
Cells(i, 24).Value = "P"
End If
Next i
'Week_Of_Year
For i = 5 To endrow
WeekNo = Application.Cells(i, 1)
WeekNumba = Application.WeekNum(WeekNo)
Cells(i, 21).Value = WeekNumba
Next i
'Deal_Channel concatenation
For i = 5 To endrow
Cells(i, 22).Value = Cells(i, 6).Value & Cells(i, 13).Value & Cells(i, 17).Value
Next i
'Deal_Source_System
For i = 5 To endrow
DealSS = Application.Cells(i, 22)
Deal_Source = Application.VLookup(DealSS, Sheet4.Range("F2:H354"), 3, False)
If IsError(Deal_Source) Then
Cells(i, 23).Value = "#N/A"
Else
Cells(i, 23).Value = Deal_Source
End If
Next i
'Reporting_Quarter (only worked for type double)
'does a lookup in calendar tab to return reporting quarter - could move this to Access
For i = 5 To endrow
qdate = Cells(i, 1)
qlkp = Application.VLookup(CDbl(qdate), Sheet5.Range("A1:C500"), 3, False)
Cells(i, 26).Value = qlkp
Next i
'copies any #N/A deal channel to lookup tables and then sets deal source to map
lastrow = Sheet4.Cells(Rows.Count, "F").End(xlUp).Row + 1
With Sheet1.Range("W5:W" & endrow)
Set DS = .Find(What:="#N/A", LookIn:=xlValues)
If Not DS Is Nothing Then
Do
DS.Offset(, -1).Copy
Sheet3.Range("F" & lastrow).PasteSpecial xlPasteValues
DS.Value = "Map"
Set DS = .FindNext(DS)
lastrow = lastrow + 1
Loop While Not DS Is Nothing
End If
End With
Application.ScreenUpdating = True
End Sub
• This procedure looks like a Click event handler for some button. What kind of module is it written in? Whether it's a worksheet module or a standard module will make a significant difference in how reliable this code is. – Mathieu Guindon Apr 16 '19 at 13:52
• @MathieuGuindon it's in a form control command button. This was easiest so whoever I pass this process onto just has to click a button but I could use another type if that would help – edev Apr 16 '19 at 15:36
• One more question: a lot of this code could very well be substituted for Excel functions that, I'm pretty sure, would calculate much faster than 3 minutes. There are ways to make the code run faster, but I can't shake the feeling that most of it should be replaced with worksheet functions. Is there a specific reason not to? – Mathieu Guindon Apr 16 '19 at 16:58
• @MathieuGuindon the main reason was to get rid of the manual side of the process - dragging formulas etc. Nobody wants to spend time on it (needs to be done weekly) so the idea was that they could click a button and then forget about it – edev Apr 18 '19 at 13:28
Code that's hard to read, is code that's hard to modify without breaking. Consistent indentation helps with that:
For i = 5 To endrow
qdate = Cells(i, 1)
qlkp = Application.VLookup(CDbl(qdate), Sheet5.Range("A1:C500"), 3, False)
Cells(i, 26).Value = qlkp
Next i
For i = 5 To endrow
    qdate = Cells(i, 1)
    qlkp = Application.VLookup(CDbl(qdate), Sheet5.Range("A1:C500"), 3, False)
    Cells(i, 26).Value = qlkp
Next i
The first thing I would do would be to indent the entire project in a single click with Rubberduck, and then review the inspection results:
Undeclared variables are a huge red flag: Option Explicit isn't specified and VBA will happily compile any typos and carry on running the code in an erroneous logical state, by declaring the new identifier on-the-spot as an implicit Variant. Using disemvoweled, abbreviated, and otherwise unpronounceable names makes it even easier for this to happen, and harder for the bugs it introduces to be found.
Since this code is in the code-behind of a UserForm, there are a lot of implicit ActiveSheet references, and this is making the code extremely frail, prone to blow up with error 1004, or to work off the wrong sheet (although, not Selecting and Activateing any sheets and toggling ScreenUpdating off does minimize the odds of that happening, albeit indirectly).
There's a Range.Find call at the top of the procedure that assumes there is data on the Sheets("Data") worksheet. In the event where that sheet would be empty, the chained .Row member call would raise error 91.
Acc = Application.Cells(i, 5)
Cname = Application.Cells(i, 4)
These instructions are invoking worksheet members off Application: it's equivalent to ActiveSheet.Cells, or simply Cells. Just reading the code isn't sufficient to understand what sheet that is expected to be active, and thus all these unqualified Cells calls are very ambiguous, at least for a reader that doesn't know what they're looking at.
Barring a few false positives, everything Rubberduck picks up is essentially low-hanging fruit that should be addressed before delving into the more substantial stuff:
• Implicit ActiveSheet and ActiveWorkbook references, should be qualified with a specific Worksheet or Workbook object, or explicitly reference ActiveSheet/ActiveWorkbook, to clarify the intent of the code. I believe the intent is not to work off whatever workbook/sheet is currently active though.
• Avoid Systems Hungarian Notation prefixing. It's harmful, and brings no value.
• Don't make event handler procedures Public, implicitly or not. Event handlers are Private by default, and they should remain that way: they are meant to be invoked by VBA, not user code.
• Use string-typed functions where possible, e.g. Left takes and returns a Variant, but Left$ takes and returns an actual String: that's a rather insignificant (to an extent) point performance-wise, but using explicit types should be preferred over Variant (and the slight run-time overhead using Variant incurs).
Since a UserForm is involved, I encourage you to read this answer and the article it links to (I wrote both). The crux being, the last thing you want is to have a form that manipulates a worksheet directly, inside some button's Click handler. A first step towards a thorough refactoring would be to turn the click handler into something like this:
Private Sub RunDataClean_Click()
Macros.RunDataClean
End Sub
...and then move the entire body of the procedure into a Public Sub RunDataClean procedure in some Macros module, but that's just a first step.
Performance-wise, it's hard to justify all that VBA code to do work that looks very much like it could be done using standard worksheet formulas.
But one thing strikes me:
For i = 5 To endrow
This line appears 6 times in the procedure, so the macro is iterating every single one of these 15K rows, ...6 times. Remove all but the first For i = 5 To endrow and all but the last Next i, and you will likely instantly slash 83% of the work being done.
• Thanks so much Mathieu I really appreciate the help! I'll get working on tidying it up like you have suggested – edev Apr 18 '19 at 13:38
|
|
## Using netCDF data with Python¶
Callum has kindly put together this example of using netCDF data in Python and creating a simple plot with it (which I've tweaked to make into an exercise). You may need to install the netCDF module/library into your Python installation first. You will also need numpy, matplotlib, and Basemap (other Python libraries that can be installed).
### Installing netCDF4¶
There are a variety of ways to do this. If you're using linux, you may be able to do it with your package manager e.g.:
sudo yum install netcdf4-python
sudo apt-get install netcdf4-python
You might need to google to find the appropriate package for your linux distro.
You can also use the Python package manager, pip:
sudo pip install netcdf4
Windows/Mac users can use pip, or just google it. :)
You can work through this example as is, or adapt for your own data. Then if you feel like doing some more fun tasks, see the challenges at the bottom. We can go through it in the meeting, feel free to do as much of it as you like before then.
In [4]:
# Import the relevant modules first
import numpy as np
import netCDF4 as nc
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
When we write import numpy as np, this is just creating a shorthand term we can use to access functions in that module. Instead of writing numpy.someFunc we can just write np.someFunc.
In [5]:
# Load netcdf file.
myfile = nc.Dataset('sst_july_2015.nc')
The netCDF module's Dataset objects have a mapping called variables, which exposes the variables described in the netCDF file's metadata. We use the sst key to extract sea surface temperature:
In [6]:
# Extract July SSTs
SST_July = myfile.variables['sst']
# Extract 1st July
SST_1st_July = SST_July[0,:,:]
Now we want to extract the longitude and latitude so we have something to plot our sea surface temperature against. These variables on their own are not much use, so we grid them together using a NumPy function called meshgrid.
In [7]:
# Extract Longitude and Latitude & create a mesh.
longitude = myfile.variables['longitude']
latitude = myfile.variables['latitude']
lons, lats = np.meshgrid(longitude, latitude)
Now we're going to use matplotlib to do some plotting. The first thing to do is to create a figure object. Then we take our figure object and add a title to it. Remember we have imported pyplot as plt. Pyplot is an interactive plotting function designed to mimic the MATLAB plotting feature.
In [8]:
# This is just a display setting for the Notebook viewer, you can ignore it for now.
# You don't need to write it in your own code.
%matplotlib inline
### The Pyplot/MATLAB way¶
In [9]:
# Plot the data.
plt.figure()
plt.title('1st July SST from ECMWF ERA-Interim Reanalysis')
# Set up a background map with projection 'cyl', specified latitude/longitude boundaries and a coarse resolution.
# Other basemap projections are available here http://matplotlib.org/basemap/users/mapsetup.html
# Now we are going to use another module, Basemap, to import a background image of the world.
# First we create a Basemap object and define its projection. Then we add coastlines and the gridlines.
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.drawcoastlines()
m.drawparallels(np.arange(-90.,99.,30.))
m.drawmeridians(np.arange(-180.,180.,60.))
# Now we are going to overlay our Sea Surface Temperature data:
m.pcolormesh(lons, lats, SST_1st_July, shading='flat', cmap=plt.cm.jet, latlon=True)
plt.show()
Note that because this example is displayed using the IPython notebook, the plots will display automatically on the page. If you are doing this in Spyder or by running a script manually, you would need to add one of the two lines below to save the figure or make it appear in a window:
### The object-oriented way¶
Another way is to create figure and axes objects and then set the attribute of those objects:
In [10]:
fig = plt.figure()
axes = plt.axes()
axes.set_title('1st July SST from ECMWF ERA-Interim Reanalysis')
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.drawcoastlines()
m.drawparallels(np.arange(-90.,99.,30.))
m.drawmeridians(np.arange(-180.,180.,60.))
# Now we are going to overlay our Sea Surface Temperature data:
m.pcolormesh(lons, lats, SST_1st_July, shading='flat', cmap=plt.cm.jet, latlon=True)
plt.show()
In [11]:
# Save and display the plot.
plt.savefig("SST_1st_July.png") # Writes out a png file
plt.show() # Displays the plot, usually in a pop up window
1. Plot a colour bar and lat/long on the map.
2. Plot the 2nd, 3rd, and 4th July SSTs on similar maps, keeping them all in the same figure. Aim to get a 2x2 grid of 4 maps on a single figure:
+======+======+
| | |
| Map1 | Map2 |
| | |
+======+======+
| | |
| Map3 | Map4 |
| | |
+======+======+
Is there an easy way to do this? (Think about using a for loop...)
Feel free to use your own data to make similar plots. You don't have to follow the exercise exactly - experiment!
Browse the matplotlib gallery for similar examples if you get stuck. Also have a read of the basemap documentation.
# Colour bar and Lat Long¶
In [14]:
fig = plt.figure()
axes = plt.axes()
axes.set_title('1st July SST from ECMWF ERA-Interim Reanalysis')
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.drawcoastlines()
# Here we add the labels to the parallels (latitude) and meridians (longitude)
m.drawparallels(np.arange(-90.,99.,30.) ,labels=[True,False,False,True])
m.drawmeridians(np.arange(-180.,180.,60.), labels=[True,False,False,True])
# Now we are going to overlay our Sea Surface Temperature data:
sst = m.pcolormesh(lons, lats, SST_1st_July, shading='flat', cmap=plt.cm.jet, latlon=True)
# Now the colourbar
cbar = m.colorbar(sst)
cbar.set_label('K')
plt.show()
# Multiple Plots¶
In [15]:
# You need to extract the other SST data days first
SST_2nd_July = SST_July[1,:,:]
SST_3rd_July = SST_July[2,:,:]
SST_4th_July = SST_July[3,:,:]
In [36]:
# We start with a slightly different setup
fig = plt.figure()
# Plot 1
ax1 = fig.add_subplot(2, 2, 1)
ax1.set_title("1st July")
# Now we are going to overlay our Sea Surface Temperature data:
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.pcolormesh(lons, lats, SST_1st_July, shading='flat', cmap=plt.cm.jet, latlon=True)
m.drawcoastlines()
m.drawparallels(np.arange(-90.,99.,30.) ,labels=[True,False,False,True])
m.drawmeridians(np.arange(-180.,180.,60.), labels=[True,False,False,True])
# Plot 2
ax2 = fig.add_subplot(2, 2, 2)
ax2.set_title("2nd July")
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.pcolormesh(lons, lats, SST_2nd_July, shading='flat', cmap=plt.cm.jet, latlon=True)
m.drawcoastlines()
m.drawparallels(np.arange(-90.,99.,30.) ,labels=[True,False,False,True])
m.drawmeridians(np.arange(-180.,180.,60.), labels=[True,False,False,True])
# Plot 3
ax3 = fig.add_subplot(2, 2, 3)
ax3.set_title("3rd July")
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.pcolormesh(lons, lats, SST_3rd_July, shading='flat', cmap=plt.cm.jet, latlon=True)
m.drawcoastlines()
m.drawparallels(np.arange(-90.,99.,30.) ,labels=[True,False,False,True])
m.drawmeridians(np.arange(-180.,180.,60.), labels=[True,False,False,True])
# Plot 4
ax4 = fig.add_subplot(2, 2, 4)
ax4.set_title("4th July")
m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.pcolormesh(lons, lats, SST_4th_July, shading='flat', cmap=plt.cm.jet, latlon=True)
m.drawcoastlines()
m.drawparallels(np.arange(-90.,99.,30.) ,labels=[True,False,False,True])
m.drawmeridians(np.arange(-180.,180.,60.), labels=[True,False,False,True])
# Now the colourbar
#cbar.set_label('K')
#plt.tight_layout()
plt.show()
There are ways to adjust the padding around the plots to make them look a bit neater. (Sorry, I didn't have time to go into this.) Have a google.
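As a quick sketch of those padding controls (plain matplotlib subplots here, no Basemap required, so the example stays self-contained):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen so no display is needed
import matplotlib.pyplot as plt

fig, axes = plt.subplots(nrows=2, ncols=2)
for i, ax in enumerate(axes.flat, start=1):
    ax.set_title("Panel %d" % i)

# Option 1: let matplotlib compute sensible padding automatically
fig.tight_layout()

# Option 2: set the padding by hand (fractions of the subplot sizes)
fig.subplots_adjust(hspace=0.4, wspace=0.3)
```

Either call works on the four-panel SST figure above as well; tight_layout is usually the one to try first.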
## Using a for loop¶
If you have lots of plots to make, it can be useful to use a for loop to draw them all as subplots (for an ensemble analysis, etc.). Here's a generic example of how to do this:
In [40]:
fig, axes = plt.subplots(nrows=4, ncols=3)
for ax in axes.flat:
map_ax = Basemap(ax=ax)
map_ax.drawcoastlines()
plt.show()
|
|
# Topological Subspaces Examples 2
Recall from the Topological Subspaces page that if $(X, \tau)$ is a topological space and $A \subseteq X$ then the subspace topology on $A$ is defined to be:
(1)
\begin{align} \quad \tau_A = \{ A \cap U : U \in \tau \} \end{align}
We verified that $\tau_A$ is indeed a topology for any subset $A$ of $X$.
We will now look at some examples of subspace topologies.
## Example 1
Let $X$ be a set with the discrete topology $\tau$, and let $A \subseteq X$. Prove that the subspace $(A, \tau_A)$ also has the discrete topology (on $A$).
To show that $\tau_A$ is the discrete topology on $A$, we need to show that every subset of $A$ belongs to $\tau_A$.
Let $U \subseteq A$. Then $U \subseteq X$, and since $\tau$ is the discrete topology, every subset of $X$ is open, so $U \in \tau$. Moreover, $A \cap U = U$, so $U \in \tau_A$. Hence every subset of $A$ is open in $A$, and $\tau_A$ is the discrete topology on $A$.
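For instance, take $X = \{ 1, 2, 3 \}$ with the discrete topology and $A = \{ 1, 2 \}$. Then:
\begin{align} \quad \tau_A = \{ A \cap U : U \in \tau \} = \{ \emptyset, \{ 1 \}, \{ 2 \}, \{ 1, 2 \} \} \end{align}
which is exactly the collection of all subsets of $A$, i.e., the discrete topology on $A$.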
## Example 2
Let $(X, \tau)$ be a topological space and let $A \subseteq X$. Prove that $\tau_A \subseteq \tau$ if and only if $A \in \tau$.
$\Rightarrow$ Suppose that $\tau_A \subseteq \tau$. Since $A = A \cap X$ is open in the subspace topology, $A \in \tau_A$, and therefore $A \in \tau$.
$\Leftarrow$ Suppose that $A \in \tau$. Let $U \in \tau_A$. Then $U$ is open in $A$, so there exists an open set $V \in \tau$ such that $U = A \cap V$. But $A$ is open in $X$ since $A \in \tau$, so $A \cap V = U$ is open in $X$ as the intersection of two open sets. Therefore $U \in \tau$. This shows that:
(2)
\begin{align} \quad \tau_A \subseteq \tau \end{align}
|
|
# 74 of 100: Number Flip
A strobogrammatic number is one where (with certain fonts) the number looks the same when rotated 180 degrees; the year 1961 is a classic example. The possible digits are 0, 1, 6, 8, and 9.
When is the next year (after 2017) that will be strobogrammatic?
While some fonts vary, you can assume 0, 1, 6, 8, and 9 (and only these digits) are all written in a way to allow a strobogrammatic arrangement.
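A short brute-force sketch of one way to answer this, using the digit mapping implied above (0↔0, 1↔1, 8↔8, and 6↔9 under rotation):

```python
# Map each digit to the digit it becomes after a 180-degree rotation.
ROT = {'0': '0', '1': '1', '6': '9', '8': '8', '9': '6'}

def is_strobogrammatic(n):
    s = str(n)
    # Every digit must survive rotation, and the rotated string must match.
    return all(d in ROT for d in s) and s == ''.join(ROT[d] for d in reversed(s))

year = 2018
while not is_strobogrammatic(year):
    year += 1
print(year)  # -> 6009
```

The search lands on 6009: no year beginning with 2, 3, 4, or 5 can qualify, and 6009 is the smallest valid year starting with 6.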
|
|
Moyal distribution - Maple Help
Statistics[Distributions][Moyal] - Moyal distribution
Calling Sequence: Moyal(mu, sigma) or MoyalDistribution(mu, sigma)
Parameters
mu - mode parameter
sigma - scale parameter
Description
• The Moyal distribution is a continuous probability distribution with probability density function given by:
$f(t) = \frac{\sqrt{2} \, e^{-\frac{t - \mu}{2 \sigma} - \frac{1}{2} e^{-\frac{t - \mu}{\sigma}}}}{2 \sqrt{\pi} \, \sigma}$
subject to the following conditions:
$\mu :: \mathrm{real}, \quad 0 < \sigma$
• Note that the Moyal command is inert and should be used in combination with the RandomVariable command.
Examples
> with(Statistics):
> X := RandomVariable(Moyal(mu, sigma)):
> PDF(X, u)
$\frac{1}{2} \, \frac{\sqrt{2} \, e^{-\frac{1}{2} \frac{u - \mu}{\sigma} - \frac{1}{2} e^{-\frac{u - \mu}{\sigma}}}}{\sqrt{\pi} \, \sigma}$ (1)
> PDF(X, 0.5)
$\frac{0.3989422802 \, e^{-\frac{0.5000000000 \, (0.5 - 1. \, \mu)}{\sigma} - 0.5000000000 \, e^{-\frac{1. \, (0.5 - 1. \, \mu)}{\sigma}}}}{\sigma}$ (2)
> Mean(X)
$\sigma \, (\gamma + \ln(2)) + \mu$ (3)
> Variance(X)
$\frac{1}{2} \, \sigma^{2} \pi^{2}$ (4)
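The density is easy to check numerically outside Maple as well. A small Python sketch, written directly from the formula above (standard library only):

```python
import math

def moyal_pdf(t, mu=0.0, sigma=1.0):
    # f(t) = sqrt(2) * exp(-(t - mu)/(2*sigma) - exp(-(t - mu)/sigma)/2) / (2*sqrt(pi)*sigma)
    z = (t - mu) / sigma
    return math.sqrt(2.0) * math.exp(-z / 2.0 - math.exp(-z) / 2.0) / (2.0 * math.sqrt(math.pi) * sigma)

# Crude Riemann-sum check that the density integrates to approximately 1
step = 0.01
total = sum(moyal_pdf(-10.0 + step * i) * step for i in range(5000))
print(total)  # close to 1.0
```

At the mode, moyal_pdf(mu) reduces to e^{-1/2} / (sqrt(2*pi)*sigma), which is a handy spot check against the closed form.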
|
|
Stefano - 5 years ago
Java Question
# JAX-WS client : what's the correct path to access the local WSDL?
I suppose this is a trivial question, but after spending a lot of time trying all the paths, I've given up.
The problem is that I need to build a web service client from a WSDL file I've been provided. I've stored this file on the local file system, and while the WSDL file stays in the correct file system folder, everything is fine. When I deploy to a server, or remove the WSDL from that folder, the proxy can't find the WSDL and raises an error. I've searched the web and found the following posts, but I haven't been able to make it work:
JAX-WS Loading WSDL from jar
http://www.java.net/forum/topic/glassfish/metro-and-jaxb/client-jar-cant-find-local-wsdl-0
http://blog.vinodsingh.com/2008/12/locally-packaged-wsdl.html
I'm using NetBeans 6.1 (this is a legacy application I have to update with this new web service client). Below is the JAX-WS proxy class:
@WebServiceClient(name = "SOAService", targetNamespace = "http://soaservice.eci.ibm.com/", wsdlLocation = "file:/C:/local/path/to/wsdl/SOAService.wsdl")
public class SOAService
extends Service
{
private final static URL SOASERVICE_WSDL_LOCATION;
private final static Logger logger = Logger.getLogger(com.ibm.eci.soaservice.SOAService.class.getName());
static {
URL url = null;
try {
URL baseUrl;
baseUrl = com.ibm.eci.soaservice.SOAService.class.getResource(".");
url = new URL(baseUrl, "file:/C:/local/path/to/wsdl/SOAService.wsdl");
} catch (MalformedURLException e) {
logger.warning("Failed to create URL for the wsdl Location: 'file:/C:/local/path/to/wsdl/SOAService.wsdl', retrying as a local file");
logger.warning(e.getMessage());
}
SOASERVICE_WSDL_LOCATION = url;
}
public SOAService(URL wsdlLocation, QName serviceName) {
super(wsdlLocation, serviceName);
}
public SOAService() {
super(SOASERVICE_WSDL_LOCATION, new QName("http://soaservice.eci.ibm.com/", "SOAService"));
}
/**
*
* @return
* returns SOAServiceSoap
*/
@WebEndpoint(name = "SOAServiceSOAP")
public SOAServiceSoap getSOAServiceSOAP() {
return super.getPort(new QName("http://soaservice.eci.ibm.com/", "SOAServiceSOAP"), SOAServiceSoap.class);
}
/**
*
* @param features
* A list of {@link javax.xml.ws.WebServiceFeature} to configure on the proxy. Supported features not in the <code>features</code> parameter will have their default values.
* @return
* returns SOAServiceSoap
*/
@WebEndpoint(name = "SOAServiceSOAP")
public SOAServiceSoap getSOAServiceSOAP(WebServiceFeature... features) {
return super.getPort(new QName("http://soaservice.eci.ibm.com/", "SOAServiceSOAP"), SOAServiceSoap.class, features);
}
}
This is my code to use the proxy :
WebServiceClient annotation = SOAService.class.getAnnotation(WebServiceClient.class);
//trying to replicate proxy settings
URL baseUrl = com.ibm.eci.soaservice.SOAService.class.getResource("");//note : proxy uses "."
URL url = new URL(baseUrl, "/WEB-INF/wsdl/client/SOAService.wsdl");
//URL wsdlUrl = this.getClass().getResource("/META-INF/wsdl/SOAService.wsdl");
SOAService serviceObj = new SOAService(url, new QName(annotation.targetNamespace(), annotation.name()));
proxy = serviceObj.getSOAServiceSOAP();
/* baseUrl;
//classes\com\ibm\eci\soaservice
//URL url = new URL(baseUrl, "../../../../wsdl/SOAService.wsdl");
proxy = new SOAService().getSOAServiceSOAP();*/
//updating service endpoint
Map<String, Object> ctxt = ((BindingProvider)proxy ).getRequestContext();
ctxt.put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192);
ctxt.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, WebServiceUrl);
NetBeans puts a copy of the WSDL in WEB-INF/wsdl/client/SOAService, so I don't want to add it to META-INF too. The service classes are in WEB-INF/classes/com/ibm/eci/soaservice/ and the baseUrl variable contains the full filesystem path to it (c:\path\to\the\project...\soaservice). The above code raises the error:
javax.xml.ws.WebServiceException: Failed to access the WSDL at: file:/WEB-INF/wsdl/client/SOAService.wsdl. It failed with: \WEB-INF\wsdl\client\SOAService.wsdl (cannot find the path).
So, first of all, shall I update the wsdllocation of the proxy class? Then how do I tell the SOAService class in WEB-INF/classes/com/ibm/eci/soaservice to search for the WSDL in \WEB-INF\wsdl\client\SOAService.wsdl?
Thank you in advance.
EDITED: I've found this other link - http://jianmingli.com/wp/?cat=41 - which says to put the WSDL into the classpath. I'm ashamed to ask: how do I put it into the web application classpath?
Answer Source
The best option is to use jax-ws-catalog.xml.
When you compile the local WSDL file, override the WSDL location and set it to something like
http://localhost/wsdl/SOAService.wsdl
Don't worry - this is only a URI, not a URL, meaning you don't have to have the WSDL actually available at that address.
You can do this by passing the wsdllocation option to the WSDL-to-Java compiler.
Doing so will change your proxy code from
static {
URL url = null;
try {
URL baseUrl;
baseUrl = com.ibm.eci.soaservice.SOAService.class.getResource(".");
url = new URL(baseUrl, "file:/C:/local/path/to/wsdl/SOAService.wsdl");
} catch (MalformedURLException e) {
logger.warning("Failed to create URL for the wsdl Location: 'file:/C:/local/path/to/wsdl/SOAService.wsdl', retrying as a local file");
logger.warning(e.getMessage());
}
SOASERVICE_WSDL_LOCATION = url;
}
to
static {
URL url = null;
try {
URL baseUrl;
baseUrl = com.ibm.eci.soaservice.SOAService.class.getResource(".");
url = new URL(baseUrl, "http://localhost/wsdl/SOAService.wsdl");
} catch (MalformedURLException e) {
logger.warning("Failed to create URL for the wsdl Location: 'http://localhost/wsdl/SOAService.wsdl', retrying as a local file");
logger.warning(e.getMessage());
}
SOASERVICE_WSDL_LOCATION = url;
}
Notice file:// changed to http:// in the URL constructor.
Now comes in jax-ws-catalog.xml. Without jax-ws-catalog.xml jax-ws will indeed try to load the WSDL from the location
http://localhost/wsdl/SOAService.wsdl
and fail, as no such WSDL will be available.
But with jax-ws-catalog.xml you can redirect JAX-WS to a locally packaged WSDL whenever it tries to access the WSDL at http://localhost/wsdl/SOAService.wsdl.
Here's jax-ws-catalog.xml
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog" prefer="system">
<system systemId="http://localhost/wsdl/SOAService.wsdl"
uri="wsdl/SOAService.wsdl"/>
</catalog>
What you are doing is telling JAX-WS that whenever it needs to load the WSDL from http://localhost/wsdl/SOAService.wsdl, it should load it from the local path wsdl/SOAService.wsdl instead.
Now where should you put wsdl/SOAService.wsdl and jax-ws-catalog.xml? That's the million-dollar question, isn't it?
It should be in the META-INF directory of your application jar.
so something like this
ABCD.jar
|__ META-INF
|__ jax-ws-catalog.xml
|__ wsdl
|__ SOAService.wsdl
This way you don't even have to override the URL in the client that accesses the proxy. The WSDL is picked up from within your JAR, and you avoid having hard-coded filesystem paths in your code.
More info on jax-ws-catalog.xml http://jax-ws.java.net/nonav/2.1.2m1/docs/catalog-support.html
Hope that helps
|
|
# Big Powers
Note by Llewellyn Sterling
2 years, 5 months ago
We can easily prove that $$3^{444}+4^{333}$$ is divisible by $$5$$ using Modular Congruences.
Observe that
$3^{444} \equiv 9^{222} \equiv (-1)^{222} \equiv 1 {\pmod 5}$
Similarly,
$4^{333} \equiv (-1)^{333} \equiv -1 {\pmod 5}$
$3^{444}+4^{333} \equiv 1 + (-1) \equiv 0 {\pmod 5} \quad _\square$
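The congruences above can be checked directly with Python's three-argument pow, which does fast modular exponentiation:

```python
# 3^444 mod 5 and 4^333 mod 5, matching the modular-congruence argument above
assert pow(3, 444, 5) == 1   # 3^444 ≡ 1 (mod 5)
assert pow(4, 333, 5) == 4   # 4 ≡ -1 (mod 5), so 4^333 ≡ -1 ≡ 4 (mod 5)
print((pow(3, 444, 5) + pow(4, 333, 5)) % 5)  # 0, so 5 divides 3^444 + 4^333
```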
|
|
Notes On Conservation of Momentum and Equilibrium of a Particle - CBSE Class 11 Physics
The Law of Conservation of Momentum states that the total momentum of an isolated system of interacting particles is conserved. For an isolated system, there are no external forces acting on it. Applying this law to a direct collision between two bodies A and B: if A and B have masses m_A and m_B, velocities u_A and u_B before the collision and v_A and v_B after it, then m_A u_A + m_B u_B = m_A v_A + m_B v_B. In other words, the total momentum before the collision is equal to the total momentum after the collision.

Equilibrium of a Particle: When the net external force acting on a particle is zero, the particle is said to be in equilibrium. Applying Newton's First Law of Motion to this situation, the particle is either at rest or in uniform motion. When the particle is at rest we say it is in static equilibrium, and when it has uniform motion we say it is in dynamic equilibrium.

When two forces act on a particle and keep it in equilibrium, the two forces must be equal in magnitude, opposite in direction, and collinear.

When three forces act on a particle and keep it in equilibrium, the three forces must be coplanar. The resultant of any two of the forces is equal and opposite to the third force; this third force is called the equilibrant force. The vector sum of the three forces must be equal to zero.

When n forces act on a particle and keep it in equilibrium, the vector sum of the n forces must be equal to zero.
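A tiny numerical illustration of the collision statement above (the masses and velocities are made-up values, and the collision is taken to be perfectly inelastic so the bodies share one final velocity):

```python
# Body A (2 kg at 3 m/s) hits body B (1 kg at rest); they move on together.
m_a, u_a = 2.0, 3.0   # mass (kg) and initial velocity (m/s) of A
m_b, u_b = 1.0, 0.0   # B starts at rest

p_before = m_a * u_a + m_b * u_b      # total momentum before the collision
v = p_before / (m_a + m_b)            # common final velocity
p_after = (m_a + m_b) * v             # total momentum after the collision

print(v, p_before == p_after)         # 2.0 True
```

The common velocity follows directly from equating momentum before and after; no force external to the A+B system acts during the collision.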
|
|
## 26 - Copying and Moving Text in Word 2003
Copying and moving text is among the most used commands when editing a document. In this video we will learn different ways to copy and move text within a document, as well as how to copy text from another document while retaining the document formatting.
|
|
# A new idea for a UL Biplane for the UltraVair
### Help Support Homebuilt Aircraft & Kit Plane Forum:
#### skyguynca
##### Well-Known Member
HBA Supporter
OK, so now with some time I am sitting down with SolidWorks to design a new Biplane for the UltraVair.
Here are a few pics of the wing design. Just getting started: tube spars and ribs along with the trailing edge done. Weight is 13.43 lbs for the panel. Total estimated weight is 18.67 lbs for the completed covered panel, so I will feel good with all 4 panels weighing 80 lbs covered.
That leaves 174 lbs for airframe, tail feathers and engine... I think I can do that.
David
#### cluttonfred
##### Well-Known Member
HBA Supporter
Neat! No ailerons…?
##### Well-Known Member
HBA Supporter
Log Member
That's an odd airfoil? As mentioned, no ailerons - so going the CHEL or Sprat route?
#### skyguynca
##### Well-Known Member
HBA Supporter
That is one of the upper wing panels; ailerons are only on the lower wing. It is not aerobatic, so two sets of ailerons are not needed.
The airfoil is an LM7610; it's been used on a few planes, and Leading Edge used to sell them. Pretty efficient at low speed when not heavily loaded: on the powered EZ's you would see 11/1... on the Early Bird Jenny (very draggy) you would still see 8 or 9/1 with a single pilot and no passenger, and on 48 hp it still climbed 450 fpm at a gross of 800 lbs.
Addicted2climbing, I have the Raceair plans for the Skylite and the Zipster in my collection. I bought them from Ed before he sold the rights.
Last edited:
#### BJC
##### Well-Known Member
HBA Supporter
It is not aerobatic so two sets of ailerons are not needed.
FWIW, Robert Armstrong flew a flat-wing, two-aileron Pitts in the World Aerobatic Contest.
BJC
##### Well-Known Member
HBA Supporter
Log Member
The Zipster or Micro Mong plans would be worth referring to for your design. I'm very interested in your motor when it's ready. I'm swamped until October but plan to start a Skylite immediately when I'm free.
#### skyguynca
##### Well-Known Member
HBA Supporter
The Zipster or Micro Mong plans would be worth referring to for your design. I'm very interested in your motor when it's ready. I'm swamped until October but plan to start a Skylite immediately when I'm free.
Well, while I do like both designs a lot, they are quite different from what I am doing.
As I post more of the design you will see that it takes no reference or similarities from either.
The main goal of this design is for it to be affordable for the average person and not a complicated build. For comparison: I Fly's Aerolite 103 runs $13,000.00+, and the Zipster requires welding, has complicated fittings between the rib and spar, and uses very, very expensive tubing for the spars - well over $1500 just to buy the spar tubing.
My goal is to proof-test that a nice ultralight with good performance can be built out of metal for less than $1500-2000, and simple enough for the average guy. The ultralight sport has mostly been forgotten, so the few models that are still out there cost around 1/3 of your annual salary or are completely beyond your disposable income, or use so many one-off parts that you either have to pay a machine shop or metal shop to make your parts, or set up a wood shop... making them unaffordable. I am trying for safe, inexpensive, and mostly or totally built using hand tools.

I don't know if any of you knew Calvin Parker. I got acquainted with him when I built my Teenie Two. Great guy, and he became a good friend. He took so much crap from the homebuilding community when he introduced the T2. Well, it was aerobatic, could be built for less than $2500, and used common everyday tools. I loved mine and wish I had not been deployed to Europe and forced to sell it. He had the right idea, and while I am late getting back to the game, I want to try and carry on the tradition of designing something so the everyday middle-income guy or gal can fly their own airplane.
thanks
David
Last edited:
#### karmarepair
##### Well-Known Member
HBA Supporter
The airfoil is an LM7610, been used on a few planes. Leading Edge used to sell them. Pretty efficient for low speed when not heavily loaded, on the powered EZ's you would see 11/1....on the Early Bird Jenny (very draggy) you would still see 8 or 9/1 with single pilot no passenger, on 48hp still climbed at gross of 800lbs 450fpm.
Larry designed this airfoil, a modification of an Eiffel section, to use on the Easy Riser tailless hang glider - the reflex is there for pitch stability.
But reflex shifts the lift/drag polar down, so the lift/drag ratio is worse, and so is the maximum lift coefficient (although the advertised Cl max of 1.8 is QUITE good).
Airfoil Design for Tailless Airplanes: 4 explains this better than I could.
Could you use a thin, cambered un-reflexed airfoil? The reflexed airfoil you've proposed WILL reduce the needed tail volume, true, and there is A LOT to be said for using something that has worked before in a similar application...but I wonder if there is a drag reduction to be had by getting rid of the reflex?
Some possible airfoils to think about:
GOE 265 AIRFOIL (goe265-il)
GOE 495 AIRFOIL (goe495-il) Gottingen airfoils are numbered in the order they were tested. Makes it a pain in the ass to find a particular thickness and camber.
E61 (5.64%) (e61-il) Eiffel
lrn1007 (lrn1007-il) low reynolds number, but laminar flow. I'm skeptical anyone can achieve the really nice lift to drag the X-Foil predictions promise for this.
I can't find polars for the LM7610...
I applaud what you're doing. Go, GO!
HBA Supporter
48" chord
Thanks
David
#### Lars Odeen
##### Member
I think this is an awesome idea. I like biplanes a lot, and will be following your project with great interest.
#### dew777
##### Active Member
@skyguynca
I like what you're doing with the ultralight biplane design!
#### skyguynca
##### Well-Known Member
HBA Supporter
Super busy with work right now. I will progress on the project as time allows
Thanks for the interest
David
#### skyguynca
##### Well-Known Member
HBA Supporter
Lightest possible, going to use 3/16 acft cable for Drag and Anti-drag, they are just not in the uploaded CAD rendering
#### Bill-Higdon
##### Well-Known Member
David, I applaud your efforts, Cal Parker took a lot of crap for things that are now common practice.
#### David L. Downey
##### Well-Known Member
Lightest possible, going to use 3/16 acft cable for Drag and Anti-drag, they are just not in the uploaded CAD rendering
Just curious, Dave: 3/16 seems massively over strength for crossed wires in that kind of a wing? Did you mean 3/32" aircraft cable?
#### skyguynca
##### Well-Known Member
HBA Supporter
Yes, sorry for the typo; I was in a hurry. The wing structure is almost the same as the Early Bird Jenny's.
#### David L. Downey
##### Well-Known Member
Thank you for the reply; that was the reason I asked! I always liked the earlybird solutions! If you get done before I die I will be one of the builders! Love biplanes and love fabric covering! (and then I can use the crest used for my avatar on the fuselage sides!!!)
|
|
# Linear Mixed Model Python
Generalized linear mixed effects models, ubiquitous in social science research, are rarely seen in applied data science work despite their relevance and simplicity. We will discuss the motivation and main use cases for multilevel modeling, and illustrate by example how to fit linear and generalized linear mixed models. I will start by introducing the concept of multilevel modeling, where we will see that such models are a compromise between two extremes: complete pooling and no pooling. Random intercepts models are those where all responses in a group are additively shifted by a group-specific constant. PROC MIXED fits not only these traditional variance component models but numerous other covariance structures as well. Python has robust libraries for machine learning, natural language processing, deep learning, big data and artificial intelligence, and statsmodels in particular supports linear mixed effects modeling with a short script. In this post, we'll also look at what linear regression is and how to create a simple linear regression machine learning model in scikit-learn.
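Before the mixed-model machinery, the simple linear regression case can be sketched without any libraries; the data here are made up, and scikit-learn's LinearRegression would recover the same slope and intercept in closed form:

```python
# Ordinary least squares for y = a + b*x, computed from the textbook formulas.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.1, 5.9, 8.2, 9.9]   # invented data, roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y over variance of x
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x          # intercept

print(round(b, 2), round(a, 2))  # 1.97 0.13
```

Multilevel models generalize exactly this: instead of one (a, b) pair for all the data, the intercept (and possibly the slope) is allowed to vary by group.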
I am dealing with a scheduling problem for a production process. GLPK is an open-source C library for solving linear programs and mixed integer linear programs; values in the models are defined by constants, parameters, and variables. A classic linear programming example is the diet problem, a linear program that can be generated by columns (add foods to the diet) or by rows (add requirements to the diet). Most modern computing environments share a similar set of legacy FORTRAN and C libraries for doing linear algebra, optimization, integration, fast Fourier transforms, and other such algorithms.

Turning to statistics: mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest. Logistic regression is a type of regression that predicts the probability of occurrence of an event by fitting data to a logit (logistic) function. These powerful models will allow you to explore data with a more complicated structure than a standard linear regression, although inference for linear mixed models can be difficult. The student will be able to decide when to use a linear model and when to use a mixed model depending on the data structure. Python also wins over R when it comes to deploying machine learning models in production.
Generalized linear mixed models (or GLMMs) are an extension of linear mixed models to allow response variables from different distributions, such as binary responses. Linear mixed models are also used for heritability estimation in a way that explicitly addresses environmental variation, and models for hierarchically structured data can be fit by iterative generalized least squares. With a formula interface, a fixed-effects fit looks like ols('length ~ 1 + height', data=train_df), while a mixed model looks like mixedlm("win% ~ statistic", data, groups=data['player']), with the player column being my grouping factor. This procedure is comparable to analyzing mixed models in SPSS by clicking Analyze >> Mixed Models >> Linear. On the optimization side, this chapter is not a thorough review of the integer programming literature, but is intended for technical researchers who may or may not have any familiarity with linear programming, but who are looking for an entry-level introduction to modelling and solution via integer and mixed-integer programming. You can learn Fitting Statistical Models to Data with Python from the University of Michigan, and discover how to prepare data with pandas and how to fit and evaluate models with scikit-learn.
One published example operates on tree-structured data: it combines Markov tree dependencies with the effects of explanatory variables (age of graft, type of connection with the parent GU: succession or branching) and genetic effects (genotype of progenies in the presence of replications) through a generalized linear mixed model (GLMM). Mixed models (also known as multilevel models or random effects models) are used in research involving data with repeated measures per observation unit, and Linear Mixed Effects models are used for regression analyses involving dependent data. Generalized additive models are an extension of generalized linear models. As a follow-up on this tutorial, I will be covering Mixed Integer Programming, where the variables can be integers, which will prove very useful since it can be used to simulate boolean logic; I found a Python library called pulp that provides a nice interface to GLPK and other solvers.
Linear programming, sometimes known as linear optimization, is the problem of maximizing or minimizing a linear function over a convex polyhedron specified by linear and non-negativity constraints. PuLP is an open-source linear programming (LP) package which largely uses Python syntax and comes packaged with many industry-standard solvers. Of particular interest for Bayesian modelling is PyMC: PyMC3 is a probabilistic programming module for Python that allows users to fit Bayesian models using a variety of numerical methods, most notably Markov chain Monte Carlo (MCMC) and variational inference (VI).

On the mixed-model side, these models describe the relationship between a response variable and independent variables, with coefficients that can vary with respect to one or more grouping variables, and the within-group errors are allowed to be correlated and/or have unequal variances. GLMs are most commonly used to model binary or count data, and logistic regression is a particular instance of this broader kind of model, the generalized linear model (GLM). The statsmodels implementation of linear mixed models (MixedLM) closely follows the approach outlined in Lindstrom and Bates (JASA, 1988); a fit looks like model = MixedLM(endog, exog, groups) followed by result = model.fit(). Note that the current mixed_linear module allows only random effects arising out of a single factor, so fully crossed or nested random effects are limited. With linear mixed effects models, we wish to model a linear relationship for data points with inputs of varying type, categorized into subgroups, and associated to a real-valued output. For a single explanatory variable x, it may be apparent that for different ranges of x, different linear relationships occur. An autoregression model is a linear regression model that uses lagged variables as input variables. Let us try some linear models, starting with multiple regression and analysis of covariance models, and then moving on to models using regression splines. My main objective was to be able to interpret and reproduce the output of Python and R linear modeling tools.

Finally, back to optimization: many problems in physics, such as the XY model, take the form of minimizing an energy, and these can be attacked with mixed integer optimization in Python. An issue we run into here is that in linear programming we can't use conditional constraints. I'm going to solve the problem with pulp. First we provide a word definition of each of the variables of the problem.
PuLP can easily be deployed on any system that has a Python interpreter, as it has no dependencies on any other software packages. Modeling Data and Curve Fitting¶. In this second week, we’ll introduce you to the basics of two types of regression: linear regression and logistic regression. Fitzmaurice, M. py is an implementation in Python of the classic diet problem; a linear program that can be generated by columns (add foods to the diet) or by rows (add requirements to the diet). It helps to grow businesses e. Mixed integer linear programming¶ There are bad news coming along with this definition of linear programming: an LP can be solved in polynomial time. Original post by Jonas Kristoffer Lindeløv (blog, profile). Before we’ve solved our model though, we don’t know if the factory will be on or off in a given month. Documentation The documentation for the latest release is at. >>> from rpy import r >>> my_x = [5. I am dealing with a scheduling problem for a production process. Generalized linear mixed effects models, ubiquitous in social science research, are rarely seen in applied data science work despite their relevance and simplicity. The generalized linear model (GLZ) is a way to make predictions from sets of data. Linear Mixed effect Models are becoming a common statistical tool for analyzing data with a multilevel structure. GLPK is an open-source C library for solving linear programs and mixed integer linear programs. Linear Mixed Effects modeling using Python (statsmodels) Short script for a linear mixed effects model. I want to illustrate how to run a simple mixed linear regression model in SPSS. I am trying to use the Python statsmodels linear mixed effects model to fit a model that has two random intercepts, e. We welcome feedback on our work and are happy to answer any questions you might have on how to complete the tutorials. Of particular interest for Bayesian modelling is PyMC, which implements a probabilistic programming language in Python. 
Predictors can be continuous or categorical or a mixture of both. Documentation The documentation for the latest release is at. The model illustrates column-generation. Statsmodels is a Python package that provides a complement to scipy for statistical computations including descriptive statistics and estimation and inference for statistical models. Getting Started with Mixed Effect Models in R November 25, 2013 Jared Knowles Update : Since this post was released I have co-authored an R package to make some of the items in this post easier to do. Our model operated on tree-structured data and relied on a second-order Markov tree. In The GNU Linear Programming Kit, Part 1, the author uses glpk to solve this problem. For example, students could be sampled from within classrooms, or patients from within doctors. A general linear model (GLM) is the type of model you probably came across in elementary statistics. Autoregression Model. Unfortunately, it seems like sklearn only has an implementation for a mixture of Gaussian (Normal) distributions and does not support binomial or Poisson densities. All the constraints are inequalities and they are all linear in the sense that each involves an inequality in some linear function of the variables. In mathematical notation, if $$\hat{y}$$ is the predicted value. Whenever I try on some new machine learning or statistical package, I will fit a mixed effect model. Problems in linear programming, quadratic programming, integer programming, nonlinear optimization, systems of dynamic nonlinear equations, and multiobjective optimization can be solved. They smoke be-tween two and three times more than the general population and about 50% more than those with other types of psychopathology (??). It supports a wide range of both commercial and open-source solvers, and can be easily extended to support additional solvers. 
Generalized linear mixed effects models, ubiquitous in social science research, are rarely seen in applied data science work despite their relevance and simplicity. Link function: a continuous function that defines the response of variables to predictors in a generalized linear model, such as logit and probit links. Statistics and Computing. what is the mixed effects model linear model: formula Linear models can be expressed in formula notation, used by patsy, statsmodels, and R import statsmodels. Simplistically, linear programming is the optimization of an outcome based on some set of constraints using a linear mathematical model. They provide a modeling approach that combines powerful statistical learning with interpretability, smooth functions, and flexibility. whole numbers such as -1, 0, 1, 2, etc. 2-4x faster 2. MixedLMParams. A Little Book of Python for Multivariate Analysis¶. While generalized linear models are typically analyzed using the glm( ) function, survival analyis is typically carried out using functions from the survival package. They provide a modeling approach that combines powerful statistical learning with interpretability, smooth functions, and flexibility. Finally, it is Corresponding author. Mixed models are a form of regression model, meaning that the goal is to relate one dependent variable (also known as the outcome or response) to one or more independent variables (known as predictors, covariates, or regressors). 0 provides a new parallel MIP implementation that is based on a new task manager that optimizes deterministically independent of platform and number of CPU cores. Repeated Measures and Mixed Models. The monograph offers a practical blend of qualitative considerations along with the quantitative concepts. Previous Image. Python Tutorials. 1 Developer Guide demonstrates how to use the C++ and Python APIs for implementing the most common deep learning layers. Methods for Mixed Linear Model Analysis¶ Overview. 
Rather than focus on theory, Practical Python AI Projects , the product of the author's decades of industry teaching and consulting, stresses the model creation aspect; contrasting alternate approaches and practical variations. Let us understand how to build a linear regression model in Python. Build Linear Regression Model. ] NEW Python code by Taku Yoshioka (16 Nov 2016). Multivariate Linear Regression Models Regression analysis is used to predict the value of one or more responses from a set of predictors. Optimize ptarray_locate_along_linear to really honour the "from" parameter. Linear programming is an operations research technique used to determine the best outcome in a mathematical model where the objective and the constraints are expressed as a system of linear equations. No additional interpretation is required beyond the estimate ^ of the coefficient. Let me know if you find any bugs. In this section I will use the data read in Section 3, so make sure the fpe data frame is attached to your current session. Use artificial variables. 2013-01-01. Flexible Data Ingestion. Python runs well in automating various steps of a predictive model. Getting Started. 0 International License. First, the bottom roughness is estimated taking into account bottom sediment natures and bathymetric ranges. Here are some external resources: Non-Programmer's Tutorial for Python — from Wikibooks, the open-content textbooks collection, offspring of Wikipedia and probably the easiest introduction of all. A constraint is represented as a linear equation or inequality. For mixed integer programming, Xpress 8. com/users/29941 2019-09-26T15:11:58Z 2019-09-26T15:11:58Z. The classical methods of maximum likelihood and GMM and Bayesian methods,. Refer to the User's Manual for more details. by Christoph Gohlke, Laboratory for Fluorescence Dynamics, University of California, Irvine. variable, x, it may be apparent that for different ranges of x, different linear rela-tionships occur. 
Springer, New York, NY. APM Python is designed for large-scale optimization and accesses solvers of constrained, unconstrained, continuous, and discrete problems. or the many ways to perform GLMM in python playground. The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. We will focus on a special class of models known as the generalized linear models (GLIMs or GLMs in Agresti). 2 ≥ 0, are special. The hard part is knowing whether the model you've built is worth keeping and, if so, figuring out what to do next. It was a new field of Statistics when I. 1510497113). Use linear programming models for decision making. concisely represent mixed-integer linear programming (MILP) models. Introducing linear regression The simplest form of linear regression is given by the relation y = k x + k 0 , where k 0 is called intercept, that is, the value of y when x=0 and k is the slope. Finally, it is Corresponding author. For µ ij = E(Y ij|t ij,b i), we can fit a model with random intercepts: g(µ ij) = β 0 +β 1 ·t ij +b 0,i, where g(·) can be any of the usual link functions (identity, log, logit, ···). If there are points. Choice modeling is jargon for conditional logit, mixed logit, multinomial probit, and other procedures that model the probability of individuals making a particular choice from the alternatives available to each of them. A special case of this model is the one-way random effects panel data model implemented by xtreg, re. Methods on these arrays are fast because they relies on well-optimised librairies for linear algebra (BLAS, ATLAS, MKL) NumPy is tolerant to python’s lists; NumPy inherits from years of computer based numerical analysis problem solving. Overview of mathematical programming¶. Generalized linear mixed-effects models allow you to model more kinds of data, including binary responses and count data. 
In this post you will discover how to select attributes in your data before creating a machine learning model using the scikit-learn library. The classical methods of maximum likelihood and GMM and Bayesian methods,. fit() As such, you would expect the random_effects method to return the city's intercepts in this case, not the coefficients/slopes. Its flexibility and extensibility make it applicable to a large suite of problems. Analog Devices’ Design Tools simplify your design and product selection process through ease of use and by simulating results that are optimized and tested for accuracy. Bayesian Models for Astrophysical Data Using R, JAGS, Python, and Stan. Generalized Linear Models¶ The following are a set of methods intended for regression in which the target value is expected to be a linear combination of the features. Linear Mixed Effects modeling using Python (statsmodels) Short script for a linear mixed effects model. Piecewise linear regression is a form of regression that allows multiple linear models to be. Section Week 8 - Linear Mixed Models - Stanford University. Learn Fitting Statistical Models to Data with Python from 미시건 대학교. This is also …. GWAS mixed linear model analysis uses a kinship matrix to correct for cryptic relatedness as a random effect and can include any additional fixed effects in the model. I am dealing with a scheduling problem for a production process. 2013-01-01. This is my first entry in my Statsmodels Project Summer 2011 blog. sample of the Program for. Some specific linear mixed effects models are. The Design. Linear Factor Model Macroeconomic Factor Models Factor Models. LINEAR MODELS IN STATISTICS Second Edition Alvin C. Mixed integer linear programming. The resulting functions can then be imported into other Python scripts. Solve Linear Programming Problem Using Simplex Method. 
It shows how you can take an existing model built with a deep learning framework and use that to build a TensorRT engine using the provided parsers. Logistic regression is a particular instance of a broader kind of model, called a gener- alized linear model (GLM). Whenever I try on some new machine learning or statistical package, I will fit a mixed effect model. By using Python, we don’t have to mix these packages at the C level, which is a huge advantage. For the generalized linear model different link functions can be used that would denote a different relationship between the linear model and the response variable (e. 0] β is what we want to learn, using (customer, item. 2013-01-01. Linear programming Mixed integer programming Model described with natural Python operators numerical optimization, genetic algorithms daviderizzo. An online community for showcasing R & Python tutorials Log In; Category Linear Mixed Model. Below is my mixed model equation and output. Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, non-linear mixed effects models, missing data in mixed effects models, and Bayesian estimation of mixed effects models. All publications with annotations and links to talks Publications by category (a bit out of date) · Genomics · FaST-LMM and other mixed models. ^y = a + bx: Here, y is the response variable vector, x the explanatory variable, ^y is the vector of tted values and a (intercept) and b (slope) are real numbers. MOSEK is a commercial solver for mixed integer second-order cone programs and semidefinite programs. Inference for linear mixed models can be difficult. Use linear programming models for decision making. Numpy is the core library for scientific computing in Python. APMonitor – modeling language and optimization suite for large-scale, nonlinear, mixed integer, differential and algebraic equations with interfaces to MATLAB, Python, and Julia. 
Free, Web-based Software. We can't just randomly apply the linear regression algorithm to our data. Whenever I try on some new machine learning or statistical package, I will fit a mixed effect model. Introduction¶. CPLEX was the first commercial linear optimizer on the market to be written in the C programming language. Linear Mixed Effects modeling using Python (statsmodels) Short script for a linear mixed effects model. Optimization with PuLP¶. The first was Basic Linear Regressions in Python which suggests using pandas and numpy. PROC MIXED. However, they are still undecided between three possible campaigns for promoting the new product. PNAS, 113: 7377–7382, July 2016 (doi: 10. Develops a new approach based on a linear non-Gaussian acyclic structural equation model (LiNGAM) and a linear mixed model. PySP: modeling and solving stochastic programs in Python 113 subject to the constraint X ∈ s. GUROBI is a commercial solver for mixed integer second-order cone programs. dard linear model •The mixed-effects approach: – same as the fixed-effects approach, but we consider ‘school’ as a ran-dom factor – mixed-effects models include more than one source of random varia-tion AEDThe linear mixed model: introduction and the basic model10 of39. Simple Adjustments for Power with Missing Data 4. Here you can find our collection of programming and statistics tutorials. Build Linear Regression Model. All publications with annotations and links to talks Publications by category (a bit out of date) · Genomics · FaST-LMM and other mixed models. Class Notes. Let us try some linear models, starting with multiple regression and analysis of covariance models, and then moving on to models using regression splines. vectors = LA. Nonlinear Mixed Effects Models. It is part of the Python scientific stack that deals with data science, statistics and data analysis. 
While generalized linear models are typically analyzed using the glm( ) function, survival analyis is typically carried out using functions from the survival package. (Doing so in Java or C# is similar to the C++ example. Fixed effects are population parameters assumed to be the same each time data is collected, and random effects are random variables associated with each sample. The building block concepts of logistic regression can be helpful in deep learning while building the neural networks. I will use some data on the plasma protein levels of turtles at baseline, after fasting 10 days, and after fasting 20 days. It was a new field of Statistics when I. One of these steps is a regression analysis in SPSS, modeling", but we will eventually get mixed models and. In addition, this package contains pre-trained models for extracting features from images using ResNet models, and doing sentiment analysis. NOVA: This is an active learning dataset. sp Note that the \fB\-\-cache\-secs\fP option will override this value if a cache is enabled, and the value is larger. Pythonによる数理最適化入門 (実践Pythonライブラリー)posted with カエレバ並木 誠 朝倉書店 2018-04-09 Amazonで探す楽天市場で探すYahooショッピングで探す 目次 目次 はじめに 線形計画法の概要 Pythonによる線形計画法の解き方 Python製線形計画法モデリングライブ…. closed networks) Alexander Bruy 2017-01-12. Class Notes. Linear mixed models (LMMs): statistical models that assume normally distributed errors and also include both fixed and random effects, such as ANOVA incorporating a random effect. Traditional mixed linear models contain both fixed- and random-effects parameters, and, in fact, it is the combination of these two types of effects that led to the name mixed model. Python Tutorials. PySP: modeling and solving stochastic programs in Python 113 subject to the constraint X ∈ s. However, they are still undecided between three possible campaigns for promoting the new product. 
The Model Basic model: Stage 1 {Individual-level model y ij = f (t ij; u i; fl i)+ e ij;i =1;:::;m; j =1;:::;n i f function governing within-individual behavior fl i parameters of f speciflc to individual i (p £ 1) e ij satisfy E (e ij j u i; fl i)=0 Example: Theophylline pharmacokinetics † f is the one-compartment model with dose u i = D i † fl i =(k ai;V i;Cl i) T =(fl 1 i;fl 2 i;fl 3 i) T, where k ai, V i, and Cl i are. There's this game called Islanders. 5,0,1) and (0,1) with multipliers (0. Pre-trained models and datasets built by Google and the community Tools Ecosystem of tools to help you use TensorFlow. This course will explain the basic theory of linear and non-linear mixed-effects models, including hierarchical linear models (HLM). Previous Image. Similarly, more complex piecewise linear and piecewise polynomial fitting models can be formulated as constrained convex programs. ols('length ~ 1 + height ', data=train_df). 1-Draft) Oscar Torres-Reyna Data Consultant.
|
|
# When is nested cross-validation really needed and can make a practical difference?
When using cross-validation to do model selection (such as e.g. hyperparameter tuning) and to assess the performance of the best model, one should use nested cross-validation. The outer loop is to assess the performance of the model, and the inner loop is to select the best model; the model is selected on each outer-training set (using the inner CV loop) and its performance is measured on the corresponding outer-testing set.
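The two procedures above can be sketched in a few lines. This is a minimal NumPy-only illustration for ridge regression (the fold counts, the closed-form ridge solver without intercept, and the `lambdas` grid are illustrative choices, not taken from the thread):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (no intercept, for brevity)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return np.mean((X @ w - y) ** 2)

def splits(n, k, rng):
    """k random (train, test) index splits of range(n)."""
    folds = np.array_split(rng.permutation(n), k)
    return [(np.concatenate([f for j, f in enumerate(folds) if j != i]), folds[i])
            for i in range(k)]

def simple_cv_error(X, y, lambdas, k, rng):
    """Non-nested: the same folds both select lambda and report its error."""
    sp = splits(len(y), k, rng)
    errs = [np.mean([mse(X[te], y[te], ridge_fit(X[tr], y[tr], lam))
                     for tr, te in sp]) for lam in lambdas]
    return min(errs)

def nested_cv_error(X, y, lambdas, k, rng):
    """Nested: lambda is re-selected on each outer training set only,
    then scored once on the held-out outer test set."""
    outer = []
    for tr, te in splits(len(y), k, rng):
        inner = [np.mean([mse(X[tr][ite], y[tr][ite],
                              ridge_fit(X[tr][itr], y[tr][itr], lam))
                          for itr, ite in splits(len(tr), k, rng)])
                 for lam in lambdas]
        best = lambdas[int(np.argmin(inner))]
        outer.append(mse(X[te], y[te], ridge_fit(X[tr], y[tr], best)))
    return float(np.mean(outer))
```

The source of the optimistic bias is visible in `simple_cv_error`: taking the minimum over `lambdas` of errors measured on the same folds that do the selecting lets the selection overfit those folds.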
This has been discussed and explained in many threads (such as e.g. here Training with the full dataset after cross-validation?, see the answer by @DikranMarsupial) and is entirely clear to me. Doing only a simple (non-nested) cross-validation for both model selection & performance estimation can yield a positively biased performance estimate. @DikranMarsupial has a 2010 paper on exactly this topic (On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation), with Section 4.3 asking Is Over-fitting in Model Selection Really a Genuine Concern in Practice? -- and the paper shows that the answer is Yes.
All of that being said, I am now working with multivariate multiple ridge regression and I don't see any difference between simple and nested CV, and so nested CV in this particular case looks like an unnecessary computational burden. My question is: under what conditions will simple CV yield a noticeable bias that is avoided with nested CV? When does nested CV matter in practice, and when does it not matter that much? Are there any rules of thumb?
Here is an illustration using my actual dataset. Horizontal axis is $\log(\lambda)$ for ridge regression. Vertical axis is cross-validation error. Blue line corresponds to the simple (non-nested) cross-validation, with 50 random 90:10 training/test splits. Red line corresponds to the nested cross-validation with 50 random 90:10 training/test splits, where $\lambda$ is chosen with an inner cross-validation loop (also 50 random 90:10 splits). Lines are means over 50 random splits, shadings show $\pm1$ standard deviation.
Red line is flat because $\lambda$ is being selected in the inner loop and the outer-loop performance is not measured across the whole range of $\lambda$'s. If simple cross-validation were biased, then the minimum of the blue curve would be below the red line. But this is not the case.
### Update
It actually is the case :-) It is just that the difference is tiny. Here is the zoom-in:
One potentially misleading thing here is that my error bars (shadings) are huge, but the nested and the simple CVs can be (and were) conducted with the same training/test splits. So the comparison between them is paired, as hinted by @Dikran in the comments. So let's take a difference between the nested CV error and the simple CV error (for the $\lambda=0.002$ that corresponds to the minimum on my blue curve); again, on each fold, these two errors are computed on the same testing set. Plotting this difference across $50$ training/test splits, I get the following:
Zeros correspond to splits where the inner CV loop also yielded $\lambda=0.002$ (it happens almost half of the times). On average, the difference tends to be positive, i.e. nested CV has a slightly higher error. In other words, simple CV demonstrates a minuscule, but optimistic bias.
(I ran the whole procedure a couple of times, and it happens every time.)
My question is, under what conditions can we expect this bias to be minuscule, and under what conditions should we not?
• I'm not too sure I understand the diagram, could you generate a scatter plot showing the estimated error from nested and non-nested cross-validation on each axis (presuming the 50 test-training splits were the same each time)? How big is the dataset you are using? – Dikran Marsupial Oct 23 '15 at 13:19
• I generated the scatter plot, but all the points are very close to the diagonal and it's hard to discern any deviation from it. So instead, I subtracted simple CV error (for optimal lambda) from the nested CV error and plotted that across all training-test splits. There does seem to be a very small, but noticeable bias! I made the update. Let me know if the figures (or my explanations) are confusing, I'd like this post to be clear. – amoeba Oct 23 '15 at 16:36
• In the first paragraph, you have the model is selected on each outer-training set; should it perhaps be inner- instead? – Richard Hardy Mar 11 '17 at 12:21
• @RichardHardy No. But I can see that this sentence is not formulated very clearly. The model is "selected" on each outer-training set. Different models (e.g. models with different lambdas) are fit on each inner-training set, tested on inner-test sets, and then one of the models is selected, based on the whole outer-training set. Its performance is then assessed using the outer-testing set. Does it make sense? – amoeba Mar 11 '17 at 22:06
|
|
# NAG Toolbox: nag_quad_1d_gauss_wres (d01tb)
## Purpose
nag_quad_1d_gauss_wres (d01tb) returns the weights and abscissae appropriate to a Gaussian quadrature formula with a specified number of abscissae. The formulae provided are for Gauss–Legendre, rational Gauss, Gauss–Laguerre and Gauss–Hermite.
## Syntax
[weight, abscis, ifail] = d01tb(key, a, b, n)
[weight, abscis, ifail] = nag_quad_1d_gauss_wres(key, a, b, n)
## Description
nag_quad_1d_gauss_wres (d01tb) returns the weights and abscissae for use in the Gaussian quadrature of a function $f(x)$. The quadrature takes the form
$S = \sum_{i=1}^{n} w_i f(x_i)$
where $w_i$ are the weights and $x_i$ are the abscissae (see Davis and Rabinowitz (1975), Fröberg (1970), Ralston (1965) or Stroud and Secrest (1966)).
Weights and abscissae are available for Gauss–Legendre, rational Gauss, Gauss–Laguerre and Gauss–Hermite quadrature, and for a selection of values of $n$ (see Section [Parameters]).
(a) Gauss–Legendre quadrature:
$S \simeq \int_a^b f(x) \, dx$
where $a$ and $b$ are finite, and it will be exact for any function of the form
$f(x) = \sum_{i=0}^{2n-1} c_i x^i .$
(b) Rational Gauss quadrature, adjusted weights:
$S \simeq \int_a^{\infty} f(x) \, dx \quad (a+b>0) \quad \text{or} \quad S \simeq \int_{-\infty}^{a} f(x) \, dx \quad (a+b<0)$
and will be exact for any function of the form
$f(x) = \sum_{i=2}^{2n+1} \frac{c_i}{(x+b)^i} = \frac{\sum_{i=0}^{2n-1} c_{2n+1-i} (x+b)^i}{(x+b)^{2n+1}} .$
(c) Gauss–Laguerre quadrature, adjusted weights:
$S \simeq \int_a^{\infty} f(x) \, dx \quad (b>0) \quad \text{or} \quad S \simeq \int_{-\infty}^{a} f(x) \, dx \quad (b<0)$
and will be exact for any function of the form
$f(x) = e^{-bx} \sum_{i=0}^{2n-1} c_i x^i .$
(d) Gauss–Hermite quadrature, adjusted weights:
$S \simeq \int_{-\infty}^{+\infty} f(x) \, dx$
and will be exact for any function of the form
$f(x) = e^{-b(x-a)^2} \sum_{i=0}^{2n-1} c_i x^i \quad (b>0) .$
(e) Gauss–Laguerre quadrature, normal weights:
$S \simeq \int_a^{\infty} e^{-bx} f(x) \, dx \quad (b>0) \quad \text{or} \quad S \simeq \int_{-\infty}^{a} e^{-bx} f(x) \, dx \quad (b<0)$
and will be exact for any function of the form
$f(x) = \sum_{i=0}^{2n-1} c_i x^i .$
(f) Gauss–Hermite quadrature, normal weights:
$S \simeq \int_{-\infty}^{+\infty} e^{-b(x-a)^2} f(x) \, dx$
and will be exact for any function of the form
$f(x) = \sum_{i=0}^{2n-1} c_i x^i .$
Note: the Gauss–Legendre abscissae, with $a=-1$, $b=+1$, are the zeros of the Legendre polynomials; the Gauss–Laguerre abscissae, with $a=0$, $b=1$, are the zeros of the Laguerre polynomials; and the Gauss–Hermite abscissae, with $a=0$, $b=1$, are the zeros of the Hermite polynomials.
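The exactness claims above are easy to check numerically. The following sketch uses NumPy's reference Gauss rules rather than the NAG routine itself, with $n=6$ and the standard parameter values from the Note ($a=-1$, $b=1$ for Legendre; $a=0$, $b=1$ for Laguerre and Hermite); the test integrands are arbitrary illustrations:

```python
import numpy as np

n = 6

# Gauss-Legendre on [-1, 1]: exact for any polynomial of degree <= 2n - 1.
x, w = np.polynomial.legendre.leggauss(n)
leg = np.sum(w * (3 * x**11 - x**4 + 2))   # degree 11 = 2n - 1
leg_exact = 4 - 2 / 5                      # odd term integrates to 0

# Gauss-Laguerre, normal weights: integral of e^{-x} f(x) over [0, inf).
x, w = np.polynomial.laguerre.laggauss(n)
lag = np.sum(w * x**3)                     # integral of e^{-x} x^3 = 3! = 6

# Gauss-Hermite, normal weights: integral of e^{-x^2} f(x) over (-inf, inf).
x, w = np.polynomial.hermite.hermgauss(n)
herm = np.sum(w * x**2)                    # = sqrt(pi) / 2

print(leg, lag, herm)
```

Each sum reproduces its exact integral to machine precision, since every test integrand lies in the space for which the $n$-point rule is exact.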
## References
Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press
Fröberg C E (1970) Introduction to Numerical Analysis Addison–Wesley
Ralston A (1965) A First Course in Numerical Analysis pp. 87–90 McGraw–Hill
Stroud A H and Secrest D (1966) Gaussian Quadrature Formulas Prentice–Hall
## Parameters
### Compulsory Input Parameters
1: key – int64/int32/nag_int scalar
Indicates the quadrature formula.
${\mathbf{key}}=0$
Gauss–Legendre quadrature on a finite interval, using normal weights.
${\mathbf{key}}=3$
Gauss–Laguerre quadrature on a semi-infinite interval, using normal weights.
${\mathbf{key}}=-3$
Gauss–Laguerre quadrature on a semi-infinite interval, using adjusted weights.
${\mathbf{key}}=4$
Gauss–Hermite quadrature on an infinite interval, using normal weights.
${\mathbf{key}}=-4$
Gauss–Hermite quadrature on an infinite interval, using adjusted weights.
${\mathbf{key}}=-5$
Rational Gauss quadrature on a semi-infinite interval, using adjusted weights.
Constraint: ${\mathbf{key}}=0$, $3$, $-3$, $4$, $-4$ or $-5$.
2: a – double scalar
3: b – double scalar
The quantities $a$ and $b$ as described in the appropriate sub-section of Section [Description].
Constraints:
• Rational Gauss: ${\mathbf{a}}+{\mathbf{b}}\ne 0.0$;
• Gauss–Laguerre: ${\mathbf{b}}\ne 0.0$;
• Gauss–Hermite: ${\mathbf{b}}>0$.
4: n – int64/int32/nag_int scalar
$n$, the number of weights and abscissae to be returned.
Constraint: ${\mathbf{n}}=1$, $2$, $3$, $4$, $5$, $6$, $8$, $10$, $12$, $14$, $16$, $20$, $24$, $32$, $48$ or $64$.
Note: if ${\mathbf{n}}>0$ and is not a member of the above list, the maximum value of $n$ stored below ${\mathbf{n}}$ will be used, and all subsequent elements of abscis and weight will be returned as zero.
### Optional Input Parameters
None.
### Input Parameters Omitted from the MATLAB Interface
None.
### Output Parameters
1: weight(n) – double array
The n weights.
2: abscis(n) – double array
The n abscissae.
3: ifail – int64/int32/nag_int scalar
${\mathrm{ifail}}={\mathbf{0}}$ unless the function detects an error (see [Error Indicators and Warnings]).
## Error Indicators and Warnings
Errors or warnings detected by the function:
Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.
W ${\mathbf{ifail}}=1$
The n-point rule is not among those stored.
W ${\mathbf{ifail}}=2$
Underflow occurred in calculation of normal weights.
W ${\mathbf{ifail}}=3$
No nonzero weights were generated for the provided parameters.
${\mathbf{ifail}}=11$
Constraint: ${\mathbf{key}}=0$, $3$, $-3$, $4$, $-4$ or $-5$.
${\mathbf{ifail}}=12$
The value of a and/or b is invalid for the chosen key. Either:
• Constraint: $|{\mathbf{a}}+{\mathbf{b}}|>0.0$.
• Constraint: $|{\mathbf{b}}|>0.0$.
• Constraint: ${\mathbf{b}}>0.0$.
${\mathbf{ifail}}=14$
Constraint: ${\mathbf{n}}>0$.
## Accuracy
The weights and abscissae are stored for standard values of a and b to full machine accuracy.
Timing is negligible.
## Example
```function nag_quad_1d_gauss_wres_example
key = int64(-3);
a = 0;
b = 1;
n = int64(6);
[weight, abscis, ifail] = nag_quad_1d_gauss_wres(key, a, b, n)
```
```
weight =
0.5735
1.3693
2.2607
3.3505
4.8868
7.8490
abscis =
0.2228
1.1889
2.9927
5.7751
9.8375
15.9829
ifail =
0
```
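For key = -3 (adjusted Gauss–Laguerre weights, a = 0, b = 1), the output above can be cross-checked outside the toolbox. Below is a sketch using NumPy's Gauss–Laguerre rule, under the assumption that the adjusted weights are the normal weights multiplied by $e^{x_i}$, which is what the numbers above correspond to:

```python
import numpy as np

# 6-point Gauss-Laguerre rule: nodes x and "normal" weights w for
# integrals of the form int_0^inf exp(-x) f(x) dx.
x, w = np.polynomial.laguerre.laggauss(6)

# "Adjusted" weights fold the weight function exp(-x) into the weights,
# so that int_0^inf g(x) dx ~ sum(w_adj * g(x)).
w_adj = w * np.exp(x)

# x[0] and w_adj[0] should reproduce abscis(1) ~ 0.2228 and weight(1) ~ 0.5735.
```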
```function d01tb_example
key = int64(-3);
a = 0;
b = 1;
n = int64(6);
[weight, abscis, ifail] = d01tb(key, a, b, n)
function [fv, iflag, user] = f(x, nx, iflag, user)
fv = sin(x)./x.*log(10*(1-x));
```
© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013
|
|
# Infinite cubic grid of resistors
Can we find the net resistance between the body-diagonal points of an infinite cubic lattice?
The objective is to find the net resistance between A & B of the given cube, which is part of an infinite grid. Let the resistance between any two adjacent vertices be R.
Note by Pranjal Prashant
1 year, 6 months ago
Sort by:
Oh my. Staff · 1 year, 6 months ago
Yes, I know about the 2-d version for resistors, and the Fourier series that gives $2r/\pi$ across the square diagonal. [you know that Ishan :D] But this is quite different from those, much more difficult in getting the approach. You should share this so that someone reaches the final answer. · 1 year, 6 months ago
Here is the 2-d version of the problem. Staff · 1 year, 6 months ago
It must be R/3 · 11 months, 2 weeks ago
Solution ? · 11 months, 2 weeks ago
I asked for diagonally opposite points. Not adjacent ones · 10 months, 3 weeks ago
A simpler version would be to find the equivalent resistance between two diagonally opposite points in an infinite grid of resistors, which is still quite difficult. See the page I have linked: Infinite grid of resistors · 1 year, 6 months ago
Total resistance (across points A and B) = ( 5/6 ) * R
I can't post a picture of my solution here · 1 year, 6 months ago
Incorrect, This answer holds if there was a single cube of resistances rather than an infinite one. · 1 year, 6 months ago
but the infinite series of resistors just pushes the "single cube value" to an exact number, so my idea is that the effective resistance is near (5/6)R (like 0.85R or 0.9R max, but not less than (5/6)R). Please notify me if there is a mistake in my assumption. · 1 year, 6 months ago
But what significance does pushing values have? The question is not an MCQ; I want to know how it can be done, and I am sure that it is solvable {although the answer would perhaps not be beautiful}. · 1 year, 6 months ago
got it, but I don't know how to solve it the way you told, and yes, the answer sure will be weird · 1 year, 6 months ago
NO, IT IS NOT SO. In the Fourier series, the steps of integration are nasty, but the answer is simply $2r/\pi$ · 1 year, 6 months ago
thanks for the info I will surely try to get the steps for your answer. · 1 year, 6 months ago
I could not understand "push the single cube value to an exact number". Also think of the 2-D grid of infinite resistors, in which we have to find the equivalent resistance between adjacent points, which everyone knows to be R/2. What would your argument be in that case? Just because other resistances are present does not mean that the net would be greater than 5R/6, because resistances in parallel decrease the value · 1 year, 6 months ago
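For reference, the quantities being debated can be computed from the standard lattice Green's function for the simple cubic lattice: with resistance R on every edge, the two-point resistance is $R(l,m,n) = \frac{R}{(2\pi)^3}\int_{[-\pi,\pi]^3} \frac{1-\cos(lk_x+mk_y+nk_z)}{3-\cos k_x-\cos k_y-\cos k_z}\,d^3k$, which gives exactly R/3 between adjacent nodes and roughly 0.42 R across the body diagonal. A rough numerical sketch (midpoint quadrature, whose grid never lands on the removable singularity at k = 0):

```python
import numpy as np

def lattice_resistance(l, m, n, N=100):
    """Two-point resistance (in units of R) of the infinite simple cubic
    lattice, via midpoint quadrature of the Green's-function integral."""
    # Midpoint nodes on [-pi, pi]; none of them is exactly 0.
    k = -np.pi + (np.arange(N) + 0.5) * (2 * np.pi / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij", sparse=True)
    num = 1 - np.cos(l * kx + m * ky + n * kz)
    den = 3 - np.cos(kx) - np.cos(ky) - np.cos(kz)
    return (num / den).mean()

# Adjacent nodes (1,0,0): exactly R/3.  Body diagonal (1,1,1): about 0.42 R,
# so larger than the adjacent value but well below the single-cube 5R/6.
```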
|
|
# Kerodon
Go back to the page of Subsection 2.1.4.
Comment #164 by DnlGrgk on
In Definition 2.1.4.3, there seems to be a tilde above the morphism $\mu_{X,Y}$ that is probably not supposed to be there. Also, at the end of the definition, it should be 'morphisms' instead of 'isomorphisms': "In this case, we will refer to the isomorphisms $\{\mu_{X,Y}\}_{X,Y} \in \mathcal{C}$ as the tensor constraints of $F$."
Comment #165 by DnlGrgk on
In Definition 2.1.4.3, there seems to be a tilde above the morphism $\mu_{X,Y}$ that is probably not supposed to be there. Also, at the end of the definition, it should be 'morphisms' instead of 'isomorphisms': "In this case, we will refer to the isomorphisms $\{\mu_{X,Y}\}_{X,Y \in \mathcal{C}}$ as the tensor constraints of $F$."
Comment #168 by Kerodon on
Yep, thanks!
Comment #240 by Peng DU on
2 lines before Definition 2.1.4.3, “are suitable compatible with” should be "are suitably compatible with".
Comment #247 by Kerodon on
Yep, thanks!
There are also:
• 4 comment(s) on Chapter 2: Examples of $\infty$-Categories
• 3 comment(s) on Section 2.1: Monoidal Categories
|
|
# Weighted mean of complex exponential function using NIntegrate
I have defined the following functions:
γ[r_, v_, rDet_] :=
Which[
Abs[r - v tDet] >= rDet, 0,
r + v tDet <= rDet, π,
True, ArcCos[((v tDet)^2 + r^2 - rDet^2)/(2 v tDet r)]];
ρ[r_, v_, v0_] := Exp[-(1/2) (r/rCloud)^2] r Exp[-(1/2) (v/v0)^2] v;
ΔΦ[v_] := (2 π )/λ c (4 (v*t)^2)/(rBeam)^2;
I want to weight ΔΦ by ρ and γ. I do so by defining a function Awfc that numerically integrates the product of the three functions over v and r and divides by the numerical integral over the two weighting functions:
Awfc[rDet_?NumericQ, v0_?NumericQ] :=
  NIntegrate[γ[r, v, rDet]*ρ[r, v, v0]*Exp[I ΔΦ[v]], {r, 0, ∞}, {v, 0, ∞}]/
  NIntegrate[γ[r, v, rDet]*ρ[r, v, v0], {r, 0, ∞}, {v, 0, ∞}];
I calculate Awfc for some parameters rDet and v0
AwfcTable =
ParallelTable[
{rDet, Awfc[rDet, 0.5 v0], Awfc[rDet, v0], Awfc[rDet,2 v0], Awfc[rDet,3.7 v0]},
{rDet,0.0005,0.010,0.0005}];
using these values for the other parameters:
tDet = 0.7;
t = 0.230;
rCloud = 0.0025;
λ = 780*10^-9;
c = λ/20;
v0 = 0.00588;
rDet = 0.008/2;
rBeam = 0.015;
All kernels throw warnings:
NIntegrate::slwcon: Numerical integration converging too slowly; suspect one of the following: singularity, value of the integration is 0, highly oscillatory integrand, or WorkingPrecision too small.
I think that ΔΦ is a highly oscillating function, so I don't see a way to fix this problem.
I use TableForm[Abs[AwfcTable]] to get the amplitude of the complex numbers in AwfcTable. The problem with the output is that this amplitude is significantly larger than 1, especially for larger v0 and small rDet. The function Exp[I ΔΦ[v]] whose weighted mean I am calculating has a maximum amplitude of 1, so I think the weighted mean should also have a maximum of 1. (The complex phase is consistent with what I expect from a different approach to the problem I did using MATLAB.)
Is there something wrong with my reasoning (meaning that the result Mathematica gives is actually correct), or does the fault lie with my implementation of the problem?
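For what it's worth, the expectation is sound: a weighted mean of the unimodular function Exp[I ΔΦ[v]] with nonnegative weights can never have modulus above 1, by the triangle inequality ($|\sum_k w_k e^{i\phi_k}| \le \sum_k w_k$). A quick NumPy check with made-up weights and phases (the constants here are hypothetical, not the question's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.random(100_000)

w = np.exp(-0.5 * (v / 0.3) ** 2) * v   # nonnegative weight, like the rho factor
phase = np.exp(1j * 50.0 * v**2)        # unimodular factor, like Exp[I dPhi[v]]

mean = (w * phase).sum() / w.sum()      # |mean| <= 1 must hold
```

So any |Awfc| above 1 signals a numerical problem rather than a correct result.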
-
Observation:
First of all, I think it is always a useful trick to plot your problematic integrand if possible. It often gives a clue when NIntegrate complains about a particular integrand, and if we can track down the issue, we can often come up with a remedy. Sweeping over rDet with the following input shows how the integrand behaves. Your quantity of interest is the ratio of two integrals:
Awfc[rDet_?NumericQ, v0_?NumericQ] := NIntegrate[γ[r, v, rDet]*ρ[r, v, v0]*Exp[I ΔΦ[v]],
  {r, 0, ∞}, {v, 0, ∞}]/NIntegrate[γ[r, v, rDet]*ρ[r, v, v0], {r, 0, ∞}, {v, 0, ∞}];
Conclusion:
The numerator integral involves a complex function that rapidly becomes zero. The denominator is a real function that also decays quickly. Hence, for faster evaluation of your ParallelTable, we can NIntegrate over a shorter interval. But in this case we can also do the full integration by tuning up some options. One can also scale the integrals beforehand, as that will avoid numerical error to some extent.
AwfcInf[rDet_?NumericQ, v0_?NumericQ] :=NIntegrate[γ[r, v, rDet]*ρ[r, v, v0]*Exp[I ΔΦ[v]],
{r, 0, Infinity}, {v, 0,Infinity}, AccuracyGoal -> 12,PrecisionGoal -> 30]/
(Re@NIntegrate[γ[r, v, rDet]*ρ[r, v, v0], {r, 0,Infinity}, {v, 0, Infinity},
AccuracyGoal -> 12,PrecisionGoal -> 30])
Now you can call your table safely.
AwfcTable = ParallelTable[{rDet, AwfcInf[rDet, 0.5 v0], AwfcInf[rDet, v0],
AwfcInf[rDet, 2 v0],AwfcInf[rDet, 3.7 v0]},
{rDet, 0.0005,0.010, 0.0005}]; // AbsoluteTiming
{11.972305, Null}
And this is how the last four entries in AwfcTable vary as you sweep over rDet.
Going faster: Following the visual observation and the conclusion, we can try to decrease the integration interval considerably! We first check the numerator and the denominator integrals for convergence.
NIntegrate[γ[r, v, rDet]*ρ[r, v, v0]*
Exp[I ΔΦ[v]], {r, 0, Evaluate@#}, {v, 0,Evaluate@#}, AccuracyGoal -> 20,
PrecisionGoal -> 30] & /@ {.1, .25, 1, 10, 100, Infinity}
{1.98046*10^-10 + 1.57094*10^-12 I, 1.98046*10^-10 + 1.57094*10^-12 I, 1.98046*10^-10 + 1.57094*10^-12 I, 1.98046*10^-10 + 1.57094*10^-12 I, 1.98046*10^-10 + 1.57094*10^-12 I, 1.98046*10^-10 + 1.57094*10^-12 I}
The numerator converges even if we replace Infinity with $0.25$! Now do the same for the denominator integral.
Re@NIntegrate[γ[r, v, rDet]*ρ[r, v, v0], {r, 0,Evaluate@#},
{v, 0, Evaluate@#}, AccuracyGoal -> 20,
PrecisionGoal -> 30] & /@ {.1, .25, 1, 10, 100, Infinity}
{1.98058*10^-10, 1.98058*10^-10, 1.60127*10^-10, 1.60127*10^-10, 1.60127*10^-10, 1.60127*10^-10}
The denominator converges once we use an integration limit of $1$ in both dimensions in place of Infinity. Following this, we can define a new Awfc that will be much faster than your infinite integrals. Re is used in the denominator to get rid of the imaginary part that arises due to numerical error.
Clear[Awfc];
Awfc[rDet_?NumericQ, v0_?NumericQ] :=
Quiet@(NIntegrate[γ[r, v, rDet]*ρ[r, v, v0]*
Exp[I ΔΦ[v]], {r, 0, .25}, {v,0, .25},
AccuracyGoal -> 12,PrecisionGoal -> 30]/
(Re@NIntegrate[γ[r, v, rDet]*ρ[r, v, v0], {r, 0,1}, {v, 0, 1},
AccuracyGoal -> 12, PrecisionGoal -> 30]));
Testing how fast it is.
AwfcTableLess =
ParallelTable[{rDet, Awfc[rDet, 0.5 v0], Awfc[rDet, v0],
Awfc[rDet, 2 v0], Awfc[rDet, 3.7 v0]}, {rDet, 0.0005, 0.010,
0.0005}]; // AbsoluteTiming
{3.879663, Null}
Compare the results with above plot.
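The truncation above works because the Gaussian factor in ρ decays extremely fast: beyond a few multiples of v0 the weight contributes essentially nothing. A quick check of the tail mass in NumPy (v0 taken from the question; simple rectangle rule):

```python
import numpy as np

v0 = 0.00588
dv = 1e-6
v = np.arange(dv / 2, 0.25, dv)              # midpoints covering (0, 0.25)
rho_v = np.exp(-0.5 * (v / v0) ** 2) * v     # the v-dependent part of rho

full = rho_v.sum() * dv                      # ~ v0**2, the exact value on (0, inf)
trunc = rho_v[v <= 5 * v0].sum() * dv        # cut at 5 * v0 ~ 0.03

# The part beyond 5 * v0 is only a ~4e-6 relative correction.
```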
Hope this helps you!
-
Your answer does indeed speed things up. But speed isn't really the problem here: the settings for AccuracyGoal and PrecisionGoal do alter the Arg[] of the values in AwfcTable, and this value is also really important. Also, if you look at TableForm[Abs[AwfcInfTable]] you'll see that these values are greater than 1, but the weighted function Exp[I ΔΦ[v]] is smaller than or equal to 1. – frankundfrei Nov 26 '13 at 19:47
@frankundfrei good to hear some feedback from you! Will give a deeper look at the issue here. Give me some time ;) – PlatoManiac Nov 26 '13 at 19:54
I think I found the mistake I've made. I'm looking into it a bit further and will post an answer if it turns out it is actually working. Thanks anyway so far, I did learn from your answer! – frankundfrei Nov 26 '13 at 21:43
By the way I noticed just now that Exp[I \[CapitalDelta]\[CapitalPhi][v]] is a function of single variable and probably not absolutely integrable with respect to your 2D weight function γ[r, v, rDet]*ρ[r, v, v0]. – PlatoManiac Nov 26 '13 at 21:59
The problems were apparently caused by how the parameters were chosen.
In the problem at hand they represent physical quantities: parameters starting with r have dimension meter, those with t have dimension second, and those starting with v have meter/second.
A much more reasonable choice is to use millimeters and milliseconds, i.e. multiplying all variables starting with r or t by a factor of $1000$:
tDet = 700;
t = 230;
rCloud = 2.5;
c = λ/20;
v0 = 0.00588;
rDet = 8.0/2;
rBeam = 15;
This way, AwfcTable can be computed over {rDet, 0.5, 10, 0.5} instead of {rDet, 0.0005, 0.010, 0.0005}, thus both avoiding the NIntegrate::slwcon warnings and yielding reasonable results with Abs[AwfcTable] $\leq 1$:
AwfcTable = ParallelTable[
{rDet, Awfc[rDet, 0.5 v0], Awfc[rDet, v0], Awfc[rDet, 2 v0], Awfc[rDet, 3.7 v0]},
{rDet, 0.5, 10, 0.5}];
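The underlying reason the unit change is harmless: Awfc is a ratio of integrals, and substituting r → s·r (and likewise for t) rescales numerator and denominator by the same factor, which cancels. A small NumPy illustration with a stand-in integrand (the integrand and grid here are made up for the demonstration):

```python
import numpy as np

def ratio(scale):
    """Weighted mean of a stand-in integrand; 'scale' plays the role of a
    unit change, e.g. meters -> millimeters for scale = 1000."""
    r_cloud = 0.0025 * scale
    r = np.linspace(1e-6, 0.02, 50_000) * scale
    w = np.exp(-0.5 * (r / r_cloud) ** 2) * r    # weight, like the rho factor
    f = np.cos(r / r_cloud)                      # stand-in integrand
    return (w * f).sum() / w.sum()

# ratio(1.0) and ratio(1000.0) agree to machine precision; only the
# numerical conditioning of the intermediate quantities differs.
```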
-
Changing the units results in what I meant by scaling in my answer! Physics and math both prefer to be in agreement most of the time. – PlatoManiac Nov 27 '13 at 3:55
|
|
# Why the electric field $\vec{E}$ is constant (=position independent) for an infinite 2D sheet of constant charge?
So I'm reading a text on electricity and it talks about using the integral to compute the total charge of a collection of points, which I mostly understand. But then we get to finding the electric field due to a charged collection of points and I find things that don't make sense to me. For instance, for an infinite sheet of constant charge, the text says that the electric field is constant on any one side of the sheet. But that seems intuitively wrong to me, since I would think the field should be stronger the closer a point is to the sheet. I mean, if I'm standing 10 meters from a sheet, holding a charged particle, and walk closer to the sheet, I'd think the particle would react more strongly. I follow the mathematical derivation fairly well, which leads me to think I must not be thinking about the physics correctly. Can anyone help make this make sense?
-
You can get this result without Gauss's law, just on dimensional grounds. There's no way to put together any equation in terms of the charge density and the distance $r$ from the sheet that will have the right units and vary with $r$. – Ben Crowell Nov 17 '14 at 16:02
for an infinite sheet of constant charge, the text says that the electric field is constant on any one side of the sheet. But that seems intuitively wrong to me, since I would think the field should be stronger the closer a point is to the sheet.
There's a geometric scaling argument at hand, and you probably need to appreciate Gauss' Law to get a real sense of it. It follows the same thinking that the $1/r^2$ law does.
In this argument for $1/r^2$ field strength from a point particle, it is seen that the solid angle that "A" occupies decreases at larger radii. If you consider a charged ball, then think about squishing it from a 3D object into a 2D piece of paper. That represents the area-based charge density.
An unsaid assumption for this line of thinking is that the field strength is determined by:
• The solid angle occupied by charged material times
• The 2D charge density presented by that material
So extend this to an infinite sheet. No matter how close or far away it is, the sheet occupies exactly half of your field of vision. Furthermore, the surface charge density and the angles subtended by the sheet are also independent of your normal distance.
Illustratively, you could apply this via the image above: just remove the "A" and consider that this is an infinite sheet. As you increase the distance, you expand the area you sweep, but the solid angle times the charge density is invariant. Thus, the field is constant with distance.
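The same conclusion drops out of the textbook on-axis field of a uniformly charged disk of radius $a$: $E_z(z) = \frac{\sigma}{2\epsilon_0}\left(1 - z/\sqrt{z^2+a^2}\right)$. As $a \to \infty$ the $z$-dependence disappears, recovering the constant-field result; for a small disk the field does weaken with distance, matching the intuition in the question. A short numeric check (units chosen so that $\sigma/2\epsilon_0 = 1$; the sample radii and heights are arbitrary):

```python
import math

def E_z(z, a):
    """On-axis field of a uniformly charged disk of radius a,
    in units of sigma / (2 * epsilon_0)."""
    return 1.0 - z / math.sqrt(z * z + a * a)

# Huge sheet: doubling the distance barely changes the field.
# Small disk: the field clearly weakens with distance.
```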
-
I see, I'll go re-read Gauss's Law. But as I see it right now, the explanation seems to be: As you get close, sure, the field strength from any one point is stronger, but the component of that strength which is directing you toward/away from the sheet is diminishing because of the angle at which it "hits" you. The rest of the components just cancel with the forces coming from a symmetrically located other point on the sheet. Hrrrrrrn, very cool! – Addem Jul 11 '14 at 18:45
It definitely strains one's intuition. But think about this: if you are in the vicinity of an infinite sheet of charge, how can you determine whether you are close to it or far away? I can tell if I'm close to or far from a sphere by comparing my distance to the sphere with its radius. If I'm close, the sphere looks big. If I'm far away, it will look small, even like a point. The size of the sphere "sets the scale". Not so for an infinite sheet. No matter where you are, the sheet looks exactly the same. There is no scale. From this point of view one might say "what else could it be but constant" (and pointing perpendicular to the sheet).
-
But what about a point charge? The scaling argument doesn't work here, does it? – Christian 8 hours ago
Interestingly, as AlanSE mentioned above, this can be derived from Gauss's Law very intuitively if you consider that an infinitely large sheet (a plane) is identically the same as a sphere of infinite radius. Therefore, the flux of your test particle through this sphere (and vice versa) is the same regardless of its position within this sphere, which can be said to extend to all points in the space.
-
With a point in space and a plane running through the origin, there is a (shortest) distance to the plane. I don't see how we can similarly regard this as a sphere of infinite radius unless you mean some kind of analogue to higher dimensions. – Addem Jul 11 '14 at 21:16
|
|
/ Experiment-HEP arXiv:1201.4880
Search for a low mass Standard Model Higgs boson in the $\tau-\tau$ decay channel in $p\bar{p}$ collisions at $\sqrt{s}$ = 1.96 TeV
Pages: 8
Abstract: We report on a search for the standard model Higgs boson decaying into pairs of tau leptons in $p\bar{p}$ collisions produced by the Tevatron at $\sqrt{s}$ = 1.96 TeV. The analyzed data sample was recorded by the CDFII detector and corresponds to an integrated luminosity of 6.0 fb$^{-1}$. The search is performed in the final state with one tau decaying leptonically and the second one identified through its semi-hadronic decay. Since no significant excess is observed, a 95% credibility level upper limit on the production cross section times branching ratio to the tau-tau final state is set for hypothetical Higgs boson masses between 100 and 150 GeV/$c^2$. For a Higgs boson of 120 GeV/$c^2$ the observed (expected) limit is 14.6 (15.3) the predicted value.
Note: * Temporary entry *
|
|
# DuneMUD
### Site Tools
meeting_log_20141203
Mreh says: Hi, everyone, thanks for being here.
Amastacia nods at Mreh.
Mreh says: So, as the wiki post said, I'm looking to discuss four main topics tonight.
Mreh says: They are: the dwindling playerbase, the serverside gagging project, the consideration of allowing botting with heavy restrictions, and the consideration of allowing multi-logging.
Mreh says: The third and fourth points sort of tie into one another, but we'll get there.
Mreh says: Each of these points is open to discussion, don't feel that I am rambling away for no reason.
Mreh says: So, dwindling playerbase. This is kind of a given, and really the reason for the other three points.
Mreh says: We've got few players (you are all awesome), and I don't want people leaving, because having people here is one of the major ways that new people come and stay.
Mreh says: Hector, for example, is pretty freaking new, and he's stuck around and is level almost 40 now.
Josifa asks: have there been peaks and valleys in player base or one long decline?
Amastacia says: kind of one long decline.
Hector says: yeah, got my brother playing also (Rhaeger). He's really liking the MUD also.
Amastacia cheers enthusiastically.
Trystan goes “Woot!” excitedly.
Mreh says: Rhaegar doesn't want to hang out with us tonight.
Amastacia says: i may stray at times, but i always come back.
Cens says: He should've, clearly.
Kain says: maybe he does now.
Mreh says: We've had 12 characters created in the last two weeks.
Toadslime says: that seems pretty decent honestly.
Mreh says: Not many have stayed, one was Salrylia who was a wiz here a long time ago that I re-wizzed because that's how I roll.
Josifa says: i've been gone for a few years too and find my way back.
Kain says: I think the serverside gagging will help those who want to play from mobile devices.
Mreh says: We only had one person suicide in that time, which is good, but I don't know how many of these new people (without checking) are Rhaegar's multiple alts. :P.
Amastacia grins evilly.
Josifa says: allowing alts keeps interest up.
Mreh says: Anyhow, the player decline was sort of inevitable. Games happened that had flashy graphics, and, well, text can't really compare to flashy graphics.
Mreh nods at Josifa.
Mreh says: So, there are three types of players.
Cens says: Not all can be counted into that though, some of it can be explained how the mud is ran, it's rules and policies.
Kain says: graphics are fine but can't compare.
Amastacia agrees with Kain.
Mreh says: Oh, certainly, Cens.
Cens says: I can't say the mud has always been very welcoming in the past years.
Mreh says: I can't say I've made the best decisions in my tenure, but I like to think that I have generally made good ones.
Mreh says: And I can't speak for the head admins that came before me.
Amastacia says: oh lord, back in the day, i remember people being deleted by Sauron for multiple reasons.
Cens says: That for example.
Kerowyn nods.
Toadslime says: thats been 15 years ago though.
Kerowyn says: yep.
Mreh says: People hold grudges against places because of people though.
Amastacia says: Mreh, you've been the best leader this mud has seen.
Amastacia says: not trying to kiss ass, just being honest.
Mreh says: I appreciate that comment.
Toadslime says: i have to agree.
Kerowyn says: I have to agree.
Kain says: well ok I guess I will also.
Kain grins evilly.
Mreh says: Not looking for compliments, people. As much as I appreciate seeing them.
Kain says: yes I agree.
Amastacia lafs at Kain.
Trystan says: Guess i'll join the band wagon.
Mreh says: So, there are basically three types of players.
Mreh says: 1) The people that are here today, still playing the MUD.
Josifa says: i'm not being disagreeable, just missing gaps in history.
Amastacia grins evilly at Josifa.
Mreh says: 2) The people who left over the years for whatever reason.
Mreh says: 3) People who haven't heard of Dune / haven't played MUDs.
Mreh says: My main focus, in the short term, are groups 1 and 2, with the hope of drawing 3 by fact of simply being a great MUD.
Josifa exclaims: YES!
Kerowyn says: and the best “targets” among 3 might be the Mudders who haven't been on Dune.
Mreh says: I figure that the people who are here logging in regularly (for whatever definition of regularly you want to use) are going to keep coming back, unless something major changes.
Cens says: I think as we've all gotten older, time is of the essence.
Mreh says: Cens is trying to tell me to hurry the hell up. :P.
Josifa says: lol.
Kain says: my 12 year old keeps begging me to play.
Mreh says: The second group, people who have played here before, I am interested in trying to bring back, because they already have established characters, or miss the place, or whatever.
Mreh exclaims: Let him!
Kain says: his mother would kill me.
Mreh says: Pfft, you have transfer, you're pretty much invulnerable.
Kain laughs out loud.
Kain says: will keep thinking on it go on.
Mreh says: Group three will have to trickle in as they have been, and I have to hope that they stay.
Cens says: Also groups 1 / 2 will keep group 3 coming.
Mreh agrees with Cens.
Amastacia says: well, the only issue that i can see coming from multi log is the pk system.
Mreh says: Well, groups 1/2 will keep group 3 coming, but group 1 keeps people here.
Kain says: we need players here so they know the place is active.
Cens nogs at Kain.
Mreh asks: Hector, as the newest person in the room, why have you stuck around?
Amastacia remembers when who used to have 30+ people on it constantly.
Kain says: not many people want to play a game where they are the only one online.
Amastacia says: yeah, it's like running around your house nakkid, only the house is made of glass.
Kain pokes Hector evilly in the ribs.
Hector says: its pretty addicting. I like the level based system and the gaining of exp.
Cens says: The gexp bit always grabbed me.
Amastacia says: i like the gxp.
Mreh says: Personally, as a player, I liked the feel of the guild I was playing in.
Trystan says: I like the ansi boobs.
Josifa exclaims: hah!
Amastacia says: seperating levels and guild levels, makes it worth the work, unlike graphic games where they mash it all together.
Kerowyn says: I don't like the gexp rate, but love gexp … that's part of the grind.
Cens grins evilly.
Amastacia says: i want a set of huge titties from Hayt.
Kain says: yes I love the guild divisions.
Hector says: yeah, i like the guild. I'm reading the books and thats why I visited the MUD because I used to play a different MUD. Not anymore though.
Mreh says: I don't say this often, but when I played Fremen, it felt actually pleasant to play, but when I had a Harko player (yes, I briefly had a Harko), the guild felt like it was falling apart and I was going to be crushed by steel beams for walking around.
Trystan says: Harkos are pretty squishy.
Amastacia says: well, Harko is a bit strained, but Fremen has become that way too for new comers.
Mreh says: FS felt mechanical (even more mechanical than the idea of IMG), and BG was interesting in its own way.
Kerowyn says: yeah I have a few alts that died due to lack of guildmates.
Amastacia says: FS wants to make me cry.
Mreh says: Remember, of course, that I played when we had 20-30 people on every day.
Amastacia nods at Mreh.
Amastacia says: FS hasn't changed much though.
Trystan says: FS should be recoded and deleted twice.
Mreh says: I hit glevel 100 in FS in something like 12 hours using only seq and MUD stuff.
Josifa says: I dabbled in a few guilds as a noob, but stuck with this char a long time.
Mreh says: No triggers.
Hector says: plus i can be comical relief at times since ive died 12 times (around 1/5 the amount of deaths as Kain already) :).
Amastacia says: Harko did get upgraded, and so did Fremen.
Mreh says: It was incredibly rote and mechanical.
Mreh lafs at Hector.
Amastacia agrees with Mreh.
Mreh says: Kain has a lot of experience being here.
Amastacia says: yeah, Kain started out as a slot machine in a bar on Salusa…
Hector laughs.
Kain grins evilly.
Trystan says: then upgrade to a toaster.
Mreh says: Kids these days with their iPads and their Kindles…
Kerowyn says: and my Iphone.
Mreh says: Back in my day, it took an entire server farm of toasters just to run Windows 3.1 in black and white.
Trystan says: and note 4.
Kain says: yes but those devices may bring them here.
Amastacia can't say much, has 3 iphones and 1 ipad.
Cens says: I think we're getting a bit off track.
Mreh says: Which leads me actually very nicely to point two.
Mreh says: Actually.
Mreh says: So, one of the ways I hope to keep people here, and bring older people back on their mobile devices, is what I have called the serverside gagging project.
Mreh says: A small part of this is live MUDwide, and my attention has been focussed completely on the IMG recently, whenever I have the motivation to write code. Which, admittedly, isn't as often as I would like it to be.
Kain says: I love what has been done so far in IMG.
Mreh says: I'll let Kain describe the player experience of this change.
Cens says: BGW also have an excellent colour and gag system already in place.
Cens says: Don't know how optimized it is, but it works great.
Kain says: well it has made my game play from my phone much easier.
Amastacia says: half of my spam on my IMG has been cut down since Mreh started gagging certain functions.
Kain says: without it it was like 3 screens of spam everyround.
Amastacia nods at Kain.
Kerowyn asks: that's in brief or verybrief too?
Amastacia says: i use my ipad sometimes :).
Kain says: yes.
Amastacia says: verybrief.
Kerowyn nods.
Amastacia says: IMG in brief would be 10 pages.
Kain says: the guild gags have made it so i can still chat and fight on my phone.
Kerowyn says: very nice.
Amastacia says: which is saying a LOT.
Kain nods evilly.
Kain says: yes before I would miss tells all the time.
Mreh says: So, I've implemented this in combat.c (the code that is used for fighting, that handles MUD hits, block, dodge, and roll), and in headbutt. Because I have a soft spot for that skill.
Trystan says: yea he would, would take hours for a response…
Mreh says: Using the information in 'help gag', you can gag or allow classes of messages.
Josifa says: sorry. i actually have no idea how to turn stuff off.
Mreh says: The removing/wearing of your exo? That's automatic, don't worry about it.
Amastacia asks: sorry for what Josifa?
Amastacia finally gets the idea into his thick skull and gives an 'ah' of comprehension.
Mreh says: Also, guild soul removing/wearing doesn't end up in the log.
Mreh says: So, that's a good thing.
Kain says: who.
Mreh says: Anyhow, by ignoring and accepting different classes of message, you can tailor what you see.
Kain finally gets the idea into his thick skull and gives an 'ah' of comprehension.
Mreh says: Let's take the IMG skill 'reinforce' as an example.
Amastacia fucking fears.
Mreh says: You know, just off the top of my head.
Kerowyn says: I love it. Not using fully due to my own hit, miss, and counter messages in HM.
Mreh says: Not … that I implemented its gags recently or anything.
Cens says: Lightning shard + reinforce.
Mreh says: So, when you reinforce an armour, there's a message that says, “You did it!” and several messages that say, “You failed to do it for some reason!”.
Mreh says: The 'You did it!' message has the following classifications:.
Mreh says: reinforce, and success.
Kerowyn hrms.
Mreh says: All of the 'You failed to do it for some reason!' messages have the classes: reinforce, and failure.
Amastacia says: honestly, i've never had reinforce fail, unless it's an item that can't be reinforced.
Josifa says: agree.
Mreh says: An item you can't reinforce, or you don't have enough guildstuff, or you can't reinforce that type, or it's already reinforced, or you didn't pick a thing to reinforce, or you don't have the thing you're trying to reinforce, or whatever.
Mreh says: There are lots of ways it can “fail”, but it won't fail in the same way that you can miss an attack.
Mreh says: When you hit an opponent with a reinforced weapon, that has the following message classes: combat, my, damage, enemy damage, success, reinforce.
Mreh says: And the opponent receives a hit message with the following classes: combat, my, damage, my damage, success, reinforce.
Mreh asks: So, what does all of this jargon mean?
Kain asks: you can see as much as you want?
Mreh says: If you don't want to see reinforce messages, you can add 'reinforce' to your ignored messages.
Kerowyn says: or as little, thankfully.
Mreh says: But, if you want to see the error messages, you add 'failure' to your allowed messages.
Mreh says: Because whitelist trumps blacklist, you see the failed reinforce messages, but not any of the other ones.
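A minimal sketch of the whitelist-over-blacklist rule Mreh describes here. The function shape and class names are assumptions for illustration, not Dune's actual code: a message carries a set of classes, and any class on the player's allowed list makes it visible regardless of the ignore list.

```python
def is_visible(message_classes, allowed, ignored):
    """Decide whether a classified message is shown to the player.

    Whitelist trumps blacklist: if any class on the message is in the
    player's allowed set, the message is shown even if another of its
    classes is ignored.  Otherwise any ignored class suppresses it.
    """
    classes = set(message_classes)
    if classes & allowed:
        return True
    if classes & ignored:
        return False
    return True

# The reinforce example from the discussion: gag 'reinforce', allow 'failure'.
allowed = {"failure"}
ignored = {"reinforce"}
print(is_visible({"reinforce", "success"}, allowed, ignored))  # False: gagged
print(is_visible({"reinforce", "failure"}, allowed, ignored))  # True: whitelisted
```

With this ordering, the failed-reinforce messages survive the gag while the routine success and combat spam does not, which is exactly the trigger-friendly behaviour described above.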
Kain says: so you can still have triggers run to reapply it.
Mreh agrees.
Mreh says: I also added a classification to 'help gag' recently, 'expired', which is, “An effect you applied ran out.”.
Mreh says: Which is also helpful, because you can gag all reinforce messages, but allow 'expired', and use your trigger to re-reinforce your stuff.
Mreh says: The system is evolving, I'm not likely to remove message classes, and as I've previously said, when I am done with the IMG messages, I will move on to the HMs.
Kerowyn cheers enthusiastically.
Mreh says: The end result here is a more playable experience however you choose to connect, with the largest impact being low bandwidth connections and small screens.
Mreh says: That is, mobile players.
Kain agrees evilly.
Josifa says: nice.
Mreh says: As was noted before, we're all growing older, which means not being in high school anymore, or having jobs and families, and not being able to sit in front of a computer for hours at a time to just play Dune.
Kain sulks evilly.
Kerowyn asks: would faster progression, reduced death penalty, and reduced pk death penalty help with playerbase?
Josifa exclaims: yes!
Mreh says: So, if you're able to play Dune from your phone while you're in transit, you might consider doing that over level 228 of Candy Crush Saga.
Kain says: progression seems fast enough at first to me.
Toadslime says: progression is ridiculously fast now compared to long ago.
Cens nogs.
Mreh says: I think faster progression is a good idea on paper, but then a bad idea to try to implement.
Kerowyn says: fair enough. and it WAS slower.
Mreh says: I'm sure everyone here has heard of a small game called World of Warcraft.
Cens says: the level death penalty really never bothered me. In some guilds the gexp death penalty isn't much but for some guilds it's 24 hours of play. There's a big difference between losing 2-3 hours than 20 hours.
Mreh says: Yeah, it is.
Amastacia says: heh, sounds like an old fart here…. “ back in the day on Dune it would take 3 weeks and no deaths to get level 20”.
Cens says: That “should” get fixed.
Mreh says: Cens, I agree with you. Death penalties being harsh should be toned down and reined in.
Kerowyn asks: so tlx and atreid?
Kain says: with gholas I didn't think anyone even went down anymore.
Cens says: it can be directly correlated from the gexp rates.
Mreh says: I don't think we need to advance faster (as I will discuss in a moment), but I agree that harsh death penalties should be curbed, if not stopped completely.
Mreh says: Gholas don't necessarily stop you from losing guild stuff for dying.
Hector says: I like the idea of Gholas but they are dirty expensive.
Mreh says: They will stop you from losing the level and stats, but not the gexp or skills.
Cens nogs at Hector.
Kerowyn says: I'd be a lot more willing to pk and horse around (have fun) if I didn't have to work for hours when I mess up and die.
Mreh says: You're a machine, Hector, you don't get gholas to begin with.
Kain says: being a machine I would not know these things :0.
Amastacia lafs.
Hector boggles at the concept.
Mreh says: Very fair points.
Hector smiles happily.
Amastacia says: IMG is the only guild that can't use gholas.
Mreh says: So, people used to say of WoW, “The game begins at 60.” (Where 60 was the maximum level.).
Cens says: You need to be killing in an area that you get money from to be able to buy gholas constantly. This is a problem when you want to kill in an area that doesn't drop anything.
Hector says: interesting. I didnt know that.
Cens says: It's a big problem, you can't play as you would like.
Hector says: well now i dont feel bad i couldnt afford them.
Mreh says: And then the level cap moved and people said, “The game begins at X,” where X is the new max level.
Kerowyn says: yeah, same is true of GW2.
Mreh says: GW2 was interesting, but nothing really happened when I hit max level.
Kerowyn says: just pvp.
Cens says: When the game starts in Dune is dependent on what you do, quite a lot.
Mreh says: Except that I could suddenly craft a lot better because I did 73-80 at the Leatherworking table.
Kerowyn nods.
Mreh says: That idea eventually got me thinking though that the people saying that were right for the wrong reasons.
Mreh says: World of Warcraft is one of those experiences that people liken to crack, and for good reason.
Cens says: Well, for GW2 you only needed to be max for the world pvp. Not structured PVP.
Josifa says: I'm not into pk or dangerous mobs but i love a good quest.
Cens says: But again, getting side-tracked.
Mreh says: If you're in WoW to run the end-game content, then yes, the “game” begins at 90.
Mreh says: Or, 100? They just released an expac or something, I don't know anymore.
Mreh says: But, if you look at the game as a structural thing, level 1-89 is /the tutorial/.
Mreh says: How does this rambling relate to making progression faster? Simple.
Mreh says: Let's say I cut experience point costs across the board by 50%.
Mreh says: You now level twice as fast and gain stats twice as fast.
Amastacia eeps.
Hector boggles at the concept.
Kerowyn noggles.
Josifa says: hm.
Mreh says: Everyone who's been here for a while knows that it's not your player level that matters.
Hector says: me likey.
Amastacia says: but then there wouldn't be a point to gxp.
Cens nogs at Mreh.
Mreh says: It's your guild stuff.
Amastacia says: you'd level way too fast for your guild stuff to catch up.
Mreh says: ALL of your guild stuff is more important than any of your player stuff.
Mreh says: Exactly.
Josifa says: I am now Queen of IMG.
Amastacia grins evilly at Josifa.
Mreh says: Suddenly, in order to progress, you have to murder things that you can't really touch, so that you can gain the points you need to become worse at fighting stuff.
Mreh says: Lowering the curve on gexp only furthers the problem.
Amastacia says: yeah, Sards can run into that problem when they first join.
Mreh says: WoW has a prestige class, the Death Knight.
Amastacia goes 'Blah'.
Mreh says: I like to think I'm a reasonably clever guy.
Mreh says: Sometimes I even demonstrate that quality.
Cens remembers Fremen and broken weapons. Was claimed as exploiting.
Amastacia lafs.
Amastacia says: i remember that.
Mreh says: Yeah, 0 damage weapons to gexp forever… That needed fixing.
Kerowyn says: I hear you Mreh and know where you are headed but man, is the slog long on gexp over about level 75.
Mreh says: Benedict and I started Death Knights together on the same day, and we both found the same fundamental flaw with that class.
Amastacia says: couldn't unwield it even though it's borken, was always told to relog if that happend, but Boss was the coder for Fremen at that time.
Mreh shudders uncontrollably.
Kain listens evilly.
Mreh's eyes get a far-away look in them as A Stab-Me-Boss doll appears in his hands.
Mreh stabs his Stab-Me-Boss doll repeatedly in the face with a large grin on his face.
Mreh says: Aaanyhow…
Amastacia grins evilly.
Mreh says: The Death Knight starts at level 55 or something.
Mreh says: And the problem with it is the information overload that comes with starting at level 55 and being given a fast track to level 60.
Amastacia nods.
Mreh says: “Here are ten abilities. Learn them and use them all. Now you're level 56, here are three new abilities, and a skill point, and go go go!”.
Amastacia says: unless you are accustomed to the game, starting that high creates a bad learning curve.
Mreh says: There are people who can do that. But it is not intuitive, and it's an informational overload, and that is what happens if you lower the gexp curve.
Mreh says: Now, I will admit that things get steep later on, and for no real reason.
Mreh says: Congratulations, you are now Fremen glevel 100. Here's 10 more water.
Kerowyn says: I think we'd all agree that a learning curve is a good thing - right, it is the slope of the curve.
Amastacia lafs at Mreh.
Toadslime says: or higher for some of us :p.
Mreh says: I wish I was joking.
Toadslime says: i wish you were too.
Amastacia says: i know you're not.
Mreh says: That's a design flaw that I admit I made a mistake with.
Kain says: yeah yeah I want something new….
Toadslime says: i make the exact same gxp rate i did 40 glvls ago too.
Mreh says: No, I'm not saying I designed the guild, or anything like that.
Cens says: Toadslime, can't say I know but maybe you're doing something wrong? ;).
Cens ducks down and attempts to hide.
Amastacia fucking lafs.
Amastacia says: i highly doubt that.
Mreh says: I did, however, make a big rookie mistake when I caved to player pressure (because I was a Fremen at heart and I still am) when they said, “Welp, I've reached the glevel cap. Make me more glevels.”.
Mreh says: I said, “Okay, sure, fine, here you go. 50 more glevels.”.
Toadslime exclaims: and you did anyway!
Kain says: I knew it fremen are op.
Mreh says: I was afraid you'd quit with nothing else to strive for.
Kerowyn laughs out loud.
Toadslime says: nah just switched guilds because it takes a couple months to get a pointless glvl lol.
Mreh says: I wish I had a solution to that problem.
Toadslime says: no biggie, lots of other guilds to play around with.
Mreh says: Maybe it would be worth giving every guild a point where a message pops up and says, “Congratulations, you've reached the end of the guild. You can keep going, but it's just going to make the gexp curve steeper and you don't gain any real benefit. Have you tried some of our other fine guilds?”.
Cens says: That's sometimes, usually solved by pk and pkpoints.
Cens says: again, something that I would like to see get fixed.
Cens says: Not all guilds get pk stuff.
Mreh says: Or maybe there should be something for players that just continue to grind out gexp. I don't have that answer.
Mreh says: No, not all guilds get PK stuff.
Mreh says: Tleilaxu are the most notorious for that, but they have nobody to PK.
Mreh says: Which is a different problem entirely.
Kain thinks evilly.
Toadslime says: id continue to grind it out on fremen if there was ANYTHING to grind for.
Hector asks: this might sound dumb. But what about a prestige system like the Call of Duty games where you gut a special title/badge/weapon/token and then start over??
Mreh says: The idea of ascension has been passed around from time to time.
Hector says: so when you reach the end, you start over, but get a little something for the effort.
Kerowyn says: other muds have done that. Wheel of Time has a remort.
Mreh says: Since the last time it was brought up (by Hayt, talking about some other MUD he played on), I was just starting to play Kingdom of Loathing.
Amastacia says: fear the Wheels in this game.
Mreh says: You don't know the half of it, Amastacia.
Mreh says: So, back then, I didn't have much experience with ascension/remort/whatever.
Amastacia says: all i know is, every time i see a Wheel spin, i want to type quit, heh.
Kain says: i love wheel spins.
Mreh says: Since then, I've played a lot of Kingdom of Loathing and a few different Shin Megami Tensei games that handle that sort of thing.
Mreh asks: You all know that I coded the slot machines solely to not have to answer “spin the wheel!” requests, right?
Amastacia grins evilly at Mreh.
Mreh says: Also, I coded the Wheel of Communism because I was told I was too low wizlevel to spin the Wheel of (Mis)Fortune.
Cens taps his foot impatiently.
Mreh says: So, remorts.
Amastacia grins evilly at Cens.
Mreh says: I've played a lot of different games that involve remort/new game plus/ascension, and I still have no idea what I would do for Dune.
Amastacia exclaims: Sandworm farm!
Kerowyn says: I can't say it would work, with the guilds all so different.
Kain says: the expanded levels you added are nice.
Risto says: Aight, here to join whatever is going on.
Mreh says: Maybe (as an idea, not something I'm going to do tonight), keep track of the number of times you reach guild level <endpoint> in a guild, and then you get to take a skill with you into your new life.
Amastacia says: well, most weapons work for all guilds, so unless it's something specific to the guild, you can add some type of honor/badge/etc system that anyone can buy from.
Mreh says: “Congratulations, you've reached the pinnacle of Bene Gesserit Warrior experience. You have been awarded one point, and may 'ascend' at any time.”.
Kerowyn asks: except FS could never do that, and some guilds would take half the time of others, right?
Mreh says: If I set an arbitrary endpoint for the guild, FS could.
Amastacia says: hmm, not exactly, IMG is pretty much infinite on their glvls.
Mreh says: FS have “infinite” guild levels because Stryder wrote a formula that is based on the guild level, maxxing the cost at 2B.
Amastacia says: well, FS and IMG i think are the only ones that have unlimited glvls.
Mreh says: FS is still capped at guild level 2B.
Mreh says: And IMG at ilevel 2B.
Amastacia says: 2B, Kassandra was the biggest FS and i doubt they even made it past 1500 in glvls.
Mreh goes 'Etc'.
Kerowyn nods.
Cens says: Kassandra was way past glvl2000.
Mreh says: So, ascension is an idea, but one that requires some thought as to execution.
Risto says: I quite like the ascension idea, what mostly got me hooked me to Dune was how brilliantly you could grind here.
Cens says: All these sorts of things require a lot of effort, but I don't think these are the problems that needs to be solved today.
Amastacia nods at Risto.
Amastacia licks Risto.
Mreh asks: Well, Risto, have you seen level 501?
Baldur arrives in a puff of smoke.
Baldur bows.
Kain says: who.
Amastacia says: welcome to the conversation Risto :).
Kain lafs evilly.
Amastacia fucking fears.
Risto says: Not really, but I've seen almost 5y of online age with different characters :D.
Amastacia licks Baldur.
Kain says: Level 533 [0\5].
Risto says: Have a decent/massive size character in every guild, yup, really enjoyed the grind here.
Amastacia asks: you have to get 5 levels to get 1 level right?
Kain says: at this point.
Amastacia nods.
Mreh says: Cap x 5 for one level, and the x # grows as you increase in level.
Kain says: it was 3 then 4 then 5.
Risto asks: Do you like it ?
Kain asks: level 999 is what 49 sublevels?
Mreh says: Something like that, yes.
Risto says: I can imagine some pros and cons.
Amastacia fucking fears.
Kain says: yes I love it.
Risto nods.
Kerowyn says: welp kain is just OP.
Amastacia says: pro, only wizards could destroy you.
Amastacia says: con, you're a walking god.
Mreh says: You don't gain stat points for the sublevels either.
Kain says: not true I died a month ago or so.
Baldur grins evilly.
Cens says: But you didn't, for real.
Amastacia ducks down and attempts to hide.
Cens says: You just recalled a marker :(.
Kain says: no I didnt.
Mreh says: Funny story, Kain only learned about flux backward last month.
Kain lafs evilly.
Amastacia lafs.
Cens lafs.
Risto goes 'Heh'.
Nippar lafs.
Cens says: I guess that's what happens when there's no players playing then.
Amastacia grins evilly.
Mreh says: Anyhow, until we figure out a workable system for going backwards, we have one in place that makes it slower (but still rewarding) to go forwards.
Mreh says: Just not in guild.
Kain says: anyhow I don't want to be in a position where I can't die.
Kain says: no point then.
Amastacia says: 999, i dunno if they coded mobs that big yet, except for wurms.
Kain says: I actually want more fear not less.
Kain says: I want things Mreh hasn't coded for me yet.
Kain grins evilly.
Amastacia lafs.
Cens says: Still not the problem. Less than one percent of the players have reached those levels, or glevels for that matter.
Amastacia thinks carefully.
Mreh says: No, we haven't. Someone (cough… Zaknafein… cough…) Sorry, had something caught in my throat, told me he was interested in coding Very High Level mobs, and I've made it so that mobs can be set to level ~1500, so there's stuff /possible/ for that sort of content.
Kain peers at Trystan evilly.
Amastacia says: well, there used to be a lot of people at level/glvl cap, but since new stuff has been put in place, a lot of those people are no longer at their cap.
Kain says: some of them prolly don't even know they arent at cap anymore.
Amastacia lafs.
Amastacia says: true.
Mreh says: That thing Kain said.
Amastacia says: Shitlord.
Kerowyn says: and unless we go back to having parties roaming the mud, not likely to be taken on by anyone.
Mreh asks: Mind if I move the agenda forward?
Amastacia nods at Mreh.
Kain nods evilly.
Baldur says: So nodnod.
Kerowyn says: pls do.
Baldur nods and nods.
Mreh says: So, point three was the consideration of allowing botting.
Baldur grins evilly.
Mreh says: The last time we did one of these meetings …
Baldur says: This should be interesting….
Cens agrees.
Cens says: For those who don't know, botting = automated killing without input from the user.
Amastacia says: botting can be bad or good, depends on the person using it, Shadow isn't one of the best examples.
Mreh says: The last wiz meeting in this room was in 07.
Risto says: I would atleast like to see rekill triggers being legal.
Mreh says: Oh, no, sorry, July 30th, 2010.
Mreh says: Four years ago.
Amastacia agrees with Risto.
Hector asks: so that would be like setting a time ticker??
Mreh says: Was the last time I brought up the idea of allowing bots.
Nippar agrees with Risto.
Baldur says: Wow.
Kerowyn nods.
Mreh says: And people were violently against it, which is why I didn't do anything about it.
Amastacia says: no Hector, it's to kill the next mob in the room without input from the player, rekill trigger.
Mreh says: But I want to bring it up again because times change, people change, ideas change.
Mreh nods at Amastacia.
Risto says: Well, to be honest I've written a few bots myself.
Amastacia grins evilly at Risto.
Risto says: It was fun as hell.
Mreh says: So, here's the thing.
Toadslime says: might as well allow it, people do it anyway - players wont leave for getting caught if its legal and we wont feel cheated by getting killed/passed up by someone botting - we've lost tons of players from them getting caught and leaving.
Cens says: Hector, that's in the simplest forms. In its most complicated forms you would start botting, when inventory full you would go and sell things in shop, when you have the right amount of gexp, you would spend gexp, rinse and repeat.
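The simplest form Amastacia and Cens describe, a rekill trigger, can be sketched as client-side logic. The kill-message pattern and the `kill` command here are placeholders, not Dune's actual output:

```python
import re

# Assumed death-message pattern; a real client would match the MUD's text.
KILL_PATTERN = re.compile(r"is DEAD")

def on_line(line, room_mobs, send):
    """Rekill trigger: when a kill message arrives, attack the next mob
    in the room with no input from the player."""
    if KILL_PATTERN.search(line) and room_mobs:
        send("kill " + room_mobs.pop(0))

commands = []
mobs = ["soldier", "guard"]
on_line("The Harkonnen trooper is DEAD!", mobs, commands.append)
print(commands)  # ['kill soldier']
```

The more complicated forms Cens mentions just chain more triggers onto the same loop: inventory-full fires a sell routine, a gexp threshold fires a spend routine, and so on.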
Nippar says: long time ago, when i used to lvl, i always botted…
Nippar ducks down and attempts to hide.
Amastacia says: i've never given botting a chance cause i never wanted Mreh to slash my glvl or iban me for a while.
Mreh says: I'm pretty sure that I've been the hardest on bots as far as admins go.
Nippar says: its strange playing now :D.
Amastacia nods at Mreh.
Mreh says: I was pretty good at finding them, making them dance around, and then getting rid of them.
Cens says: I used to bot when rekill in triggers was allowed, was caught once and haven't done it since.
Mreh says: “help botting” actually includes a paragraph I wrote titled “The Cens Clause”.
Kain says: ok Mreh we have them start the banning this meeting was a sham to get you all to…..
Mreh grins evilly at Cens.
Amastacia says: once Mreh said no rekill triggers, mine got deleted.
Hector says: ahhh, gotcha.
Baldur agrees hard with Mreh.
Mreh gives Kain a high five.
Mreh says: So, you all know I'm not a fan of bots.
Mreh says: Historically speaking.
Risto nods.
Kerowyn nods.
Amastacia nods.
Mreh says: Except maybe Hector, who is new here.
Amastacia grins evilly.
Mreh says: So there's been some thought put into this.
Amastacia says: damn Hector, you lucky dog.
Toadslime gives Amastacia a high five.
Mreh says: 1) This is a proposal.
Risto says: I would like to point out that's it's a somewhat of a skill to make a (good) bot and adds some depth to the game that way.
Baldur agrees with Risto.
Cens says: Toadslime brings up a good point.
Mreh says: I am not just going to do any of it, I want to run it by the people actually playing here before doing anything, and if I get shouted down, I'll drop it for another four years.
Hector chuckles politely.
Cens says: I remember Mreh checking up on me invisible but me using “evaluate” command to know he was actually there.
Cens says: Good old times.
Mreh says: Actually, Toad knows just how mad I get at bots.
Amastacia sits up for 3 nights learning how to make a good bot.
Mreh says: So, here's the crux of the idea.
Baldur says: Mreh, you once caught a bot and made him open PK at Caladan AP for all to slaughter…fun times.
Amastacia says: lookup had more bots on it than actual bug abusers.
Mreh says: 1) Bots would be tagged as such.
Amastacia lafs at Baldur.
Kerowyn asks: tagged, as in the title?
Amastacia says: that must have been one hell of a party at Cal ap.
Mreh says: Tagged in code. Likely also in title.
Mreh says: Definitely in finger.
Kerowyn nods.
Mreh says: 2) Any player, for any reason, can attack a bot starting a PK fight, but bots cannot initiate fights with legitimate players.
Baldur goes '0|-| 5|-|1+', just like Hazard.
Cens lafs.
Mreh says: This is to prevent bots from monopolizing areas, preventing normal players from getting experience.
Risto says: That would be abused so much for pk rewards.
Cens says: Time to create a pk bot.
Baldur says: Now THAT adds depth.
Risto says: I know I would.
Mreh says: We all know it's been a problem in the past.
Kerowyn says: interesting. okay with me.
Toadslime says: no pk reward for killing bots.
Mreh says: Sure, I certainly can see people writing bots with the express purpose of being attacked.
Cens asks: Would normal pkranges still apply, hence the area that is being used?
Kain says: you can't pk your own bots.
Amastacia sees Kain becoming the bot hunter.
Mreh says: As Toad points out, because it could be abusable, no PK rewards from bots. It's easy enough to implement in the combat code.
Baldur says: Pretty brilliant actually, bring back PK and satisfy botters simultaneously.
Risto noddles solemnly.
Mreh says: Also, no politics shifts from killing bots.
Mreh says: 3) All bots would have to be registered to an actual player.
Amastacia says: that's actually a pretty good idea.
Risto says: Masterbot is now assuming direct control.
Mreh says: Because point four of this meeting is multi-logging, this is a very relevant point that ties into point four of this topic.
Amastacia says: fear the old players that might show back up to bot.
Risto says: I quite like that idea.
Kain says: You have swung opinion of bots Mreh.
Mreh says: 4) In order to have any number of bots online and active, you must have a legitimate player online and at least semi-active.
Amastacia says: so main player and bot to the side.
Mreh says: Yes.
Kerowyn says: I'd love to be able to multi-log and form a party with a slave support bot or two.
Mreh says: Or, as I expect will happen, main player to the side, bot to the side to the side, main window watching Netflix.
Kerowyn noggles.
Mreh says: Basically, I want it to be possible for a player to say, “Hey, Kerowyn, think you can move your bot out of Terro's for a bit? I'm trying to gexp here.”.
Baldur asks: What about players assisting their bots in PK? Off limits I presume?
Mreh says: And Kerowyn to be online to say, “Oh, sure, no problem, dude.”.
Mreh asks: How do you mean, Baldur?
Cens says: Baldur: the players wouldn't actually be PK.
Cens says: Only PK against the bot.
Mreh nods at Cens.
Mreh says: That thing Cens said.
Kain says: unless the player is pk.
Baldur says: Exactly.
Cens nogs.
Cens asks: Would normal PK rules still apply?
Kerowyn says: I'm getting hot to PK suddenly.
Mreh says: But, it's possible Baldur means, “What if I go PK, can I have my botswarm ALSO go PK and run around like a fucking mob curbstomping dudes in the face?”.
Baldur asks: What's to stop someone from going PK and assist their bot?
Toadslime says: can we limit the number of multi logs to 2 so we dont have 1 player and 8 bots.
Nippar says: if a bot is registered to you and gets attacked, shouldnt u be able to defend it? :P.
Mreh says: You can defend your bot by logging into its screen and doing something, Nippar.
Nippar grins evilly.
Amastacia lafs.
Kain says: or if the player attacking it is pk.
Nippar says: but its a bot! :P.
Amastacia says: can't use your beefy protector.
Mreh says: So, in the situation that Cens is referring to that I think Baldur is talking about.
Mreh says: Bot A is farming merrily.
Mreh says: Player B wants in on the zone, goes and attacks Bot A.
Mreh says: Bot A's player, player C, notices this and runs over to their bot to assist.
Mreh says: Here's the problem.
Baldur nodnods noddingly.
Mreh says: Player B was never set PK.
Baldur looks blank and says 'oh'.
Mreh says: So, player B can't be attacked by player C, because player B isn't PK.
Amastacia lafs.
Mreh says: The bot effectively has the free_kill flag set on it.
Baldur says: Bots are treated as normal mobs then, clever :).
Mreh says: So anyone, regardless of PK status, can attack them.
Amastacia noddles solemnly.
Mreh says: But, player B can still die to Bot A.
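The attack-permission rules from this scenario can be sketched as one predicate. The field names are assumptions; only combat initiation is modelled here (a bot fighting back after being attacked is retaliation, which is separate):

```python
def can_attack(attacker, target):
    """Proposed bot PK rules, as a sketch.

    - Bots are effectively free_kill: anyone may attack them.
    - Bots never initiate combat against legitimate players.
    - Between two legitimate players, both must be flagged PK.
    """
    if target.get("bot"):
        return True           # free_kill: any player or bot may attack a bot
    if attacker.get("bot"):
        return False          # bots cannot start fights with legit players
    return bool(attacker.get("pk") and target.get("pk"))

bot_a    = {"bot": True}
player_b = {"pk": False}
player_c = {"pk": False}

print(can_attack(player_b, bot_a))     # True: anyone can attack a bot
print(can_attack(player_c, player_b))  # False: B was never set PK
```

The second scenario below (player B is PK, player C goes PK to defend the bot) falls out of the last line: once both are flagged, the fight is legitimate.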
Baldur says: So there would have to be a very clear bot tag for all to see.
Cens asks: Could Kain go on a level1 bot killing rampage?
Mreh says: Sure he could.
Mreh says: Nothing preventing that.
Risto says: But what if, someone aoes the bot.
Mreh says: You might have to start paying Kain protection money.
Risto says: Like a sandstorm.
Kain grins evilly.
Baldur lafs.
Risto says: And the bot could potentially kill a much lower guy.
Kain says: I HATE BOTS.
Mreh says: If someone AOEs the bot, it's exactly the same as a PKer AOEing a room full of PKs.
Kain lafs evilly.
Amastacia says: you screwed up hitting sandstorm, laf.
Mreh says: Exactly.
Baldur fucking lafs.
Baldur says: I love this idea.
Cens says: Kain, you are a bot.
Kerowyn says: I love this.
Kain says: oh yeah.
Mreh says: You dun goofed, but in all fairness, the bot is probably still merrily killing the mob, so it doesn't notice.
Kain sulks evilly.
Risto says: Follow people on gp tower 1 and go ham :D.
Cens says: Defend your own dude.
Mreh says: Until it starts hunting you because you run away, and then you accidentally die when it's entering its search protocol.
Mreh grins evilly.
Cens says: Unless it's smart enough.
Cens grins evilly.
Amastacia says: this has high potential, i can see all of us going apeshit making new guys to create a botting/pk frenzy.
Mreh says: Now, here's a slightly different scenario with a different answer.
Mreh says: Bot A, Player B, Player C.
Mreh says: Same three characters.
Mreh says: Player B, in this case, IS PK.
Toadslime says: bots pking each other.
Mreh says: Player B walks over and attacks Bot A.
Amastacia gives Toadslime a high five.
Kerowyn noggles.
Mreh says: Player C notices this, goes PK, and attacks player B, defending their bot.
Kerowyn says: player C goes to town …
Mreh says: Completely legitimate.
Amastacia goes : Woo Woo.
Cens says: What if the bot is PK+.
Kerowyn says: I like it, but back to death penalty.
Mreh says: The bot is free kill, so it's PKable by anyone.
Risto says: Player B is pk.
Mreh says: Even another bot.
Amastacia says: then you get the rewards of pk if the bot is pk.
Kain asks: if the bot is pk can it attack pk players?
Mreh says: If the bot is PK Starstorm, I'm going to wonder why you're maining Ness and Lucas.
Nippar says: if it's killable by anyone, of course you would pk it.
Mreh says: That's still questionable territory, Amastacia, because it re-opens up the abuse concept.
Amastacia says: ah.
Cens nogs at Nippar.
Cens says: Some guilds get an advantage by going pk.
Amastacia asks: then registering it a bot will leave it as a bot unless we unregister it?
Cens says: So it wouldn't really matter if it's bot pk anyway.
Mreh says: But, in all cases, bots can attack each other (bots can be attacked by everyone), so there's certainly freedom to Battlebots your way through the MUD.
Mreh says: No.
Kain asks: this sounds like it should move forward yes?
Mreh says: No, Amastacia, no.
Kerowyn says: YES.
Mreh says: When you register a character as a bot, that is a permanent change that cannot be undone.
Amastacia goes 'Ok'.
Amastacia says: sounds fair.
Cens says: Only thing I don't like is the pkranges not applying.
Kain says: its a bot.
Mreh says: You can't suddenly decide, “Okay, I've finally got that glevel 80 Fremen I've always wanted, now to enjoy this freedom.”.
Kain says: there has to be restriction.
Mreh says: But, nothing is stopping you from logging in as that bot and turning off all the botting triggers.
Cens says: I would argue that constantly being pk is already a really big restriction.
Mreh says: It's still a player character.
Mreh says: Constantly being PK is a big restriction.
Mreh says: Maybe there could be some grace period.
Baldur shouts “BINGO!”.
Mreh says: Like, you're fine until level 20 (time to tune your bot).
Baldur says: Cooldown.
Cens says: That's smart.
Toadslime says: we'll see a ton of lvl 19 bots.
Mreh says: And you get an hour after dying (respawn timer/mercy invulnerability).
Cens asks: Maybe 15-30min cooldown before being able to kill the bot again?
Cens says: or one hour, if 15min is too short.
Mreh says: Which also lets you tune your bot.
Cens nogs.
Mreh says: “What went wrong? Oh, I see.”.
Kerowyn says: sounds cool.
Risto says: that would be quite sweet.
Cens says: The grace period is a good idea.
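The grace windows floated here (safe until level 20 to tune the bot, plus an hour of mercy invulnerability after dying) amount to a simple check. The threshold and window values are the assumed numbers from the discussion, not anything implemented:

```python
import time

GRACE_LEVEL = 20      # assumed: "fine until level 20 (time to tune your bot)"
RESPAWN_GRACE = 3600  # assumed: one hour after dying, in seconds

def bot_is_attackable(bot, now=None):
    """Is this bot currently outside both of its grace windows?"""
    now = time.time() if now is None else now
    if bot["level"] < GRACE_LEVEL:
        return False                        # still tuning the bot
    last = bot.get("last_death")
    if last is not None and now - last < RESPAWN_GRACE:
        return False                        # mercy invulnerability
    return True

print(bot_is_attackable({"level": 19}, now=0))                      # False
print(bot_is_attackable({"level": 40, "last_death": 0}, now=600))   # False
print(bot_is_attackable({"level": 40, "last_death": 0}, now=7200))  # True
```

Toadslime's "ton of lvl 19 bots" concern is the obvious failure mode of the first window, so the exact threshold would need tuning.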
Cens waves happily.
Amastacia goes : Woo Woo.
Amastacia cheers Hayt enthusiastically.
Kain nods evilly.
Risto lafs.
Hayt lafs.
Baldur fucking fears Hayt.
Cens grins evilly.
Mreh says: IG is repurposed into the Fremen recode I've been promising forever.
Hayt gives Mreh a high five.
Amastacia says: Hayt, can i have a set of titties? please.
Nippar lafs.
Risto burps.
Mreh says: So, it's a bit of code to write, but there seems to be a consensus of the people here that it's worth doing.
Baldur grins evilly at Risto.
Cens says: Hayt, we just want swag, it's not like we don't appreciate your opinion though.
Cens says: Mreh, I don't hear anyone absolutely hating the idea…
Amastacia says: i like the idea.
Baldur says: I think it's great.
Kerowyn asks: yes, though I'd like to hear more on multi-logging. Are you considering multilogging true players in non-bot status?
Toadslime says: im on the fence.
Kain says: I thought I would.
Mreh says: I'm going to post this discussion stuff on the wiki and leave it there for a few days, so that the people who didn't come can read it.
Cens says: I'm curious as well, Kerowyn.
Mreh says: And comment, before going forward.
Risto hiccups.
Amastacia says: shockingly, considering i can't exactly write a good bot, but it gives some people a chance to play with bots without getting in trouble.
Amastacia says: and it holds a high risk because of the pk.
Mreh says: That's my thought, Amastacia. One thing I've learned from being here forever and a day is that I love writing code more than I love playing the game.
Mreh says: I can understand other people sharing the sentiment, but not everyone wants to write areas and mechanics.
Cens says: I'm just gonna come and say it directly. I haven't played in two years. I've usually had like one month spurts when I have played. I would probably log daily with bots.
Mreh says: I figure that's why people like Bruce wrote bots.
Mreh nods at Cens.
Josifa says: this is all way over my head. that's ok though. still enjoy doing what I do.
Hayt says: i love writing areas… unfortunately my coding sucks ;).
Mreh says: Okay, so onto point four, allowing multi-logging.
Hector says: yeah, this is all greek to me beside basic tickers and triggers.
Kerowyn says: woohoo.
Mreh says: Your coding doesn't suck, Hayt. You have some incredible ideas.
Mreh says: You just love writing death traps and incomprehensible syntax.
Hayt grins evilly.
Kain agrees evilly.
Baldur lafs.
Cens says: Never figured outu spire..
Cens says: out*.
Kain says: I have spent the last few days in one of your areas.
Kerowyn says: friggin spire needs a road atlas.
Risto says: Yeah there's more wrong with your head than you skill to code Hayt :P.
Kerowyn smiles happily.
Mreh says: “Look at this weird organ you can squeeze to make this other room contract which does 5000000 damage to everything inside. Actually, that amount of damage seems low, someone might survive. Bump it to 99999999 and make it unblockable.”.
Risto says: your*.
Mreh says: The Spire isn't Hayt's code.
Cens says: atlas that keeps changing its pages.
Mreh says: The Spire is a Paradox creation.
Cens says: what am I thinking of.
Cens says: the mech place.
Kerowyn nods.
Cens nogs.
Mreh says: There's a weird bioorganic place on Tleilax that's incomprehensible Haytcode, and there's the Ra stations.
Hayt says: he's thinking of Ra or SoftTech area.
Mreh nogs.
Hayt exclaims: bio place is AWESOME!
Kain grins evilly.
Hayt laughs at Kain.
Baldur unsheaths his crysknife.
Hayt nods at Kain.
Cens lafs.
Mreh says: Okay, so point four.
Mreh says: Multi-logging, with restrictions.
Risto burps.
Kerowyn exclaims: multi-log! multi-log!
Hayt grins evilly.
Risto says: Absolutely against.
Mreh says: So, answering a previous question, yes the idea of allowing multiple character logins would be concurrent. You could have four legit players wandering around at the same time if you wanted to.
Josifa says: love.
Mreh says: Risto, explain/.
Hayt says: wow THAT is awesome.
Hayt nods at Mreh.
Hayt sends out a few hard nogs to Mreh.
Baldur hrms.
Risto says: Well, first of all, we would probably run out of areas.
Kerowyn says: I like it but do think there should be limits.
Amastacia lafs.
Toadslime says: I'm against multi logging as well, unless the extra char was your bot.
Kain lafs evilly.
Josifa says: the only problem would be partying up one'alts.
Amastacia says: yeah, areas would be over run.
Cens says: Wouldn't that be the same with bots though.
Cens says: It's no different.
Risto says: Bots you have a way to deal with.
Toadslime says: but i bot you can kill and get rid of it.
Cens says: oh I guess.
Mreh says: You're likely not going to be able to focus on four characters at the same time.
Baldur says: Yeah I was gonna ssay…
Amastacia says: 4 is past the limit.
Baldur says: That's a bit…intensive.
Toadslime says: well everyone could play 1 fremen - thats a once an hour task.
Kain says: I actually like the idea of mulit logs.
Amastacia says: 2 is hard enough.
Mreh says: I picked four out of the air.
Toadslime says: so that makes 2 pretty easy.
Mreh says: 2 is a good and reasonable number.
Josifa says: I just started a small tleilax alt. i could give him tos of \$.
Mreh says: The idea behind multi- was, of course, multi.
Amastacia says: you remember DoN, Toad? we played 2 at once on there.
Kerowyn says: I like that.
Kain says: you can already do that josifa.
Mreh says: This would also facilitate passing money and gear between characters.
Hayt says: so basically your bot can play with you, it's open kill, and youc oul d even party with it.
Cens asks: Kain, is it allowed though?
Josifa gasps in astonishment.
Kain says: yes it is.
Cens says: Or do I have old info.
Cens looks blank and says 'oh'.
Mreh says: Right now, in order to pass money between characters, you have to either use a middleman, or you find a place nobody knows about.
Risto says: With 2 alts logging, that would be reasonable.
Mreh says: Anyhow, like a bot, your alts have to be registered to you.
Kain says: like drop it at caladan ap.
Hayt says: drops are logged.
Kain says: its not illegal anymore.
Mreh says: Yes, this means that I'd effectively be drawing up a list of who is assigned to whom, but I've wanted this forever.
Josifa says: wowza.
Cens says: I've transferred so much money one solaris at a time. Never gotten caught :o.
Mreh says: Yes, it's legal to drop stuff and log out and have your alt pick it up.
Risto burps.
Hayt says: you could always tie in the logins mreh.
Mreh says: And, actually, because you have 15 minutes to log back in where you logged out, it's cool.
Hayt says: IE you couldn't log in a registered bot unless another of your legals was logged in.
Mreh says: Hayt, I spent about an hour yesterday coding an alt registration system.
Hayt says: not sure if that is applicable.
Mreh says: It's all ready to go.
Hayt says: ah coolio :).
Mreh says: I just want some accountability between the characters.
Kerowyn asks: so I'd list all my alts, and that'd be sufficient to know if I exceeded the multi cap?
Mreh says: You know firsthand that there's been two decades of, “Oh, it's my brother's next door neighbour's girlfriend's schoolmate's nephew logging into my WiFi.”.
Mreh says: There's no limit on how many multis you can have.
Cens asks: But this would be required only if you wanted to be logged in simultaneously with another character?
Mreh says: There would be a limit on simultaneous connections.
Mreh nods at Cens.
Cens asks: So you could leave A character out of the list?
Mreh says: Err.
Hayt says: this would open up some FUN ideas about multi questing and stuff :).
Mreh nods at Cens, shouts “BINGO!”, goes, “You hit the nail RIGHT on the head.” and goes 'Etc'.
Risto hiccups.
Mreh says: If you leave a character out of the list and I find out, I'm going to be upset and strict.
Hayt says: create an area that requires two people to do two simultaneous things to complete or require some intra character actions.
Kain says: more Hayt syntax.
Risto burps.
Mreh says: If you leave a bot out of the list and I find out, I'm going to be upset, strict, and in a delety kind of mood.
Kerowyn says: or just kill stuff, n stuff :).
Cens nogs at Mreh.
Mreh says: Benedict wanted to create such an area years ago, Hayt. He never got around to it because players don't like to cooperate.
Cens says: That should be a clear rule.
Cens says: direct deletion.
Mreh nods.
Amastacia says: Hayt syntax is the most evil thing ever, my brain dosen't function like his.
Mreh says: Nobody's brain functions like his.
Amastacia lafs.
Cens says: nobody hasnt logged in almost 6 years :<.
Mreh says: I think I agree with the consensus that two concurrent 'actual' characters is probably a good limit.
Amastacia nods.
Risto looks drunk.
Cens says: This would be kind of crazy to come to think of it.
Risto says: You could even make alt making alts.
Mreh says: With probably a similar limit on how many concurrent bots you can have logged in.
Josifa asks: I have a char whose password is unknown to me. can you kill her off?
Hayt asks: lol no 3 bot pk extravaganza?
Risto says: That would be helluva cool.
Mreh says: I could kill her off, or I could change the password for you.
Hayt says: 1 bot heals, the other spews movement killers, while the other drolls off damage.
Mreh says: I just need some way to know that it's actually your character.
Josifa exclaims: ha!
Kerowyn hrms.
Mreh says: Yes, you could do very interesting solo party stuff with four characters.
Amastacia gives Mreh a high five.
Cens asks: Josifa is my alt, could you delete the char?
Amastacia lafs at Cens.
Mreh says: And I hope that it would add an interesting dimension to the game.
Josifa says: hilairs.
Mreh flexes his desting fingers and points at Cens.
Cens says: I thought we were testing stuff.
Kain says: and botting.
Kain points at Cens and yells “he's a witch! Burn him!”.
Mreh says: So, here's how alt registration works as I've coded it so far.
Kerowyn says: shenanigans.
Mreh says: You use a command (right now it's register_alt, but that needs changing) with the name of the main character.
Mreh says: So, Ravni would “register_alt mreh”.
Risto seems to fall, but takes a step and recovers.
Mreh says: The next thing the command asks you for is the main character's password.
Mreh says: Using appropriate flags that don't log the command or let wizards snoop that information out of you.
Mreh says: If you enter the right password, it registers you as an alt of that person.
Josifa says: Mreh will be up all night playing our mains.
Mreh lafs hard.
Cens asks: and it will show up in finger on both characters?
Risto hiccups.
Amastacia says: someone get Risto come coffee, his drunken falls are spammy.
Mreh says: im in ur gild gettin ur gexp.
Cens says: “X's alt.” “X's main.”.
Kerowyn says: fine by me.
Mreh says: I haven't decided on that part yet, but the alt will clearly display, “X is an alternate character of Y.”.
Mreh says: “X is a bot controlled by Y.”.
Cens asks: On the mains finger or on the alts finger?
Mreh says: On the alt's finger.
Risto burps.
Risto says: I'm going to test that alt registeration in a bit.
Cens goes 'Ok'.
Mreh says: I have to make it live first, Risto.
Risto looks blank and says 'oh'.
Mreh says: And I have to add in daisy chain prevention.
Kerowyn says: hrm certain guilds will make for better alts than others …
Risto says: My bad, I though it was live.
Cens nogs at Kerowyn.
Mreh says: It's live in my working directory.
Cens says: Finally there will be some use for bene gesserit and bgw healing.
Amastacia sees an increase in Fremen chars.
Mreh agrees with Cens.
Hayt says: he's joking.
Mreh exclaims: Ocure will have a use!
Hayt bonks Toadslime with a rubber mallet.
Cens noglegs.
Kain giggles evilly.
Mreh says: So, I need to add the small amount of logic that prevents, “Ravni is an alt of Mreh, Widget is an alt of Ravni.”.
Toadslime says: I don't actually know but I feel like someone will somehow turn this into a fuckcest.
Mreh says: Simple enough to do, shouldn't take long.
Risto hiccups.
Kerowyn says: its a grand experiment, what could go wrong! lol.
Amastacia says: you know, i was thinking the same thing Toad, but with Mreh on the job, i wouldn't want to be the one sticking my head into the lions mouth.
Josifa liked fuckcest better.
Hayt says: kali is retired so no fuckfestability.
Mreh says: Fuckcest seems like something an underground swinger club might host.
Amastacia fucking lafs.
Mreh exclaims: CUM TO FUCKCEST 2014!
Kain says: with relation.
Mreh goes 'Etc'.
Hayt spanks Josifa on the butt.
Kerowyn says: went to one of those once.
Mreh says: Yeah, I also realize that is ending up in the meeting log.
Baldur lafs.
Josifa lafs.
Mreh says: Have fun, person from 2018 reading this.
Cens lafs.
Toadslime says: I'm just one person but I'll go on record as voting no for this stuff.
Baldur waves happily.
Josifa creates a BG named Fuckcest.
Risto says: This was the place where the shit went down.
Cens says: Hi 2018 person.
Baldur exclaims: You can't unread that!
Baldur asks: To be clear, is it max 2 simultanious connections?
Toadslime says: I don't know what I disagree with really but 2 bots and 2 player chars seems like its bound to end badly.
Hayt says: or a lot of fun.
Hayt says: it's a game.
Mreh asks: Are you worried that one of the two “real” players will also be a bot, and it be a horrible thing?
Cens asks: Toadslime, what's the other way, for all this work to be… unused?
Mreh says: There is no work yet, Cens.
Kain says: the whole mud is work.
Mreh says: This is all supposition that I could get working in about a week.
Cens says: There's 20 years of work.
Cens says: or 25 years.
Mreh says: Oh, that.
Toadslime says: if it were to bring lots of players back i would see value.
Kain says: and alot is unused.
Mreh says: The … everything.
Cens nogs.
Kain nods evilly.
Baldur lafs.
Toadslime says: but if its the same couple people here as normal…
Kain agrees evilly with Cens.
Risto seems to fall, but takes a step and recovers.
Cens says: Toadslime, that's the whole point of it.
Amastacia says: let the ones who want to abuse and lose everything have fun, otherwise the ones who want to play with a system that they've previously been denied, seems like a rather fun new idea.
Risto looks drunk.
Risto burps.
Kain says: and if you don't like bots go around killing them.
Mreh says: I fully expect that there will be people who test the limits.
Cens nogs at Kain.
Kain says: bounty on bots.
Risto says: And to test combos, what 2 guild combo best together.
Kerowyn says: yeah, I just want to log an alt and try that out. Simple enough.
Hayt says: i can see me coding weapons that do extra damage and affects to bots ;).
Risto says: For faster leveling / glvling.
Kain says: you surly have a alt char that can take most bots.
Cens says: Risto: that's going to be so much fun.
Mreh says: I'm not saying I know everything that will happen, or that can happen, but I do know I've been here a while and I've cracked down hard on bots in the past.
Amastacia says: if it's just us that are here playing with it, i doubt any of us are going to get into trouble, but it leaves the option open for new players also, not just us and the retired people.
Risto noddles solemnly at Cens.
Toadslime says: in the end there will be bots that cant be killed by players.
Mreh says: Someone reports seeing one, I track it down, figure out how to make it move, and ban.
Mreh says: In the end, there will be bots that can't be killed by players.
Cens says: Toadslime, that's when you call the Kain-line.
Amastacia grins evilly at Cens.
Mreh says: But those bots can't interact with the players in an aggressive sense, and can just clear areas.
Toadslime says: even kain can be botted past.
Amastacia says: that's ggoing to be a hefty Bounty to pay Kain.
Risto seems to fall, but takes a step and recovers.
Mreh says: And, it's entirely my expectation that the bots won't be progressing /as/ quickly as a regular player, because the regular player has a higher exploration XP bonus.
Hayt says: any plans on giving perks to players that kill a bot? ;).
Mreh says: No.
Mreh says: Absolutely not.
Toadslime says: those high level bots will be competing for the same FEW areas you have at high lvl.
Mreh says: Immediately abuseable.
Kain says: if you can mutli log you surly have 2 players that can kill most bots.
Risto says: We could give players bot banning items, items that would freeze them for like 30 mins or so, (with a long cooldown).
Mreh says: With higher level players, I, and whoever else I dredge up to code, will be working on areas.
Baldur lafs at Risto and agrees hard.
Amastacia cheers enthusiastically.
Risto says: Or a long cooldown to the same target.
Mreh says: One thing Rha said to me yesterday was that the MUD /needs/ new content.
Kain says: 5.
Amastacia says: that would be frikkin funny Risto.
Risto says: Rod of system shutdown.
Amastacia says: that would be like making a new AGW.
Mreh says: I'm resistant to adding new content because 1) I much prefer working on the inner dynamics and the global systems, and 2) there isn't a playerbase to make it worth my time to write areas instead of new dynamics.
Risto looks drunk.
Amastacia says: only useable on bots, no matter what guild.
Cens says: Time is of the essence.
Kerowyn says: I don't think most of the current content is fully utilized.
Mreh says: If there is a playerbase that warrants new zones, I'll be working on new zones because that's what the game needs.
Kerowyn says: agree.
Mreh says: And, I'm sorry to say it again and again, Kain, but one player does not warrant my full attention for a week/month.
Baldur says: Maybe some areas can be beefed, mob wise, to satisfy some bloodlust.
Kain says: yes with the explorer bonus I have been to places I have never been in the last few days.
Risto seems to fall, but takes a step and recovers.
Mreh says: If you're still a no, Toad, I get it.
Risto seems to fall, but takes a step and recovers.
Cens says: Everyone should try Super Sietch, it has an anti-botting prevention system that instakils you.
Cens says: +l.
Hayt says: i've had weapons on the mud for years that give bonuses based on your explorer rating lol.
Risto looks drunk.
Mreh says: If it's the number of logged in people that worries you, that's tweakable.
Amastacia says: Sharn.
Mreh says: If it's the abusability, I will shut that shit down.
Risto looks drunk.
Cens says: That's the point isn't it. If it works, we start to get players, it's working. If it just doesn't work it can be overturned just as easily.
Hayt says: and other weapons that look at other metrics.
Kain says: just more names listed on who makes the mud look better to someone new.
Josifa says: I'm not sure where I haven't been b/c I've never been there, if that makes sense.
Hayt says: i have a coouple weapons that give bonuses based on total number of pkills and kills that you have lol.
Cens nogs at Josifa.
Hayt says: softech area has some sick stuff in it.
Mreh says: And some dying stuff and some dead stuff.
Baldur grins evilly at Hayt.
Mreh says: Also some repulsive stuff.
Mreh says: Also some repulsing stuff.
Risto says: All with the little price of your sanity.
Risto grins evilly at Hayt.
Josifa says: and there's areas I've been to I don't understand so I'm probably missing something.
Mreh says: SofTech is a scary area with weird syntax.
Cens says: But if that's all I will bow out at 5:30 am.
Hayt says: i need to find some time and “ease” up some of my areas lol.
Mreh says: Josifa, the point of the explorer flags bonus is to encourage you to explore, not to force it.
Amastacia says: every one of Hayts areas are scary with his syntax, that's why it got dubbed “Hayt Syntax”.
Baldur says: I need to find the 4 spots I havent been yet…
Josifa says: where is that? i love exploring.
Mreh says: I've reached the end of the things I wanted to talk about.
Josifa says: I hate dying.
Mreh says: The meeting log is available to everyone, I think, and if not, I will make it available somewhere. (Probably the wiki.).
Kerowyn says: well, log me down as supportive of all of it and seconding the idea of hating dying and wishing the gexp gain rate didn't plateau so hard.
Josifa says: thanks for all your effort Mreh.
Mreh says: I'll throw up the main points of the meeting up there as well, and hopefully that will incite some discussion.
Cens says: I hope we reached some kind of conCencus and I look forward to it. Have a good one everyone.
Risto hiccups.
Risto waves happily at Cens.
Josifa says: goodnight.
Kain waves evilly at Cens.
Cens sleeps at the keyboard quietly and will be Away From the Keyboard for a while.
Amastacia says: i'll agree with it all, it'll be interesting to see how it all goes.
Risto looks drunk.
Amastacia waves happily at Cens.
Mreh says: “Webster@Tubmut tells you: Consensus”.
Mreh says: Tubmud, rather.
Mreh asks: Okay, who's leaving right now?
Mreh says: Cens for sure.
Josifa says: me.
Kerowyn says: I can hang for a while.
Hayt says: lol i have no where to go.
Risto says: I'll stay on yout side, now and forever.
Mreh says: Thank you all for coming. I think this was productive.
Risto sings to Mreh.
Mreh says: He always tracked down that lute.
Mreh grins evilly.
Hector says: thanks for the meeting Mreh. Very informative.
Kerowyn says: indeed.
Baldur agrees.
Mreh says: Thanks for coming, Hector.
Mreh says: I hope you comment on the wiki stuff.
Mreh says: If you need an account, send me a tell and I can set you up. Tell your brother too.
Toadslime asks: if you can log a bot and an character to actually play why do we need multilogging for actual player chars?
Baldur says: Good to see you chaps again :).
Hector says: i will definitely see if i can add something here or there.
Mreh says: That's a good question, Toadslime.
Amastacia says: you logged in at the right time Baldur.
Kerowyn says: Toad because I'm too stupid to code a good bot, so I'll use two alts with some triggers.
Mreh says: And there's an answer.
Kain says: because I don't want to bot.
Mreh says: People who can't take advantage of a written bot.
Toadslime says: you can still tag a bot and play it though.
Kain says: but would like to play anther char if I take 5 mins to kill a mob.
Baldur lafs at Risto.
Mreh says: I can see a restriction where you can either have a second real player logged in, or two bots.
Kain nods evilly.
Mreh says: That seems like something that might make sense, so that you aren't overpopulating the MUD.
Toadslime says: the idea of 2 player characters is what i hate.
Toadslime says: you could do the quest ideas and such as a player and bot flagged char.
Risto says: Yeah I was extremely strongly against it at first.
Kerowyn exclaims: what about multi-classing lol!
Mreh asks: Two players logged in at the same time, or two characters in a party fighting together?
Risto says: I come from the era of limited alts.
Kain agrees evilly with Risto.
Amastacia says: me too Risto.
Amastacia says: 2 chars max.
Risto looks drunk.
Kain says: Hector wants to go kill.
Amastacia says: suicide was a ritual back then.
Mreh says: Thanks, Kain.
Amastacia says: he was not, the original Toadslime was Fremen.
Risto says: Suicide was a ritual when drunk Finns gather.
Kain nods evilly.
Toadslime says: i just cant see a need to have 2 chars logged in and one of them not be bot flagged.
Mreh says: You spend several minutes not doing anything as Toadslime when you attack a mob.
Mreh asks: If you could have, say, your Harko alt also logged in, fighting somewhere else, would you consider it?
Mreh says: If there were no penalty to doing so.
Risto burps.
Toadslime says: sure but i could still do that id just have to log my harko as a bot.
Kain says: I would.
Mreh says: But then your harko could be attacked.
Risto says: Powerleveling is the name of the game.
Nippar says: i would probably make an alt…because fight do last long.
Risto burps.
Risto looks drunk.
Mreh says: This is progress that you've genuinely made, with your own triggers, sure, but not without you making the thing move.
Risto says: To me, that's the biggest hook on dune, now and forever.
Nippar says: i would certainly make one if rekill would be allowed tho :P.
Risto says: To do things slightly faster than everybody else.
Kerowyn says: hrm. I was thinking that my alt would be in the same room, supporting my main.
Mreh says: If you set the character to be a bot, it's with the intention that you aren't paying attention to it.
Mreh says: Sure, Kerowyn, that's another idea.
Risto seems to fall, but takes a step and recovers.
Mreh says: But, I'm trying to offer different perspectives.
Amastacia says: don't forget also Toad, if you level up your alt through a party too much, gxp is going to be slow going, it'll be hard to balance the char out.
Toadslime says: sort of topic risto an achievement type system would be good.
Risto says: Yeah.
Baldur hmmmmmmms.
Mreh says: If alts are registered toward a central character, I could see an achievement system actually working.
Risto burps.
Risto says: Specially when registered_alts comes a thing.
Kain says: this of course is all based on if we actually get people here doing this.
Mreh says: If the people here are doing this, that's enough for me.
Mreh says: It's the whole point of asking your permission to do it in the first place.
Kain says: just because you can bot dosent mean tommorwo we will have 25 bots logged in.
Mreh says: If people are against the idea, as Toad is, I'm not going to do it.
Mreh says: Him and Trystan. Not as those characters though.
Kain says: sorry getting late spelling suffers.
Toadslime says: i did it to prove a point.
Kain says: yes you did.
Mreh says: I was furious.
Kain says: yes you were.
Mreh says: You did do it to prove a point.
Hayt fetches A concrete dildo from another dimension.
Risto burps.
Mreh says: And your point didn't go unproven.
Kerowyn hrms.
Risto burps.
Hayt grins evilly.
Risto says: I still have my elecran bot somewhere amongts my triggers.
Mreh says: Still logging, Hayt.
Hayt says: wooops hehehe.
Kain laughs out loud evilly.
Risto says: Somewhere.
Amastacia says: elecran…. that's a place i never got to visit cause i leveled past it too quick, heh.
Kain says: ah here it is on my active char.
Kain giggles evilly.
Baldur lafs.
Risto burps.
Risto grins evilly at Kain.
Risto seems to fall, but takes a step and recovers.
Risto looks drunk.
Kain says: I am a vote for as of now you can always go back or adjust or take out pent up frustration on them.
Kain smiles evilly.
Risto hiccups.
Risto exclaims: And get an achievement!
Mreh says: Well, it is one of those rules that are difficult to take back (not impossible) once implemented.
Mreh says: If I let bots in tomorrow, and on Friday I say, “No, bots are out again,” I'm going to lose face, and the trust and respect of the people here.
Mreh says: If the experiment lasts a year, it might be more tolerable.
Toadslime says: ok here's what i think too.
Kain thinks carefully and evilly.
Toadslime says: if you can have a legal bot.
Mreh says: There would need to be a huge caveat saying, “This policy could change.”.
Toadslime says: botting on a non bot flagged char should be auto delete for that char and all alts.
Mreh nods.
Risto says: I can agree with that.
Amastacia noggles.
Kain says: yep.
Mreh says: What I was thinking for a first punishment was having that character set as a bot.
Kain says: even better.
Kain grins evilly.
Amastacia says: yeah.
Mreh says: But for a second punishment (and we do have a really decent punishment log), it's account-wide deletion.
Toadslime says: but i do agree with risto that i do miss the kill in the same room trig thing.
Risto says: Deletion of all chars and Mreh also sends you poo via mail.
Hayt ponders over some problem.
Amastacia lafs at Risto.
Hayt says: mreh poo…
Mreh says: I can admit that the MUD could be a bit more lax about that policy, Toad.
Mreh says: But that behaviour was the bare minimum, “Well, sure, it's a bot but it doesn't CHANGE ROOMS so it's cool, right?”.
Hayt fetches An anvil from another dimension.
Mreh says: And no, that wasn't cool.
Risto looks drunk.
Toadslime says: yeah not something that initiates combat when there was none but like there are 3 mobs - i kill one then the next when the first dies then the last when the second dies, then the combat is over.
Mreh says: Because that bot was either 'x room' followed by a kill command if the mob had respawned, or 'wait 15 minutes' followed by a kill command.
Mreh says: I can see why you would want that.
Nippar says: bots were annoying when they entered your room and started killing your mobs…i say let them stay in the same room and kill if they want :D.
Nippar says: its not like they are efficient in that way.
Risto says: The infamous Romanian bow-bot.
Mreh says: But it gets difficult to police because if your fight takes obscenely long (read: Fremen), you could just be finishing off the third mob as the other two respawn.
Kain lafs evilly.
Mreh says: And you get yourself into a room that loops infinitely.
Risto says: Bows to soldier and soldier. still attacks your mobs.
Mreh agrees with Risto.
Toadslime says: killing 1 mob on a fremen wont get you very far though.
Toadslime says: you need lots of mobs that hit lots of times.
Toadslime says: i could probably kill 1 cymek for a 4 months and not glvl.
Risto burps.
Nippar lafs.
Risto looks drunk.
Risto says: Less is more.
Mreh says: There's still this weird edge case grey area where allowing you to kill the next mob in a room via trigger is iffy.
Amastacia lafs at Risto.
Risto winks suggestively at Amastacia.
Amastacia says: yeah, not in my case, if i had your glvl, maybe.
Mreh says: But I can probably work my way through that.
Hayt says: il'l have to read the rest o fthis log a bit later.
Hayt says: workroom to idle for a bit.
Risto says: I think what most needs overhaul, is the dodge/roll/block.
Mreh says: Yes.
Mreh says: But B/D/R needs to be overhauled alongside the rest of combat.
Mreh says: Which I am not going to touch until I can overhaul weapon/armour classes.
Risto noddles solemnly.
Mreh says: Which means we're waiting a while.
Mreh says: Half Life 3 will be done before I start.
Kerowyn says: lol.
Risto goes 'Heh'.
Amastacia lafs.
Toadslime says: if d/r/b got toned down there would be a reason to play fremen right there :p.
Risto looks drunk.
Mreh says: I want to reimplment B/D/R as damage shields (if you've played Sard in the last year, you know what I'm talking about).
Amastacia nods.
Mreh says: Then let them stack one in front of the next.
Risto's eyes glaze over as he enters a berserker rage.
Risto burps.
Mreh says: You can absorb only so much block damage before you're forced to try rolling before you have to rely on dodging (or some order thereof), before/after guild defense.
Amastacia acks.
Risto burps.
Mreh says: That way, you won't get a mysterious round of dodge fuckery and die out of nowhere.
Mreh says: Which, I freely admit, is a huge problem.
Amastacia exclaims: you nakkid drunk finnish monkey!
Risto looks drunk.
Risto says: Oh yes.
Risto says: The best and only kind.
Kerowyn says: afk a bit.
Kerowyn will be Away From the Keyboard for a while.
Kain thanks Mreh evilly and evilly.
Kain says: gotta go bed calling.
Risto looks drunk.
Seked slope of Great Pyramid
Seked (or seqed) is an ancient Egyptian term describing the inclination of the triangular faces of a right pyramid.[1] The system was based on the Egyptian length measure known as the royal cubit, which was subdivided into seven palms, each of which was sub-divided into four digits. The inclination of a measured slope was therefore expressed as the number of horizontal palms and digits relative to each royal cubit of rise.
The seked is proportional to the reciprocal of our modern measure of slope or gradient, and to the cotangent of the angle of elevation.[2] Specifically, if s is the seked, m the slope (rise over run), and ${\displaystyle \phi }$ the angle of elevation from horizontal, then:
${\displaystyle s={\frac {7}{m}}=7\cot(\phi ).}$
The most famous example of a seked slope is that of the Great Pyramid of Giza in Egypt, built around 2550 BC. Based on modern surveys, the faces of this monument had a seked of 5 1/2, or 5 palms and 2 digits, in modern terms equivalent to a slope of 1.27, a gradient of 127%, and an elevation of 51.84° from the horizontal (in our 360-degree system).
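The conversions stated above can be checked directly from the definition s = 7/m. The short Python sketch below (the function name is illustrative, not from the source) converts a seked in palms into the modern slope and elevation angle, reproducing the Great Pyramid's figures:

```python
import math

def seked_to_slope_angle(seked_palms):
    """Convert a seked (palms of horizontal run per royal cubit of rise,
    1 cubit = 7 palms) to slope (rise/run) and elevation angle in degrees."""
    slope = 7.0 / seked_palms              # m = 7 / s
    angle = math.degrees(math.atan(slope))  # phi = arctan(m)
    return slope, angle

# Great Pyramid: seked of 5 palms 2 digits = 5.5 palms (4 digits per palm)
slope, angle = seked_to_slope_angle(5 + 2 / 4)
print(f"slope = {slope:.2f}, angle = {angle:.2f} deg")  # slope = 1.27, angle = 51.84 deg
```

A slope of 1.27 is the gradient of 127% quoted above, and the angle matches the surveyed 51.84°.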
## Overview
Information on the use of the seked in the design of pyramids has been obtained from two mathematical papyri: the Rhind Mathematical Papyrus in the British Museum and the Moscow Mathematical Papyrus in the Museum of Fine Arts.[3] Although there is no direct evidence of its application from the archaeology of the Old Kingdom, there are a number of examples from these two papyri, which date to the Middle Kingdom, showing the use of this system for defining the slopes of the sides of pyramids based on their height and base dimensions. The most widely quoted example is perhaps problem 56 from the Rhind Mathematical Papyrus.
The most famous of all the pyramids of Egypt is the Great Pyramid of Giza, built around 2550 BC. Based on the surveys of this structure carried out by Flinders Petrie and others, the faces of this monument had a seked of 5 1/2, or 5 palms and 2 digits, which equates to an angle of 51.84° from the horizontal in the modern 360-degree system. This slope would probably have been applied accurately during construction by way of 'A-frame' shaped wooden tools with plumb bobs, marked to the correct incline, so that slopes could be measured out and checked efficiently.[citation needed]
Furthermore, according to Petrie's survey data in "The Pyramids and Temples of Gizeh",[4] the mean slope of the Great Pyramid's entrance passage is 26° 31' 23" ± 5". This deviates by less than 1/20th of one degree from an ideal slope of 1 in 2, which is 26° 33' 54". That ideal equates to a seked of 14, and is generally considered to have been the slope intentionally designed by the Old Kingdom builders for internal passages.[citation needed]
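The passage figure can be verified the same way: a slope of 1 in 2 gives m = 1/2, hence a seked of 7/(1/2) = 14 palms. A minimal sketch (the helper name is illustrative) converts that seked back into degrees, minutes, and seconds:

```python
import math

def seked_to_dms(seked_palms):
    """Elevation angle for a given seked, as (degrees, minutes, seconds)."""
    angle = math.degrees(math.atan(7.0 / seked_palms))
    d = int(angle)
    m = int((angle - d) * 60)
    s = round(((angle - d) * 60 - m) * 60)
    return d, m, s

# Entrance passage: seked 14, i.e. a slope of 1 in 2
print(seked_to_dms(14))  # (26, 33, 54)
```

This reproduces the 26° 33' 54" ideal against which Petrie's measured 26° 31' 23" is compared.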
## Pyramid slopes
Casing stone from the Great Pyramid
The seked of a pyramid is described by Richard Gillings in his book 'Mathematics in the Time of the Pharaohs' as follows:
"The seked of a right pyramid is the inclination of any one of the four triangular faces to the horizontal plane of its base, and is measured as so many horizontal units per one vertical unit rise. It is thus a measure equivalent to our modern cotangent of the angle of slope. In general, the seked of a pyramid is a kind of fraction, given as so many palms horizontally for each cubit of vertically, where 7 palm equal one cubit. The Egyptian word 'seked' is thus related [in meaning, not origin] to our modern word 'gradient'."[2]
Many of the smaller pyramids in Egypt have varying slopes; however, like the Great Pyramid of Giza, the pyramid at Meidum is thought to have had sides that sloped by 51.842°, or 51° 50' 35", which is a seked of 5 1/2 palms.[5]
The Great Pyramid scholar Professor I.E.S. Edwards considered this to have been the 'normal' or most typical slope choice for pyramids.[6] Flinders Petrie also noted the similarity of the slope of this pyramid to that of the Great Pyramid at Giza, and both Egyptologists considered it to have been a deliberate choice, based on a desire to ensure that the circuit of the base of the pyramid precisely equalled the length of a circle that would be swept out if the pyramid's height were used as a radius.[7] Petrie wrote "...these relations of areas and of circular ratio are so systematic that we should grant that they were in the builder's design".[8]
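The seked-angle conversions quoted in this article are easy to check numerically. A small sketch (the function names are my own, not standard terminology):

```python
import math

PALMS_PER_CUBIT = 7  # 1 royal cubit = 7 palms; 1 palm = 4 digits

def seked_to_degrees(seked_palms):
    """Slope angle from the horizontal for a run of seked_palms palms
    per one cubit (7 palms) of vertical rise."""
    return math.degrees(math.atan(PALMS_PER_CUBIT / seked_palms))

def degrees_to_seked(angle_deg):
    """Horizontal run in palms per one cubit of rise (the cotangent, scaled by 7)."""
    return PALMS_PER_CUBIT / math.tan(math.radians(angle_deg))

# Great Pyramid faces: seked 5 1/2 (5 palms 2 digits) -> about 51.84 degrees.
print(round(seked_to_degrees(5.5), 2))
# Entrance passage: seked 14, i.e. a slope of 1 in 2 -> about 26.57 degrees.
print(round(seked_to_degrees(14), 2))
```

Both results match the figures in the text above: 51.84° for the faces and 26° 33' 54" (26.565°) for the ideal passage slope.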
## References
1. ^ Gillings: Mathematics in the Time of the Pharaohs 1982: p. 212
2. ^ a b Gillings: Mathematics in the Time of the Pharaohs 1982: p. 212
3. ^ Gillings: Mathematics in the Time of the Pharaohs 1982
4. ^ Petrie: The Pyramids and Temples of Gizeh 1893: p. 58
5. ^ Petrie: Medum 1892
6. ^ Edwards: The Pyramids of Egypt 1979: p. 269
7. ^ Lightbody: Egyptian Tomb Architecture: The Archaeological Facts of Pharaonic Circular Symbolism 2008: pp. 22–27
8. ^ Petrie: Wisdom of the Egyptians 1940: p. 30
• Edwards, I.E.S. (1979). The Pyramids of Egypt. Penguin.
• Gillings, Richard (1982). Mathematics in the Time of the Pharaohs. Dover.
• Lightbody, David I (2008). Egyptian Tomb Architecture: The Archaeological Facts of Pharaonic Circular Symbolism. British Archaeological Reports International Series S1852. ISBN 978-1-4073-0339-0.
• Petrie, Sir William Matthew Flinders (1883). The Pyramids and Temples of Gizeh. Field & Tuer. ISBN 0-7103-0709-8.
• Petrie, Flinders (1892). Medum. David Nutt: London.
• Petrie, Flinders (1940). Wisdom of the Egyptians. British School of Archaeology in Egypt and B. Quaritch Ltd.
# Change of pressure simple cylinder
1. Mar 31, 2016
### ajd-brown
I am trying to calculate the spring rate for an air-filled shock absorber, but I am struggling to get my head around the non-linearity at the start of compression. Obviously if the shock is fully extended then it is applying no compressive force to the chassis, which accounts for the force starting from zero, but why does it not increase linearly to begin with? I hasten to add that I understand the non-linearity at the end of the stroke!
Could anybody give me a hint?
Thanks,
#### Attached Files:
• Graphs.gif
2. Mar 31, 2016
### haruspex
Not sure what you mean by initial non-linearity. Any smooth curve starts off effectively linear, but the slope may change.
Which graph are you referring to? The upper graph is for compression against force, and this does nothing surprising. It starts with a low gradient, which gradually increases.
If it's the lower graph, you'll need to explain to me what is meant by "shock travel" in this context.
3. Mar 31, 2016
### ajd-brown
Thank you for your reply, I am trying to create the second graph using data I have for a bicycle shock absorber.
Shock travel is essentially the compression of the air spring. So at 0 travel, the force is zero because the spring is at its free length (although this is technically not true, since there is a greater-than-atmospheric pressure in the shock at this point; see the 'Pressure Pa' cell in the top left) and therefore it cannot transmit a force into the frame at this point.
What I am wondering is why the bottom graph shows a decreasing gradient of force against air-spring compression.
I hope this makes sense!
Thanks
4. Mar 31, 2016
### haruspex
So how is that different from compression in the other graph? Is it that these two graphs are from different sources, and it's the difference between them that is puzzling you?
Ok, so there is something in the construction which means that the piston cannot go all the way out to the fully uncompressed length. If this something has some elasticity, it could explain the shape of the lower graph.
5. Mar 31, 2016
### ajd-brown
It is the same. The lower graph was created by Fox (a company that sells shocks), and I am trying to reproduce the same graph in Excel using an ideal-gas approximation.
Yes, there is. You are spot on! I hadn't thought of that until now! Essentially there is a secondary, smaller volume under the piston which has equal pressure to the volume above, and thus at one particular length the resultant force on the piston is zero, which accounts for the force dropping to zero as the travel decreases.
(See image for reference)
http://www.pinkbike.com/news/Tech-Tuesday-negative-spring-air-shocks-2012.html
Thank you for making me think!
6. Apr 1, 2016
### OldYat47
Start with the static load on the shock. That's the simplest way. Then assume that all the heat of compression stays in the air inside the cylinder as the shock is compressed. The internal pressure of the shock at any point of travel is then the combined result of reduced volume and increased temperature. In most cases the temperature increases quickly enough that no condensation (of water vapor, say) will occur, and the "fully extended" temperature of the air will be slightly higher than the initial temperature due to friction heating.
It is difficult to calculate how hot the air will be over time unless you can model the shock's construction. Remember that it cools some every time the shock extends. Even so, the amount of heat transfer is usually negligible as a component of the pressure. Even friction heating doesn't affect the pressure much (assuming the seals are decently designed).
7. Apr 1, 2016
### ajd-brown
Hi, yes, thanks for your input; however, I have solved the problem, and it correlates with experimental results from previous tests. I will post this later this evening to complete the thread.
I ended up using a polytropic process, Pv^(cp/cv) = C.
By calculating this for both the large and small volumes, and assuming that the pressures in both were equal at the equilibrium position, it was possible to calculate the resultant force on the piston at any point in the travel, which yielded the 2nd graph from my 1st post.
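The approach described in this thread can be sketched as follows, assuming an ideal gas and a polytropic process Pv^γ = C with γ = cp/cv ≈ 1.4 for air. All dimensions and pressures below are made-up illustrative values, not data for any real shock:

```python
GAMMA = 1.4  # polytropic exponent cp/cv for air (adiabatic limit)

def chamber_pressure(p0, v0, v):
    """Pressure after a polytropic change from state (p0, v0) to volume v."""
    return p0 * (v0 / v) ** GAMMA

def shock_force(x, area, p0, v_pos0, v_neg0):
    """Net axial force on the piston at travel x (metres).

    The positive (main) chamber shrinks as the shock compresses; the
    negative chamber under the piston grows. Both are charged to the same
    pressure p0 at x = 0, so the net force there is exactly zero, which is
    the behaviour identified in this thread.
    """
    p_pos = chamber_pressure(p0, v_pos0, v_pos0 - area * x)
    p_neg = chamber_pressure(p0, v_neg0, v_neg0 + area * x)
    return area * (p_pos - p_neg)

# Illustrative values only: ~30 mm bore, 10 bar charge, 100 cc / 20 cc chambers.
AREA = 7.07e-4  # piston area, m^2
print(shock_force(0.0, AREA, 1.0e6, 1.0e-4, 2.0e-5))  # 0.0 at equilibrium
```

Because the small negative chamber changes volume proportionally faster than the main chamber, the force rises slowly near equilibrium and steepens with travel, reproducing the shape of the second graph.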
Volume 14, Issue 2
Discrete Approximations for Singularly Perturbed Boundary Value Problems with Parabolic Layers, II
J. Comp. Math., 14 (1996), pp. 183-194
Published online: 1996-04
• Abstract
In this series of three papers we study singularly perturbed (SP) boundary value problems for equations of elliptic and parabolic type. For small values of the perturbation parameter, parabolic boundary and interior layers appear in these problems. If classical discretisation methods are used, the solution of the finite difference scheme and the approximation of the diffusive flux do not converge uniformly with respect to this parameter. Using the method of special, adapted grids, we can construct difference schemes that allow approximation of the solution and the normalised diffusive flux uniformly with respect to the small parameter. We also consider singularly perturbed boundary value problems for convection-diffusion equations. For these problems too we construct special finite difference schemes, the solution of which converges $\varepsilon$-uniformly. We study which problems appear when classical schemes are used for the approximation of the spatial derivatives, and we compare the results with those obtained by the adapted approach. Results of numerical experiments are discussed. In the three papers we first give an introduction to the general problem, and then consider respectively: (i) problems for SP parabolic equations, for which the solution and the normalised diffusive fluxes are required; (ii) problems for SP elliptic equations with boundary conditions of Dirichlet, Neumann and Robin type; (iii) problems for SP parabolic equations with discontinuous boundary conditions.
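The "special, adapted grids" mentioned in the abstract are piecewise-uniform meshes condensed in the layer regions (Shishkin-type meshes). The sketch below is the textbook construction for a single boundary layer of width O(ε) at x = 1; it is not code from the paper, and the function name and the transition-point constant are my own choices:

```python
import math

def shishkin_mesh(n, eps, sigma_const=2.0):
    """Piecewise-uniform (Shishkin) mesh on [0, 1] with n intervals (n even).

    Half the intervals cover [0, 1 - sigma] uniformly; the other half
    resolve the boundary layer of width O(eps) near x = 1.
    """
    sigma = min(0.5, sigma_const * eps * math.log(n))  # transition point
    half = n // 2
    coarse = [(1.0 - sigma) * i / half for i in range(half)]
    fine = [(1.0 - sigma) + sigma * i / half for i in range(half + 1)]
    return coarse + fine

mesh = shishkin_mesh(8, 1e-3)
# n+1 = 9 points; the last 5 cluster in a band of width ~0.004 near x = 1.
```

On such a mesh, standard upwind or central schemes can be shown to converge uniformly in the perturbation parameter, which is the property classical uniform grids lack.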
# Types of poles of an elliptic function in a period parallelogram
I am reading Apostol's Modular Functions and Dirichlet Series in Number Theory. Theorem 1.7 says that a non-constant elliptic function has at least 2 simple poles or a double pole in each period parallelogram. The book went on to say that these are the only 2 possibilities, and that they led to two different theories, developed by Weierstrass and Jacobi.
I am wondering why we cannot have a triple pole, or poles of any higher order.
• by double periodicity $\int_{\partial P} f(z)dz = 0 = 2 i \pi \sum_{a \in P} Res(f(z),a)$ where $P$ is a period parallelogram chosen such that $f(z)$ has no zero on $\partial P$, and $a$ are the poles of $f(z)$ – reuns Nov 1 '16 at 3:22
• I understand that point. But that does not eliminate the possibility of having higher orders of poles. The argument principle pertains only to the residues, with only concerns the first negative power in Laurent expansion... – Sekots Reivan Nov 1 '16 at 3:24
• The Weierstrass function has only one pole, of order $2$, per period parallelogram, and its $k$-th derivative has only one pole, of order $2+k$ – reuns Nov 1 '16 at 3:25
• And you should try another book: Diamond & Shurman, "A First Course in Modular Forms" – reuns Nov 1 '16 at 3:47
• Thanks! I have that as well. – Sekots Reivan Nov 1 '16 at 3:48
You can have poles of higher orders. For example, $\wp^2$ has poles of order $4$ at every point of the period lattice. Also the derivative $\wp'$ has a pole of order $3$.
A theorem of Abel says that you can find an elliptic function with any prescribed zeros and poles $\alpha_i$ and orders $m_i$ (positive $m_i$ being the order of a zero and negative $m_i$ the order of a pole) if and only if $\sum m_i = 0$ and $\sum m_i \alpha_i \in L$ lies in the lattice.
This excludes the possibility of a single simple pole $\alpha$ already: the first sum forces the existence of exactly one simple zero, and the second equation forces that zero to appear at $\alpha$.
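In symbols, the two conditions of Abel's theorem read:

```latex
\sum_i m_i = 0, \qquad \sum_i m_i \alpha_i \in L .
```

A lone simple pole at $\alpha$ contributes $m = -1$, violating the first condition; restoring it requires exactly one simple zero $\beta$, and the second condition then forces $\beta - \alpha \in L$, i.e. the zero would sit at the same point of the period parallelogram as the pole, which is impossible.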
• well $\wp'(z)$ has a pole of order $3$ due to direct differentiation – reuns Nov 1 '16 at 3:25
• @user1952009 Yes you're right – user384359 Nov 1 '16 at 3:34
• and $\sum m_i = 0$ is the argument principle : $\int_{\partial P} \frac{f'(z)}{f(z)}dz=0$, while $\sum m_i a_i \in \Lambda$ is $2i \pi \sum m_i a_i = \int_{\partial P} z \frac{f'(z)}{f(z)}dz = \int_{\partial P} \log f(z) dz = 2i \pi \lambda$ where $\lambda \in \Lambda$ (since $\log f(z+w_1) = \log f(z)+ 2i k_1 \pi$ and $\log f(z+w_2) = \log f(z)+ 2i k_2 \pi$) – reuns Nov 1 '16 at 3:36
• @user1952009 Abel's theorem really refers to the converse, existence of elliptic functions to given divisors, which is not so elementary – user384359 Nov 1 '16 at 5:02
• Yes of course. Anyway answering to such an elementary question with a complicated theorem wasn't a good idea. – reuns Nov 1 '16 at 5:07
INTELLIGENT WORK FORUMS
FOR ENGINEERING PROFESSIONALS
Protection criteria for obstructions under ESFR system?
(OP)
The application is industrial, located in the US. Typically we go by EH G1, and we are installing process protection and protection for obstructions (including platforms) much lower below the roof. We need to provide data to the building sprinkler contractor for some tie-in points and flow requirements. This time, however, the building is protected by ESFR sprinklers (12 heads, K16.3, 52 psi).
What should we follow here for just the obstruction protection? Abide by the roof criteria (ESFR) or a standard criterion like EH G1? To me the flow of ESFR is absurd for a single platform or duct, as it seems irrational to expect fire to travel like that. We are not talking about a flammable liquid installation. It is mostly metallic, with ordinary combustibles, pumps, cable trays and a lot of ducts and piping.
Has anyone encountered this, and what criteria were applied for obstructions?
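For reference, the hydraulic demand implied by those roof figures follows from the standard K-factor relation Q = K·√P (gpm, psi). This is back-of-envelope arithmetic only, not a design calculation, and it ignores hose allowance and overage:

```python
import math

def sprinkler_flow_gpm(k_factor, pressure_psi):
    """Discharge from one head via the K-factor relation Q = K * sqrt(P)."""
    return k_factor * math.sqrt(pressure_psi)

q_head = sprinkler_flow_gpm(16.3, 52)  # one K16.3 head at 52 psi
q_total = 12 * q_head                  # 12-head ESFR design area
print(round(q_head, 1), round(q_total))
```

That is roughly 117.5 gpm per head and about 1,410 gpm for the 12-head design area, which illustrates why the OP calls the ESFR flow "absurd" for a single obstruction.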
RE: Protection criteria for obstructions under ESFR system?
On some projects, we have protected for the occupancy / hazard class under a solid platform. For duct, we always put the ESFR under the duct and size the pipe supplying those sprinklers the same as the overhead so that you do not have to include 2 additional sprinklers in your ESFR calculation.
Travis Mack
MFP Design, LLC
www.mfpdesign.com
RE: Protection criteria for obstructions under ESFR system?
(OP)
Can you please provide the reference in NFPA 13 of what you said about the ducts and the two additional sprinklers?
RE: Protection criteria for obstructions under ESFR system?
Is this a warehouse occupancy? If not, ESFR is not advised.
RE: Protection criteria for obstructions under ESFR system?
(OP)
LCREP,
This is not a warehouse, and I have already expressed my concerns about the ESFR sprinklers. That wasn't something I could influence. I believe there is a misunderstanding among building contractors about ESFRs in non-storage occupancies. I would attribute it mainly to two reasons: a) "throw in a bunch of ESFRs and we can protect pretty much any occupancy that may come in the future"; b) a rule in the IBC which allows the omission of smoke exhaust systems when the building is protected with ESFRs. This seems quite tempting for building contractors or owners. But I believe they fail to understand that such a decision brings other implications. For example, in our applications, meeting all obstruction rules can sometimes be impossible, especially between machinery/equipment and platforms. We will most likely end up with more heads, but that doesn't seem the right approach to me, especially when you clearly don't need a waterfall (e.g. where water mixed with liquids may require containment, etc.).
cdafd,
we will have to meet all obstruction rules, but the initial question is about how one would have to do the hydraulic calculation.
RE: Protection criteria for obstructions under ESFR system?
UFT12
Have you reached out to the insurance carrier's loss prevention engineer for guidance? As someone who worked in this area, I would strongly advise the client not to use ESFR in a non-storage application. The water damage potential alone is a concern. When I was involved in these types of discussions and the client would not change the sprinkler design, our underwriter would hit them with either a $500K water/sprinkler damage deductible and/or withdraw the sprinkler damage coverage altogether. That usually got them to reconsider. The saving from not having to install a water supply (i.e. fire pump, etc.) was usually enough to convince them once they really started to look at the dollars.
RE: Protection criteria for obstructions under ESFR system?
Quote:
Can you please provide the reference in NFPA 13 of what you said about the ducts and the two additional sprinklers?
I'm sorry, but I can't spend the time doing a keyword search of NFPA 13. Please review all of the calculation criteria for ESFR in NFPA 13. You will find the criteria.
However, as others have said, in a non-storage application, ESFR is likely not the best choice.
Best of luck and Merry Christmas!
Travis Mack
MFP Design, LLC
www.mfpdesign.com
RE: Protection criteria for obstructions under ESFR system?
(OP)
No problem, I just thought you had it handy in mind.
RE: Protection criteria for obstructions under ESFR system?
Sadly, as I get older, those little details seem to fall away. I can tell you basically what the page looks like and where you may find it on the page, but I lose the citations. Oh, the joys of getting older.
Merry Christmas!!
Travis Mack
MFP Design, LLC
www.mfpdesign.com
RE: Protection criteria for obstructions under ESFR system?
Quote:
Sadly, as I get older, those little details seem to fall away. I can tell you basically what the page looks like and where you may find it on the page, but I lose the citations. Oh, the joys of getting older.
Travis
Do not retire..... lol. I've been out of action for a little over 2 years and stuff I knew off the top of my head is gone! After 36 years, with the last 5 years being the go-to guy for 65 folks with 100 questions a day..... the stuff is gone.
Now I have important stuff to worry about....where to travel to next....lol
Merry Christmas
RE: Protection criteria for obstructions under ESFR system?
(OP)
Well, this huge standard was always a brain stress test every time you needed to dig something up, no matter how old you are.
Merry Christmas
## Tuesday, June 28, 2016
### Go Shawty... It's My Birthday... In A Super Magical Square (6)
So I decided to try more birthdays... because I'm a puzzle addict. I started to work on Emma's, one of my littles, and since she was born 4-2-20-11... I didn't have a huge number range to work with, and I didn't have a very big sum to work with either (37). Finding combinations that total 37 without repeating was very tricky. So I gave up for a little while and did my boyfriend's birthday instead. Even though he was born in January, which ruled out the number 1 early on, the rest of his numbers gave me a good range: 25, 19, 80. I tackled his the same way I did mine, by starting with the diagonals, but for his, since I had already worked with quadrants for my square, I decided to try those first. I think this actually made it easier... because once I got all the quadrants and the middle squared away... with very minor adjustments the rows and columns were already in place.
Now for my crazy little Emmeline, I had to get creative. So like Nick did, I also used a negative number. I felt like this was probably an "illegal" move also... but it was the only way I could get it to work. Especially since she already had 2 and 4 in her birthdate, I was limited in which small numbers I could use to adjust the sums.
So THEN I wanted to do something kind of crazy and see where it went. Because I'm not adventurous enough, or experienced enough, I stayed in a 4x4 and used the first 4 letters of my name [MICH]. Then I assigned those as a number value [13, 9, 3, 8] and then developed kind of a "mod" style number association where numbers could be negative and also higher than 26.
That is actually the finished square once I got everything to be SUPER magical and equal to 33. I tried to stay within 1-26 but it was basically impossible. I mean, is it actually impossible? Because the total was 33, I needed 24 addition expressions equal to 33 using only the numbers 1-26, without repeating. Since I could NOT figure out how to do that (I'm sure there is a formula... I just can't figure out how to figure it out lol), I went to using negative numbers and numbers above 26. This led me to the above square... FINALLY without repeating - well, technically I did repeat, but when I tried to change it... I evened out all the rows, columns, and diagonals, but then my quadrants were not equal to 33 :( So I went with second best.
THEN....
I subbed in the letter equivalents. However, you will notice that many of letters ended up repeating. So does this have something to do with the fact that I cycled them in a modular form... which I technically didn't even do correctly because I started A as 1 and not 0. At this point, you know I had to go back and try to fix it to be done the way it is supposed to and with a correct modular formation (because my growth mindset makes me). In this square, for the number portion I could NOT get it to be correct without repeating one number; same problem from the first time as well. However, with the letter portion, I was able to eliminate one of the repeated letters!
By default, the repeated number (12/M) would be a repeating letter but the only other letter that repeated was U when it was used as 20 and -6.
While these puzzles took a fair amount of time, going forward I would love to further research the patterns or formulas that go into making the squares "magic"... if there even is one. Based on my understanding of what math is though, I assume there must be a pattern. Reflecting on what was taking place during the activity was actually a lot of adding and subtracting. I roped my 7 year old into helping me with her birthday square, while she loves puzzles and math... it was a little over her head. She WAS however at least practicing adding 4 numbers up to equal a certain sum. So in that aspect, I believe there was educational value for her.
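The bookkeeping in these posts (checking every row, column, diagonal, and quadrant against one target sum) is easy to automate. A sketch, using Dürer's classic magic square (target 34) rather than the birthday squares, since those aren't reproduced here:

```python
def super_magic_sums(square):
    """All the sums checked above for a 4x4 'super magical' square:
    4 rows, 4 columns, 2 diagonals, and the 4 corner 2x2 quadrants."""
    n = 4
    sums = [sum(row) for row in square]                                   # rows
    sums += [sum(square[i][j] for i in range(n)) for j in range(n)]       # columns
    sums.append(sum(square[i][i] for i in range(n)))                      # main diagonal
    sums.append(sum(square[i][n - 1 - i] for i in range(n)))              # anti-diagonal
    for r in (0, 2):                                                      # 2x2 quadrants
        for c in (0, 2):
            sums.append(square[r][c] + square[r][c + 1]
                        + square[r + 1][c] + square[r + 1][c + 1])
    return sums

# Duerer's square from his engraving Melencolia I: every one of the 14 sums is 34.
durer = [[16, 3, 2, 13],
         [5, 10, 11, 8],
         [9, 6, 7, 12],
         [4, 15, 14, 1]]
assert all(s == 34 for s in super_magic_sums(durer))
```

With a checker like this, trial-and-error adjustments (swapping in a negative number, say) can be re-verified instantly instead of re-adding 14 sums by hand.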
## Thursday, June 16, 2016
### The F word. (5)
Word, phrase, whatever it is. Dun dun dun... The Fundamental Theorem of Calculus. The bane of my existence. When calculus is put in front of me... this is what happens on the inside.
And outside really. Don't tell my best friend Jo B, but I "have yet to" [i.e. can't] understand calculus. If I were in a battle and I was propositioned with "Give Me Calculus or Give Me Death," it was nice knowing you.
So I activated my growth mindset and after another failed attempt of understanding any form of calculus thought, I said to myself "Lulu, you are capable of understanding this and you WILL get a deeper understanding." Here I am, back at it again. First, I went to dummies.com to try to explore The Fundamental Theorem of Calculus for dummies.
Here is my now [much deeper] understanding of the (1st) FTC:
$F(x)=\int_a^x f(t)\, dt$
1. It's important. Phew! That was a tricky one at first. You did it! Press on...
2. Then... we have a function "f(t)", okay I'm totally with you. The function creates a line on our graph.
3. Now... they want me to find the area under the curve? This is called F(x). So this F(x) IS the area under the "curve" or the function. I still feel okay, pending I'm not completely wrong.
4. Now, the area that we are measuring starts at value a and ends at value x. Makes sense.
5. Now I'm getting lost... dt represents an infinitesimally small change in t, so it's telling us we are integrating (summing up) with respect to t?
6. And that's it!
$\int_a^b f(x)\, dx = F(b)-F(a).$
1. Okay... so now we tackle the 2nd FTC, which I guess solves for all definite integrals? Well they switched me from f(t) to f(x) but I guess they're the same thing? Also, F(b) - F(a) is apparently F(x) evaluated from a to b... but where did b come from? Now I am lost. But hey! I got through at least half of the explanation without crying. I call this a win all around.
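For the record, here are the two parts side by side; the b in the second part is just a fixed upper endpoint, playing the role that the variable x played in the first part:

```latex
\text{FTC, part 1:}\quad F(x) = \int_a^x f(t)\,dt
  \;\Longrightarrow\; F'(x) = f(x)

\text{FTC, part 2:}\quad \int_a^b f(x)\,dx = F(b) - F(a)
```

And yes, f(t) versus f(x) is the same function; the letter inside the integral is a dummy variable, renamed in part 1 only so it doesn't clash with the x in the upper limit.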
## Sunday, June 12, 2016
### My Bff Jo B. (4)
If I haven't already said this... Mathematical Mindsets is the best education "text" book I've ever read. Jo Boaler, "Jo B", starts by taking you on a journey through how the brain works and what we can do as educators, parents, peers, etc. The brain is such a complex being but the most fascinating piece was that our brain never stops changing or growing. Inherently, we think that our "smartness" is what we are born with and we simply activate our preexisting knowledge by learning in classes. Now if you actually think about that... it sounds ridiculous. That's exactly what Jo points out.
Jo follows up this exploration of the brain with how the brain reacts when we are making mistakes. I think so many of us have gotten used to knowing or finding an "easy" way out of many tasks and activities. This has been evidently true in my learning and even my teaching. Students are always wanting the "answers" and never interested in figuring out how to get there. Students are TERRIFIED of making mistakes. That is the standardized-testing, cookie-cutter education that we are bringing students up in today. Jo fights strongly to combat that, explaining that it's a total waste of time to teach and learn that way because your brain is not actually learning.
I've been politely reminded by Jo that math is an art, a beautiful art that is there for the creating. She spends a lot of time (and I won't lie... gets a little repetitive) explaining how to bring back the art and creativity of math. Not only does she give many, many creative and realistic ideas to apply to the classroom, but also apps and assessments. This is definitely one of the best books I've ever read in the field of educator education. Not only do I plan on implementing a growth mindset into the classroom for its overall benefit; I also see mathematical mindsets in my students' futures.
The one awesome thing about this book is that I have been able to relate much of this discussion to what I have experienced with a few teachers, and in this class. I find myself learning much better when the activities are "hands-on" and I'm able to try to understand it through my own mind first and then get an explanation. As I have mentioned many, many times, I do still battle with this fixed mindset that I have had since at least high school if not before. I wish I knew when my learning started to change but I could relate to so many of the negative thoughts and experiences that Jo expressed in the book. Even thinking I was "good" at math because I did well on timed tests or that I didn't have to try to understand things until very high level math.
## Sunday, June 5, 2016
### The Dog Ate My Homework (3)
After reading Mathematical Mindsets (Boaler), I was intrigued by what she stated about the research on homework and why homework (or at least the kind that teachers give today) is NOT beneficial for the student. How crazy is that thought? I've always thought students need to practice what they have learned otherwise they won't remember it or understand it, but apparently, I'm wrong!
I decided to invest some extra research into how learning math is related to homework. I'm pretty much a "type A" person, so I decided to make a list of the negatives of the homework that is being assigned, and then the positives (if there even are any!). The points below refer mostly to the type of homework I mentioned: problems presented after a lesson to practice those new skills.
1. Differentiation: There is no varied level of ability to the homework. All students receive the same homework no matter what level of understanding they have that day. This can be detrimental to all levels of students encouraging some students to be bored and others to be defeated.
2. Amount: Students are being assigned "too much" homework or "too many" problems. Once a student understands a concept, repeated practice does not help them understand it any better than they already do. So students who struggle are not going to learn anything more than they did during the lesson that day, and students who understand it do not need to keep doing it for 10 or 15 or 20 problems, because they already have an understanding of what is being asked.
This quote from the journal below describes perfectly what goes wrong when students are assigned too many rote practice problems.
"The first gets frustrated and quits, the second gets bored and quits, and the third might get frustrated and bored by all the time it takes to get done or hastily complete the work with errors. Some may copy each other's work along the way, too."
These are not the emotions we want our students to go through when they are trying to practice or continue learning at home.
3. Parental Involvement: Parents are either too involved or not involved enough for their child to successfully complete it. There really is no winning. Parents, while I totally love them and value their importance, are not trained in education or the teaching of a child (in most instances). Most parents are used to the way they grew up and the way they were educated, and we are not trying to recreate the 70s, 80s, or even 90s... we are trying to revolutionize education and learning. So what sense would it make to spend the whole day putting a child in one mindset, only to send them home to ask for help from parents who have a completely different understanding and mindset about learning? This does more harm than good. Other parents may just want their child to succeed and think that good grades will mean that... while robbing their child of understanding, they rush their child into answering.
4. Home-School Relationship: Students are encouraged by parents, teachers, and friends to be involved in many "extra-curricular" activities. These activities are great for learning and self-esteem, and they teach children many aspects of appropriate social interaction. When we assign 30 minutes of homework for this subject and this many problems for that subject (which, from child to child, could mean anywhere from 10 minutes to 2 hours!), we rob children of these opportunities outside of academia.
One of the most interesting things that I've taken away from this class is our "homework". I've already made the connection between Jo Boaler's book, this website, and the class when it comes to the reflection piece of learning, like how in class we make a quick reflection on every discussion topic to wrap up any thoughts or take-aways. The blog acts in a similar fashion: it's a reflection on something we explored in class or in our "homework" time, taken a step further. While I have honestly found the blogging a little painful (which I really normally love), I think I was definitely trying to actively stretch my thinking about math. I'm also a little hard on myself by not thinking my work is good enough... can't pinpoint if that is my fixed mindset admitting defeat or my growth mindset always wanting to do more.
## Sunday, May 29, 2016
### ...My {Math} Life Is A Lie (2)
I can't even believe how great this book is (and I'm a math-y... not a read-y): Mathematical Mindsets. I have easily spent over two hours of outside-class time reading, highlighting, rereading, note-taking, and side-researching everything the book talks about... and I haven't even gotten to the good stuff: all of the "which I write about in Chapter yada yada" teasers she has dropped so far.
Why has my whole life been a lie, you wonder? Well, that is because "math education" or math in schools today (and in my day) is not actually what math is! Timed tests, repetitive procedures, rules and memorization, all of it... We've been forced to think that that is what math is, and it's not! I'm here to tell you [or at least pass on from Jo] that we need to take back math. As math teachers, elementary teachers, mathematicians... all of us. We need to put the conceptual process back into math so kids, students, adults, anybody can explore math and realize that it is about more than just answering as fast as you can and using rote procedures.
Math is patterns! I feel like most (if not all) math can relate back to patterns. The book talks about how babies and infants (which I think are the same thing... no?) are obsessed with math and patterns. I didn't even think about this! So we just shove math at kids from the time they are born all the way through preschool and they love it. They just manipulate, create, build, learn and conceptualize and then as soon as they get to kindergarten we say "Math is a test and you have to be the fastest or you suck at math." Math trauma anyone? Math anxiety anyone?
I don't know about you, but the school I taught at had a SCHOOL WIDE GOAL for timed tests. Hearing that the first time made my heart hurt. But now, with Jo B in my life, I literally want to die thinking about it. #stopthetimedtests #mathisnotarace Another thing I heard all day long was "[the students] HAVE to know their facts first or they won't understand this"... the lie detector determined that that is ALSO a lie, Jerry. I know how you're feeling right now; breathe through it. Jo B herself did not memorize any facts growing up, and she went on to become a famous mathematician and author. This is crazy, right?
So HOW do we teach math facts then? HOW do we teach math if we can't use the same rote methods that really haven't been working for years? Number sense - of course!
What is number sense, you might ask? Teaching kids to memorize numbers, right? Wrong-o. Teaching kids to be flexible with numbers, to PLAY with them, that's right I said PLAY with numbers... while learning. It can happen, people. This website tells us that the five components of number sense are: number meaning, number relationships, number magnitude, operations involving numbers, and referents for numbers and quantities. Basically, what you need to know about number sense is that it's the foundation for math and it helps kids make sense of math.
I bet if you talk to someone who specializes in ELA, or who has seen a dynamic reading program, they will tell you all about the fundamentals of phonics and how phonemic awareness has revolutionized learning to read. Well, guess what? Number sense will do that same thing for math, and your kids will LOVE it. What it comes down to is that kids need to be able to learn numbers without structure, without rules, and without stress!
The other step towards freeing students of math anxiety by doing the REAL math is to do so much modeling our faces fall off. And I don't mean stand up there and talk at the kids and show them method after method after method. I mean put that pencil in their hand, that array on their desk, those chips or coins or blocks or whatever in front of them and let them do it THEIR way. Then, you have 26 little modelers in the room who will completely engage their classmates by showing their thinking, their process, and their creativity. Exploration is the answer, what was the question?
## Sunday, May 22, 2016
### Math Toys (1)
So since I've been introduced to this Math Toybox... I've been addicted to these Pattern Blocks. I started a tessellation in class and spent a good chunk of time continuing it. The problem is, I will be honest, I think at some point I didn't copy it correctly, so it's really (maybe) not a true tessellation... very sadly. But overall, it looks totally awesome! In addition to being very addicting, time-consuming, and fun... this could also be an awesome tool to use in the classroom! (Everything I do gets connected back to the classroom somehow :-P ) One of the things that I love most about this website is that it defies the usual teacher complaints about manipulatives.
"They take too long to get out" - This takes about 33.4 seconds to load on a classroom iPad/computer/laptop/chromebook/smartboard/whatever!
"The kids play with them and don't do what they're supposed to" mixed with "They're too distracting" - This is on the interweb so you can easily have them "put it away" or out of sight and pull it right back up!
Manipulatives are SO beneficial to math. Don't believe me? Well I did a little research...
Using manipulatives in mathematics allows children to create their own models and commit them to memory. If we think about the popular "gradual release model", the teacher first models a new skill. After that, the teacher walks through the skill, supporting students and asking them to support her in turn. Then, the students explore and practice the skill on their own. Just like the practicing of a skill, students must also practice new concepts, and many students learn best by using their hands to discover and manipulate. In today's instantaneous environment, it is even more important that students be required to explore the process before getting to the solution and a "rule" for solving a problem.
This ties in with the book I'm reading, "Mathematical Mindsets." So far, Jo (Boaler, the author) has talked in depth about the negative attitude toward mathematics that so many people have, including the influences put on students in and outside the classroom. Where I'm going with this (and Jo) is that overcoming the instantaneous environment society has created for today's student ties in with Jo's focus on building a growth mindset specifically in mathematics. Empowering students to learn their own process is key to encouraging a growth mindset, and also to encouraging a love and understanding of math!
## Tuesday, May 10, 2016
### What even IS math? (0)
What a fabulous question.
At first, my answer is what I knew math to be growing up, which is what I consider a system of rules, equations, formulas, etc. I always loved my math, and my brain loved math (even though I'm a girl!) I loved math every single day until it became conceptual and applied, which is how we teach math today. When I was growing up it was about "knowing your facts" and memorizing formulas, or using a "formula sheet" and just plugging in numbers.
Math is SO. much. MORE! A classmate described doing math as wearing a tool belt. When you're working on a project (or a problem), you use different tools for different things. The tools are there to make the project easier or faster. Sometimes you have to try different tools, if you are inexperienced with the project, until you find the right one that works. Sometimes you have a new project and you have to learn what tools to use for it. This is how math works.
Math also explains the things that are going on around us. Like when I go to the grocery store and I need to know how much food I can buy. Well, first I have to know how much money I have. To figure out how much money I have, I have to know how many hours I worked and how much money I made for each hour I worked, or the total amount of money I made. Doesn't make sense? Okay, well I want to play outside with my friends after school. But my mom says I need to do chores for 45 minutes, then I need to do my homework for 20 minutes, and I also have to read for 30 minutes. And if I want to play with my friends the whole time then I won't be able to watch my normal 30 minutes of TV, but if I want to do both then I'll have to factor that in too. So how long can I play outside and still make sure I get all my things done... well, let's do some math! Now you're getting it!
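That little scheduling problem is simple enough to check with a few lines of code. Here's a minimal sketch in Python; the 3-hour after-school window is my own made-up assumption, while the chore, homework, reading, and TV minutes come straight from the example above.

```python
# Hypothetical 3-hour window between getting home and dinner (my assumption).
available = 3 * 60

# Minutes mom requires, straight from the example.
chores, homework, reading, tv = 45, 20, 30, 30

# Play time if TV is skipped entirely.
play_without_tv = available - (chores + homework + reading)

# Play time if the usual 30 minutes of TV stays in the plan.
play_with_tv = play_without_tv - tv

print(play_without_tv)  # 85 minutes outside
print(play_with_tv)     # 55 minutes outside
```

Either way, the answer only drops out once every commitment is accounted for, which is exactly the point of the word problem.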
Math can be proven... and it has been. That's why when we are working with problems today, we get to utilize our tool belt! Because there is proof in the proofs :)
# Extracts from Alice Lee's papers
Alice Lee wrote quite a few papers, some as the single author but most in collaboration. Many of the papers appear to be her work, or the work of Lee and another assistant of Karl Pearson, but the papers themselves were actually written by Pearson. This results in the use of "I", referring to Pearson, in papers with several authors. Let us note here that Alice Lee did outstanding work in arguing, with much well-presented statistics, that women are intellectually equal to men. She did not, however, oppose Karl Pearson's racial views, and many of the papers refer to civilised races and uncivilised races, terms quite standard for the time but which make uncomfortable reading today. She also seems to have accepted Pearson's views on eugenics, which are totally unacceptable to the majority of people today.
Click on a link below to go to the papers published in that year
1896 1897 1898 1899 1900 1901 1902 1903 1904
1905 1907 1908 1910 1914 1915 1917 1925 1927
1896
1896.1: Karl Pearson and Alice Lee, Mathematical Contributions to the Theory of Evolution. On Telegony in Man, &c., Proceedings of the Royal Society of London 60 (1896-1897), 273-283.
The term telegony has been used to cover cases in which a female A, after mating with a male B, bears to a male C offspring having some resemblance to or some peculiar characteristic of A's first mate B. The instances of telegony usually cited are (i) cases of thoroughbred bitches when covered by a thoroughbred dog, reverting in their litter to half-breds, when they have been previously crossed by dogs of other races. Whether absolutely unimpeachable instances of this can be produced is, perhaps, open to question, but the strong opinion on the subject among dog-fanciers is at least remarkable; (ii) the case of the quagga noted by Darwin (see 'Origin of Species'), and still more recently (iii) a noteworthy case of telegony in man cited in the 'British Medical Journal' (22 February 1896).
In this latter case a very rare male malformation, which occurred in the male B, was found in the son of his widow A, by a second husband C. Here, as in the other cases cited, a question may always be raised as to the possibly unobserved or unknown occurrence of the characteristic in the ancestry of either A or C, or again as to the chance of the characteristic arising as a congenital sport, quite independently of any heredity. It seems unlikely that the observation of rare and isolated cases of asserted telegony will lead to any very satisfactory conclusions, although a well-directed series of experiments might undoubtedly do so. On the other hand, it is not impossible that an extensive and careful system of family measurements might bring to light something of the nature of a telegonic influence in mankind.
1897
1897.1: Alice Lee and Karl Pearson, Mathematical Contributions to the Theory of Evolution. On the Relative Variation and Correlation in Civilised and Uncivilised Races, Proceedings of the Royal Society of London 61 (1897), 343-357.
The following numerical data were calculated in the hope of reaching some general ideas on the comparative variation and comparative correlation in the case of civilised and uncivilised races, and further of determining, if possible, any general law connecting relative sexual variation and relative sexual correlation with the degree of civilisation, and so with what is probably inversely proportional to the degree of civilisation, namely, the intensity of natural selection.
The following two principles seem to flow from a study of variation in the organs of man:-
(a) Civilised man is more variable than uncivilised man.
(b) There is a greater equality of variation for the two sexes in uncivilised than in civilised races. Civilised woman appears, on the whole, to be slightly more variable than civilised man.
Both these principles are in accordance with the intensity of the struggle for existence - and the amount, consequently, of natural selection - being greater for uncivilised than for civilised races, and, further, greater for men than for women in the latter races.
The problem of correlation is, however, of a less simple character. While the action of selection can be shown theoretically to reduce variation, it by no means follows that it reduces correlation. Indeed, selection may increase, decrease, or reverse correlation at the very same time as it is reducing variation. We have then the following problems to guide us in our treatment of actual statistics:-
(a) Is correlation more intense among civilised than among uncivilised races?
(b) How does the relative correlation of the sexes differ in civilised and uncivilised races?
(c) Is there any marked prepotency of either sex in the matter of correlation?
These are the problems which the present calculations were designed, not to definitely solve, but to illustrate.
1897.2: Karl Pearson and Alice Lee, On the Distribution of Frequency (Variation and Correlation) of the Barometric Height at Diverse Stations [Abstract], Proceedings of the Royal Society of London 61 (1897), 491-493.
Although this paper contains the results of a very large amount of arithmetical work, which has been in progress during the last two or three years, it is not intended in the first place as a contribution to the meteorology of the British Isles. It is especially intended as an illustration of method. The authors believe that hitherto no exact theory of variation or of correlation has been applied to meteorological observations, and they have endeavoured to indicate that fruitful results may be obtained from such a theory when applied to one branch at least of meteorology, namely, barometric frequency. They wished to deal with a fairly extended area with an easily accessible material, and this was found in the Meteorological Observations at Stations of the Second Order for the British Isles. The "telegraph" stations would have provided better material, but it was far less accessible. The authors have accordingly only dealt with three telegraph stations. The main body of their data was drawn from twenty stations of the second order, four of which are in Ireland, and the remainder distributed round the coast of England, Wales, and Scotland, as indicated on a chart accompanying the memoir.
...
The writers hope that their paper may draw attention to the importance of rendering the large amount of barometric observations now made, available for the easy calculation of the variation and correlation coefficients. They consider that if a chain of stations round a large continental area could have their correlation for a series of intervals of time worked out, much might be done in the way of very close prediction of barometric changes.
1897.3: Karl Pearson, Alice Lee and G U Yule, On the Distribution of Frequency (Variation and Correlation) of the Barometric Height at Divers Stations, Philosophical Transactions of the Royal Society of London. Series A 190 (1897), 423-469.
In a memoir published in the 'Phil. Trans.', a series of generalised frequency curves are introduced, and it is shown that the asymmetry of the barometric frequency curve can probably be dealt with by one or other of these generalised curves. The importance of this conclusion lies in the fact that the distribution of barometric frequency in any locality can then be fully described by the statement of the values of three or four well defined constants.
Accordingly, in order to test this theory of barometric frequency, series of observations have been reduced and the frequency distributions for divers localities fitted with generalised probability curves. On the basis of these curves the attempt has been made to answer the following questions:
(a) Is there any one type of curve especially characteristic of barometric frequency?
(b) If so, what are the constants by which the distribution of this frequency can best be described?
(c) Does there appear to be any numerical or geographical relation between these constants? and
(d) Does a knowledge of their values for a variety of localities enable us to make any statement with regard to the physics of atmospheric pressure?
1897.4: Alice Lee and Karl Pearson, On the Relative Variation and Correlation in Civilized and Uncivilized Races (Conclusion of a communication made to the Royal Society), Science, New Series 6 (132) (1897), 49-50.
The general conclusion would then be that, with increased civilisation, absolute size and variation tend to increase, while correlation, to judge by the males, is stationary; to judge by the females, tends to increase. It will be found somewhat difficult to reconcile these results with any simple applications of the principle of natural selection. In the first place increased variation undoubtedly suggests a lessening of the struggle for existence, and there can be no question that this increase has gone on among civilised races (See 'Variation in Man and Woman'). The lessening of the struggle has probably been greater for woman than man; hence the principle of natural selection might help to explain the preponderance of variability in civilised woman. The increase in size with civilisation seems, on the average, also incontestable. But is it the effect of lessening the struggle for existence?
1898
1898.1: Karl Pearson, Alice Lee and Leslie Bramley-Moore, Mathematical Contributions to the Theory of Evolution. VI. Reproductive or Genetic Selection. Part I. Theoretical. and Part II. On the Inheritance of Fertility in Man. And Part III. On the Inheritance of Fecundity in Thoroughbred Race-Horses [Abstract], Proceedings of the Royal Society of London 64 (1898-1899), 163-167.
The object of this memoir is twofold: first, to develop the theory of reproductive or genetic selection on the assumption that fertility and fecundity may be heritable characters; and, secondly, to demonstrate from two concrete examples that fertility and fecundity actually are inherited.
The problem of whether fertility is or is not inherited is one of very far reaching consequences. It stands on an entirely different footing to the question of inheritance of other characters. That any other organ or character is inherited, provided that inheritance is not stronger for one value of the organ or character than another, is perfectly consistent with the organic stability of a community of individuals. That fertility should be inherited is not consistent with the stability of such a community, unless there be a differential death-rate, more intense for the offspring of the more fertile, i.e., unless natural selection or other factor of evolution holds reproductive selection in check. The inheritance of fertility and the correlation of fertility with other characters are principles momentous in their results for our conceptions of evolution; they mark a continual tendency in a race to progress in a definite direction, unless equilibrium be maintained by any other equipollent factors, exhibited in the form of a differential death-rate on the most fertile. Such a differential death-rate probably exists in wild life, at any rate until the environment changes and the equilibrium between natural and reproductive selection is upset. How far it exists in civilised communities of mankind is another and more difficult problem, which I have partially dealt with elsewhere. At any rate it becomes necessary for the biologist either to affirm or deny the two principles stated above. If he affirms them, then he must look upon all races as tending to progress in definite directions - not necessarily one, but possibly several different directions, according to the characters with which fertility may be correlated - the moment natural selection is suspended; the organism carries in itself, in virtue of the laws of inheritance and the correlation of its characters, a tendency to progressive change. 
If, on the other hand, the biologist denies these principles, then he must be prepared to meet the weight of evidence in favour of the inheritance of fertility and fecundity contained in Parts II and III of the present memoir.
1899
1899.1: Karl Pearson and Alice Lee, On the Vibrations in the Field Round a Theoretical Hertzian Oscillator [Abstract], Proceedings of the Royal Society of London 64 (1898-1899), 246-248.
The object of this paper is to investigate the types of wave motion in the neighbourhood of a theoretical Hertzian oscillator. ... The theory given [in current textbooks of electromagnetism], is insufficient for two reasons, both of which were recognised by Hertz himself, namely, because (i) the actual oscillator has sensible extension, and (ii) the wave train it gives forth is not steady.
The present paper only attempts to remove the latter objection to Hertz's original theory; like that theory it becomes less accurate as we approach nearer to an actual oscillator. Still the range within which the damping produces a very sensible divergence from Hertz's theory, seems sufficiently large to allow of experiment being made at a considerable distance from the oscillator; certainly the chief divergences between the present and Hertz's original theory actually fall in the portion of the field, wherein his chief interference experiments were made. Besides therefore the difficulties arising from the phenomena of "multiple resonance," it seems necessary to measure the influence of damping in modifying the mathematical results for a steady wave train, which results are what Hertz made use of in interpreting his interference experiments. The four sources of divergence between theory and experiment in Hertz's case, i.e.:
(i) the damping of the wave train,
(ii) the size of the oscillator,
(iii) multiple resonance,
(iv) defect of electro-magnetic theory,
may one or all be effective, but the object of the present paper is confined entirely to a theoretical investigation of the first.
1899.2: Karl Pearson, Alice Lee and Leslie Bramley-Moore, Mathematical Contributions to the Theory of Evolution. VI. Genetic (Reproductive) Selection: Inheritance of Fertility in Man, and of Fecundity in Thoroughbred Racehorses, Philosophical Transactions of the Royal Society of London. Series A 192 (1899), 257-330.
I understand by a factor of evolution any source of progressive change in the constants-mean values, variabilities, correlations - which suffice to define an organ or character, or the interrelations of a group of organs or characters, at any stage in any form of life. To demonstrate the existence of such a factor we require to show more than the plausibility of its effectiveness, we need that a numerical measure of the changes in the organic constants shall be obtained from actual statistical data. These data must be of sufficient extent to render the numerical determinations large as compared with their probable errors.
In a "Note on Reproductive Selection," published in the 'Roy. Soc. Proc.', I have pointed out that if fertility be inherited or if it be correlated with any inherited character - those who are thoroughly conversant with the theory of correlation will recognise that these two things are not the same - then we have a source of progressive change, a vera causa of evolution. I then termed this factor of evolution Reproductive Selection. As the term has been objected to, I have adopted Genetic Selection as an alternative. I mean by this term the influence of different grades of reproductivity in producing change in the predominant type.
If there be two organs A and B both correlated with fertility, but not necessarily correlated with each other, then genetic or reproductive selection may ultimately cause the predominance in the population of two groups, in which the organs A and B are widely different from their primitive types - 'widely different,' because reproductive selection is a source of progressive change. Thus this form of selection can be a source, not only of change, but of differential change. As this differentiation is progressive, it may amount in time to that degree of divergence at which crossing between the two groups begins to be difficult or distasteful. We then reach in genetic or reproductive selection a source of the origin of species.
When I assert that genetic (reproductive) selection is a factor of evolution, I do not intend at present to dogmatise as to the amount it is playing or has played in evolution. I intend to isolate it so far as possible from all other factors, and then measure its intensity numerically.
1899.3: Karl Pearson and Alice Lee, Mathematical Contributions to the Theory of Evolution. VII. - On the Application of Certain Formulae in the Theory of Correlation to the Inheritance of Characters Not Capable of Quantitative Measurement [Abstract], Proceedings of the Royal Society of London 66 (1899-1900), 324-327.
Many characters are such that it is very difficult if not impossible to form either a discrete or a continuous numerical scale of their intensity. Such, for example, are skin, coat, or eye-colour in animals, or colour in flowers. In other cases as in the amount of shading, degree of hairiness, &c., it might be possible by counting scales or hairs to obtain a numerical estimate of the character, but the labour in the case of several hundreds or a thousand individuals becomes appalling. Now these characters are some of those which are commonest, and of which it is generally possible for the eye at once to form an appreciation. A horse-breeder will classify a horse as brown, bay, or chestnut; a mother classify her child's eyes as blue, grey, or brown without hesitation and within certain broad limits correctly. It is clear that if the theory of correlation can be extended so as to readily apply to such cases, we shall have much widened the field within which we can make numerical investigations into the intensity of heredity, as well as much lessened the labour of collecting data and forming records.
The extension of theory required for such investigations is provided in a separate memoir. It is found that the sole conditions for applying this theory are: (1) that an order of intensity must exist even if there be no quantitative scale; (2) that the correlation must be supposed normal. If these assumptions are made, individuals may even be classified into only two groups of less and greater intensity, and the correlation still found. For example, the correlation between stature and hair-colour could be found by classifying all individuals simply into short and tall, light and dark haired, although for convenience of judgment a medium class in each case might be introduced. For the purpose of ascertaining the relative variability of the characters involved, this third or medium class at least must be introduced and a nine-fold division made of the correlation table. In the introduction to the present memoir the probable errors of all the quantities involved are considered, and illustrations given of their values for selected cases.
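The method this abstract describes survives today as the tetrachoric correlation: treat the two ordered characters as cut points on an underlying bivariate normal distribution, then find the correlation whose implied cell probability matches the observed 2x2 table. As a rough modern illustration (my own sketch, not code or notation from the paper), the estimate can be computed with nothing more than the error function and bisection searches:

```python
import math

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Inverse standard normal CDF by bisection (plenty for a sketch)."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def binorm_cdf(h, k, rho, step=0.005):
    """P(X < h, Y < k) for standard bivariate normals with correlation rho,
    by numerically integrating phi(x) * Phi((k - rho*x)/sqrt(1 - rho^2))."""
    s = math.sqrt(1.0 - rho * rho)
    total, x = 0.0, -8.0
    while x < h:
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        total += phi * norm_cdf((k - rho * x) / s) * step
        x += step
    return total

def tetrachoric(a, b, c, d):
    """Estimate the latent correlation from a 2x2 table, where
    a = both characters 'low', b = row low / column high,
    c = row high / column low, d = both characters 'high'."""
    n = a + b + c + d
    h = norm_ppf((a + b) / n)   # threshold cutting the row character
    k = norm_ppf((a + c) / n)   # threshold cutting the column character
    target = a / n              # observed 'both low' proportion
    lo, hi = -0.99, 0.99
    for _ in range(60):         # binorm_cdf is increasing in rho
        mid = 0.5 * (lo + hi)
        if binorm_cdf(h, k, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, a table of 100 individuals split (40, 10, 10, 40) by two two-way classifications yields an estimate near 0.81, while a perfectly balanced table yields an estimate near zero, much as the abstract's "short and tall, light and dark haired" example would suggest.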
1900
1900.1: Karl Pearson and Alice Lee, Mathematical Contributions to the Theory of Evolution. VIII. On the Inheritance of Characters not Capable of Exact Quantitative Measurement. Part I. Introductory. Part II. On the Inheritance of Coat-Colour in Horses. Part III. On the Inheritance of Eye-Colour in Man, Philosophical Transactions of the Royal Society of London. Series A 195 (1900), 79-150.
A certain number of characters in living forms are capable of easy observation, and thus are in themselves suitable for observation, but they do not admit of an exact quantitative measurement, or only admit of this with very great labour. The object of the present paper is to illustrate a method by which the correlation of such characters may be effectively dealt with in a considerable number of cases. The conditions requisite are the following:
(i) The characters should admit of a quantitative order, although it may be impossible to give a numerical value to the character in any individual.
Thus it is impossible at present to give a quantitative value to a brown, a bay, or a roan horse, but it is not impossible to put them in order of relative darkness of shade. Or, again we see that a blue eye is lighter than a hazel one, although we cannot a priori determine their relative positions numerically on a quantitative scale. Even in the markings on the wings of butterflies or moths, where it might be indefinitely laborious to count the scales, some half dozen or dozen specimens may be taken to fix a quantitative order, and all other specimens may be grouped by inspection in the intervals so determined. We can even go a stage further and group men or beasts into simply two categories - light and dark, tall and short, dolichocephalic and brachycephalic - and so we might ascertain by the method adopted whether there is, for example, correlation between complexion and stature, or stature and cephalic index.
(ii) We assume that the characters are a function of some variable, which, if we could determine a quantitative scale, would give a distribution obeying - at any rate to a first approximation - the normal law of frequency.
The whole of the theoretical investigations are given in a separate memoir, in which the method applied is illustrated by numerical examples taken from inheritance of eye-colour in man, of coat-colour in horses and dogs, and from other fields. We shall not therefore in this paper consider the processes involved, but we may make one or two remarks on the justification for their use. If we take a problem like that of coat-colour in horses, it is by no means difficult to construct an order of intensity of shade. The variable on which it depends may be the amount of a certain pigment in the hair, or the relative amounts of two pigments. Much the same applies to eye-colour. In both cases we may fail to obtain a true quantitative scale, but we may reasonably argue that, if we could find the quantity of pigment, we should be able to form a continuous curve of frequency. We make the assumption that this curve - to at any rate a first approximation - is a normal curve. Now if we take any line parallel to the axis of frequency and dividing the curve, we divide the total frequency into two classes, which, so long as there is a quantitative order of tint or colour, will have their relative frequency unchanged, however we, in our ignorance of the fundamental variable, distort its scale. For example, if we classify horses into bay and darker, chestnut and lighter, we have a division which is quite independent of the quantitative range we may give to black, brown, bay, chestnut, roan, grey, &c.
1900.2: Karl Pearson and Alice Lee, On the Vibrations in the Field Round a Theoretical Hertzian Oscillator, Philosophical Transactions of the Royal Society of London. Series A 193 (1900), 159-188.
Although Hertz realised very fully that his oscillator did not give "perfectly regular and long continued sine-oscillations," and although Bjerknes determined so long ago as 1891 the general form of the damping, it does not appear that Hertz's original investigation of the nature of the vibrations in the field round one of his oscillators has hitherto been modified. Indeed, his diagrams of the wave motion have been copied into more than one textbook, and have usually been taken to represent what actually goes on in the surrounding field. Actually not only the diagrams, but a good deal of Hertz's original theory of interference requires modification, if we are to obtain quantitative accordance between theory and experiment. The object of the present paper is to give a fuller theory of the nature of the vibrations in the field round a typical Hertzian oscillator.
...
Conclusions. - We may draw the following general conclusions. - (i) The effect of damping makes itself very sensible in modifying the form of the wave-surface as propagated into space from a theoretical oscillator. The typical Hertzian wave-diagrams require to be replaced by the fuller series accompanying this memoir. (ii) Three waves of electro-magnetic force may be considered as sent out from the oscillator, and these waves we believe capable of physical identification. ... (iii) The velocities of these waves undergo remarkable changes in the neighbourhood of the oscillator, but still at distances such as Hertz experimented at, and which seem indeed to some extent within the field of possible physical investigation. (iv) The point of zero phase for both transverse and axial component electric waves does not coincide with the centre of the oscillator, so that these waves appear to start from a sphere of small but finite radius round the oscillator. A fourth wave dealt with by Hertz, the wave of magnetic induction, does not, as he supposes, start from the centre of the oscillator with zero phase, but in the case of a damped wave train with a small but finite phase. (v) Our analysis of these waves and of their singular points in the neighbourhood of the oscillator appears to add something to Hertz's discussion; it is possible that it may throw light on the difficulties which arise in connection with some of his interference experiments.
1900.3: Alice Lee and Karl Pearson, Data for the Problem of Evolution in Man. VI. - A First Study of the Correlation of the Human Skull [Abstract], Proceedings of the Royal Society of London 67 (1900), 333-337.
The substance of this paper was a thesis for the London D.Sc. degree; it was shown to Professor Pearson, at whose suggestion considerable modifications were made, and a revision undertaken with a view to publication.
In order to deal exactly with the problem of evolution in man it is necessary to obtain in the first place a quantitative appreciation of the size, variation, and correlation of the chief characters in man for a number of local races. Several studies of this kind have been already undertaken at University College. These fall into two classes, (i) those that deal with a variety of characters in one local race, and (ii) those which study the comparative value of the constants from a variety of races.
...
In the last place we turn to the third problem: the reconstruction of the capacity of the living head. The memoir contains tables of the skull capacity of some sixty men, and also of some thirty women, whose relative intellectual ability can be more or less roughly appreciated. It would be impossible to assert any marked degree of correlation between the skull capacities of these individuals and the current appreciation of their intellectual capacities. One of the most distinguished of Continental anthropologists has less skull capacity than 50 per cent of the women students of Bedford College; one of our leading English anatomists than 25 per cent of the same students. There will, of course, be errors in our probable determinations, but different methods of appreciation lead to sensibly like results, and although we are dealing with skull capacity, and not brain weight, there is, we hold, in our data material enough to cause those to pause who associate relative brain weight either in the individual or the sex with relative intellectual power. The correlation, if it exists, can hardly be large, and the true source of intellectual ability will, we are convinced, have to be sought elsewhere, in the complexity of the convolutions, in the variety and efficiency of the commissures, rather than in mere size or weight.
1901
1901.1: Alice Lee and Karl Pearson, Data for the Problem of Evolution in Man. VI. A First Study of the Correlation of the Human Skull, Philosophical Transactions of the Royal Society of London. Series A 196 (1901), 225-264.
Note.
The substance of this paper was presented by Miss Lee as a thesis for the London D.Sc. in March, 1899. After its presentation Miss Lee asked me to criticise and revise it with a view to publication. Illness in the spring of 1899 and later pressure of other work prevented my completing this revision until now. When Miss Lee started her work practically nothing had been published on the correlation of the parts of the skull; since then an interesting paper has appeared by Dr Franz Boas. To this reference is made in the footnotes at points where there is agreement or disagreement with his conclusions. The subject is of such great scientific interest, and anthropologically of such importance, that I urged Miss Lee to somewhat enlarge her original thesis by a series of additional investigations now incorporated in this paper. I have further rearranged a good deal of her material and reworded some of her conclusions, but the reduction of the material and the inferences drawn from it are substantially her work. My task has been that of an editor, who wished to mould the author's researches into a component part of a wider series dealing generally with the quantitative data for the problem of evolution in man. Such is the limit of my revision. I have passed of course nothing which did not seem to me valid, and have suggested to the author some lacunae which could be filled up by a consideration of additional data. - Karl Pearson
Introduction.
The reconstruction of an organism from a knowledge of some only of its parts is a problem which has occupied the attention of biologists for many years past. Cuvier was the first to introduce in his 'Discours sur les Révolutions de la Surface du Globe,' 1812, the idea of correlation. He considered that a knowledge of the size of a shoulder blade, leg, or arm might make it possible to reconstruct the whole individual to which the bone had belonged. The conception was taken up by Owen, but has fallen into discredit owing to the many errors made in attempts from a wide but only qualitative knowledge of the skeleton, to reconstruct forms the appreciation of which depends really on quantitative measurement and an elaborate quantitative theory. Such a theory having now been developed, and anatomists having provided large series of measurements, it has become possible to reconsider the problem on a sounder basis, and to determine more closely the limits under which our modern methods may be safely applied. The three fundamental problems of the subject are
(i) The reconstruction of an individual, of whom one or more organs only are known, when a series of organs for individuals of the same local race have been measured and correlated. ...
(ii) The reconstruction of the mean type of a local race from a knowledge of a series of one or more organs in that race, when a wide series of these and other organs have been measured in other races. ...
(iii) The reconstruction of an organ in the living individual not measurable during life, from a determination of the size of accessible organs, and a knowledge of the correlation between these organs and the inaccessible organ obtained from measurements made on individuals of the same race after death.
1901.2: Karl Pearson, Alice Lee, Ernest Warren, Agnes Fry and Cicely D Fawcett, Mathematical Contributions to the Theory of Evolution. IX. - On the Principle of Homotyposis and Its Relation to Heredity, to the Variability of the Individual, and to That of the Race. Part I - Homotyposis in the Vegetable Kingdom [Abstract], Proceedings of the Royal Society of London 68 (1901), 1-5.
If we take two offspring from the same parental pair, we find a certain diversity and a certain degree of resemblance. In the theory of heredity we speak of the degree of resemblance as the fraternal correlation, while the intensity of the diversity is measured by the standard deviation of the array of offspring due to given parents. Both correlation and standard deviation are determined for any given character or organ by perfectly definite well-known statistical methods. Passing from the case of bi-parental to asexual reproduction, we may still determine the correlation and variability of the offspring. This ultimately leads us to the measurement of the diversity and likeness of the products of pure budding, or, going still one stage further, we look, not to the reproduction of new individuals, but to the production of any series of like organs by an individual. Accordingly one reaches the following problem:- If an individual produces a number of like organs, which so far as we can ascertain are not differentiated, what are the degrees of diversity and of likeness among them? Such organs may be blood-corpuscles, hairs, scales, spermatozoa, ova, buds, leaves, flowers, seed-vessels, &c., &c. Such organs I term homotypes when there is no trace to be found between one and another of differentiation in function. The problem which then arises is this:- Is there a greater degree of resemblance between homotypes from the same individual than between homotypes from separate individuals? If fifty leaves are gathered at random from the same tree and from twenty-five different trees, shall we be able to determine from an examination of them what has been their probable source? Are homotypes from the individual only, a random sampling, as it were, of the homotypes of the race?
By the examination of very few series from the animal and vegetable kingdoms I soon reached the result, that homotypes, like brothers, have a certain degree of resemblance and a certain degree of diversity; that undifferentiated like organs, when produced by the same individual, are, like types cast from the same mould, more alike than those cast by another mould, but yet not absolutely identical. I term this principle of the likeness and diversity of homotypes homotyposis. It soon became clear to me that this principle of homotyposis is very fundamental in nature. It must in some manner be the source of heredity. It does not, of course, "explain" heredity, but it shows heredity as a phase of a much wider process - the production by the individual of a series of undifferentiated like organs with a certain degree of likeness. My first few series seemed to show that the homotyposis of the vegetable and animal kingdoms had approximately the same value, and it occurred to me that we had here the foundation of a very widespread natural law.
1901.3: Karl Pearson, Alice Lee, Ernest Warren, Agnes Fry and Cicely D Fawcett, Mathematical Contributions to the Theory of Evolution. IX. - On the Principle of Homotyposis and Its Relation to Heredity, to the Variability of the Individual, and to That of the Race. Part I - Homotyposis in the Vegetable Kingdom, Philosophical Transactions of the Royal Society of London. Series A 197 (1901), 287-299.
The present paper endeavours to deal with a problem upon which I have long been occupied, adopting the widest basis compatible with the time and means at my disposal. In the first place, I have often been impressed with the small reduction in variability which can be produced by selection. The offspring of a single parent while diverging in character, possibly very widely from the average character of the race, will still have a variability in that character only slightly reduced, say at most 10 per cent, below the racial variability. Even if we select the ancestry for an indefinite number of generations, the offspring will have a variability upwards of 89 per cent of that of the original race. Now this capacity in the parent for producing variable offspring must be in some manner related to the degree of resemblance in those offspring. We have thus the two fundamental divisions of our subject: (i) What is the ratio of individual to racial variability? (ii) How is the variability in the individual related to inheritance within the race? I must endeavour to explain my meaning a little more fully and clearly. The individual puts forth a number of like organs, corpuscles in the blood, petals of the flower, leaves of the trees, scales on the wing. These may or may not be divided up into differentiated groups. Special forms of leaves occur in the neighbourhood of the fruit; florets may be differentiated according to their position on the flower, scales according to their position on the wing; there may be two or more classes of blood-corpuscles.
1902
1902.1: Alice Lee, Marie A Lewenz and Karl Pearson, On the Correlation of the Mental and Physical Characters in Man. Part II, Proceedings of the Royal Society of London 71 (1902-1903), 106-114.
In a first paper on this subject we gave a brief account of our material - Miss Beeton's copies of the Cambridge anthropometric measurements with degrees added at the University Registry, and the school measurements carried out by assistance from the Government Grant Committee. This material will take years to exhaust, but the present notice gives further conclusions to be drawn from Dr Lee's and Miss Lewenz's later reductions from this great mass of raw statistics.
In the first place we may refer to certain matters which arise directly from the first paper. In the discussion which followed the reading of that paper it was suggested that we ought not to correlate intelligence with absolute measurements on the head, but with their ratio to the size of the body. The answer made on that occasion was based on data not then published, namely, that there is no sensible correlation between intelligence and the absolute size of the body. Hence the correlation between intelligence and any ratio of body lengths must also be small. ...
...
Since our school measurements were started, MM Vaschide and Pelletier have published in the 'Comptes Rendus' a statement that although unable to find any relation between intelligence and length or breadth of head, they consider a relationship to hold between intelligence and the auricular height of head. Their process was of the following kind. They asked the school teacher to select ten intelligent and ten non-intelligent children, and then measured the heads of these two sets, and found their means. This was done for groups of three ages in boys and two ages in girls. The probable errors of the difference of the means of ten observations are not considered, and by exactly the same process by which they reason that the auricular height is greater for the more intelligent children they might have deduced from their statistics that intelligent girls of 11 years have lower heads than intelligent girls of 9 years, and non-intelligent boys of 11 years lower heads than the same class of 9 years! Frankly, we consider that the memoir is a good illustration of how little can be safely argued from meagre data and a defective statistical theory.
1902.2: Alice Lee, Dr Ludwig on Variation and Correlation in Plants, Biometrika 1 (3) (1902), 316-319.
A number of points arise from Dr Ludwig's paper in the October number of Biometrika which deserve to be considered from the standpoint of statistical theory. I have accordingly worked out the statistical constants of the material given by him ...
1902.3: Cicely D Fawcett and Alice Lee, A Second Study of the Variation and Correlation of the Human Skull, With Special Reference to the Naqada Crania, Biometrika 1 (4) (1902), 408-467.
The present investigation was commenced in 1895, but the long series of measurements involved and the elaborate numerical calculations necessary, have delayed the completion of the work until the present time. It forms part of a more general scheme for determining the size, variability and correlation of the chief organs and characters in man, which has been in progress at University College for some years past. When this scheme was started but little had been done to obtain a scientific measure of the variability and correlation of the parts of the human body. Innumerable anthropometric, including craniometric measurements, had been made and published but very little had been done in determining scientifically their statistical constants. In fact there was considerable danger that the want of proper statistical theory would bring the science of craniology into discredit with archaeologists. The manner in which variation is dealt with even in such a classical work as Rütimeyer and His's Crania Helvetica is astonishing to the statistician who has realised the nature of the distribution of any character in a homogeneous population. A considerable population can be measured and we can determine whether or no it is sensibly differentiated from a second statistically defined population. But to classify a few individuals into different races by means of two or three measurements, such as the cephalic index, the length, or the facial angle, - before the correlation and the variation of these characters have been determined for even a single race - is a very dangerous proceeding, and calculated to bring craniometry into discredit.
It was with a view accordingly of providing anthropologists with the needful constants for determining racial differences that the scheme spoken of was started. It consisted partly in the reduction of existing published measurements, and partly in the measurement of new and large series, where such were not already available. A fairly comprehensive series of determinations of variability in man were made by Dr Alice Lee, Mr G U Yule, and Professor K Pearson, and published by the latter in his Chances of Death and other Studies in Evolution, Vol. I. Further a considerable quantity of new material was collected and reduced in a series of papers entitled: Data for the Problem of Evolution in Man, published by the Royal Society in their Proceedings and Transactions.
1903
1903.1: Alice Lee, On the Relation Between Rates, Expenditure on Remunerative Works, and Rate of Increase of Population in Fifty-Eight British Municipalities, The Economic Journal 13 (51) (1903), 424-429.
The statistics on which the following results are based were taken from Burdett's Official Intelligence (now called the Stock Exchange Official Intelligence), Vol. VI, 1888, pp. 26-7, and the Stock Exchange Official Intelligence, Vol. XX, 1902, pp. cxxiv-cxxv. The statistics there published are of value, but no sound judgment can be based upon them until they are quantitatively reduced, and a calculation made of the actual coefficients of correlation between the quantities involved.
It is also of interest to ascertain how far the rate of increase of population affects the relationship between expenditure on remunerative works and the magnitude of the rates. The data for rate of increase of population are taken from the Census Report of the Registrar-General for 1901, pp. xii-xiii. The investigation is confined to towns of over 50,000 inhabitants.
The Official Intelligence may fairly be looked upon as having no individualistic or socialistic bias in the preparation of its material.
I premise here that the coefficient of correlation is a statistical constant marking the degree of relationship between two quantities, and that it ranges from a value = 0, when the two quantities are quite independent, to a value = 1, when we may suppose their variations to be absolutely dependent or causal. A partial coefficient of correlation measures the relationship between two variables, when a third variable remains constant, e.g., we express what would be the degree of correlation between rates and remunerative expenditure, supposing the population to remain constant. We are thus able to remove from consideration any disturbing influence of changes of population on the relationship between rates and remunerative loans.
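The partial correlation Lee describes - the relation between rates and remunerative expenditure with population held constant - can be illustrated with a short modern sketch. The data here are synthetic and the variable names hypothetical; only the statistical idea is from the paper:

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y with z held constant."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Synthetic illustration: two quantities that are correlated only through
# a shared dependence on a third (here, population growth).
rng = np.random.default_rng(1)
growth = rng.normal(size=500)
rates = growth + 0.1 * rng.normal(size=500)
expenditure = growth + 0.1 * rng.normal(size=500)

raw = np.corrcoef(rates, expenditure)[0, 1]         # large raw correlation
partial = partial_corr(rates, expenditure, growth)  # small once growth is held constant
```

The raw coefficient is near 1, while the partial coefficient is near 0: exactly the "disturbing influence" of population change that the partial coefficient is designed to remove.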
1903.2: Karl Pearson and Alice Lee, On the Laws of Inheritance in Man: I. Inheritance of Physical Characters, Biometrika 2 (4) (1903), 357-462.
About eight years ago I determined to supplement the data obtained by Mr Francis Galton for his work Natural Inheritance by a rather wider series of measurements on blood relations in man. Mr Galton had most generously placed his original data at my disposal and I had used them as far as stature was concerned in my memoir of 1895 and in a joint paper with Dr Lee in 1896. The eye-colour data of his Family Records were not reduced until after the discovery of a method for dealing with characters not capable of exact quantitative measurement, and it is only recently that the full scheme of relationships back to great-grandparents has been completed. There were about 200 families in Mr Galton's records and only one measurable character, stature. The conditions as to age of the measured, or to method of measurement were not, perhaps, as stringent as might now be considered desirable, but Mr Galton's data were amply sufficient to lead him to his great discovery of the general form of the inheritance of blending characters in a stable community. The full significance of this discovery is hardly yet understood, and one constantly notices grave misinterpretations of Mr Galton's theory in the works of non-statistically trained biologists. The constants as determined from Mr Galton's stature data did not seem to me to be final; they were to some extent irregular and were not in full accord with the more uniform eye-colour results. It therefore appeared to me desirable to obtain further data, not only for several physical characters and to compare the results for these characters with those for mental characters, but to deal with both in as wide as possible a system of blood relationships.
1903.3: Karl Pearson, G U Yule, Norman Blanchard and Alice Lee, The Law of Ancestral Heredity, Biometrika 2 (2) (1903), 211-236.
Alice Lee writes: From Mr Blanchard's racehorse coat-colour pedigrees, I have, paying no attention to sex, been able to extract 1155 cases of great-grandparent and offspring and 978 cases of great-great-grandparent and offspring. When it is noted that there are 16 types of great-grandparental and 32 types of great-great-grandparental relationship, so that 48 correlation tables would be required for the full working out of these cases, it will be understood why in this preliminary study, I have not differentiated between the sexes. Tables I and II reproduce my data. ...
1903.4: S Jacob, A Lee and Karl Pearson, Craniological Notes: Preliminary Note on Interracial Characters and their Correlation in Man, Biometrika 2 (3) (1903), 347-356.
1904
1904.1: Amy Barrington, Alice Lee and Karl Pearson, On Inheritance of Coat-Colour in The Greyhound, Biometrika 3 (2/3) (1904), 245-298.
There is little doubt that if money and time were no consideration direct experiments on the breeding of dogs would lead to results of the highest importance not only for the theory of inheritance, but also for the practical guidance of dog-fanciers. To be of the most complete service such experiments would have to commence with two or three generations of in-breeding simply to insure the purity of the various stocks to be employed in the final experiments. Further, in the description of the selected characters, a classification would have to be adopted of a far more comprehensive character than appears to be usual in a number of recent experiments on hybridisation. Lastly, from the standpoint which we believe to be the correct one, that safe conclusions can only be drawn from the average of large numbers of crossings, at least 50 and probably 100 individuals of both sexes would have to be the basis of an effective experimental stud. Now the difficulty both in time and money of dealing with such a stud may not in the future be insuperable, but at present to propose it as the only means of approaching the problem of inheritance in dogs is to adjourn sine die any consideration of that problem. In certain points also the extensive breeding records which are already available for dogs possess advantages which are not to be wholly disregarded when we compare them with the special merits of a biometric stud-farm. In the first place we have all the gain which arises from dealing with literally immense numbers. For example, in the present memoir we were able to classify over 10,000 cases of parent and offspring, over 7000 cases of grandparent and offspring, and over 24,000 cases of siblings. Nothing approaching such totals could be obtained by experiment ad hoc. Further, the colour pedigrees for a number of generations were directly available. 
Against these advantages is to be put in the foremost place the primary value of exactitude and uniformity in record such as might be in a well organised scientific experiment. This counts for a great deal, but it does not count for everything with those who realise what are the probable errors of small series, and how inconclusive such series usually are. On the other hand also, if we admit the want of scientific exactness and the play of individual judgment in the character classifications of breeders, we have still to remember that when the breeding of a particular species has been long established a conventional scale also grows up which, owing to the contact of breeder with breeder at sales and shows, and further to the regulations of societies and judges, becomes within broad lines universally recognised and appreciated. Hence, while we fully recognise all the disadvantages of stud-book records, we still hold that highly valuable work may be done in the field of inheritance by accepting the classification of professional, if non-scientific breeders.
1905
1905.1: J Blakeman, Alice Lee and Karl Pearson, A Study of the Biometric Constants of English Brain-Weights, and Their Relationships to External Physical Measurements, Biometrika 4 (1/2) (1905), 124-160.
The purpose of this paper is to present a biometric analysis of the measurements provided by Dr R J Gladstone and published in this volume. The conclusions reached are therefore of the same order of validity as the data upon which they are based. An attempt has been made to compare them with the fuller material reduced by Dr Raymond Pearl, and in many points where comparison was possible general confirmation of his conclusions has been obtained. Gladstone's statistical material differs from that used by Pearl in two essential points. It is in the first place more meagre, but in the second place it provides additional measurements which enable us to predict with a moderate degree of accuracy brain-weight from external measurements on the living subject.
1907
1907.1: Alexandra Wright, Alice Lee and K Pearson, A Cooperative Study of Queens, Drones and Workers in "Vespa Vulgaris", Biometrika 5 (4) (1907), 407-422.
The 11,000 to 12,000 microscopic measurements on which this paper is based were made by Miss Alexandra Wright, who also determined about half the indices. The remainder of the indices and all the statistical reductions were carried out by Dr Alice Lee.
The material on which this study is based was provided by Mr O H Latter of Charterhouse, who most kindly sent to my Biometric Laboratory a nest of Vespa vulgaris and a second of Vespa germanica, the members of the latter being still under measurement. The former nest was a singularly fortunate one, it contained 129 perfect queens, upwards of 150 perfect drones, and many hundred workers. In the actual reductions the measurements on the whole of the 129 queens were used; the first 130 drones were used and the first 129 workers out of 183 actually measured up to date. In all cases the wings were mounted permanently for reference and the body preserved in a separate numbered tube, so that it will be possible some day to compare the size, variability and correlation in other than wing characters. Seven measurements were made on both right and left fore wing - it is proposed to consider later the area of the wing by planimetering the image thrown by a lantern on a screen, a process found quite feasible. The measurements were taken with an ocular micrometer and a Leitz 1b objective. This objective was found to work very well for this purpose, and the draw tubes of both objective and ocular were set at fixed points throughout. The magnification was such that one ocular unit = 1.48 mm. All measurements are, however, given in this paper in the actual micrometer readings. Every measurement was twice repeated, and if the second reading (which followed after an interval) did not agree with the first, a special control measurement was made. In this way between 11,000 and 12,000 measurements were taken and nearly two years spent over the microscopic work. This will serve to explain not only the delay in publication, but the reason why only 130 individuals of each class were dealt with.
1908
1908.1: Alice Lee, On the Manner in Which the Percentage of Employed Workmen in this Country is Related to the Import of Articles Wholly or Mainly Manufactured, The Economic Journal 18 (69) (1908), 96-101.
An argument of the following kind has been occasionally raised by the advocates of Tariff Reform:- The import of manufactured articles means the employment of foreign instead of British workmen, and protection would transfer to our own wage-fund the large sums that at present pass, owing to these imports, to foreign workmen.
It is an extremely difficult problem to confirm or to refute an argument of this kind. The older English economists would have met it by an à priori reasoning which appealed only to a small extent to specific experience. The modern economist recognises far more clearly the complexity of the problem, the great part which local circumstance, environment, economic, and political development play in any real treatment of the question. It would probably be impossible to demonstrate the truth or falsehood of the argument in our own case by anything short of a gigantic and risky experiment. It is more easy, however, to show that no evidence in its favour can be obtained from any data bearing on the point available in our own country.
...
Now, of course, the data are slender, but, so far as they go, they not only lend no support to the Tariff Reform argument, but, on the contrary, they appear to stultify it. Like many other arguments used in political controversy, this appears based on a misuse of statistics, for, as Lord Goschen said some twenty years ago:- "Given a great number of figures partially unknown, given unlimited power and discretion of selection, and given an enthusiast determined to prove his case, and I will not answer for the consequences."
1908.2: Karl Pearson and Alice Lee, On the Generalised Probable Error in Multiple Normal Correlation, Biometrika 6 (1) (1908), 59-68.
The normal correlation surface in the case of n variables is known ... We see accordingly that the chance of an observation outlying or not - the observation consisting of a complex of n variates - can be readily found, if the incomplete normal moment functions have once been tabled, and the constants of the correlation surface be known. These incomplete normal moment functions serve a variety of purposes which will be developed in later papers. The present paper merely refers to the means they provide of determining the probability of any observation lying outside a given contour ellipsoid ...
The general use of the table provided will be obvious, it enables us to tell the probability of any outlying individual really being a member of a population of which the constants are known. Thus one may look forward to the day when the biometric constants of a race being sufficiently well known, it may be possible to tell from a complex of five or six characters whether a skeleton or a skull may be reasonably supposed to have belonged to a member of that race. At present the labour of calculating the correlation coefficient-determinants and their minors stands in the way of much work in this direction, when we wish to advance beyond two or three characters.
1910
1910.1: Karl Pearson, Alice Lee and Ethel M Elderton, On the Correlation of Death-Rates, Journal of the Royal Statistical Society 73 (5) (1910), 534-539.
The discovery of possible inter-relationships between diseases by an examination of their death-rates as affected by varying environment, occupation, or race, has not been without fascination for more than one investigator. Personally I have considered the problem more than once, but always failed to make progress owing to the existence of spurious correlations, which I did not see how to meet....
...
It was only after reading Dr Maynard's paper in the current number of Biometrika, and thinking over the difficulties to which he draws attention, that another way of tackling the problem occurred to me. We reduce all our sub-populations to a standard population, or population with a standard age-distribution. The assumption made in doing this is, practically, that the particular standard population used is immaterial.
...
Dr Alice Lee took forty cities in the Registration States, United States, having more than 90,000 inhabitants, and calculated their age correction factors for (a) cancer deaths, and (b) all deaths other than cancer and diabetes. She then found the ten correlation coefficients of the following five quantities:- (1) Deaths from cancer. (2) Deaths from all diseases other than cancer and diabetes. (3) Population. (4) Cancer age corrective factor. (5) All diseases, except cancer and diabetes, age corrective factor.
1914
1914.1: Alice Lee, Table of the Gaussian "Tail" Functions; When the "Tail" is Larger than the Body, Biometrika 10 (2/3) (1914), 208-214.
In a paper published in Biometrika, Vol. VI. pp. 59-68, tables for the incomplete normal moment functions were printed, and they have since been reproduced in Tables for Statisticians and Biometricians recently issued from the Cambridge University Press. From these tables values of the Gaussian "Tail" functions were deduced and a short table of ψ1 and ψ2 appeared in Biometrika, Vol. VI. p. 68. The value of these functions being demonstrated in practice during the last few years, a more complete table of ψ1 , ψ2 , ψ3 , has appeared in the Tables for Statisticians and Biometricians.
In the introduction to those tables, however, Professor Pearson indicated that it was important to have a similar table when the "tail" forms more than half the entire curve, and gave the fundamental formulae for obtaining the numerical values of the functions. The present table has been calculated to supply the want thus indicated.
1915
1915.1: Alice Lee, Tuberculosis and Segregation, Biometrika 10 (4) (1915), 530-548.
In his book The Prevention of Tuberculosis (London: Methuen) Dr A Newsholme has examined the influence of segregation on Tuberculosis. ...
Dr Newsholme in the course of his chapter gives a number of very high correlations between the phthisis death-rate and the indirect forms of the segregation ratio he has selected, and he interprets these as well as a long series of graphs as demonstrating that institutional segregation has been a most important factor in the diminution of the phthisis death-rate. Now any two variates which are changing continuously with the time - say, the consumption
of bananas per head of the population and the fall in the birth-rate - will exhibit high correlation and will show graphically very high association, if plotted to appropriate scales and on a common time basis. Until the time factor has been removed, either by partial correlation or otherwise, it would be most unwise to interpret such cases as providing any causal relationship.
It seemed accordingly worth while to reinvestigate Dr Newsholme's problems with the aid of a rather more adequate statistical apparatus.
1917
1917.1: Alice Lee, Further Supplementary Tables for Determining High Correlations from Tetrachoric Groupings, Biometrika 11 (4) (1917), 284-291.
The difficulty of determining correlations between .80 and 1.00 by the tetrachoric method, owing to the slow convergency of the terms of the fundamental equation for 'tetrachoric $r$' has long been recognised. In 1912 Everitt published "Supplementary Tables for Determining High Correlations from Tetrachoric Groupings." These tables much simplified the work within the field in which it is really possible to determine accurately a high correlation - beyond certain values of '$h$ and $k$' such determination is impossible owing to the influence of random sampling on a quadrant category which in most practical cases will only contain an isolated unit or two. Everitt's tables covered the values of $r$ from + .80 to + 1.00 for values of the dichotomic planes given by $h$ and $k$ varying from + .0 to + 2.6. They admitted at once of our dealing with those cases of negative values of $r$, for which either $h$ or $k$ was negative, but not with cases in which $r$ was negative and both $h$ and $k$ remained of the same sign. The present tables provide for this omitted portion of the possible field and thus complete Everitt's work.
I have followed his method of quadrature in evaluating my integrals. But I have preserved more decimal places than he has done, partly because my significant figures are thrown into higher decimal places than his by the nature of the case, and partly because recent experience in other fields has shown workers in the Biometric Laboratory, that tables are often of service for purposes other than those for which they were originally calculated, and that it is worth while preserving every reliable figure. I think my results are always correct to six figures and generally to the actual number tabulated.
1917.2: H E Soper, A W Young, B M Cave, A Lee and K Pearson, On the distributions of the correlation coefficient in small samples. Appendix II to the papers of "Student" and R A Fisher, Biometrika 11 (4) (1917), 328-413.
In a paper of 1908 "Student" [William Gosset] dealt experimentally with the distribution of the correlation coefficient of small samples, and gave empirical curves - in particular for the case of zero correlation in the sampled population - which have proved remarkably exact. The problem was next considered in 1913 by H E Soper who obtained the mean correlation and the standard deviation of the distribution of correlations to second approximations. ... The next step was taken by R A Fisher who gave in 1915 the actual frequency distribution of [the correlation $r$ in samples of $n$ from a population by a curve]. ... Clearly in order to determine the approach to Soper's approximations, and ultimately to the normal curve as $n$ increases we require expressions for the moment coefficients of [Fisher's curve], and further for practical purposes we require to table the ordinates of [Fisher's curve] in the region for which $n$ is too small for Soper's formulae to provide adequate approximations. These are the aims of the present paper. It is only fair to state that the arithmetic involved has been of the most strenuous kind and has needed months of hard work on the part of the computers engaged. On the other hand the algebra has often been of a most interesting and suggestive character.
1925
1925.1: Alice Lee and Karl Pearson, Table of the First Twenty Tetrachoric Functions to Seven Decimal Places, Biometrika 17 (3/4) (1925), 343-354.
The present table was computed as ancillary to a complete table for tetrachoric coefficients of correlation, which will shortly be published. It gives the tetrachoric functions from 0 to 19 to seven decimal places for argument intervals of $h$ equal to .1 and proceeds from 0 to 4.0.
... a grant from the Government Grant Committee of the Royal Society, has enabled Dr Alice Lee to devote her time to the calculation of this table and certain other tables shortly to be published.
1927
1927.1: Alice Lee, Supplementary table for determining correlation from tetrachoric groupings, Biometrika 19 (1927), 354-404.
This table enables the value of the tetrachoric coefficient of correlation to be found by simple interpolation and without the need of solving a higher order equation for all positive values of $r$ when $h$ and $k$ have the same sign, and for all negative values of $r$ when $h$ and $k$ have different signs.
Last Updated September 2021
|
|
AAS 198th Meeting, June 2001
Session 20. Galaxian Grab-bag
Oral, Monday, June 4, 2001, 10:00-11:30am, C107
[20.02] Star-formation at z=2.4 from a sample of Lyman \alpha emitters
M. Stiavelli (STScI), C. Scarlata (STScI and Padova), S. Lilly (Victoria), N. Panagia (STScI and ESA), T. Treu (Caltech), G. Bertin (Milano and SNS, Pisa), F. Bertola (Padova)
We have carried out a search of Lyman \alpha candidates over a field of 1200 \sq ' using the CFH12K camera at the CFHT and a custom medium band filter. The search has uncovered 155 candidates, corresponding to a density of ~0.13 \sq ' -1. We find that our sources have very red colors, which imply either that a large fraction of the light is reddened and we are detecting Lyman \alpha through special lines of sight, or that the objects contain an underlying older stellar population. While for each individual object we cannot discriminate between these models, we expect that most of the objects will actually contain an older component, since the star formation rate inferred from the model with reddening would exceed by a large factor the peak star formation rate as measured by Madau et al. and Steidel et al. Thus it appears that typical Lyman \alpha emitters at z=2.4 experienced their major episode of star formation at higher redshift.
|
|
# The choice coordination problem
@article{Rabin1982TheCC,
title={The choice coordination problem},
author={Michael O. Rabin},
journal={Acta Informatica},
year={1982},
volume={17},
pages={121-134}
}
In the course of a concurrent computation, processes P1, ..., Pn must reach a common choice of one out of k alternatives A1, ..., Ak. They do this by protocols using k shared variables, one for each alternative. If the range of the variables has m values then $$\frac{1}{2}\sqrt[3]{n} \leq m$$ is necessary, and $$n + 2 \leq m$$ is sufficient, for deterministic protocols solving the choice coordination problem (C.C.P.). We introduce very simple randomizing…
|
|
# US Utilities Were More Volatile than Broader Markets
## Volatility
Utility stocks are often called “widow and orphan” stocks due to their smooth price movements and stable dividends. Their relatively stable and predictable earnings make utilities comparatively safe investment options. However, utility stocks have been more volatile than the broader markets so far this year.
Since the start of 2017, the average implied volatility of the Utilities Select Sector ETF (XLU), which includes 30 stocks from the S&P 500 Utilities Index, was 12%. The SPDR S&P 500’s (SPX-INDEX) (SPY) implied volatility was 9.5%.
## Why utilities weren’t stable
US utilities became more volatile in the last few years after interest rate normalization started gaining ground. In the last five years, XLU’s average implied volatility was 13%. During this period, broader markets’ average implied volatility was also 13%. If the Fed continues to aggressively hike interest rates, it might keep utility investors on their toes.
As we discussed earlier, higher interest rates make utility stocks less attractive. Generally, investors dump utility stocks and switch to bonds in pursuit of higher yields.
## Utility stocks are more volatile
On July 5, 2017, NRG Energy (NRG), the most volatile stock among the S&P 500 Utilities Index, had an implied volatility of 40%. Its 15-day average implied volatility was 37%. NextEra Energy’s (NEE) implied volatility was 14%, which was marginally above its 15-day average. Duke Energy’s (DUK) implied volatility was 13%—marginally higher than its 15-day average implied volatility of 11%.
A fall in implied volatility is normally associated with a gain in the stock price, while a rise in implied volatility is normally associated with a fall in the stock price. When implied volatility increases, it usually indicates investors’ anxiety.
|
|
# How much power does it take to keep a massive particle suspended in a gravitational field?
For instance if I have a rocket of mass $m$ in a uniform gravitational field $g$, and I want to keep it floating in the air via thrust alone, then how much power in the form of (say) chemical energy would it expend?
This is a simple question, but I can't seem to find an answer to it. The answer shouldn't be 0, however, applying the definition of work
$$W = \int \textbf{F}\cdot \textbf{dr}$$
means that the work done is 0, since there is no displacement. Similarly, the relation
$$P = \textbf{F}\cdot \textbf{v}$$
means that the power is 0, since keeping the rocket held in place means its velocity is constantly 0.
• My first idea is that the second integral should be applied to the ejected material, which is accelerated by the rocket from rest (in the rocket's frame) to some final v. – morrna Mar 22 '15 at 20:01
• Both integrals should be applied to the ejected material! Indeed the rocket body stays still and is not gaining potential energy so the total work on it is necessarily zero. – Andrea Jan 15 '16 at 21:42
The equation you need is that force is equal to the rate of change of momentum. The force is the weight of the rocket, $Mg$, and the rate of change of momentum is the mass ejected from the exhaust per second multiplied by the exhaust velocity.
$$Mg = v\frac{dm}{dt}$$
So choose your exhaust velocity $v$, and you can work out the required $dm/dt$. The power is then just the change in kinetic energy per second so:
$$P = \tfrac{1}{2}v^2\frac{dm}{dt}$$
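As a quick numeric check of these two relations (the craft mass and exhaust velocity below are illustrative figures, not from the question):

```python
def hover_mass_flow(mass_kg, v_exhaust, g=9.81):
    """Propellant mass flow (kg/s) needed to hover, from M*g = v * dm/dt."""
    return mass_kg * g / v_exhaust

def hover_power(mass_kg, v_exhaust, g=9.81):
    """Power carried off as exhaust kinetic energy: P = (1/2) v^2 dm/dt = M*g*v/2."""
    return 0.5 * v_exhaust**2 * hover_mass_flow(mass_kg, v_exhaust, g)

# 1000 kg craft with a 2000 m/s exhaust velocity
print(hover_mass_flow(1000, 2000))  # 4.905 kg/s
print(hover_power(1000, 2000))      # 9810000.0 W, i.e. ~9.8 MW
```

Since $P = Mgv/2$, a slower but heavier exhaust stream hovers more cheaply than a fast, light one — which is why a helicopter rotor, pushing a large mass of air slowly, needs far less power than a rocket supporting the same weight.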
|
|
# How many ways to seat 9 couple around a round table
1. You are a host/hostess at your local Applebee’s. You are seating a group consisting of 9 couples at a round table.
A)In how many different ways can you do this, provided that each couple will sit together, and all that you care about is their position relative to one another?
B)What is the probability that Al doesn’t end up within two seats of Ricky, AND Beth doesn’t end up within two seats of Charlene?
Used a wrong tag before of order-statistics. Sorry about that.
• What are your thoughts so far? It looks like you've copied a question without even all the necessary information – Ian Coley Nov 10 '13 at 21:24
• This was all that was given to me by my instructor. I know that just using a simple 9 P 9 isn't enough, though. Because it's a circular table A,B,C,D,E,F,G,H,I would be the same thing as I,A,B,C,D,E,F,G,H. But then I get stuck. – Boxyouranswer Nov 10 '13 at 21:29
• Depends. Al and Ricky could be a couple. – André Nicolas Nov 10 '13 at 21:30
• Well, I assume from the question that they are not a couple. Otherwise it would just be 1/(the possibilities). – Boxyouranswer Nov 10 '13 at 22:39
For the number of arrangements, imagine that one of the chairs is a throne, and the Queen is one of the group at Applebee's. She sits down first, of course, on the throne. Her Consort has $2$ choices of chair. Now let us seat the other couples one at a time, counterclockwise from the Queen-Consort pair.
The person chosen to occupy the chair immediately counterclockwise from the royal pair can be chosen in $16$ ways. Now the occupant of the next chair is determined. The person chosen to occupy the next chair after that can be chosen in $14$ ways, and then the occupant of the chair after that is determined. And so on.
Multiply. We get $(2)(16)(14)(12)\cdots (4)(2)$. This can also be written as $(2^9)(8!)$.
• Wow Thanks! But I have a follow-up question. Would the generic formula for such a problem then be 2^n * (n-1)! ? I mean, if there was another problem that said I have to seat 10 people around a round table with 10 available seats, could I use (2^10)(9!) ? – Boxyouranswer Nov 10 '13 at 22:52
• Yes, precisely the same reasoning gives the expression in your comment, and works whenever we have $n$ couples. And if social arrangements divided people into triples, similar reasoning would give $(3!)^n (n-1)!$ ways to arrange $3n$ people around a round table so that "triples" are together. – André Nicolas Nov 11 '13 at 0:46
Another approach to this counting problem is to think of each couple as "single" object, counting the number of ways the single objects can be arranged, and then multiplying by the number of equivalent ways the couples can be arranged as "compound" objects.
In particular, there are $9$ couples (considered as "single objects"), and they can be arranged around the table in $8!$ ways. (Make sure you understand why this is so)
Now we consider the couples as "compound" objects, consisting of persons $A$ and $B$. Each couple can be arranged in 2 ways: $AB$ or $BA$. Since there are $9$ couples, there are $2^9$ choices.
So we get $2^9 \cdot 8!$ arrangements.
And yes, this generalizes to arbitrary $n$: if there are $n$ couples, they can be arranged in $2^n (n-1)!$ ways.
• Follow-up. For part B of the problem, would I just find all the ways Al and Ricky do sit within 2 seats of each other, and all the ways that Beth and Charlene sit within 2 seats of each other, divide by the outcomes given by 2^9(8!) for each one then add together and subtract from 1? – Boxyouranswer Nov 10 '13 at 23:32
• That is a valid approach. You can use the "compound" and "single" object approach, too. – nomen Nov 10 '13 at 23:37
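The $2^n (n-1)!$ formula can be brute-force checked for small $n$ by enumerating seatings directly (a sketch; the function names are mine):

```python
from itertools import permutations
from math import factorial

def couples_adjacent_count(n):
    """Brute-force count of seatings of n couples around a round table
    (rotations identified) with every couple in adjacent seats."""
    total = 0
    for perm in permutations(range(2 * n)):        # persons 2k, 2k+1 form couple k
        pos = {p: i for i, p in enumerate(perm)}
        # a couple is together iff their seats are cyclically adjacent
        if all(abs(pos[2*k] - pos[2*k + 1]) in (1, 2*n - 1) for k in range(n)):
            total += 1
    return total // (2 * n)                        # divide out the 2n rotations

def formula(n):
    return 2**n * factorial(n - 1)

for n in (2, 3):
    assert couples_adjacent_count(n) == formula(n)

print(formula(9))  # 20643840 seatings for the 9 couples
```

The brute force agrees with $2^n (n-1)!$ for the small cases it can reach, which supports the throne argument above.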
|
|
# 5.5 kHz noise on SMPS output
I am designing a low-noise SMPS like this one: http://www.ti.com/lit/df/tidrgc2/tidrgc2.pdf, switching from 40-90 kHz.
I have a weird problem: on the output we are seeing a spike of noise at 5.5 kHz and its multiples (decaying after a few harmonics), measured with a differential amplifier and correct probing techniques.
We have no idea where it comes from (maybe ringing of the secondary transformer inductance with the caps)?
Noise capture
Update:
We added a C and R across the transformer, 1000 uF and 20 ohm, and saw a 6 dB reduction in the noise. We are experimenting with different values to lower it further. The only LC tank we can identify is the diode capacitance (plus the RC snubber across it) and the transformer secondary winding inductance.
Second update.
We used 2x47 uF in parallel across pins 6/10 plus 50 R, and the spike is slightly lower.
• Comments are not for extended discussion; this conversation has been moved to chat. When you're done, edit any relevant new information into the question. – Dave Tweed Jun 8 '18 at 12:09
Good question. People think that the only noise that can occur is at the switching frequency and its harmonics. This is not true, and you are not the only one to have found noise below the switching frequency. Normal SMPS filter LC parts will work well at, say, 40 kHz, but your 5.5 kHz will cut through them like butter. It will be better to nail this at the source than to use a large, expensive output filter. These limit-cycle oscillations are sometimes associated with the resonant frequency of the output filter. Study this and study your control-loop poles. Double-check for interaction and change your loop or change your output filter. Loose ferrite cores can also cause audible oscillations despite running far above 20 kHz.
• I used 4 different L values (1, 4, 7 and 10 uH); this noise is always present at 5.5 kHz, so I am certain it's not from resonance of the LC filter – Johan B. Jun 7 '18 at 8:39
The spectrum of the noise is supplied and here is a copy: -
The noise at 5.5 kHz is a level of 250 uV RMS.
A typical noise/ripple level from a circuit like the one you have is probably around 100 times higher and you appear to be worried by this level.
Yes, we are aiming to get it under 1 uV (0-20 kHz)
The only chance of this happening is if you use an LDO linear voltage regulator fed from the flyback output. Even then, I think you are going to struggle.
Take for example the TLV705 that is described by TI as being low noise. It has a quoted noise level of 27 uV RMS across 10 Hz to 100 kHz and it is: -
A LINEAR VOLTAGE REGULATOR AND NOT A SWITCHING REGULATOR
Take the LP5907 as another example. It is described as a: -
250mA Ultra-Low-Noise Low-IQ LDO
Notice my emphasis on the low noise bit. It has a noise output level over a 10 Hz to 100 kHz bandwidth of 6.5 uV RMS.
What you are hoping to achieve is not going to happen with a simple Flyback converter.
## MORE CONTEXT
The thermal noise across a 100 kHz bandwidth from a 560 ohm resistor at an elevated temperature of 50 degC (not unheard of in a flyback converter) is 1 uV RMS. At a 20 kHz BW it is about 0.45 uV.
How many resistors have you got in your circuit that are in the signal chain that will add noise to the output? I count 4 and these alone will tip the balance. Be realistic.
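Those thermal-noise figures follow from the Johnson-Nyquist formula $v_{rms} = \sqrt{4 k_B T R B}$; a quick sanity check (the function name is mine):

```python
from math import sqrt

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(r_ohm, bw_hz, temp_c=50.0):
    """RMS thermal (Johnson-Nyquist) noise voltage of a resistor."""
    return sqrt(4 * K_B * (temp_c + 273.15) * r_ohm * bw_hz)

# 560 ohm resistor at 50 degC
print(johnson_noise_vrms(560, 100e3))  # ~1.0e-6 V over 100 kHz
print(johnson_noise_vrms(560, 20e3))   # ~0.45e-6 V over 20 kHz
```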
• I have not added a active opamp filter that has excellent psrr at 3-10Khz. Right now we are concentrating on reducing that spike as much as possible. – Johan B. Jun 8 '18 at 11:35
• An active op-amp filter won't do what you want - you need a linear voltage regulator following your flyback to stand half a chance of getting the noise down below 10 uV. Be sensible about this and lower your expectations because you WILL be disappointed. – Andy aka Jun 8 '18 at 11:36
• Andy we already have the active filter and it works. I will post the results with the filter so you can see it and comment on it. Meanwhile back to this spike.. – Johan B. Jun 8 '18 at 11:52
• Given the small level it's probably a control-system "hunting" artefact that gets there via the feedback loop. A slight over-voltage is produced by the flyback and the switching temporarily goes into a form of burst mode control hence the lower frequency of 5.5 kHz and not the actual switching frequency. This will be normal for this type of circuit and nothing generally to worry about in normal circumstances compared to the ripple voltage seen due to the switching frequency...... – Andy aka Jun 8 '18 at 13:12
• .... To verify this look at the waveform from the secondary - if it doesn't seem like a regular continual rectangular pulse then it's probably skipping cycles and that's where the lower frequency comes in. – Andy aka Jun 8 '18 at 13:14
|
|
# Is the standard deviation of a data set invariant to translation?
Dec 13, 2016
Yes it is. See explanation for a proof.
#### Explanation:
Let $S$ be a data set:
$S = \left\{{x}_{1} , {x}_{2} , \ldots , {x}_{n}\right\}$
Its mean and standard deviation:
$\overline{x} = \frac{1}{n} \times {\Sigma}_{i = 1}^{i = n} \left({x}_{i}\right)$
$\sigma = \sqrt{\frac{1}{n} {\Sigma}_{i = 1}^{n} {\left({x}_{i} - \overline{x}\right)}^{2}}$
Let ${S}_{1}$ be the data set $S$ translated by $a$:
${S}_{1} = \left\{{x}_{1} + a , {x}_{2} + a , \ldots , {x}_{n} + a\right\}$
Its mean would equal:
$\overline{{x}_{1}} = \frac{{x}_{1} + a + {x}_{2} + a + \ldots + {x}_{n} + a}{n} =$
$= \frac{{x}_{1} + {x}_{2} + \ldots + {x}_{n}}{n} + \frac{n a}{n} = \overline{x} + a$
The standard deviation would be:
${\sigma}_{1} = \sqrt{\frac{1}{n} {\Sigma}_{i = 1}^{n} {\left({x}_{i} + a - \left(\overline{x} + a\right)\right)}^{2}} =$
$= \sqrt{\frac{1}{n} {\Sigma}_{i = 1}^{n} {\left({x}_{i} + a - \overline{x} - a\right)}^{2}} =$
$= \sqrt{\frac{1}{n} {\Sigma}_{i = 1}^{n} {\left({x}_{i} - \overline{x}\right)}^{2}} = \sigma$
The standard deviation of the new set is equal to the deviation of the set before translation.
QED
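The result can also be checked numerically with Python's statistics module on an arbitrary sample:

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # arbitrary sample
shift = 100.0
shifted = [x + shift for x in data]

# the population standard deviation is unchanged by the translation
print(statistics.pstdev(data))     # 2.0
print(statistics.pstdev(shifted))  # 2.0
```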
|
|
# a→ab
## computation, game design, and experimentation
front page | about | archives | code dump | c.s. for mere mortals | tags | rss feed
# A Simplistic Introduction to Diffie-Hellman Key Exchange
October 4, 2012
So, recently, I've been getting into encryption. In that vein, I thought I'd give a quick overview of two topics: simple XOR encryption, and Diffie-Hellman key exchange.
## XOR encryption
The first topic is almost mind-numbingly simple, but it is a simple, cheap, easy-to-implement encryption algorithm which has several nice features: it's symmetric, composable, and stupid fast.
So, for XOR encryption to work, you need two things: a payload (the 'cleartext') and an encryption key. The algorithm takes the key and uses it to modify the payload, creating what is known as the 'cyphertext'.
Both the key and payload are typically strings, but anything that can be converted to binary numbers will work. To make things simple, I'm going to use numbers in my examples that follow, instead of strings, so as to avoid encoding concerns, which are really something that shouldn't be worried about when looking at encryption concepts, as they're more of an implementation detail.
Alright, so let's start off with an example. Say that I want to encrypt the payload '117' with the encryption key '18'. These are arbitrary numbers chosen for the purpose of this example.
First we start by converting these to binary numbers:
payload = 117 = 0b01110101
key = 18 = 0b00010010
Then, to create the cyphertext, we simply XOR the numbers together:
01110101 117 cleartext
XOR 00010010 XOR 18 key
------------ -------
01100111 103 cyphertext
So we've successfully encrypted the payload to the cyphertext. Decryption is simply XORing the cyphertext with the same key. This is why it's a so-called symmetric encryption scheme--encryption and decryption are the exact same operation.
01100111 103 cyphertext
XOR 00010010 XOR 18 key
------------ -------
01110101 117 cleartext
What does it mean then, that this algorithm is composable? Well, what would happen if I were to encrypt the data with one key, and then a second, different key? Would it matter which order I applied the decryption? Composable encryption algorithms work regardless of the order of decryption. That is, the following works:
cleartext 117 01110101
key1 18 00010010
--------------------------
cyphertext1 103 01100111
key2 201 11001001
--------------------------
cyphertext2 174 10101110
Decrypt order: key2, key1:
cyphertext2 174 10101110
key2 201 11001001
--------------------------
cleartext1 103 01100111
key1 18 00010010
--------------------------
cleartext 117 01110101
Decrypt order: key1, key2:
cyphertext2 174 10101110
key1 18 00010010
--------------------------
cleartext1 188 10111100
key2 201 11001001
--------------------------
cleartext 117 01110101
Cool huh?
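The worked examples above can be replayed in a few lines of Python (`xor_crypt` is just a name for clarity — the whole operation is a single `^`):

```python
def xor_crypt(value, key):
    """XOR is its own inverse, so this both encrypts and decrypts."""
    return value ^ key

assert xor_crypt(117, 18) == 103         # encrypt
assert xor_crypt(103, 18) == 117         # decrypt: same operation (symmetric)

# composability: decryption order doesn't matter
c2 = xor_crypt(xor_crypt(117, 18), 201)  # two keys applied in turn
assert c2 == 174
assert xor_crypt(xor_crypt(c2, 18), 201) == 117  # undo key1 first
assert xor_crypt(xor_crypt(c2, 201), 18) == 117  # undo key2 first
```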
## Key Exchange
Now, how do two people get the same key? Surely, one could just tell the other, and trust that they'll keep it secret. But someone could be listening (the walls have ears, you know!). This won't work.
What does work is something called Diffie-Hellman key exchange. It involves both participants agreeing on a "shared secret", which will become part of the key, as well as each individually choosing a secret number that they won't tell anyone, not even each other. They then do some simple XORing on these and swap. Because XOR is composable, as explained above, both participants will be able to generate the same secret key without ever sharing it with the other, allowing them to communicate freely.
How does this work though? Here are the steps (don't worry, there's a textual flowchart of this later in the post):
1. Both participants agree on a shared secret. Let's use 34.
2. Each participant independently chooses a secret key. Let's use 12 and 21.
3. Each participant XORs their secret key with the shared secret. This gives us 46 and 55.
4. The participants swap these composed keys.
5. The participants XOR their private keys with the composed keys that they just received. This gives us 59 and 59.
6. They've generated the same key! They can communicate.
Diagrammed out a bit, here's how it looks:
Person 1 Person2
34 shared 34
12 secret 21
46 composed 55
55 swap 46
59 key! 59
This does not completely prevent man-in-the-middle attacks, but it does a good bit to make it harder for people to find the encryption key.
Do note, however, that this implementation is highly flawed: Person 1 can glean Person 2's secret key, and vice versa, after the swap! It's a simple matter of XORing by the shared secret. True key exchange mechanisms use stronger encryption schemes that prevent simple means of ascertaining the private keys of other people.
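The six steps can be sketched in Python (the variable names are illustrative; this mirrors the flawed XOR scheme described above, not real Diffie-Hellman):

```python
SHARED = 34  # agreed on in public

def compose(secret, shared=SHARED):
    return secret ^ shared

alice_secret, bob_secret = 12, 21      # chosen privately, never transmitted

alice_sends = compose(alice_secret)    # 46, goes over the wire
bob_sends = compose(bob_secret)        # 55, goes over the wire

# each XORs their own secret with the composed key they received
alice_key = alice_secret ^ bob_sends   # 12 ^ 55
bob_key = bob_secret ^ alice_sends     # 21 ^ 46

assert alice_key == bob_key == 59      # both derived the same key
```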
## Some code
As always with these posts, I try to provide some working code to explain what my words can't convey. Here's an example in Io:
Enjoy.
|
|
# GATE 2014 EE – SET 1 – Complete Solutions
Q1. Given a system of equations:
x + 2y + 2z = b1
5x + y + 3z = b2
Which of the following is true regarding its solutions?
(A) The system has a unique solution for any given $$b_{1}$$ and $$b_{2}$$.
(B) The system will have infinitely many solutions for any given $$b_{1}$$ and $$b_{2}$$.
(C) Whether or not a solution exists depends on the given $$b_{1}$$ and $$b_{2}$$.
(D) The system would have no solution for any values of $$b_{1}$$ and $$b_{2}$$.
Solution: (B)
Q2. Let $$f(x) = xe^{-x}$$. The maximum value of the function in the interval (0, ∞) is
(A) $$e^{-1}$$ (B) $$e$$ (C) $$1-e^{-1}$$ (D) $$1+e^{-1}$$
Solution: (A)
Q3. The solution for the differential equation $$\frac{\mathrm{d} ^{2}x}{\mathrm{d} t^{2}}=-9x$$ with initial conditions x(0) = 1 and $$\frac{\mathrm{d} x}{\mathrm{d} t}\mid _{t=0}=1$$, is
(A) $$t^{2}+t+1$$ (B) $$\sin 3t+\frac{1}{3}\cos 3t+\frac{2}{3}$$
(C) $$\frac{1}{3}\sin 3t+\cos 3t$$ (D) $$\cos 3t+t$$
Solution: (C)
Q4. Let $$X(s)=\frac{3s+5}{s^{2}+10s+21}$$ be the Laplace Transform of a signal x(t). Then, $$x(0^{+})$$ is
(A) 0 (B) 3 (C) 5 (D) 21
Solution: (B)
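The key follows from the initial value theorem, $x(0^{+}) = \lim_{s\to\infty} sX(s)$; a rough numerical check, using a very large s to stand in for the limit:

```python
def X(s):
    """The given Laplace transform."""
    return (3*s + 5) / (s**2 + 10*s + 21)

s = 1e9  # large s approximates s -> infinity
print(round(s * X(s)))  # 3, the initial value x(0+)
```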
Q5. Let S be the set of points in the complex plane corresponding to the unit circle. (That is, S = {z : |z| = 1}). Consider the function $$f(z) = zz^{*}$$ where $$z^{*}$$ denotes the complex conjugate of z. Then f(z) maps S to which one of the following in the complex plane?
(A) unit circle
(B) horizontal axis line segment from origin to (1, 0)
(C) the point (1, 0)
(D) the entire horizontal axis
Solution: (C)
Q6. The three circuit elements shown in the figure are part of an electric circuit. The total power absorbed by the three circuit elements in watts is
Solution: Key = 330
Q7. C0 is the capacitance of a parallel plate capacitor with air as dielectric (as in figure (a)). If, half of the entire gap as shown in figure (b) is filled with a dielectric of permittivity ϵr, the expression for the modified capacitance is
(A) $$\frac{C_{0}}{2}\left ( 1+\epsilon _{r} \right )$$ (B) $$\left ( C_{0}+\epsilon _{r} \right )$$
(C) $$\frac{C_{0}}{2}\epsilon _{r}$$ (D) $$C_{0}\left ( 1+\epsilon _{r} \right )$$
Solution: (A)
Q8. A combination of 1 μF capacitor with an initial voltage vc(0) = −2 V in series with a 100 Ω resistor is connected to a 20 mA ideal dc current source by operating both switches at t = 0 s as shown. Which of the following graphs shown in the options approximates the voltage vs across the current source over the next few seconds?
(A) (B)
(C) (D)
Solution: (C)
Q9. x(t) is nonzero only for $$T_{x} < t < {T_{x}}’$$, and similarly y(t) is nonzero only for $$T_{y} < t < {T_{y}}’$$. Let z(t) be the convolution of x(t) and y(t). Which one of the following statements is TRUE?
(A) z(t) can be nonzero over an unbounded interval
(B) z(t) is nonzero for $$t< T_{x}+T_{y}$$
(C) z(t) is zero outside of $$T_{x}+T_{y}< t< {T_{x}}’+{T_{y}}’$$
(D) z(t) is nonzero for $$t> {T_{x}}’+{T_{y}}’$$
Solution: (C)
Q10. For a periodic square wave, which one of the following statements is TRUE?
(A) The Fourier series coefficients do not exist.
(B) The Fourier series coefficients exist but the reconstruction converges at no point.
(C) The Fourier series coefficients exist and the reconstruction converges at most points.
(D) The Fourier series coefficients exist and the reconstruction converges at every point.
Solution: (C)
Q11. An 8-pole, 3-phase, 50 Hz induction motor is operating at a speed of 700 rpm. The frequency of the rotor current of the motor in Hz is ________.
Solution: Key = 3.2 to 3.5
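The keyed range follows from the standard slip relations (synchronous speed Ns = 120f/P, rotor frequency = slip × supply frequency):

```python
f, poles, n_rotor = 50.0, 8, 700.0
n_sync = 120*f/poles                 # synchronous speed = 750 rpm
slip = (n_sync - n_rotor)/n_sync     # = 1/15
f_rotor = slip * f                   # rotor current frequency, about 3.33 Hz
```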
Q12. For a specified input voltage and frequency, if the equivalent radius of the core of a transformer is reduced by half, the factor by which the number of turns in the primary should change to maintain the same no load current is
(A) 1/4 (B) 1/2 (C) 2 (D) 4
Solution: (C)
Q13. A star connected 400 V, 50 Hz, 4 pole synchronous machine gave the following open circuit and short circuit test results:
Open circuit test: Voc = 400 V (rms, line-to-line) at field current, If = 2.3 A
Short circuit test: Isc = 10 A (rms, phase) at field current, If = 1.5 A
The value of per phase synchronous impedance in Ω at rated voltage is __________.
Solution: Key = 14.5 to 15.5
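Assuming the short-circuit characteristic is linear in field current (so Isc can be scaled from If = 1.5 A to 2.3 A, the field current of the OC reading), the keyed range is reproduced:

```python
import math

voc_phase = 400/math.sqrt(3)   # open-circuit phase voltage at If = 2.3 A
isc = 10 * (2.3/1.5)           # short-circuit current scaled to If = 2.3 A (linear SCC assumed)
z_sync = voc_phase/isc         # per-phase synchronous impedance, about 15.1 ohm
```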
Q14. The undesirable property of an electrical insulating material is
(A) high dielectric strength (B) high relative permittivity
(C) high thermal conductivity (D) high insulation resistivity
Solution: (B)
Q15. Three-phase to ground fault takes place at locations F1 and F2 in the system shown in the figure
If the fault takes place at location F1, then the voltage and the current at bus A are VF1 and IF1 respectively. If the fault takes place at location F2, then the voltage and the current at bus A are VF2 and IF2 respectively. The correct statement about voltages and currents during faults at F1 and F2 is
(A) $$V_{F1}$$ leads $$I_{F1}$$ and $$V_{F2}$$ leads $$I_{F2}$$
(B) $$V_{F1}$$ leads $$I_{F1}$$ and $$V_{F2}$$ lags $$I_{F2}$$
(C) $$V_{F1}$$ lags $$I_{F1}$$ and $$V_{F2}$$ leads $$I_{F2}$$
(D) $$V_{F1}$$ lags $$I_{F1}$$ and $$V_{F2}$$ lags $$I_{F2}$$
Solution: (C)
Q16. A 2-bus system and corresponding zero sequence network are shown in the figure.
(A)
(B)
(C)
(D)
Solution: (B)
Q17. In the formation of Routh-Hurwitz array for a polynomial, all the elements of a row have zero values. This premature termination of the array indicates the presence of
(A) only one root at the origin (B) imaginary roots
(C) only positive real roots (D) only negative real roots
Solution: (B)
Q18. The root locus of a unity feedback system is shown in the figure
The closed loop transfer function of the system is
(A) $$\frac{C(s)}{R(s)}=\frac{K}{(s+1)(s+2)}$$
(B) $$\frac{C(s)}{R(s)}=\frac{-K}{(s+1)(s+2)+K}$$
(C) $$\frac{C(s)}{R(s)}=\frac{K}{(s+1)(s+2)-K}$$
(D) $$\frac{C(s)}{R(s)}=\frac{K}{(s+1)(s+2)+K}$$
Solution: (C)
Q19. Power consumed by a balanced 3-phase, 3-wire load is measured by the two wattmeter method. The first wattmeter reads twice that of the second. Then the load impedance angle in radians is
(A) π/12 (B) π/8 (C) π/6 (D) π/3
Solution: (C)
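Using the standard two-wattmeter relation tan φ = √3 (W1 − W2)/(W1 + W2) with W1 = 2 W2:

```python
import math

w2 = 1.0
w1 = 2.0*w2                                          # first wattmeter reads twice the second
phi = math.atan(math.sqrt(3)*(w1 - w2)/(w1 + w2))    # load impedance angle = atan(1/sqrt(3))
```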
Q20. In an oscilloscope screen, linear sweep is applied at the
(A) vertical axis (B) horizontal axis
(C) origin (D) both horizontal and vertical axis
Solution: (B)
Q21. A cascade of three identical modulo-5 counters has an overall modulus of
(A) 5 (B) 25 (C) 125 (D) 625
Solution: (C)
Q22. In the Wien Bridge oscillator circuit shown in figure, the bridge is balanced when
(A) $$\frac{R_{3}}{R_{4}}=\frac{R_{1}}{R_{2}}, \omega =\frac{1}{\sqrt{R_{1}C_{1}R_{2}C_{2}}}$$
(B) $$\frac{R_{2}}{R_{1}}=\frac{C_{2}}{C_{1}}, \omega =\frac{1}{R_{1}C_{1}R_{2}C_{2}}$$
(C) $$\frac{R_{3}}{R_{4}}=\frac{R_{1}}{R_{2}}+\frac{C_{2}}{C_{1}}, \omega =\frac{1}{\sqrt{R_{1}C_{1}R_{2}C_{2}}}$$
(D) $$\frac{R_{3}}{R_{4}}=\frac{R_{1}}{R_{2}}=\frac{C_{2}}{C_{1}}, \omega =\frac{1}{R_{1}C_{1}R_{2}C_{2}}$$
Solution: (C)
Q23. The magnitude of the mid-band voltage gain of the circuit shown in figure is (assuming hfe of the transistor to be 100)
(A) 1 (B) 10 (C) 20 (D) 100
Solution: (D)
Q24. The figure shows the circuit of a rectifier fed from a 230-V (rms), 50-Hz sinusoidal voltage source. If we want to replace the current source with a resistor so that the rms value of the current supplied by the voltage source remains unchanged, the value of the resistance (in ohms) is __________ (Assume diodes to be ideal.)
Solution: Key = 23
Q25. Figure shows four electronic switches (i), (ii), (iii) and (iv). Which of the switches can block voltages of either polarity (applied between terminals ‘a’ and ‘b’) when the active device is in the OFF state?
(A) (i), (ii) and (iii) (B) (ii), (iii) and (iv)
(C) (ii) and (iii) (D) (i) and (iv)
Solution: (C)
Q26. Let g: [0, ∞) → [0, ∞) be a function defined by g(x) = x – [x], where [x] represents the integer part of x. (That is, [x] is the largest integer less than or equal to x).
The value of the constant term in the Fourier series expansion of g(x) is _______
Solution: Key = 0.5
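The constant term is the mean of g over one period, ∫₀¹ (x − [x]) dx = 1/2. A midpoint-rule check:

```python
n = 100000
# midpoint rule for the integral of x - floor(x) over [0, 1)
a0 = sum((k + 0.5)/n - int((k + 0.5)/n) for k in range(n)) / n
```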
Q27. A fair coin is tossed n times. The probability that the difference between the number of heads and tails is (n − 3) is
(A) $$2^{-n}$$ (B) 0 (C) $$^{n}\textrm{C}_{n-3}\,2^{-n}$$ (D) $$2^{-n+3}$$
Solution: (B)
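H − T = ±(n − 3) together with H + T = n would force H = (2n − 3)/2 or H = 3/2, neither of which is an integer, so the probability is 0. A brute-force enumeration for small n confirms this:

```python
from itertools import product

counts = []
for n in range(4, 9):
    # count toss sequences with |heads - tails| = n - 3
    c = sum(1 for toss in product('HT', repeat=n)
            if abs(toss.count('H') - toss.count('T')) == n - 3)
    counts.append(c)
# every count is 0
```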
Q28. The line integral of function F = yzi, in the counterclockwise direction, along the circle x2 + y2 = 1 at z = 1 is
(A) −2π (B) −π (C) π (D) 2π
Solution: (B)
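Parametrizing the circle as x = cos t, y = sin t, z = 1 gives ∮ yz dx = ∫₀^{2π} −sin² t dt = −π. A numeric check:

```python
import math

n = 100000
total = 0.0
for k in range(n):
    t = 2*math.pi*(k + 0.5)/n
    # F . dr = y*z*dx with x = cos t, y = sin t, z = 1, so dx = -sin t dt
    total += -math.sin(t)**2 * (2*math.pi/n)
# total is approximately -pi
```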
Q29. An incandescent lamp is marked 40 W, 240V. If resistance at room temperature (26°C) is 120 Ω, and temperature coefficient of resistance is 4.5 × 10−3/°C, then its ‘ON’ state filament temperature in °C is approximately _______
Solution: Key = 2470 to 2471
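Using R_hot = V²/P for the ON state and the linear resistance model R_hot = R₂₆ (1 + α (T − 26)):

```python
r_hot = 240.0**2 / 40.0                  # ON-state resistance: P = V^2/R -> 1440 ohm
t_on = 26 + (r_hot/120.0 - 1)/4.5e-3     # solve 1440 = 120*(1 + alpha*(T - 26)), about 2470.4 C
```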
Q30. In the figure, the value of resistor R is (25 + I/2) ohms, where I is the current in amperes. The current I is ______
Solution: Key = 10
Q31. In an unbalanced three phase system, phase current Ia = 1 ∠(−90°) pu, negative sequence current Ib2 = 4∠(−150°) pu, zero sequence current Ic0 = 3∠90° pu. The magnitude of phase current Ib in pu is
(A) 1.00 (B) 7.81 (C) 11.53 (D) 13.00
Solution: (C)
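The key can be reproduced with complex arithmetic, reading the given phasors as the phase-b negative-sequence and phase-c zero-sequence components (the usual convention for this problem):

```python
import cmath, math

def ph(mag, deg):
    return cmath.rect(mag, math.radians(deg))

a = ph(1, 120)                 # sequence operator
Ia, Ib2, Ic0 = ph(1, -90), ph(4, -150), ph(3, 90)
Ia0 = Ic0                      # zero sequence is the same in every phase
Ia2 = Ib2/a                    # negative-sequence set: (Ia2, a*Ia2, a^2*Ia2)
Ia1 = Ia - Ia0 - Ia2           # from Ia = Ia0 + Ia1 + Ia2
Ib = Ia0 + a**2*Ia1 + a*Ia2    # phase-b current
mag_Ib = abs(Ib)               # about 11.53 pu
```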
Q32. The following four vector fields are given in Cartesian co-ordinate system. The vector field which does not satisfy the property of magnetic flux density is
(A) $$y^{2}a_{x}+z^{2}a_{y}+x^{2}a_{z}$$
(B) $$z^{2}a_{x}+x^{2}a_{y}+y^{2}a_{z}$$
(C) $$x^{2}a_{x}+y^{2}a_{y}+z^{2}a_{z}$$
(D) $$y^{2}z^{2}a_{x}+x^{2}z^{2}a_{y}+x^{2}y^{2}a_{z}$$
Solution: (C)
Q33. The function shown in the figure can be represented as
(A) $$u(t)-u(t-T)+\frac{(t-T)}{T}u(t-T)-\frac{(t-2T)}{T}u(t-2T)$$
(B) $$u(t)+\frac{t}{T}u(t-T)-\frac{t}{T}u(t-2T)$$
(C) $$u(t)-u(t-T)+\frac{(t-T)}{T}u(t)-\frac{(t-2T)}{T}u(t)$$
(D) $$u(t)+\frac{(t-T)}{T}u(t-T)-2\frac{(t-2T)}{T}u(t-2T)$$
Solution: (A)
Q34. Let $$X(z)=\frac{1}{1-z^{-3}}$$ be the Z-transform of a causal signal x[n]. Then, the values of x[2] and x[3] are
(A) 0 and 0 (B) 0 and 1 (C) 1 and 0 (D) 1 and 1
Solution: (B)
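Multiplying out (1 − z⁻³) X(z) = 1 gives the causal recursion x[n] = δ[n] + x[n − 3], which can be unrolled directly:

```python
N = 10
x = [0]*N
for n in range(N):
    # x[n] - x[n-3] = delta[n], with x[n] = 0 for n < 0 (causal)
    x[n] = (1 if n == 0 else 0) + (x[n-3] if n >= 3 else 0)
# x = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]
```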
Q35. Let f(t) be a continuous-time signal and let F(ω) be its Fourier Transform defined by $$F(\omega )=\int_{-\infty }^{\infty }f(t)e^{j\omega t}dt$$
Define g(t) by $$g(t)=\int_{-\infty }^{\infty }F(u)e^{-jut}du$$
What is the relationship between f(t) and g(t)?
(A) g(t) would always be proportional to f(t).
(B) g(t) would be proportional to f(t) if f(t) is an even function.
(C) g(t) would be proportional to f(t) only if f(t) is a sinusoidal function.
(D) g(t) would never be proportional to f(t).
Solution: (B)
Q36. The core loss of a single phase, 230/115 V, 50 Hz power transformer is measured from 230 V side by feeding the primary (230 V side) from a variable voltage variable frequency source while keeping the secondary open circuited. The core loss is measured to be 1050 W for 230 V, 50 Hz input. The core loss is again measured to be 500 W for 138 V, 30 Hz input. The hysteresis and eddy current losses of the transformer for 230 V, 50 Hz input are respectively,
(A) 508W and 542W (B) 468W and 582W
(C) 498W and 552W (D) 488W and 562W
Solution: (A)
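With V/f held constant (230/50 = 138/30) the peak flux is constant, so the core loss separates as A·f (hysteresis) + B·f² (eddy current); solving the two measurements:

```python
# 50*A + 2500*B = 1050  (230 V, 50 Hz)
# 30*A +  900*B =  500  (138 V, 30 Hz)
B = (1050*30 - 500*50)/(2500*30 - 900*50)
A = (1050 - 2500*B)/50
p_hyst = A*50      # hysteresis loss at 50 Hz, about 508 W
p_eddy = B*2500    # eddy-current loss at 50 Hz, about 542 W
```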
Q37. A 15 kW, 230 V dc shunt motor has armature circuit resistance of 0.4 Ω and field circuit resistance of 230 Ω. At no load and rated voltage, the motor runs at 1400 rpm and the line current drawn by the motor is 5 A. At full load, the motor draws a line current of 70 A. Neglect armature reaction. The full load speed of the motor in rpm is _________.
Solution: Key = 1239 to 1242
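The field current V/Rf is unchanged, so flux is constant and speed scales with back EMF:

```python
V, Ra, Rf = 230.0, 0.4, 230.0
If = V/Rf                       # field current = 1 A
Eb0 = V - (5 - If)*Ra           # no-load back EMF  = 228.4 V (armature current 4 A)
Eb1 = V - (70 - If)*Ra          # full-load back EMF = 202.4 V (armature current 69 A)
N_full = 1400*Eb1/Eb0           # N proportional to Eb at constant flux, about 1240.6 rpm
```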
Q38. A 3 phase, 50 Hz, six pole induction motor has a rotor resistance of 0.1 Ω and reactance of 0.92 Ω. Neglect the voltage drop in stator and assume that the rotor resistance is constant. Given that the full load slip is 3%, the ratio of maximum torque to full load torque is
(A) 1.567 (B) 1.712 (C) 1.948 (D) 2.134
Solution: (C)
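Using the slip at maximum torque s_max = R₂/X₂ and the standard ratio Tmax/Tfl = (s_max² + s_fl²)/(2 s_max s_fl):

```python
sm = 0.1/0.92                     # slip at maximum torque = R2/X2
s = 0.03                          # full-load slip
ratio = (sm**2 + s**2)/(2*sm*s)   # Tmax/Tfl, about 1.95
```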
Q39. A three phase synchronous generator is to be connected to the infinite bus. The lamps are connected as shown in the figure for the synchronization. The phase sequence of bus voltage is R-Y-B and that of incoming generator voltage is R’-Y’-B’.
It was found that the lamps are becoming dark in the sequence La-Lb-Lc. It means that the phase sequence of incoming generator is
(A) opposite to infinite bus and its frequency is more than infinite bus
(B) opposite to infinite bus but its frequency is less than infinite bus
(C) same as infinite bus and its frequency is more than infinite bus
(D) same as infinite bus and its frequency is less than infinite bus
Solution: (A)
Q40. A distribution feeder of 1 km length having resistance, but negligible reactance, is fed from both the ends by 400V, 50 Hz balanced sources. Both voltage sources S1 and S2 are in phase. The feeder supplies concentrated loads of unity power factor as shown in the figure.
The contributions of S1 and S2 in 100 A current supplied at location P respectively, are
(A) 75 A and 25 A (B) 50 A and 50 A
(C) 25 A and 75 A (D) 0 A and 100 A
Solution: (D)
Q41. A two bus power system shown in the figure supplies load of 1.0+j0.5 p.u.
The values of V1 in p.u. and δ2 respectively are
(A) 0.95 and 6.00° (B) 1.05 and −5.44°
(C) 1.1 and −6.00° (D) 1.1 and −27.12°
Solution: (B)
Q42. The fuel cost functions of two power plants are
Plant $$P_{1}: C_{1}=0.05P_{g1}^{2}+AP_{g1}+B$$
Plant $$P_{2}: C_{2}=0.10P_{g2}^{2}+AP_{g2}+2B$$
where Pg1 and Pg2 are the generated powers of the two plants, and A and B are constants. If the two plants optimally share 1000 MW load at incremental fuel cost of 100 Rs/MWh, the ratio of load shared by plants P1 and P2 is
(A) 1:4 (B) 2:3 (C) 3:2 (D) 4:1
Solution: (D)
Q43. The over current relays for the line protection and loads connected at the buses are shown in the figure.
The relays are IDMT in nature, having the characteristic shown.
The maximum and minimum fault currents at bus B are 2000 A and 500 A respectively. Assuming the time multiplier setting and plug setting for relay RB to be 0.1 and 5A respectively, the operating time of RB (in seconds) is __________
Solution: Key = 0.21 to 0.23
Q44. For the given system, it is desired that the system be stable. The minimum value of α for this condition is ___________.
Solution: Key = 0.61 to 0.63
Q45. The Bode magnitude plot of the transfer function
Note that −6 dB/octave = −20 dB/decade. The value of $$\frac{a}{bK}$$ is
Solution: Key = 0.7 to 0.8
Q46. A system matrix is given as follows.
The absolute value of the ratio of the maximum eigenvalue to the minimum eigenvalue is _________
Solution: Key = 2.9 to 3.1
Q47. The reading of the voltmeter (rms) in volts, for the circuit shown in the figure is __________
Solution: Key = 140 to 142
Q48. The dc current flowing in a circuit is measured by two ammeters, one PMMC and another electrodynamometer type, connected in series. The PMMC meter contains 100 turns in the coil, the flux density in the air gap is 0.2 Wb/m2, and the area of the coil is 80 mm2. The electrodynamometer ammeter has a change in mutual inductance with respect to deflection of 0.5 mH/deg. The spring constants of both the meters are equal. The value of current, at which the deflections of the two meters are same, is ________
Solution: Key = 3.0 to 3.4
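Equating the deflections with equal spring constants gives N·B·A·I = I²·(dM/dθ); keeping dM/dθ in the given per-degree units (consistent with the answer key) yields:

```python
N, B, A = 100, 0.2, 80e-6   # turns, air-gap flux density (T), coil area (m^2)
dM = 0.5e-3                 # change in mutual inductance, H per degree (as given)
# PMMC torque N*B*A*I equals electrodynamometer torque I**2 * dM at equal deflection
I = N*B*A/dM                # = 3.2 A
```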
Q49. Given that the op-amps in the figure are ideal, the output voltage V0 is
(A) $$\left ( V_{1}-V_{2} \right )$$
(B) $$2\left ( V_{1}-V_{2} \right )$$
(C) $$\frac{\left ( V_{1}-V_{2} \right )}{2}$$
(D) $$\left ( V_{1}+V_{2} \right )$$
Solution: (B)
Q50. Which of the following logic circuits is a realization of the function F whose Karnaugh map is shown in figure
(A)
(C) (D)
Solution: (C)
Q51. In the figure shown, assume the op-amp to be ideal. Which of the alternatives gives the correct Bode plots for the transfer function
(A)
(B)
(C)
(D)
Solution: (A)
Q52. An output device is interfaced with 8-bit microprocessor 8085A. The interfacing circuit is shown in figure
The interfacing circuit makes use of a 3-line to 8-line decoder having 3 enable lines. The address of the device is
(A) $$50_{H}$$ (B) $$5000_{H}$$
(C) $$A0_{H}$$ (D) $$A000_{H}$$
Solution: (B)
Q53. The figure shows the circuit diagram of a rectifier. The load consists of a resistance 10 Ω and an inductance 0.05 H connected in series. Assuming ideal thyristor and ideal diode, the thyristor firing angle (in degree) needed to obtain an average load voltage of 70 V is ______
Solution: Key = 69 to 70
Q54. Figure (i) shows the circuit diagram of a chopper. The switch S in the circuit in figure (i) is switched such that the voltage vD across the diode has the wave shape as shown in figure (ii). The capacitance C is large so that the voltage across it is constant. If switch S and the diode are ideal, the peak to peak ripple (in A) in the inductor current is ______
Solution: Key = 2.49 to 2.51
Q55. The figure shows one period of the output voltage of an inverter. α should be chosen such that 60° < α < 90°. If rms value of the fundamental component is 50 V, then α in degree is _________
Solution: Key = 76.5 to 78.0
# Tag Info
12
YES. First construct a "Cantor like set" with positive measure $C$ - otherwise known as fat Cantor set. Then define the product $C\times [0,1]\subset \mathbb R^2$ and finally to make it connected define $$T=C\times [0,1]\cup [0,1]\times\{0\}.$$ What is a Cantor like set? Take $C_0=[0,1]$, then $C_1=[0,1]\setminus [a_1^0,b_1^0]$, then $C_2=C_1\setminus([...
8
The answer "yes" follows from results in the article "On Two Halves Being Two Wholes" by Andrew Simoson in The American Mathematical Monthly Vol. 91, No. 3 (Mar., 1984), pp. 190-193. Some readers might not have access, but here is the JSTOR link: http://www.jstor.org/stable/2322357 The article was cited in the article i707107 linked to in a comment. In ...
4
The family $\mathcal F$ of nonempty open subsets of $[0,1]$ of measure $< 1$ has cardinality $c$, so it can be well-ordered so that each member of $\mathcal F$ has fewer than $c$ predecessors. Since the complement in $[0,1]$ of any member of $\mathcal F$ has cardinality $c$, we can choose by transfinite induction, for each $U \in \mathcal F$, a countably ...
4
Actually, it's the other way around: $\mu^*(A)\le \mu(A)$. This is because $A$ itself covers $A$. Specifically, let $A_0=A$ and $A_i=\emptyset$ for $i>0$. Then $A\subseteq\bigcup A_i$. Now it's easy to see that there are two possibilities for $\mu(\emptyset)$, by finite additivity: either $0$ or $\infty$. If $\mu(\emptyset)=0$, then $\sum\mu(A_i)=\mu(A)$ ...
4
Here's a proof by contrapositive: suppose that $C \subseteq \Omega$ and there is an $A \in \mathcal A$ with $\mu^*(A \cap C) < \mu(A)$. Noting that $C \subset A^c \cup (A \cap C)$, we find that $$\mu^*(C) \leq \mu^*(A \cap C) + \mu^*(A^c) = \mu^*(A \cap C) + \mu(A^c) < \mu(A) + \mu(A^c) = 1$$ So, $\mu^*(C)<1$. We have reached the desired ...
4
If $C$ is the Cantor set and $A=B=\{e^{x}:x\in C\}$ then $A$ and $B$ have measure $0$ but $AB$ contains all numbers of the form $e^{x+y}$ with $x,y \in C$. Since $C+C=[0,2]$ it follows that $AB$ does not have measure $0$.
[I have used the following facts: $e^{x}$ is absolutely continuous on $[0,1]$ and any absolutely continuous function maps sets of measure $...
3
From the article provided by @Jonas Meyer: http://www.jstor.org/stable/2322357 An alternative approach to this problem is possible. This approach is already provided by @Syang Chen at MathOverflow: https://mathoverflow.net/questions/59978/additive-subgroups-of-the-reals/59980 Let $f(x) = m^*(G\cap [0,x])$. Then, for any $x, y \in G \cap (0,\infty)$, we ...
3
Step 1: Let $E \subseteq X$ be an arbitrary set. Show that there exists $B \in S(R)$ such that $E \subseteq B$ and $\mu(B) = \mu^*(E)$. Step 2: Let $E \subseteq X$ be such that $\mu^*(E) < \infty$. Choose $B$ as in step 1. Then \begin{align*} \mu_*(E) &= \sup \{\mu(A); A \subseteq E, A \in S(R)\} \\ &= \sup\{\mu(A); B \cap A^c \supseteq B \...
3
Generalize the construction of a Bernstein set. Enumerate $A=\{x_\xi:\xi<2^\omega\}$. Let $\mathscr{F}=\{F_\xi:\xi<2^\omega\}$ be the family of uncountable closed subsets of $A$. Recursively construct distinct $y_{\xi,n}\in A$ for $\langle\xi,n\rangle\in 2^\omega\times\omega$ as follows. If $\eta<2^\omega$, and we have $y_{\xi,n}$ for each $\...
3
Ok, let's look at it step by step. $f(x)=c_1$. Now we need to check the set $\{x\in X\mid f(x)>c\}$. Basically this is the same as $\{x\in X\mid c_1>c\}$. Now we argue depending on the variable $c$. For $c \ge c_1$ we get $\{x\in X\mid c_1>c\ge c_1\}=\emptyset$ since $c_1>c_1$ cannot happen. And as we know, the empty set is in the sigma algebra by definition. Now for $c <...
3
You can, but the result is somewhat trivial. Since every open set is Lebesgue measurable, we have $$\inf\{m(E)\mid E \text{ is measurable},\; A \subseteq E\} \leq \inf\{m(E)\mid E \text{ is open}, \;A\subseteq E\} = m^*(A).$$ This follows because the left infimum has a greater range. By monotonicity, $m^{*}(A) \leq m^{*}(E) = m(E)$ holds for all measurable $E$ ...
3
For $x\in A$, let $U_x=\{y\in X:d(y,x)<\frac12 d(A,B)\}$. Clearly this set is open, so $U:=\bigcup_{x\in A}U_x$ is open. Obviously $A\subseteq U$. If $y\in U$, then there exists $x\in A$ such that $$d(x,y)<\frac12 d(A,B)\le\frac12\inf_{z\in B}d(x,z),$$ so in particular $y\notin B$. Hence $$\mu^*(A\cup B)=\mu^*\big((A\cup B)\cap U\big)+\mu^*\big((A\cup ...
3
For each $m\in\mathbb N$, you know that there is a family $(U_{k,m})_{k\in\mathbb N}$ of open sets in $\mathbb{R}^n$ such that $A\subset\bigcup_{k\in\mathbb N}U_{k,m}$ and that $\sum_{k=1}^\infty l(U_{k,m})<\mu^*(A)+\frac1m$. Define $B_m=\bigcup_{k\in\mathbb N}U_{k,m}$. Then $A\subset B_m$ and$$\mu^*(A)\leqslant\mu(B_m)<\mu^*(A)+\frac1m.$$ Now, define, ...
3
Denote by $(X_n)_{n \in \mathbb{N}} \subseteq \mathcal{A}$ a sequence of increasing sets, $X_n \uparrow X$, such that $\mu(X_n)< \infty$ for all $n \in \mathbb{N}$. You have already shown that $$B \in \mathcal{M} \implies \exists F \in \mathcal{A}, F \supseteq B: \mu(F) = \mu^*(B). \tag{1}$$ Roughly speaking, the idea is to use $(1)$ for $B:=A^c \in \...
3
HINT: $P+n$ form a partition of $\mathbb{R}$, so $A\cap (P+n)$ form a partition of $A$. Now use $\mu(A\cap (P+n)) = \mu((A-n) \cap P)$
3
Fix $\epsilon>0$. For all $i\ge 1$, $\exists B_i\in \mathcal{A}$ s.t. $\mu(B_i)\le \mu^*(A_i)+\epsilon2^{-i}$. Then $$\mu^*\left(\bigcup_{i\ge 1}A_i\right)\le \mu^*\left(\bigcup_{i\ge 1}B_i\right)\le \sum_{i\ge 1}\mu^*(A_i)+\epsilon.$$ It's clear that $\mu^*\mid_{\mathcal{A}}=\mu$ (by monotonicity of $\mu$). If $A\in\mathcal{A}$, then for any $E\subset ...
3
What we want to show here is that for any countable collection $\lbrace A_{i} \rbrace$ of subsets of $X$, we have $$\mu^{*} \left( \bigcup_{i = 1}^{\infty} A_{i} \right) \leq \sum_{i=1}^{\infty} \mu^{*}(A_{i})$$ For #1, note that if $\bigcup_{i=1}^{\infty} A_{i} = \emptyset$, then each $A_{i} = \emptyset$. Clearly, $\mu^{*} \left( \bigcup_{i=1}^{\infty} A_{...
3
6) $\displaystyle\sum_{k=1}^{\infty}\frac{\epsilon}{2^{k}}=\epsilon\sum_{k=1}^{\infty}\dfrac{1}{2^{k}}=\epsilon\cdot\dfrac{\dfrac{1}{2}}{1-\dfrac{1}{2}}=\epsilon$. 7) No, the sum is $\epsilon$, but now we have $0\leq m^{\ast}(C)<\epsilon$ for every $\epsilon>0$; if $m^{\ast}(C)>0$, then put $\epsilon$ to be $m^{\ast}(C)$, then we arrive at $m^{\ast}...
3
(This answer assumes that $I$ is countable) This is a case for the $\epsilon/2^n$ trick, in combination with the "give yourself an epsilon of room" trick. Once we've internalized these tricks the proof seems straightforward. Let $\epsilon > 0$. For each positive integer $n$, let $\{I_{nj}\}$ be a countable collection of intervals such that $A_n \subset \...
3
Partial answer. Here's a proof for $n=1$. We actually show that $E_\epsilon$ is the union of finitely many pairwise disjoint balls. Since $\{x : G(x) > 0\}$ is open, we may write it as $\{G > 0\} = \sqcup_{n=1}^\infty (a_n,b_n)$, a countable union of disjoint intervals (the proof is just to take maximal intervals in $\{G > 0\}$). Now, since $G$ is ...
2
(3) The sum starts at $n=1$ and $\frac{1}{2} + \frac{1}{2^2} + \cdots = 1$. Even if the sum really were $2\epsilon$, which would be an actual error in the proof, it would be trivial to fix (just carry forward the $2\epsilon$; having a $2\epsilon$ in the final inequality is just as good). (2) If you know that $a<b$ then also $a\le b$. Which kind of ...
2
In general, whenever we deal with measures (or integrals, or things of the sort), there is some kind of simple sets (or functions) which generate all other things, in some sense. You can see this, for example, when we define integrals: First define integrals for characteristic functions, then for simple functions, and then take some limits and define for ...
2
Another reason besides the excellent answer given is that every set has an outer measure, whereas not every set has a measure. If a set can't be proven to be measurable, it's common to investigate it with the outer measure.
If the set turns out to be measurable, the outer measure results still apply because they agree.
2
It is true when $\epsilon=1/2$: $m^\ast[a,b]\leq b-a+2\times 1/2$. It is true when $\epsilon=1/4$: $m^\ast[a,b]\leq b-a+2\times 1/4$. It is true when $\epsilon=1/10000000$: $m^\ast[a,b]\leq b-a+2\times 1/10000000$. Since we can make $\epsilon$ arbitrarily small, it must be true that $m^\ast[a,b]\leq b-a$. If $m^\ast[a,b]$ were even a little bit bigger than $...
2
The important difference between the two is that one covering is allowed to be countable, while the other is finite. The fact that allowing countable coverings gives $\mathbb Q \cap [0,1]$ measure 0 is the same as for all other countable sets; that a finite covering has measure at least 1 is a good exercise.
2
You mentioned $m$, so I am assuming you are talking about the Lebesgue measure $m$ on $\mathbb{R}^k$. Given $\epsilon > 0$, the entire space can be written as the union of countably many disjoint $k$-cells $W_n$ such that $m(W_n) < \epsilon$ for all $n$. Since $m(E) < \infty$, we can choose compact $K\subset E$ such that $m(E-K) < \epsilon$. $K$ ...
2
$$\mathbb{Q}\cap[0,1] \subseteq \bigcup_{n=1}^k I_n$$ $$\overline{\mathbb{Q}\cap[0,1]} \subseteq \overline{\bigcup_{n=1}^k I_n}= \bigcup_{n=1}^k \overline{I_n}$$ $$1=m[0,1]=m(\overline{\mathbb{Q}\cap[0,1]})\le m(\bigcup_{n=1}^k \overline{I_n})\le \sum_{n=1}^k l(\overline{I_n})=\sum_{n=1}^k l(I_n)$$
2
For any set $A$, $m^*(A)+m_*([0,1]\setminus A)=1$. Also, it is given that $m^*(A)+m^*([0,1]\setminus A)=1$. Hence we have $m^*([0,1]\setminus A)=m_*([0,1]\setminus A)\implies [0,1]\setminus A$ is measurable $\implies A$ is measurable.
2
Since you aren't given that $f$ is measurable, you can't say that $\{x \in E : f(x) > \alpha\}$ is measurable for any $\alpha \in \mathbb{R}$. Instead, recall that continuous functions are measurable. So $f$ restricted to the set to $E \backslash D$ is measurable. Denote this restriction $g \doteq f \vert_{E \backslash D}$. Then we have for any \$\...
Only top voted, non community-wiki answers of a minimum length are eligible
## Lectures on Chevalley groups
Last update: 13 July 2013
## §11. Some twisted Groups
In this section we study the group ${G}_{\sigma }$ of fixed points of a Chevalley group $G$ under an automorphism $\sigma \text{.}$ We consider only the simplest case, in which $\sigma$ fixes $U,H,{U}^{-},N,$ hence acts on $W=N/H$ and permutes the ${𝔛}_{\alpha }\text{'s.}$ Before launching into the general theory, we consider some examples:
(a) $G={SL}_{n}\text{.}$ If $\sigma$ is a nontrivial graph automorphism, it has the form $\sigma x=a{x\prime }^{-1}{a}^{-1}$ (where $x\prime$ is the transpose of $x$ and
$a=\begin{bmatrix}&&{\epsilon }_{1}\\&{\epsilon }_{2}&\\⋰&&\end{bmatrix}$ (the matrix with ${\epsilon }_{1},{\epsilon }_{2},\dots$ along the antidiagonal),
${\epsilon }_{i}=±1\text{).}$ We see that $\sigma$ fixes $x$ if and only if $xax\prime =a\text{.}$ If $a$ is skew, we get ${G}_{\sigma }={Sp}_{n}\text{.}$ If $a$ is symmetric, we get ${G}_{\sigma }={SO}_{n}$ (split form). The group ${SO}_{2n}$ in characteristic 2 does not arise here, but it can be recovered as a subgroup of ${SO}_{2n+1},$ namely the one "supported" by the long roots.
Let $t\to \stackrel{‾}{t}$ be an involutory automorphism of $k$ having ${k}_{0}$ as fixed field. If $\sigma$ is now modified so that $\sigma x=a{\stackrel{‾}{x\prime }}^{-1}{a}^{-1},$ then ${G}_{\sigma }={SU}_{n}$ (split form). This last result holds even if $k$ is a division ring provided $t\to \stackrel{‾}{t}$ is an anti-automorphism.
If $V$ is the vector space over $ℝ$ generated by the roots and $W$ is the Weyl group, then $\sigma$ acts on $V$ and $W$ and has fixed point subspaces ${V}_{\sigma }$ and ${W}_{\sigma }\text{.}$ ${W}_{\sigma }$ is a reflection group on ${V}_{\sigma }$ with the corresponding "roots" being the projection on ${V}_{\sigma }$ of the original roots. To see these facts, we write $n=2m+1$ or $n=2m$ and use the indices $-m,$ $-\left(m-1\right),$ $\dots ,$ $m-1,$ $m$ with the index $0$ omitted in case $n=2m\text{.}$ If ${\omega }_{i}$ is the weight on $H$ defined by ${\omega }_{i}:\text{diag}\left({a}_{-m},\dots ,{a}_{m}\right)\to {a}_{i},$ then the roots are ${\omega }_{i}-{\omega }_{j}$ $\left(i\ne j\right)$ and $\sigma {\omega }_{i}=-{\omega }_{-i}\text{.}$ ${V}_{\sigma }$ is thus spanned by $\left\{{\omega }_{i}^{\prime }={\omega }_{-i}-{\omega }_{i} | i>0\right\}\text{.}$ Now $w\in {W}_{\sigma }$ if and only if $w$ commutes with $\sigma ,$ i.e., if and only if $w\left({\omega }_{i}-{\omega }_{j}\right)={\omega }_{k}-{\omega }_{\ell }$ implies $w\left({\omega }_{-i}-{\omega }_{-j}\right)={\omega }_{-k}-{\omega }_{-\ell }\text{.}$ We see that ${W}_{\sigma }$ is the octahedral group acting on ${V}_{\sigma }$ by all permutations and sign changes of the basis $\left\{{\omega }_{i}^{\prime }\right\}\text{.}$ The projection of ${\omega }_{i}-{\omega }_{j}$ $\left(i,j\ne 0\right)$ on ${V}_{\sigma }$ is $\frac{1}{2}\left({\omega }_{i}-{\omega }_{-i}-{\omega }_{j}+{\omega }_{-j}\right)=\frac{1}{2}\left(±{\omega }_{k}^{\prime }±{\omega }_{\ell }^{\prime }\right)$ $\left(k,\ell >0\right)$ or $±{\omega }_{k}^{\prime }$ $\left(k>0\right)\text{.}$ If either $i=0$ or $j=0$ the projection of ${\omega }_{i}-{\omega }_{j}$ is $±\frac{1}{2}{\omega }_{k}^{\prime }$ $\left(k>0\right)\text{.}$ The projected system is of type ${C}_{m}$ if $n=2m$ or $B{C}_{m}$ (a combination of ${B}_{m}$ and ${C}_{m}\text{)}$ if $n=2m+1\text{.}$
(b) $G={SO}_{2n}$ (split form, char $k\ne 2\text{).}$ We take the group defined by the form $f=2\underset{i=1}{\overset{n}{\Sigma }}{x}_{i}{x}_{-i}\text{.}$ We will take the graph automorphism to be $\sigma x={a}_{1}a{x\prime }^{-1}{a}^{-1}{a}_{1}^{-1}$
$\left( a=\begin{bmatrix}&&1\\&⋰&\\1&&\end{bmatrix}, {a}_{1}=\begin{bmatrix}1&&&&&\\&⋱&&&&\\&&0&1&&\\&&1&0&&\\&&&&⋱&\\&&&&&1\end{bmatrix} \right),$ where $a$ has $1\text{'s}$ along the antidiagonal and ${a}_{1}$ is the identity matrix except that its central $2×2$ block is replaced as shown.
The corresponding form fixed by elements of ${G}_{\sigma }$ is $f\prime =2\underset{i=2}{\overset{n}{\Sigma }}{x}_{i}{x}_{-i}+{x}_{1}^{2}+{x}_{-1}^{2}\text{.}$ Thus, ${G}_{\sigma }$ fixes $f\prime -f={\left({x}_{1}-{x}_{-1}\right)}^{2}$ and hence the hyperplane ${x}_{1}-{x}_{-1}=0\text{.}$ ${G}_{\sigma }$ on this hyperplane is the group ${SO}_{2n-1}\text{.}$
If we now combine $\sigma$ with $t\to \stackrel{‾}{t}$ as in (a), the form $f\prime$ is replaced by ${f}^{\prime \prime }=\underset{i=2}{\overset{n}{\Sigma }}\left({x}_{i}{\stackrel{‾}{x}}_{-i}+{x}_{-i}{\stackrel{‾}{x}}_{i}\right)+{x}_{1}{\stackrel{‾}{x}}_{1}+{x}_{-1}{\stackrel{‾}{x}}_{-1}\text{.}$ If we make the change of coordinates ${x}_{1}$ replaced by ${x}_{1}+t{x}_{-1},$ ${x}_{-1}$ replaced by ${x}_{1}+\stackrel{‾}{t}{x}_{-1}$ $\left(t\in k,t\ne \stackrel{‾}{t}\right),$ we see that $f$ is replaced by $2\underset{i=2}{\overset{n}{\Sigma }}{x}_{i}{x}_{-i}+2\left({x}_{1}^{2}+a{x}_{1}{x}_{-1}+b{x}_{-1}^{2}\right)$ and ${f}^{\prime \prime }$ is replaced by $\underset{i=2}{\overset{n}{\Sigma }}\left({x}_{i}{\stackrel{‾}{x}}_{-i}+{x}_{-i}{\stackrel{‾}{x}}_{i}\right)+\left(2{x}_{1}{\stackrel{‾}{x}}_{1}+a\left({x}_{1}{\stackrel{‾}{x}}_{-1}+{x}_{-1}{\stackrel{‾}{x}}_{1}\right)+2b{x}_{-1}{\stackrel{‾}{x}}_{-1}\right),$ where $a=t+\stackrel{‾}{t}$ and $b=t\stackrel{‾}{t}\text{.}$ Since these two forms have the same matrix, ${G}_{\sigma }$ is ${SO}_{2n}$ over ${k}_{0}$ re the new version of $f\text{.}$ That is, ${G}_{\sigma }$ is ${SO}_{2n}\left({k}_{0}\right)$ for a form of index $n-1$ which has index $n$ over $k\text{.}$
Example: If $n=4,$ $k=ℂ,$ and ${k}_{0}=ℝ,{G}_{\sigma }$ is the Lorentz group (re $f={x}_{1}^{2}-{x}_{2}^{2}-{x}_{3}^{2}-{x}_{4}^{2}\text{).}$ If we observe that ${D}_{2}$ corresponds to ${A}_{1}×{A}_{1},$ we see that ${SL}_{2}\left(ℂ\right)$ and the $0\text{-component}$ of the Lorentz group are isomorphic over their centers. Thus, ${SL}_{2}\left(ℂ\right)$ is the universal covering group of the connected Lorentz group.
Exercise: Work out ${D}_{3}\sim {A}_{3}$ in the same way.
For other examples see E. Cartan, Oeuvres Complètes, No. 38, especially at the end.
Aside from the specific facts worked out in the above examples we should note the following. In the single root length case, the fixed point set of a graph automorphism yields no new group, only an imbedding of one Chevalley group in another (e.g. ${Sp}_{n}$ or ${SO}_{n}$ in ${SL}_{n}\text{).}$ To get a new group (e.g. ${SU}_{n}\text{)}$ we must use a field automorphism as well.
Now to start our general development we will consider first the effect of twisting abstract reflection groups and root systems. Let $V$ be a finite dimensional real Euclidean vector space and let $\Sigma$ be a finite set of nonzero elements of $V$ satisfying
(1) $\alpha \in \Sigma$ implies $c\alpha \notin \Sigma$ if $c>0,$ $c\ne 1\text{.}$
(2) ${w}_{\alpha }\Sigma =\Sigma$ for all $\alpha \in \Sigma$ where ${w}_{\alpha }$ is the reflection in the hyperplane orthogonal to $\alpha \text{.}$
(See Appendix I). We pick an ordering on $V$ and let $P$ (respectively $\Pi \text{)}$ be the positive (respectively simple) elements of $\Sigma$ relative to that ordering. Suppose $\sigma$ is an automorphism of $V$ which permutes the positive multiples of the elements of each of $\Sigma ,$ $P,$ and $\Pi \text{.}$ It is not required that $\sigma$ fix $\Sigma ,$ although it will if all elements of $\Sigma$ have the same length. Let $\rho$ be the corresponding permutation of the roots. Note that $\sigma$ is of finite order and normalizes $W\text{.}$ Let ${V}_{\sigma }$ and ${W}_{\sigma }$ denote the fixed points in $V$ and $W$ respectively. If $\stackrel{‾}{\alpha }$ is the average of the elements in the $\sigma \text{-orbit}$ of $\alpha ,$ then $\left(\beta ,\stackrel{‾}{\alpha }\right)=\left(\beta ,\alpha \right)$ for all $\beta \in {V}_{\sigma }\text{.}$ Hence the projection of $\alpha$ on ${V}_{\sigma }$ is $\stackrel{‾}{\alpha }\text{.}$
Theorem 32: Let $\Sigma ,P,\Pi ,\sigma$ etc. be as above.
(a) The restriction of ${W}_{\sigma }$ to ${V}_{\sigma }$ is faithful.
(b) ${W}_{\sigma }|{V}_{\sigma }$ is a reflection group.
(c) If ${\Sigma }_{\sigma }$ denotes the projection of $\Sigma$ on ${V}_{\sigma },$ then ${\Sigma }_{\sigma }$ is the corresponding "root system"; i.e., $\left\{{w}_{\stackrel{‾}{\alpha }} | {V}_{\sigma },\stackrel{‾}{\alpha }\in {\Sigma }_{\sigma }\right\}$ generates ${W}_{\sigma }|{V}_{\sigma }$ and ${w}_{\stackrel{‾}{\alpha }}{\Sigma }_{\sigma }={\Sigma }_{\sigma }\text{.}$ However, (1) may fail for ${\Sigma }_{\sigma }\text{.}$
(d) If ${\Pi }_{\sigma }$ is the projection of $\Pi$ on ${V}_{\sigma },$ then ${\Pi }_{\sigma }$ is the corresponding "simple system"; i.e. if multiples are cast out (in case (1) fails for ${\Pi }_{\sigma }\text{),}$ then ${\Pi }_{\sigma }$ is linearly independent and the positive elements of ${\Sigma }_{\sigma }$ are positive linear combinations of elements of ${\Pi }_{\sigma }\text{.}$
Proof. Denote the projection of $V$ on ${V}_{\sigma }$ by $v\to \stackrel{‾}{v}\text{.}$ This commutes with $\sigma$ and with all elements of ${W}_{\sigma }\text{.}$ (1) If $\alpha \in \Sigma ,$ then $\stackrel{‾}{\alpha }\ne 0\text{;}$ indeed $\alpha >0$ implies $\stackrel{‾}{\alpha }>0\text{.}$ If $\alpha$ is positive, so are all vectors in the $\sigma \text{-orbit}$ of $\alpha \text{.}$ Thus, their average $\stackrel{‾}{\alpha }$ is also positive. If $\alpha <0,$ then $\stackrel{‾}{\alpha }=-\left(-\stackrel{‾}{\alpha }\right)<0\text{.}$ (2) Proof of (a). If $w\in {W}_{\sigma },$ $w\ne 1,$ then $w\alpha <0$ for some root $\alpha >0\text{.}$ Thus, $w\stackrel{‾}{\alpha }=\stackrel{‾}{w\alpha }<0$ and $\stackrel{‾}{\alpha }>0\text{.}$ So $w|{V}_{\sigma }\ne 1\text{.}$ (3) Let $\pi$ be a $\rho \text{-orbit}$ of simple roots, let ${W}_{\pi }$ be the group generated by all ${w}_{\alpha }$ $\left(\alpha \in \pi \right),$ let ${P}_{\pi }$ be the corresponding set of positive roots, and let ${w}_{\pi }$ be the unique element of ${W}_{\pi }$ so that ${w}_{\pi }{P}_{\pi }=-{P}_{\pi }\text{.}$ Then ${w}_{\pi }\in {W}_{\sigma }$ and ${w}_{\pi }|{V}_{\sigma }={w}_{\stackrel{‾}{\alpha }}|{V}_{\sigma }$ for any root $\alpha \in {P}_{\pi }\text{.}$ To see this, first consider $\sigma {w}_{\pi }{\sigma }^{-1}\in {W}_{\pi }\text{.}$ Since $\sigma {w}_{\pi }{\sigma }^{-1}{P}_{\pi }={P}_{\pi },$ then $\sigma {w}_{\pi }{\sigma }^{-1}={w}_{\pi }$ by uniqueness, and ${w}_{\pi }\in {W}_{\sigma }\text{.}$ Since $\rho$ permutes the elements of $\pi$ in a single orbit, the projections on ${V}_{\sigma }$ of the elements of ${P}_{\pi }$ are all positive multiples of each other. 
It follows that if $\alpha$ is any element of ${P}_{\pi },$ then ${w}_{\pi }\stackrel{‾}{\alpha }=-\stackrel{‾}{\alpha }\text{.}$ If $v\in {V}_{\sigma }$ with $\left(v,\stackrel{‾}{\alpha }\right)=0,$ then $0=\left(v,\stackrel{‾}{\beta }\right)=\left(v,\beta \right)$ for $\beta \in \pi \text{.}$ Hence ${w}_{\pi }v=v\text{.}$ Thus ${w}_{\pi }|{V}_{\sigma }={w}_{\stackrel{‾}{\alpha }}|{V}_{\sigma }\text{.}$ (4) If $\nu$ is a $\rho \text{-orbit}$ of roots and $w\in {W}_{\sigma }$ then all elements of $w\nu$ have the same sign. This follows from $w\sigma \alpha =\sigma w\alpha$ for $\alpha \in \Sigma ,$ $w\in {W}_{\sigma }\text{.}$ (5) $\left\{{w}_{\pi } | \pi$ a $\rho \text{-orbit}$ of simple $\text{roots}\right\}$ generates ${W}_{\sigma }\text{.}$ Let $w\in {W}_{\sigma }$ with $w\ne 1$ and let $\alpha$ be a simple root such that $w\alpha <0\text{.}$ Let $\pi$ be the $\rho \text{-orbit}$ containing $\alpha \text{.}$ By (4), $w{P}_{\pi }<0$ (i.e., $w\beta <0$ for all $\beta \in {P}_{\pi }\text{).}$ Now $w{w}_{\pi }{P}_{\pi }>0$ and ${w}_{\pi }$ permutes the elements of $P-{P}_{\pi }\text{.}$ Hence, $N\left(w{w}_{\pi }\right)=N\left(w\right)-N\left({w}_{\pi }\right)$ (see Appendix II.17). 
Using induction on $N\left(w\right),$ we may thus show that $w$ is a product of ${w}_{\pi }\text{'s.}$ (6) If ${w}_{0}$ is the element of $W$ such that ${w}_{0}P=-P,$ then ${w}_{0}\in {W}_{\sigma }\text{.}$ This follows from $\sigma {w}_{0}{\sigma }^{-1}P=-P$ and the uniqueness of ${w}_{0}\text{.}$ (7) $\left\{w{P}_{\pi } | w\in {W}_{\sigma },\pi$ a $\rho \text{-orbit}$ of simple $\text{roots}\right\}$ is a partition of $\Sigma \text{.}$ If the $w{P}_{\pi }\text{'s}$ are called parts, then $\alpha ,\beta$ belong to the same part if and only if $\stackrel{‾}{\alpha }=c\stackrel{‾}{\beta }$ for some $c>0\text{.}$ To prove (7), we consider $\alpha \in \Sigma ,$ $\alpha >0\text{.}$ Now ${w}_{0}\alpha <0$ and ${w}_{0}={w}_{1}{w}_{2}\dots {w}_{r}$ where each ${w}_{i}={w}_{\pi }$ for some $\rho \text{-orbit}$ of simple roots $\pi$ (by (5) and (6)). Choose $i$ so that ${w}_{i+1}\dots {w}_{r}\alpha >0$ and ${w}_{i}{w}_{i+1}\dots {w}_{r}\alpha <0\text{.}$ If ${w}_{i}={w}_{\pi },$ then ${w}_{i+1}\dots {w}_{r}\alpha \in {P}_{\pi }\text{;}$ i.e., $\alpha$ is in some part. Similarly, if $\alpha <0,$ $\alpha$ is in some part. Now assume $\alpha ,\beta$ belong to the same part, say to $w{P}_{\pi }\text{.}$ We may assume $\alpha ,\beta \in {P}_{\pi }\text{.}$ Then $\stackrel{‾}{\alpha }$ and $\stackrel{‾}{\beta }$ are positive multiples of each other, as has been noted in (3). Conversely, assume $\stackrel{‾}{\alpha }=c\stackrel{‾}{\beta }$ for some $c>0\text{.}$ By the above we may assume $\alpha \in {P}_{\pi }$ for some $\rho \text{-orbit}$ $\pi$ of simple roots. Now $\stackrel{‾}{\beta }$ has its support in $\pi$ and hence so does $\beta$ since $\sigma$ maps simple roots not in $\pi$ to positive multiples of simple roots not in $\pi \text{.}$ We see then that $\beta \in {P}_{\pi },$ and that any part containing $\alpha$ also contains $\beta \text{.}$ The parts are just the sets of $\beta$ such that $\stackrel{‾}{\beta }=c\stackrel{‾}{\alpha },$ $c>0,$ and hence form a partition. 
(8) $\left\{w\stackrel{‾}{\alpha } | w\in {W}_{\sigma },\alpha$ has support in a $\rho \text{-orbit}$ of simple $\text{roots}\right\}={\Sigma }_{\sigma }\text{.}$ (9) Parts (b) and (c) follow from (3), (5), and (8). (10) Proof of (d). We select one root $\alpha$ from each $\rho \text{-orbit}$ of simple roots and form the set $\left\{\stackrel{‾}{\alpha }\right\}$ of their projections. This set, consisting of elements whose supports in $\Pi$ are disjoint, is independent since $\Pi$ is. If $\alpha >0$ then it is a positive linear combination of the elements of $\Pi \text{.}$ Hence $\stackrel{‾}{\alpha }$ is a positive linear combination of the elements of ${\Pi }_{\sigma }\text{.}$ $\square$
Remark: To achieve condition (1) for a root system, we can stick to the set of shortest projections in the various directions.
Examples:
(a) For $\sigma$ of order 2, $W$ of type ${A}_{2n-1},$ we get ${W}_{\sigma }$ of type ${C}_{n}\text{.}$ For $W$ of type ${A}_{2n},$ we get ${W}_{\sigma }$ of type $B{C}_{n}\text{.}$ (b) For $\sigma$ of order 2, $W$ of type ${D}_{n},$ we get ${W}_{\sigma }$ of type ${B}_{n-1}\text{.}$ (c) For $\sigma$ of order 3, $W$ of type ${D}_{4},$ we get ${W}_{\sigma }$ of type ${G}_{2}\text{.}$ To see this, let $\alpha ,\beta ,\gamma ,\delta$ be the simple roots, with $\delta$ connected to $\alpha ,\beta ,$ and $\gamma \text{.}$ Then $\stackrel{‾}{\alpha }=\frac{1}{3}\left(\alpha +\beta +\gamma \right),$ $\stackrel{‾}{\delta }=\delta ,$ and $⟨\stackrel{‾}{\alpha },\stackrel{‾}{\delta }⟩=-1,$ $⟨\stackrel{‾}{\delta },\stackrel{‾}{\alpha }⟩=-3,$ giving ${W}_{\sigma }$ of type ${G}_{2}\text{.}$ (d) For $\sigma$ of order 2, $W$ of type ${E}_{6},$ we get ${W}_{\sigma }$ of type ${F}_{4}\text{.}$ (e) For $\sigma$ of order 2, $W$ of type ${C}_{2},$ we get ${W}_{\sigma }$ of type ${A}_{1}\text{.}$ (f) For $\sigma$ of order 2, $W$ of type ${G}_{2},$ we get ${W}_{\sigma }$ of type ${A}_{1}\text{.}$ (g) For $\sigma$ of order 2, $W$ of type ${F}_{4},$ we get ${W}_{\sigma }$ of type ${𝒟}_{16}$ (the dihedral group of order 16). (The schematic foldings of the Dynkin diagrams in (c) and (d) are omitted here.) 
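The Cartan integers in example (c) can be verified numerically. The sketch below (plain Python; the particular realization of the $D_4$ simple roots in $\mathbb{R}^4$ is our choice, not fixed by the text) computes $⟨\stackrel{‾}{\alpha },\stackrel{‾}{\delta }⟩=-1$ and $⟨\stackrel{‾}{\delta },\stackrel{‾}{\alpha }⟩=-3$ directly from the definitions.

```python
from fractions import Fraction

# An assumed standard realization of the D4 simple roots in R^4, labeled
# so that the central node delta meets each of alpha, beta, gamma:
alpha = (1, -1, 0, 0)
beta  = (0, 0, 1, -1)
gamma = (0, 0, 1, 1)
delta = (0, 1, -1, 0)

def dot(x, y):
    return sum(Fraction(a) * b for a, b in zip(x, y))

def cartan(x, y):
    """Cartan integer <x, y> = 2(x, y)/(y, y)."""
    return 2 * dot(x, y) / dot(y, y)

# alpha, beta, gamma are mutually orthogonal (a single sigma-orbit):
assert dot(alpha, beta) == dot(alpha, gamma) == dot(beta, gamma) == 0

# Projection on the sigma-fixed subspace: average over the sigma-orbit.
abar = tuple(Fraction(a + b + c, 3) for a, b, c in zip(alpha, beta, gamma))

print(cartan(abar, delta), cartan(delta, abar))  # -1 -3: the G2 Cartan integers
```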
To see this, let $\alpha ,\beta ,\gamma ,\delta$ be the simple roots of ${F}_{4},$ taken in the order of the Dynkin diagram, with $\sigma \alpha =\sqrt{2}\delta ,$ $\sigma \beta =\sqrt{2}\gamma \text{.}$ Since $\stackrel{‾}{\alpha }=\frac{1}{2}\left(\alpha +\sqrt{2}\delta \right),$ $\stackrel{‾}{\beta }=\frac{1}{2}\left(\beta +\sqrt{2}\gamma \right),$ we have $⟨\stackrel{‾}{\beta },\stackrel{‾}{\alpha }⟩=-1,⟨\stackrel{‾}{\alpha },\stackrel{‾}{\beta }⟩=-\left(2+\sqrt{2}\right)\text{.}$ This corresponds to an angle of $7\pi /8$ between $\stackrel{‾}{\alpha }$ and $\stackrel{‾}{\beta }\text{.}$ Hence ${W}_{\sigma }$ is of type ${𝒟}_{16}\text{.}$ Alternatively, we note that ${w}_{\stackrel{‾}{\alpha }}{w}_{\stackrel{‾}{\beta }}$ makes six positive roots negative and that there are 24 positive roots in all, so that ${w}_{0}={\left({w}_{\stackrel{‾}{\alpha }}{w}_{\stackrel{‾}{\beta }}\right)}^{4}\text{.}$ Hence, ${w}_{\stackrel{‾}{\alpha }}^{2}={w}_{\stackrel{‾}{\beta }}^{2}={\left({w}_{\stackrel{‾}{\alpha }}{w}_{\stackrel{‾}{\beta }}\right)}^{8}=1$ and ${W}_{\sigma }$ is of type ${𝒟}_{16}\text{.}$ Note that this is the only case of those we have considered in which ${W}_{\sigma }$ fails to be crystallographic (see Appendix V).
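The angle claimed for the ${F}_{4}$ folding can be checked by direct computation. The sketch below (plain Python) uses only the two pairings $-1$ and $-\left(2+\sqrt{2}\right)$ from the text; since both pairings are negative, the angle $\phi$ between $\stackrel{‾}{\alpha }$ and $\stackrel{‾}{\beta }$ satisfies $\cos \phi =-\sqrt{⟨\stackrel{‾}{\alpha },\stackrel{‾}{\beta }⟩⟨\stackrel{‾}{\beta },\stackrel{‾}{\alpha }⟩}/2\text{.}$

```python
import math

# The two pairings computed in the text:
#   <beta_bar, alpha_bar> = -1,   <alpha_bar, beta_bar> = -(2 + sqrt(2)).
# Their product is 4*cos(phi)^2, and both are negative, so phi is obtuse.
product = (2 + math.sqrt(2))
cos_phi = -math.sqrt(product) / 2

print(abs(cos_phi - math.cos(7 * math.pi / 8)) < 1e-12)  # True: phi = 7*pi/8

# The product of the two reflections is then a rotation by 2*(pi - phi) = pi/4,
# hence of order 8, so W_sigma is the dihedral group of order 16.
rotation = 2 * (math.pi - 7 * math.pi / 8)
print(abs(rotation - math.pi / 4) < 1e-12)  # True
```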
In (e), (f), (g) we are assuming that multiples have been cast out.
The partition of $\Sigma$ in (7) above can be used to define an equivalence relation $R$ on $\Sigma$ by $\alpha \equiv \beta$ if and only if $\stackrel{‾}{\alpha }$ is a positive multiple of $\stackrel{‾}{\beta }$ where $\stackrel{‾}{\alpha }$ is the projection of $\alpha$ on ${V}_{\sigma }\text{.}$ Letting $\Sigma /R$ denote the collection of equivalence classes we have the following:
Corollary: If $\Sigma$ is crystallographic and indecomposable, then an element of $\Sigma /R$ is the positive system of roots of a system of one of the following types:
(a) ${A}_{1}^{n}$ $n=1,2,$ or $3\text{.}$ (b) ${A}_{2}$ (this occurs only if $\Sigma$ is of type ${A}_{2n}\text{).}$ (c) ${C}_{2}$ (this occurs if $\Sigma$ is of type ${C}_{2}$ or ${F}_{4}\text{).}$ (d) ${G}_{2}\text{.}$
Now let $G$ be a Chevalley group over a field $k$ of characteristic $p\text{.}$ Let $\sigma$ be an automorphism of $G$ which is the product of a graph automorphism and a field automorphism $\theta$ of $k$ and such that if $\rho$ is the corresponding permutation of the roots then
(1) if $\rho$ preserves lengths, then order $\theta =$ order $\rho \text{.}$ (2) if $\rho$ doesn't preserve lengths, then $p{\theta }^{2}=1$ (where $p$ is the map $x\to {x}^{p}\text{).}$
(Condition (1) focuses our attention on the only interesting case. Observe that $\rho =\text{id.,}$ $\theta =\text{id.}$ is allowed. Condition (2) could be replaced by ${\theta }^{2}=p$ thereby extending the development to follow, suitably modified, to imperfect fields $k\text{.)}$ We know that $p=2$ if $G$ is of type ${C}_{2}$ or ${F}_{4}$ and $p=3$ if $G$ is of type ${G}_{2}\text{.}$ Recall also that
$\sigma {x}_{\alpha }\left(t\right)=\begin{cases}{x}_{\rho \alpha }\left({\epsilon }_{\alpha }{t}^{\theta }\right)& \text{if } |\alpha |\ge |\rho \alpha |\\ {x}_{\rho \alpha }\left({\epsilon }_{\alpha }{t}^{p\theta }\right)& \text{if } |\alpha |<|\rho \alpha |,\end{cases}$ where
${\epsilon }_{\alpha }=±1$ and ${\epsilon }_{\alpha }=1$ if $±\alpha$ is simple. (See the proof of Theorem 29.)
Now $\sigma$ preserves $U,H,B,{U}^{-},$ and $N,$ and hence $N/H\cong W\text{.}$ The action thus induced on $W$ is concordant with the permutation $\rho$ of the roots. Since $\rho$ preserves angles, it agrees up to positive multiples with an isometry on the real space generated by the roots. Thus the results of Theorem 32 may be applied. Also we observe that if $n$ is the order of $\rho ,$ then $n=1,2,$ or $3,$ so that the length of each $\rho \text{-orbit}$ is $1$ or $n\text{.}$
Lemma 60: $\Pi {\epsilon }_{\alpha }=1$ over each $\rho \text{-orbit}$ of length $n\text{.}$
Proof. Since ${\sigma }^{n}$ acts on each ${𝔛}_{\alpha }$ $\text{(}±\alpha$ simple) as a field automorphism, it does so on all of $G,$ whence the lemma. $\square$
Lemma 61: If $a\in \Sigma /R,$ then ${𝔛}_{a,\sigma }\ne 1\text{.}$
Proof. Choose $\alpha \in a$ so that no $\beta \in a$ can be added to it to yield another root. If the orbit of $\alpha$ has length 1, set $x={x}_{\alpha }\left(1\right)$ if ${\epsilon }_{\alpha }=1,$ $x={x}_{\alpha }\left(t\right)$ with $t\in k,$ $t\ne 0$ and $t+{t}^{\theta }=0$ if ${\epsilon }_{\alpha }\ne 1\text{.}$ Then $x\in {𝔛}_{\alpha ,\sigma }\text{.}$ If the length is $n,$ we set $y={x}_{\alpha }\left(1\right),$ then $x=y·\sigma y·{\sigma }^{2}y\dots$ over the orbit, and use Lemma 60. $\square$
Theorem 33: Let $G,\sigma ,$ etc. be as above.
(a) For each $w\in {W}_{\sigma },$ the group ${U}_{w}=U\cap {w}^{-1}{U}^{-}w$ is fixed by $\sigma \text{.}$ (b) For each $w\in {W}_{\sigma },$ there exists ${n}_{w}\in {N}_{\sigma },$ indeed ${n}_{w}\in ⟨{U}_{\sigma },{U}_{\sigma }^{-}⟩,$ so that ${n}_{w}H=w\text{.}$ (c) If ${n}_{w}$ $\left(w\in {W}_{\sigma }\right)$ is as in (b), then ${G}_{\sigma }=\bigcup _{w\in {W}_{\sigma }}{B}_{\sigma }{n}_{w}{U}_{w,\sigma }$ with uniqueness of expression on the right.
Proof. (a) This is clear since $U$ and ${w}^{-1}{U}^{-}w$ are fixed by $\sigma \text{.}$ (b) We may assume that $w={w}_{\pi }$ for some $\rho \text{-orbit}$ of simple roots $\pi \text{.}$ By Lemma 61, choose $x\in {𝔛}_{-a,\sigma },$ $x\ne 1,$ where $a\in \Sigma /R$ corresponds to $\pi \text{.}$ Using Theorem 4' we may write $x=u{n}_{w}v$ for some $w\in W$ where $u\in U,$ $v\in {U}_{w},$ and ${n}_{w}H=w\text{.}$ Now $x=\sigma x=\sigma u·\sigma {n}_{w}·\sigma v$ and by Theorem 4 and the uniqueness in Theorem 4', we have $\sigma w=w,$ $\sigma {n}_{w}={n}_{w},$ $\sigma u=u,$ and $\sigma v=v\text{.}$ Thus, ${n}_{w}\in ⟨{U}_{\sigma },{U}_{\sigma }^{-}⟩\text{.}$ Since $w\ne 1,$ $w\in {W}_{\sigma },$ and $w\in {W}_{\pi },$ we have $w\alpha <0$ for some $\alpha \in \pi ,$ $w\pi <0,$ and $w={w}_{\pi }\text{.}$ (c) Let $x\in {G}_{\sigma },$ say $x\in BwB\text{.}$ Since $\sigma \left(BwB\right)=B\sigma wB$ we have $w\in {W}_{\sigma }\text{.}$ Choose ${n}_{w}$ as in (b) and write $x=b{n}_{w}v$ with $b\in B$ and $v\in {U}_{w}\text{.}$ Applying $\sigma$ we get $b\in {B}_{\sigma }$ and $v\in {U}_{w,\sigma }\text{.}$ Uniqueness follows from Theorem 4'. $\square$
Corollary: The conclusions of Theorem 33 are still valid if ${G}_{\sigma }$ and ${B}_{\sigma }$ are replaced by ${G}_{\sigma }^{\prime }=⟨{U}_{\sigma },{U}_{\sigma }^{-}⟩$ and ${B}_{\sigma }^{\prime }={G}_{\sigma }^{\prime }\cap {B}_{\sigma }\text{.}$ Also since ${B}_{\sigma }={U}_{\sigma }{H}_{\sigma },$ we can replace ${H}_{\sigma }$ by ${H}_{\sigma }^{\prime }={G}_{\sigma }^{\prime }\cap {H}_{\sigma }\text{.}$
Lemma 62: Let $a$ generically denote a class in $\Sigma /R\text{.}$ Let $S$ be a union of classes in $\Sigma /R$ which is closed under addition and such that if $a\subseteq S$ then $-a⊈S\text{.}$ Then ${𝔛}_{S,\sigma }=\underset{a\subseteq S}{\Pi }{𝔛}_{a,\sigma }$ with the product taken in any fixed order, and there is uniqueness of expression on the right. In particular, ${U}_{\sigma }=\underset{a>0}{\Pi }{𝔛}_{a,\sigma }$ and ${U}_{w,\sigma }=\underset{\underset{wa<0}{a>0}}{\Pi }{𝔛}_{a,\sigma }$ for all $w\in {W}_{\sigma }\text{.}$
Proof. We arrange the positive roots in a manner consistent with the order of the $a\text{'s;}$ i.e., those roots in the first $a$ are first, etc. Now ${𝔛}_{S}=\underset{\alpha \in S}{\Pi }{𝔛}_{\alpha }$ in the order just described and with uniqueness of expression on the right by Lemma 17. Hence ${𝔛}_{S}=\underset{a\subseteq S}{\Pi }{𝔛}_{a}$ in the given order and again with uniqueness of expression on the right. The lemma follows by considering the fixed points of $\sigma$ on both sides of the last equation. $\square$
Corollary: If $a,b$ are classes in $\Sigma /R$ with $a\ne ±b,$ then $\left({𝔛}_{a},{𝔛}_{b}\right)\subseteq \Pi {𝔛}_{c},$ where the roots on the right are in the closed subsystem generated by $a$ and $b,$ those of $a$ and $b$ excluded. The condition on $c$ can be stated alternately, in terms of ${\Sigma }_{\sigma },$ that $\stackrel{‾}{c}$ is in the interior of the (plane) convex cone generated by $\stackrel{‾}{a}$ and $\stackrel{‾}{b}\text{.}$
Remark: The exact relations in the above corollary can be quite complicated but generally resemble those in the Chevalley group whose Weyl group is ${W}_{\sigma }\text{.}$ For example, suppose $G$ is of type ${A}_{3}$ with simple roots $\alpha ,\beta ,\gamma$ in the order of the Dynkin diagram, and $\sigma$ is of order 2. With $a=\left\{\beta \right\},$ $b=\left\{\alpha ,\gamma \right\},$ $c=\left\{\alpha +\beta ,\beta +\gamma \right\},$ $d=\left\{\alpha +\beta +\gamma \right\},$ and setting ${x}_{a}\left(t\right)={x}_{\beta }\left(t\right)$ $\left(t\in {k}_{\theta }\right),$ ${x}_{b}\left(u\right)={x}_{\alpha }\left(u\right){x}_{\gamma }\left({u}^{\theta }\right)$ $\left(u\in k\right),$ and similarly for $c$ and $d,$ we get $\left({x}_{a}\left(t\right),{x}_{b}\left(u\right)\right)={x}_{c}\left(±tu\right){x}_{d}\left(±tu{u}^{\theta }\right)\text{.}$ In ${C}_{2},$ the corresponding relation is $\left({x}_{a}\left(t\right),{x}_{b}\left(u\right)\right)={x}_{a+b}\left(±tu\right){x}_{a+2b}\left(±t{u}^{2}\right)\text{.}$
If $G$ is of type $X$ and $\sigma$ is of order $n,$ we say ${G}_{\sigma }$ is of type ${}^{n}X\text{.}$ E.g., the group considered in the above remark is of type ${}^{2}A_{3}\text{.}$ The group of type ${}^{2}C_{2}$ is called the Suzuki group and the groups of type ${}^{2}G_{2}$ and ${}^{2}F_{4}$ are called Ree groups. We write $G\sim X$ and ${G}_{\sigma }\sim {}^{n}X\text{.}$
Lemma 63: Let $a$ be a class in $\Sigma /R,$ then ${𝔛}_{a,\sigma }$ has the following structure:
(a) If $a\sim {A}_{1},$ then ${𝔛}_{a,\sigma }=\left\{{x}_{\alpha }\left(t\right) | t\in {k}_{\theta }\right\}$ (b) If $a\sim {A}_{1}^{n},$ then ${𝔛}_{a,\sigma }=\left\{x·\sigma x\dots | x={x}_{\alpha }\left(t\right),\alpha \in a,t\in k\right\}$ (c) If $a\sim {A}_{2},$ $a=\left\{\alpha ,\beta ,\alpha +\beta \right\},$ then ${\theta }^{2}=1$ and ${𝔛}_{a,\sigma }=\left\{{x}_{\alpha }\left(t\right){x}_{\beta }\left({t}^{\theta }\right){x}_{\alpha +\beta }\left(u\right) | t{t}^{\theta }+u+{u}^{\theta }=0\right\}$
If $\left(t,u\right)$ denotes the given element, then $\left(t,u\right)\left(t\prime ,u\prime \right)=\left(t+t\prime ,u+u\prime -{t}^{\theta }t\prime \right)\text{.}$
(d) If $a\sim {C}_{2},$ $a=\left\{\alpha ,\beta ,\alpha +\beta ,\alpha +2\beta \right\},$ then $2{\theta }^{2}=1$ and ${𝔛}_{a,\sigma }=\left\{{x}_{\alpha }\left(t\right){x}_{\beta }\left({t}^{\theta }\right){x}_{\alpha +2\beta }\left(u\right){x}_{\alpha +\beta }\left({t}^{1+\theta }+{u}^{\theta }\right) | t,u\in k\right\}\text{.}$ If $\left(t,u\right)$ denotes the given element, $\left(t,u\right)\left(t\prime ,u\prime \right)=\left(t+t\prime ,u+u\prime +{t}^{2\theta }t\prime \right)\text{.}$ (e) If $a\sim {G}_{2},$ $a=\left\{\alpha ,\beta ,\alpha +\beta ,\alpha +2\beta ,\alpha +3\beta ,2\alpha +3\beta \right\},$ then $3{\theta }^{2}=1$ and ${𝔛}_{a,\sigma }=\left\{{x}_{\alpha }\left(t\right){x}_{\beta }\left({t}^{\theta }\right){x}_{\alpha +3\beta }\left(u\right){x}_{\alpha +\beta }\left({u}^{\theta }-{t}^{1+\theta }\right){x}_{2\alpha +3\beta }\left(v\right){x}_{\alpha +2\beta }\left({v}^{\theta }-{t}^{1+2\theta }\right) | t,u,v\in k\right\}\text{.}$ If $\left(t,u,v\right)$ denotes the given element then $\left(t,u,v\right)\left(t\prime ,u\prime ,v\prime \right)=\left(t+t\prime ,\phantom{\rule{0.2em}{0ex}}u+u\prime +t\prime {t}^{3\theta },\phantom{\rule{0.2em}{0ex}}v+v\prime -t\prime u+{t\prime }^{2}{t}^{3\theta }\right)\text{.}$
Note that in (a) and (b), ${𝔛}_{a,\sigma }$ is a one-parameter group for the fields ${k}_{\theta }$ and $k$ respectively.
Proof. (a) and (b) are easy and we omit their proofs. For (c), normalize the parametrization of ${𝔛}_{\alpha +\beta }$ so that ${N}_{\alpha ,\beta }=1\text{.}$ Then $\sigma {x}_{\alpha }\left(t\right)={x}_{\beta }\left({t}^{\theta }\right),$ $\sigma {x}_{\beta }\left(t\right)={x}_{\alpha }\left({t}^{\theta }\right),$ and $\sigma {x}_{\alpha +\beta }\left(u\right)={x}_{\alpha +\beta }\left(-{u}^{\theta }\right)\text{.}$ Write $x\in {𝔛}_{a,\sigma }$ as $x={x}_{\alpha }\left(t\right){x}_{\beta }\left(v\right){x}_{\alpha +\beta }\left(u\right)$ and compare the coefficients on both sides of $x=\sigma x$ to get (c). The proof of (d) is similar to that of (c). For (e), first normalize the signs as in Theorem 28, and then complete the proof as in (c) and (d). $\square$
Exercise: Complete the details of the above proof.
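As a partial check on Lemma 63 (c), one can enumerate ${𝔛}_{a,\sigma }$ in the smallest case $k=GF\left(4\right),$ $\theta$ the Frobenius $t\to {t}^{2}$ (so $q=2$ and ${\theta }^{2}=1\text{).}$ The sketch below (plain Python, with the $GF\left(4\right)$ arithmetic hand-coded; this concrete instance is our choice, not fixed by the text) confirms that the solution set of $t{t}^{\theta }+u+{u}^{\theta }=0$ has ${q}^{3}=8$ elements, consistent with Lemma 65 (a) below, and is closed under the stated multiplication.

```python
# GF(4) arithmetic: elements 0..3 are bit patterns of polynomials mod x^2+x+1.
def gf4_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 4:           # reduce by x^2 + x + 1 (0b111)
            a ^= 0b111
        b >>= 1
    return r

theta = lambda t: gf4_mul(t, t)   # Frobenius t -> t^2, of order 2 on GF(4)

# Elements of X_{a,sigma}: pairs (t, u) with t*t^theta + u + u^theta = 0.
group = [(t, u) for t in range(4) for u in range(4)
         if gf4_mul(t, theta(t)) ^ u ^ theta(u) == 0]
assert len(group) == 8    # q^3 with q = 2

# Group law from Lemma 63 (c): (t,u)(t',u') = (t+t', u+u' - t^theta * t');
# in characteristic 2 the minus sign is a plus.
def mul(x, y):
    t, u = x
    tp, up = y
    return (t ^ tp, u ^ up ^ gf4_mul(theta(t), tp))

# Closure: the product of two solutions is again a solution.
assert all(mul(x, y) in group for x in group for y in group)
print("X_{a,sigma} over GF(4): order", len(group), "and closed under the law")
```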
Remark: The role of the group ${SL}_{2}$ in the untwisted case is taken by the groups ${SL}_{2}\left({k}_{\theta }\right),$ ${SL}_{2}\left(k\right),$ ${SU}_{3}\left(k,\theta \right)$ (split form), the Suzuki group, and Ree group of type ${G}_{2}\text{.}$
Exercise: Determine the structure of ${H}_{\sigma }$ in the case $G$ is universal.
Lemma 64: If $G$ is universal, then ${G}_{\sigma }$ is generated by ${U}_{\sigma }$ and ${U}_{\sigma }^{-}$ except perhaps for the case ${G}_{\sigma }\sim {}^{2}G_{2}$ with $k$ infinite.
Proof. Let ${G}_{\sigma }^{\prime }=⟨{U}_{\sigma },{U}_{\sigma }^{-}⟩$ and let ${H}_{\sigma }^{\prime }={H}_{\sigma }\cap {G}_{\sigma }^{\prime }\text{.}$ By the corollary to Theorem 33, it suffices to show ${H}_{\sigma }\subseteq {G}_{\sigma }^{\prime }\text{;}$ i.e., $\left(*\right)$ ${H}_{\sigma }^{\prime }={H}_{\sigma }\text{.}$ Since $G$ is universal, $H$ is a direct product of $\left\{{𝔥}_{\alpha } | \alpha \text{simple}\right\}$ (see the corollary to Lemma 28). These groups are permuted by $\sigma$ exactly as the roots are. Hence it is enough to prove $\left(*\right)$ when there is a single orbit; i.e., when ${G}_{\sigma }$ is one of the types ${SL}_{2},$ ${}^{2}A_{2},$ ${}^{2}C_{2},$ or ${}^{2}G_{2}\text{.}$ For ${SL}_{2},$ this is clear. (1) For $x\in {U}_{\sigma }-\left\{1\right\},$ write $x={u}_{1}n{u}_{2}$ with ${u}_{i}\in {U}_{\sigma }^{-},i=1,2$ and $n=n\left(x\right)\in N\cap {G}_{\sigma }^{\prime }\text{.}$ Then ${H}_{\sigma }^{\prime }$ is generated by $\left\{n\left(x\right)n{\left({x}_{0}\right)}^{-1} | {x}_{0}$ a fixed choice of $x\right\}\text{.}$ To see this let ${H}_{\sigma }^{\prime \prime }$ be the group so generated. 
Consider ${G}_{\sigma }^{\prime \prime }={U}_{\sigma }^{-}{H}_{\sigma }^{\prime \prime }\cup {U}_{\sigma }^{-}{H}_{\sigma }^{\prime \prime }n\left({x}_{0}\right){U}_{\sigma }^{-}\text{.}$ This set is closed under multiplication by ${U}_{\sigma }^{-}\text{.}$ It is also closed under right multiplication by $n{\left({x}_{0}\right)}^{-1}\text{.}$ This follows from $n{\left({x}_{0}\right)}^{-1}=n\left({x}_{0}^{-1}\right)=n\left({x}_{0}^{-1}\right)n{\left({x}_{0}\right)}^{-1}n\left({x}_{0}\right)$ and $n\left({x}_{0}\right){U}_{\sigma }^{-}n{\left({x}_{0}\right)}^{-1}={U}_{\sigma }\subseteq {G}_{\sigma }^{\prime \prime }$ since $x={u}_{1}\left(n\left(x\right)n{\left({x}_{0}\right)}^{-1}\right)n\left({x}_{0}\right){u}_{2}$ for $x\in {U}_{\sigma }-\left\{1\right\}\text{.}$ We see that ${G}_{\sigma }^{\prime \prime }={G}_{\sigma }^{\prime },$ whence ${H}_{\sigma }^{\prime \prime }={H}_{\sigma }^{\prime }\text{.}$ (2) If $\alpha$ and $\beta$ are the simple roots of ${A}_{2},$ ${C}_{2},$ or ${G}_{2}$ labeled as in Lemma 63 (c), (d), or (e) respectively, then ${H}_{\sigma }$ is isomorphic to ${k}^{*}$ via the map $\phi :t\to {h}_{\alpha }\left(t\right){h}_{\beta }\left({t}^{\theta }\right)\text{.}$ (3) Let $\lambda$ be the weight such that $⟨\lambda ,\alpha ⟩=1,$ $⟨\lambda ,\beta ⟩=0,$ let $R$ be a representation of ${ℒ}^{k}$ (obtained from one of $ℒ$ by shifting the coefficients to $k\text{)}$ having $\lambda$ as highest weight and let ${v}^{+}$ be a corresponding weight vector. Let $\mu$ be the lowest weight of $R$ and let ${v}^{-}$ be a corresponding weight vector. For $x\in {U}_{\sigma }-\left\{1\right\},$ write $x{v}^{-}=f\left(x\right){v}^{+}+$ terms for lower weights. 
Then $f\left(x\right)\ne 0$ and ${H}_{\sigma }^{\prime }$ is isomorphic under ${\phi }^{-1}$ in (2) to the subgroup $m$ of ${k}^{*}$ generated by all $f\left(x\right)f{\left({x}_{0}\right)}^{-1}\text{.}$ To prove (3), let $x\in {U}_{\sigma }-\left\{1\right\}$ and write $x={u}_{1}n\left(x\right){u}_{2}$ as in (1). We see $x{v}^{-}=n\left(x\right){v}^{-}+$ terms for lower weights, so $n\left(x\right){v}^{-}=f\left(x\right){v}^{+}$ and $n\left(x\right)n{\left({x}_{0}\right)}^{-1}{v}^{+}=f\left(x\right)f{\left({x}_{0}\right)}^{-1}{v}^{+}\text{.}$ If $n\left(x\right)n{\left({x}_{0}\right)}^{-1}={h}_{\alpha }\left(t\right){h}_{\beta }\left({t}^{\theta }\right),$ then by the choice of $\lambda ,$ $f\left(x\right)f{\left({x}_{0}\right)}^{-1}=t$ (see Lemma 19 (c)). (3) then follows from (1). (4) The case ${G}_{\sigma }\sim {}^{2}A_{2}\text{.}$ Here $f\left(x\right)=-{u}^{\theta }$ and $m={k}^{*}\text{.}$ To see this, we note that the representation $R$ of (3) in this case is $R:{ℒ}^{k}\to {s\ell }_{3}\left(k\right)$ and if $x={x}_{\alpha }\left(t\right){x}_{\beta }\left({t}^{\theta }\right){x}_{\alpha +\beta }\left(u\right)$ then $x\to \begin{pmatrix}1& t& u+t{t}^{\theta }\\ 0& 1& {t}^{\theta }\\ 0& 0& 1\end{pmatrix}\text{.}$ Thus, $f\left(x\right)=u+t{t}^{\theta }=-{u}^{\theta }$ by Lemma 63 (c). Thus, $m$ is the group generated by ratios of elements $\left(-{u}^{\theta }\right)$ of ${k}^{*}$ whose traces are norms $\left(t{t}^{\theta }\right)\text{.}$ Let $u\in {k}^{*}\text{.}$ If ${u}^{\theta }\ne u,$ set ${u}_{1}={\left(u-{u}^{\theta }\right)}^{-1},$ and if ${u}^{\theta }=u,$ choose ${u}_{1}\in {k}^{*}$ so that ${u}_{1}^{\theta }=-{u}_{1}\text{.}$ Then $u{u}_{1}$ and ${u}_{1}$ are values of $f$ (their traces are 0 or 1), so that $u\in m$ and $m={k}^{*}\text{.}$ (5) The case ${G}_{\sigma }\sim {}^{2}C_{2}\text{.}$ Here $f\left(x\right)={t}^{2+2\theta }+{u}^{2\theta }+tu$ and $m={k}^{*}\text{.}$ To see this, first note that since the characteristic of $k$ is 2, there is an ideal in ${ℒ}^{k}$ "supported" by short roots. 
The representation $R$ can be taken as ${ℒ}^{k}$ acting on this ideal, and ${v}^{+}={X}_{\alpha +\beta }$ while ${v}^{-}={X}_{-\alpha -\beta }\text{.}$ Letting $x={x}_{\alpha }\left(t\right){x}_{\beta }\left({t}^{\theta }\right){x}_{\alpha +2\beta }\left(u\right){x}_{\alpha +\beta }\left({u}^{\theta }+{t}^{1+\theta }\right)$ we can determine $f\left(x\right)\text{.}$ By taking $t=0$ in the expression for $f\left(x\right)$ and writing $v={\left({v}^{\theta }\right)}^{2\theta },$ we see that $m={k}^{*}\text{.}$ (6) The case ${G}_{\sigma }\sim {}^{2}G_{2}\text{.}$ Here $f\left(x\right)={t}^{4+6\theta }-{u}^{1+3\theta }-{v}^{2}+{t}^{3+3\theta }u+{t}^{1+3\theta }{u}^{3\theta }+t{v}^{3\theta }-tuv\text{.}$ The group $m$ is generated by all values of $f$ for which $\left(t,u,v\right)\ne \left(0,0,0\right),$ and it contains ${{k}^{*}}^{2}$ and $-1\text{;}$ hence $m={k}^{*}$ if $k$ is finite. Here the representation $R$ can be taken to be the adjoint representation on ${ℒ}^{k},$ ${v}^{+}={X}_{2\alpha +3\beta },$ and ${v}^{-}={X}_{-2\alpha -3\beta }\text{.}$ Letting $x$ be as in Lemma 63 (e), and working modulo the ideal in ${ℒ}^{k}$ "supported" by the short roots, we can compute $f\left(x\right)\text{.}$ Setting $t=u=0,$ we see that $-{v}^{2}\in m,$ hence $-1\in m$ and ${{k}^{*}}^{2}\subseteq m\text{.}$ If $k$ is finite $m={k}^{*}$ follows from $\left(*\right)$ $-1\notin {{k}^{*}}^{2}\text{.}$ To show $\left(*\right),$ suppose ${t}^{2}=-1$ with $t\in k\text{.}$ Then ${t}^{2\theta }=-1,$ so ${t}^{\theta }=±t$ and ${t}^{{\theta }^{2}}=t\text{.}$ Since $3{\theta }^{2}=1,$ we see $t={\left({t}^{{\theta }^{2}}\right)}^{3}={t}^{3}\text{.}$ But ${t}^{3}={t}^{2}t=-t,$ so $t=0,$ a contradiction. This proves the lemma. $\square$
Corollary: If $G$ is universal, then ${G}_{\sigma }^{\prime }={G}_{\sigma }$ and ${H}_{\sigma }^{\prime }={H}_{\sigma }$ except possibly for ${}^{2}G_{2}$ with $k$ infinite in which case ${G}_{\sigma }/{G}_{\sigma }^{\prime }={H}_{\sigma }/{H}_{\sigma }^{\prime }\cong {k}^{*}/m$ with $m$ as in (6) above.
Remarks:
(a) It is not known whether $m={k}^{*}$ always if ${G}_{\sigma }\sim {}^{2}G_{2}\text{.}$ One can make the changes of variables $v\to v+tu$ and then $u\to u-{t}^{1+3\theta }$ to convert the form $f$ in (6) to ${t}^{4+6\theta }-{u}^{1+3\theta }-{v}^{2}+{t}^{2}{u}^{2}+t{v}^{3\theta }\text{.}$ Both before and after this simplification the form satisfies the condition of homogeneity: $f\left(t,u,v\right)={t}^{4+6\theta }f\left(1,u/{t}^{1+3\theta },v/{t}^{2+3\theta }\right)$ if $t\ne 0\text{.}$
(b) A corollary of (3) above is that the forms in (5) and (6) are definite, i.e., $f=0$ implies $t=u\left(=v\right)=0\text{.}$ A direct proof in case $f$ is as in (5) can be made as follows: Suppose $0=f\left(t,u\right)={t}^{2+2\theta }+{u}^{2\theta }+tu$ with one of $t,u$ nonzero. If $t=0,$ then $u=0,$ so we have $t\ne 0\text{.}$ We see $f\left(t,u\right)={t}^{2+2\theta }f\left(1,u/{t}^{2\theta +1}\right)$ using $2{\theta }^{2}=1\text{.}$ Hence we may assume $t=1\text{.}$ Thus, $1+{u}^{2\theta }+u=0$ or (by applying $\theta \text{)}$ ${u}^{\theta }=1+u\text{.}$ Hence ${u}^{{\theta }^{2}}=1+{u}^{\theta }=u$ and $u={u}^{2{\theta }^{2}}={u}^{2}\text{.}$ Thus, $u=0$ or $1,$ a contradiction. A direct proof in case $f$ is as in (6) appears to be quite complicated.
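The definiteness of the form in (5) can also be checked exhaustively in the smallest nontrivial case $k=GF\left(8\right),$ where $2{\theta }^{2}=1$ forces $\theta :t\to {t}^{2},$ so that $f\left(t,u\right)={t}^{6}+{u}^{4}+tu\text{.}$ The sketch below (plain Python, with the $GF\left(8\right)$ arithmetic hand-coded; the choice of field is ours) confirms that $f$ vanishes only at $\left(0,0\right)\text{.}$

```python
# GF(8): elements 0..7 are bit patterns of polynomials mod x^3 + x + 1.
def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 8:            # reduce by x^3 + x + 1 (0b1011)
            a ^= 0b1011
        b >>= 1
    return r

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

# For |k| = 2^(2a+1) = 8 we have a = 1, theta: t -> t^2, and so
# f(t, u) = t^(2+2theta) + u^(2theta) + t*u = t^6 + u^4 + t*u.
def f(t, u):
    return gf8_pow(t, 6) ^ gf8_pow(u, 4) ^ gf8_mul(t, u)

zeros = [(t, u) for t in range(8) for u in range(8) if f(t, u) == 0]
assert zeros == [(0, 0)]     # the form is definite, as remark (b) claims
print("f vanishes only at (0, 0) over GF(8)")
```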
(c) The form in (5) leads to a geometric interpretation of ${}^{2}C_{2}\text{.}$ Form the graph $v={t}^{2+2\theta }+{u}^{2\theta }+tu$ of the form $f$ in ${k}^{3}\text{.}$ Imbed ${k}^{3}$ in ${P}^{3}\left(k\right),$ projective 3-space over $k,$ by adding the plane at $\infty ,$ and adjoin the point at $\infty$ in the direction $\left(0,0,1\right)$ to the graph to obtain a subset $Q$ of ${P}^{3}\left(k\right)\text{.}$ $Q$ is then an ovoid in ${P}^{3}\left(k\right)\text{;}$ i.e.
(1) No line meets $Q$ in more than two points. (2) The lines through any point of $Q$ not meeting $Q$ again always lie in a plane.
The group ${}^{2}C_{2}$ is then realized as the group of projective transformations of ${P}^{3}\left(k\right)$ fixing $Q\text{.}$ For further details as well as a corresponding geometric interpretation of ${}^{2}G_{2}$ see J. Tits, Séminaire Bourbaki, 210 (1960). For an exhaustive treatment of ${}^{2}C_{2},$ especially in the finite case, see Lüneburg, Springer Lecture Notes 10 (1965).
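Property (1) of the ovoid can be checked exhaustively in the degenerate smallest case $k=GF\left(2\right)$ ($a=0,$ so $\theta$ is the identity and the group is solvable, but the construction still makes sense). The sketch below (plain Python; the choice of field is ours) builds the five points of $Q$ in ${P}^{3}\left(GF\left(2\right)\right)$ and verifies that no line meets $Q$ in three points.

```python
from itertools import combinations

# Degenerate smallest case k = GF(2), theta = identity:
# f(t, u) = t^4 + u^2 + t*u = t + u + t*u over GF(2).
def f(t, u):
    return (t + u + t * u) % 2

# Points of Q in homogeneous coordinates: the affine graph (t, u, f(t,u), 1)
# plus the point at infinity in the direction (0, 0, 1).
Q = [(t, u, f(t, u), 1) for t in range(2) for u in range(2)]
Q.append((0, 0, 1, 0))
assert len(Q) == 5        # |k|^2 + 1 points, the size of an ovoid here

# Over GF(2) the third point of the line through two distinct projective
# points is their coordinatewise sum, so property (1) says: no such sum
# lands back in Q.
def add(p, q):
    return tuple((a + b) % 2 for a, b in zip(p, q))

assert all(add(p, q) not in Q for p, q in combinations(Q, 2))
print("Q has 5 points in P^3(GF(2)) and no line meets it in 3 points")
```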
Theorem 34: Let $G$ and $\sigma$ be as above with $G$ universal. Excluding the cases: (a) ${}^{2}A_{2}\left(4\right),$ (b) ${}^{2}B_{2}\left(2\right),$ (c) ${}^{2}G_{2}\left(3\right),$ (d) ${}^{2}F_{4}\left(2\right),$ we have that ${G}_{\sigma }^{\prime }$ is simple over its center.
Sketch of proof. Using a calculus of double cosets relative to ${B}_{\sigma },$ which can be developed exactly as for the Chevalley groups with ${W}_{\sigma }$ in place of $W$ and $\Sigma /R$ (or ${\Sigma }_{\sigma }$ (see Theorem 32)) in place of $\Sigma ,$ and Theorem 33, the proof can be reduced exactly as for the Chevalley groups to the proof of: ${G}_{\sigma }^{\prime }=𝒟{G}_{\sigma }^{\prime }\text{.}$ If $k$ has "enough" elements, so does ${H}_{\sigma }^{\prime }$ by the Corollary to Lemma 64 and the action of ${H}_{\sigma }^{\prime }$ on ${𝔛}_{a,\sigma }$ can be used to show ${𝔛}_{a,\sigma }\subseteq 𝒟{G}_{\sigma }^{\prime }\text{.}$ This takes care of nearly everything. If $k$ has "few" elements then the commutator relations within the ${𝔛}_{a}\text{'s}$ and among them can be used. This leads to a number of special calculations. The details are omitted. $\square$
Remark: The groups in (a) and (b) above are solvable. The group in (c) contains a normal subgroup of index 3 isomorphic to ${A}_{1}\left(8\right)\text{.}$ The group in (d) contains a "new" simple normal subgroup of index 2. (See J. Tits, "Algebraic and abstract simple groups," Annals of Math. 1964.)
Exercise: Center of ${G}_{\sigma }^{\prime }={\text{(Center of} G\text{)}}_{\sigma }\text{.}$
We now are going to determine the orders of the finite Chevalley groups of twisted type. Let $k$ be a finite field of characteristic $p\text{.}$ Let a be minimal such that $\theta ={p}^{a}$ (i.e., such that ${t}^{\theta }={t}^{{p}^{a}}$ for all $t\in k\text{).}$ Then $|k|={p}^{2a}$ for ${}^{2}A_{n},$ ${}^{2}D_{n},$ ${}^{2}E_{6}\text{;}$ $|k|={p}^{3a}$ for ${}^{3}D_{4}\text{;}$ and $|k|={p}^{2a+1}$ for ${}^{2}C_{2},$ ${}^{2}F_{4},$ ${}^{2}G_{2}\text{.}$ We can write $\sigma {x}_{\alpha }\left(t\right)={x}_{\rho \alpha }\left({\epsilon }_{\alpha }{t}^{q\left(\alpha \right)}\right)$ where $q\left(\alpha \right)$ is some power of $p$ less than $|k|\text{.}$ If $q$ is the geometric average of $q\left(\alpha \right)$ over each $\rho \text{-orbit}$ then $q={p}^{a}$ except when ${G}_{\sigma }$ is of type ${}^{2}C_{2},$ ${}^{2}F_{4},$ or ${}^{2}G_{2}$ in which case $q={p}^{a+1/2}\text{.}$
Let $V$ be the real Euclidean space generated by the roots and let ${\sigma }_{0}$ be the automorphism of $V$ permuting the rays through the roots as $\rho$ permutes the roots. Since ${\sigma }_{0}$ normalizes $W,$ we see that ${\sigma }_{0}$ acts on the space $I$ of polynomials invariant under $W\text{.}$ Since ${\sigma }_{0}$ also acts on the subspace of $I$ of homogeneous elements of a given positive degree, we may choose the basic invariants ${I}_{j},$ $j=1,\dots ,\ell ,$ of Theorem 27 such that ${\sigma }_{0}{I}_{j}={\epsilon }_{j}{I}_{j}$ for some ${\epsilon }_{j}\in ℂ$ (here we have extended the base field $ℝ$ to $ℂ\text{).}$ As before, we let ${d}_{j}$ be the degree of ${I}_{j},$ and these are uniquely determined. Since ${\sigma }_{0}$ acts on $V,$ we also have the set $\left\{{\epsilon }_{0j} | j=1,\dots ,\ell \right\}$ of eigenvalues of ${\sigma }_{0}$ on $V\text{.}$ We recall also that $N$ denotes the number of positive roots in $\Sigma \text{.}$
Theorem 35: Let $\sigma ,q,N,{\epsilon }_{j},$ and ${d}_{j}$ be as above, and assume $G$ is universal. We have
(a) $|{G}_{\sigma }|={q}^{N}\underset{j}{\Pi }\left({q}^{{d}_{j}}-{\epsilon }_{j}\right)\text{.}$ (b) The order of the corresponding simple group is obtained by dividing $|{G}_{\sigma }|$ by $|{C}_{\sigma }|$ where $C$ is the center of $G\text{.}$
Lemma 65: Let $\sigma ,H,U,$ etc. be as above.
(a) $|U_\sigma| = q^N$ and $|U_{w,\sigma}| = q^{N(w)}$. (b) $|H_\sigma| = \prod_j (q - \epsilon_{0j})$. (c) $|G_\sigma| = q^N \prod_j (q - \epsilon_{0j}) \sum_{w \in W_\sigma} q^{N(w)}$,
where $N\left(w\right)$ is the number of positive roots in $\Sigma$ made negative by $w\text{.}$
Proof. (a) By Lemma 62 it suffices to show that $|𝔛_{a,\sigma}| = q^{|a|}$ for each $a \in \Sigma/R$. This is so by Lemma 63. (b) Let $\pi$ be a $\rho$-orbit of simple roots. Since $\sigma h_\alpha(t) = h_{\rho\alpha}(t^{q(\alpha)})$, the contribution to $|H_\sigma|$ made by elements of $H_\sigma$ "supported" by $\pi$ is $\bigl(\prod_{\alpha \in \pi} q(\alpha)\bigr) - 1 = q^m - 1$, where $m = |\pi|$. Since the $\epsilon_{0j}$'s corresponding to $\pi$ are the roots of the polynomial $X^m - 1$, (b) follows. (c) This follows from (a), (b), and Theorem 33. $\square$
Corollary: ${U}_{\sigma }$ is a $p\text{-Sylow}$ subgroup.
Lemma 66: We have the following formal identity in $t\text{:}$
$$\sum_{w \in W_\sigma} t^{N(w)} = \prod_j \frac{1 - \epsilon_j t^{d_j}}{1 - \epsilon_{0j} t}.$$
Proof.
We modify the proof of Theorem 26 as follows:
(a) $\sigma$ there is replaced by $\sigma_0$ here. (b) $\Sigma$ there is replaced by $\Sigma_0$ here, where $\Sigma_0$ is the set of unit vectors in $V$ which lie in the same directions as the roots. (c) Only those subsets $\pi$ of $\Pi$ fixed by $\sigma_0$ are considered. (d) $(-1)^\pi$ is now defined to be $(-1)^k$, where $k$ is the number of $\sigma_0$-orbits in $\pi$. (e) $W(t)$ is now defined to be $\sum_{w \in W_\sigma} t^{N(w)}$.
With these modifications the proof proceeds exactly as before through step (5). Steps (6)-(8) become:
(6') For $\pi \subseteq \Pi$, $w \in W$, let $N_\pi$ be the number of cells in $K$ congruent to $D_\pi$ under $W$ and fixed by $w\sigma_0$. Then $\Sigma (-1)^\pi N_\pi = \text{det}\, w$. (Hint: if $V' = V_{w\sigma_0}$ and $K'$ is the complex on $V'$ cut by $K$, then the cells of $K'$ are the intersections with $V'$ of the cells of $K$ fixed by $w\sigma_0$.) (7') Let $x$ be a character on $⟨W,\sigma_0⟩$ and $x_\pi$ the restriction of $x$ to $⟨W_\pi,\sigma_0⟩$ induced up to $⟨W,\sigma_0⟩$. Then $\Sigma (-1)^\pi x_\pi(w\sigma_0) = x(w\sigma_0)\,\text{det}\, w$ $(w \in W)$. (8') Let $M$ be a $⟨W,\sigma_0⟩$ module, let $\hat{I}(M)$ be the space of skew invariants under $W$, and let $I_\pi(M)$ be the space of invariants under $W_\pi$. Then
$$\Sigma (-1)^\pi \,\text{tr}(\sigma_0, I_\pi(M)) = \text{tr}(\sigma_0, \hat{I}(M)).$$
The remainder of the proof proceeds as before.
$\square$
Lemma 67: The ${\epsilon }_{j}\text{'s}$ form a permutation of the ${\epsilon }_{0j}\text{'s.}$
Proof. Set $t = 1$ in Lemma 66. Then $(*)$: 1 has the same multiplicity among the $\epsilon_j$'s as among the $\epsilon_{0j}$'s, since otherwise the right side of the expression would have either a root or a pole at $t = 1$. Assume $\sigma_0 \ne 1$. Then either $\sigma_0^2 = 1$ and all $\epsilon$'s not 1 are $-1$, or else $\sigma_0^3 = 1$ and all $\epsilon$'s not 1 are cube roots of 1, coming in conjugate complex pairs since $\sigma_0$ is real. Thus in all cases $(*)$ implies the lemma. $\square$
Proof of Theorem 35. (a) follows from Lemmas 65, 66, and 67. Now let $C'$ be the center of $G_\sigma$. Clearly $C' \supseteq C_\sigma$. Using the corollary to Theorem 33 and an argument similar to that in the proof of Corollary 1(b) to Theorem 4', we see $C' \subseteq H_\sigma \subseteq H$. Since $H$ acts "diagonally," we have $C' \subseteq C$, hence $C' = C_\sigma$, proving (b). $\square$
Corollary: The values of $|G_\sigma|$ and $|C_\sigma| = |\text{Hom}(L_0/L_1, k^*)_\sigma|$ are as follows:
Chevalley group ($\sigma = 1$): no $\epsilon_j \ne 1$; $|G_\sigma| = q^N \prod_j (q^{d_j} - 1)$ $(*)$; $|C_\sigma| = |\text{Hom}(L_0/L_1, k^*)|$.

${}^2A_n$ ($n \ge 2$): $\epsilon_j = -1$ if $d_j$ is odd; $|G_\sigma|$: replace $q^{d_j} - 1$ by $q^{d_j} - (-1)^{d_j}$ in $(*)$; $|C_\sigma|$: same change, i.e., $(n+1,\, q+1)$.

${}^2E_6$: same $\epsilon_j$'s as ${}^2A_n$; same change as ${}^2A_n$; $|C_\sigma| = (3,\, q+1)$.

${}^2D_n$: $\epsilon_j = -1$ for one $d_j = n$; $|G_\sigma|$: replace one $q^n - 1$ by $q^n + 1$ in $(*)$; $|C_\sigma| = (4,\, q^n + 1)$.

${}^3D_4$: $\epsilon_j = \omega, \omega^2$ for $d_j = 4, 4$; $|G_\sigma| = q^{12}(q^2 - 1)(q^6 - 1)(q^8 + q^4 + 1)$; $|C_\sigma| = 1$.

${}^2C_2$: $\epsilon_j = -1$ for $d_j = 4$; $|G_\sigma| = q^4(q^2 - 1)(q^4 + 1)$; $|C_\sigma| = 1$.

${}^2G_2$: $\epsilon_j = -1$ for $d_j = 6$; $|G_\sigma| = q^6(q^2 - 1)(q^6 + 1)$; $|C_\sigma| = 1$.

${}^2F_4$: $\epsilon_j = -1$ for $d_j = 6, 12$; $|G_\sigma| = q^{24}(q^2 - 1)(q^6 + 1)(q^8 - 1)(q^{12} + 1)$; $|C_\sigma| = 1$.
Here $\omega$ denotes a primitive cube root of $1\text{.}$
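As a numerical illustration (my addition, not part of the notes), formula $(*)$ and the ${}^2A_n$ row of the table can be checked against two orders that are easy to verify independently: $|SL_3(2)| = 168$ and $|SU_3(2)| = 216$. For type $A_2$ there are $N = 3$ positive roots and invariant degrees $d_j = 2, 3$.

```python
# Evaluate formula (*): |G_sigma| = q^N * prod_j (q^{d_j} - 1),
# and the 2A_n variant replacing q^{d_j} - 1 by q^{d_j} - (-1)^{d_j}.

def chevalley_order(q, n_pos_roots, degrees):
    """Order of the universal Chevalley group via formula (*)."""
    order = q ** n_pos_roots
    for d in degrees:
        order *= q ** d - 1
    return order

def twisted_a2_order(q):
    """2A_2 row: degrees 2, 3; the odd degree gets q^3 + 1 instead."""
    return q ** 3 * (q ** 2 - 1) * (q ** 3 + 1)

# Type A_2: N = 3 positive roots, invariant degrees 2 and 3.
print(chevalley_order(2, 3, [2, 3]))  # 168, the order of SL_3(2)
print(twisted_a2_order(2))            # 216, the order of SU_3(2)
```

Note also that $3^3(3^2-1)(3^3+1) = 6048$, the order quoted below for ${}^2A_2(9)$.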
Proof (except for $|C_\sigma|$). We consider the cases.

${}^2A_n$. We first note $(*)$: $-1 \in W\sigma_0$. To prove $(*)$ we use the standard coordinates $\{\omega_i \mid 1 \le i \le n+1\}$ for $A_n$. Then $\sigma_0$ is given by $\omega_i \to -\omega_{n+2-i}$. Since $W$ acts via all permutations of $\{\omega_i\}$, we see $-1 \in W\sigma_0$. Alternatively, since $W$ is transitive on the simple systems (Appendix II.24), there exists $w_0 \in W$ such that $w_0(-\Pi) = \Pi$. Hence $-w_0$ fixes $\Pi$, so that $-w_0 = 1$ or $\sigma_0$; i.e., $-1 \in W$ or $-1 \in W\sigma_0$. Since there are invariants of odd degree ($d_i = 2, 3, \dots$), $-1 \notin W$. By $(*)$, $\sigma_0$ fixes the invariants of even degree and changes the signs of those of odd degree.

${}^2E_6$, ${}^2D_{2n+1}$. The second argument used to establish $(*)$ in the case ${}^2A_n$ applies here, and the same conclusion holds.

${}^2D_n$ ($n$ even or odd). Relative to the standard coordinates $\{v_i \mid 1 \le i \le n\}$, the basic invariants are the first $n-1$ elementary symmetric polynomials in $\{v_i^2\}$ together with $\prod v_i$, and $W$ acts via all permutations and an even number of sign changes. Here $\sigma_0$ can be taken to be the map $v_i \to v_i$ ($1 \le i \le n-1$), $v_n \to -v_n$. Hence only the last invariant changes sign under $\sigma_0$.

${}^3D_4$. The degrees of the invariants are 2, 4, 6, and 4. By Lemma 67, the $\epsilon_j$'s are 1, 1, $\omega$, $\omega^2$. Since $\sigma_0$ is real, $\omega$ and $\omega^2$ must occur in the same dimension.
Thus, we replace $(q^4 - 1)^2$ in the usual formula by $(q^4 - \omega)(q^4 - \omega^2) = q^8 + q^4 + 1$.

${}^2C_2$, ${}^2G_2$. In both cases the $\epsilon_j$'s are 1, $-1$ by Lemma 67. Since $⟨W, \sigma_0⟩$ is a finite group, it fixes some nonzero quadratic form, so that $\epsilon_j = 1$ for $d_j = 2$.

${}^2F_4$. The degrees of the invariants are 2, 6, 8, 12, and the $\epsilon_j$'s are 1, 1, $-1$, $-1$. As before there is a quadratic invariant fixed by $\sigma_0$. Consider $I = \sum_{\alpha \text{ long root}} \alpha^8 + \sum_{\beta \text{ short root}} (\sqrt{2}\beta)^8$. We claim that $I$ is an invariant of degree 8 fixed by $\sigma_0$ and that there is a quadratic invariant fixed by $\sigma_0$ which does not divide $I$. The first part is clear, since $W$ and $\sigma_0$ preserve lengths and permute the rays through the roots. To see the second part, choose coordinates $\{v_i \mid i = 1, 2, 3, 4\}$ so that the long roots (respectively, the short roots) are the vectors obtained from $2v_1$, $v_1 + v_2 + v_3 + v_4$ (respectively, $v_1 + v_2$) by all permutations and sign changes. The quadratic invariant is $v_1^2 + v_2^2 + v_3^2 + v_4^2$. To show that this does not divide $I$, consider the sum of those terms in $I$ which involve only $v_1$ and $v_2$, and note that it is not divisible by $v_1^2 + v_2^2$. Hence $I$ can be taken as one of the basic invariants, and $\epsilon_j = 1$ if $d_j = 8$. $\square$
Remark: $|{}^{2}C_{2}|$ is not divisible by 3. Aside from cyclic groups of prime order, these are the only known finite simple groups with this property.
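The divisibility claim is easy to check numerically (an illustration, not from the notes). Writing $Q = q^2 = 2^{2a+1}$ keeps the arithmetic in integers: $q^4 = Q^2$, $q^2 - 1 = Q - 1$, $q^4 + 1 = Q^2 + 1$.

```python
# |2C_2| = q^4 (q^2 - 1)(q^4 + 1) with q^2 = Q = 2^(2a+1).

def suzuki_order(Q):
    """Order of 2C_2 over the field of Q = 2^(2a+1) elements."""
    return Q ** 2 * (Q - 1) * (Q ** 2 + 1)

# Q = 2 mod 3, so Q - 1 = 1 and Q^2 + 1 = 2 mod 3: never divisible by 3.
for a in range(1, 8):
    Q = 2 ** (2 * a + 1)
    assert suzuki_order(Q) % 3 != 0

print(suzuki_order(8))  # 29120, the smallest of these groups
```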
Now we consider the automorphisms of the twisted groups. As for the untwisted groups diagonal automorphisms and field automorphisms can be defined.
Theorem 36: Let $G$ and $\sigma$ be as in this section and ${G}_{\sigma }^{\prime }$ the subgroup of $G$ (or ${G}_{\sigma }\text{)}$ generated by ${U}_{\sigma }$ and ${U}_{\sigma }^{-}\text{.}$ Assume that $\sigma$ is not the identity. Then every automorphism of ${G}_{\sigma }^{\prime }$ is a product of an inner, a diagonal, and a field automorphism.
Remark: Observe that graph automorphisms are missing. Thus the twisted groups cannot themselves be twisted, at least not in the simple way we have been considering.
Sketch of proof.
As in step (1) of the proof of Theorem 30, the automorphism, call it $\phi$, may be normalized by an inner automorphism so that it fixes $U_\sigma$ and $U_\sigma^-$ (in the finite case by Sylow's theorem, in the infinite case by arguments from the theory of algebraic groups). Then it also fixes $H_\sigma'$, and it permutes the $𝔛_a$'s ($a$ simple, $a \in \Sigma/R$; henceforth we write $𝔛_a$ for $𝔛_{a,\sigma}$) and also the $𝔛_{-a}$'s according to the same permutation, in an angle-preserving manner (see step (2)) in terms of the corresponding simple system $\Pi_\sigma$ of $V_\sigma$. By checking cases one sees that the permutation is necessarily the identity: if $k$ is finite, one need only compare the various $|𝔛_a|$'s with each other, while if $k$ is arbitrary further argument is necessary (one can, for example, check which $𝔛_a$'s are Abelian and which are not, thus ruling out all possibilities except for ${}^2A_3$, ${}^2E_6$, and ${}^3D_4$, and then rule out these cases (the first two together) by considering the commutator relations among the $𝔛_a$'s). As in step (4) of the proof of Theorem 30, we need only complete the proof of our theorem when $G_\sigma'$ is one of the groups $G_a = ⟨𝔛_a, 𝔛_{-a}⟩$, in other words, when $G_\sigma'$ is of one of the types $A_1$, ${}^2A_2$, ${}^2C_2$ or ${}^2G_2$ (with ${}^2C_2(2)$ and ${}^2G_2(3)$ excluded, but not $A_1(2)$, $A_1(3)$, or ${}^2A_2(4)$), which we henceforth assume. The case $A_1$ having been treated in §10, we will treat only the other cases, in a sequence of steps.
We write $x\left(t,u\right)$ or $x\left(t,u,v\right)$ for the general element of ${U}_{\sigma }$ as given in Lemma 63 and $d\left(s\right)$ for ${h}_{\alpha }\left(s\right){h}_{\beta }\left({s}^{\theta }\right)\text{.}$
(1) We have the equations
$$d(s)\,x(t,u)\,d(s)^{-1} = x\bigl(s^{2-\theta}t,\; s^{1+\theta}u\bigr) \text{ in } {}^2A_2, \qquad = x\bigl(s^{2-2\theta}t,\; s^{2\theta}u\bigr) \text{ in } {}^2C_2,$$
$$d(s)\,x(t,u,v)\,d(s)^{-1} = x\bigl(s^{2-3\theta}t,\; s^{-1+3\theta}u,\; sv\bigr) \text{ in } {}^2G_2.$$
This follows from the definitions and Lemma 20(c).
(2) Let $U_1, U_2$ be the subgroups of $U_\sigma$ obtained by setting $t = 0$, then also $u = 0$. Then $U_\sigma \supset U_1 \supset U_2 = 1$ is the lower central series $U_\sigma \supset (U_\sigma, U_\sigma) \supset (U_\sigma, (U_\sigma, U_\sigma)) \supset \dots$ of $U_\sigma$ if the type is ${}^2A_2$ or ${}^2C_2$, while it is $U_\sigma \supset U_1 \supset U_2 \supset 1$ if the type is ${}^2G_2$.
Exercise: Prove this.
(3) If the case ${}^{2}A_{2}\left(4\right)$ is excluded, then $d\left(s\right)x\left(t,\dots \right)d{\left(s\right)}^{-1}=x\left(g\left(s\right)t,\dots \right),$ with $g:{k}^{*}\to {k}^{*}$ a homomorphism whose image generates $k$ additively.
Proof. Consider ${}^{2}A_{2}\text{.}$ By (1) we have $g\left(s\right)={s}^{2-\theta },$ so that $g\left(s\right)=s$ for $s\in {k}_{\theta }\text{.}$ Since $\left[k:{k}_{\theta }\right]=2,$ we need only show that $g$ takes on a value outside of ${k}_{\theta }\text{.}$ Now if $g$ doesn't, then ${s}^{2-\theta }={\left({s}^{2-\theta }\right)}^{\theta }$ so that ${s}^{3}\in {k}_{\theta },$ for all $s\in {k}^{*},$ whence we easily conclude (the reader is asked to supply the proof) that $k$ has at most 4 elements, a contradiction. For ${}^{2}C_{2}$ and ${}^{2}G_{2}$ the proof is similar, but easier. $\square$
(4) The automorphism $\phi$ (of ${G}_{\sigma }^{\prime }\text{)}$ can be normalized by a diagonal and a field automorphism to be the identity on ${U}_{\sigma }/{U}_{1}\text{.}$
Proof. Since $\phi$ fixes $U_\sigma$, it also fixes $U_1$, hence acts on $U_\sigma/U_1$. Thus there is an additive isomorphism $f : k \to k$ such that $\phi x(t,\dots) = x(f(t),\dots)$. By multiplying $\phi$ by a diagonal automorphism we may assume $f(1) = 1$. Since $\phi$ fixes $H_\sigma'$, there is an isomorphism $i : k^* \to k^*$ such that $\phi d(s) = d(i(s))$. Combining these equations with the one in (3), we get $f(g(s)t) = g(i(s))f(t)$ for all $s \in k^*$, $t \in k$. Setting $t = 1$, we get $(*)$ $f(g(s)) = g(i(s))$, so that $f(g(s)t) = f(g(s))f(t)$. If the case ${}^2A_2(4)$ is excluded, then $f$ is multiplicative on $k$ by (3), hence is an automorphism. The same conclusion, however, holds in that case also, since $f$ fixes 0 and 1 and permutes the two elements of $k$ not in $k_\theta$. $\square$
Our object now is to show that once the normalization in (4) has been attained $\phi$ is necessarily the identity.
(5) $\phi$ fixes each element of ${U}_{1}/{U}_{2}$ and ${U}_{2},$ and also some $w\in {G}_{\sigma }^{\prime }$ which represents the nontrivial element of the Weyl group.
Proof. The first part easily follows from (2) and (4), then the second follows as in the proof of Theorem 33(b). $\square$
(6) If the type is ${}^{2}C_{2}$ or ${}^{2}G_{2},$ then $\phi$ is the identity.
Proof. Consider the type ${}^2C_2$. From the equation $(*)$ of (4) and the fact that $f = 1$, we get $g(s) = g(i(s))$, i.e., $s^{2-2\theta} = i(s)^{2-2\theta}$, and then, taking the $(1+\theta)$th power, $s = i(s)$; in other words, $\phi$ fixes every $d(s)$. By (4) and (5), $\phi x(t,u) = x(t, u + j(t))$ with $j$ an additive homomorphism. Conjugating this equation by $d(s) = \phi d(s)$, using (1), and comparing the new equation with the old, we get $j(s^{2-2\theta}t) = s^{2\theta}j(t)$, and on replacing $s$ by $s^{1+\theta}$, $j(st) = s^{1+2\theta}j(t)$. Choosing $s \ne 0, 1$, which is possible because ${}^2C_2(2)$ has been excluded, replacing $s$ by $s + 1$ and by 1, and combining the three equations, we get $(s + s^{2\theta})j(t) = 0$. Now $s + s^{2\theta} \ne 0$, since otherwise we would have $s + s^{2\theta} = (s + s^{2\theta})^{2\theta}$, then $s = s^2$, contrary to the choice of $s$. Thus $j(t) = 0$. In other words, $\phi$ fixes every element of $U_\sigma$. If the type is ${}^2G_2$ instead, the argument is similar, requiring one extra step. Since $G_\sigma'$ is generated by $U_\sigma$ and the element $w$ of (5), $\phi$ is the identity. $\square$
The preceding argument, slightly modified, barely fails for ${}^{2}A_{2},$ in fact fails just for the smallest case ${}^{2}A_{2}\left(4\right)\text{.}$ The proof to follow, however, works in all cases.
(7) If the type is ${}^{2}A_{2},$ then $\phi$ is the identity.
Proof. Choose $w$ as in (5) and, assuming $u\ne 0,$ write $wx\left(t,u\right){w}^{-1}=xnx\prime$ with $x,x\prime \in {U}_{\sigma },$ $n\in {H}_{\sigma }^{\prime }w\text{.}$ A simple calculation in ${SL}_{3}$ shows that $x=x\left(at{\stackrel{‾}{u}}^{-1},*\right)$ for some $a\in {k}^{*}$ depending on $w$ but not on $t$ or $u\text{.}$ (Prove this.) If now we write $\phi x\left(t,u\right)=x\left(t,u+j\left(t\right)\right),$ apply $\phi$ to the above equation, and use (4) and (5), we get $t{\stackrel{‾}{u}}^{-1}=t{\left(\stackrel{‾}{u+j\left(t\right)}\right)}^{-1},$ so that $j\left(t\right)=0$ and we may complete the proof as before. $\square$
$\square$
It is also possible to determine the isomorphisms among the various Chevalley groups, both twisted and untwisted. We state the results for the finite groups, omitting the proofs.
Theorem 37: (a) Among the finite simple Chevalley groups, their twisted analogues, and the alternating groups ${𝒜}_{n}$ $\left(n\ge 5\right),$ a complete list of isomorphisms is given as follows.
(1) Those independent of $k$:
$$C_1 \sim B_1 \sim A_1, \quad C_2 \sim B_2, \quad D_2 \sim A_1 \times A_1, \quad {}^2D_2 \sim {}^2(A_1 \times A_1) \sim A_1, \quad D_3 \sim A_3, \quad {}^2D_3 \sim {}^2A_3, \quad {}^2A_1(q^2) \sim A_1(q).$$
(2) $B_n(q) \sim C_n(q)$ if $q$ is even.
(3) Just six other cases, of the indicated orders:
$A_1(4) \sim A_1(5) \sim 𝒜_5$ (order 60); $A_1(7) \sim A_2(2)$ (order 168); $A_1(9) \sim 𝒜_6$ (order 360); $A_3(2) \sim 𝒜_8$ (order 20160); ${}^2A_3(4) \sim B_2(3)$ (order 25920).
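The common orders in (3) can be confirmed directly (a Python illustration, not part of the original; recall $|A_1(q)| = q(q^2 - 1)/(2, q - 1)$ for the simple group):

```python
# Verify the common orders in the "six other cases" above.
from math import factorial, gcd

def psl2_order(q):
    """|A_1(q)| = |PSL_2(q)| = q (q^2 - 1) / gcd(2, q - 1)."""
    return q * (q * q - 1) // gcd(2, q - 1)

def alt_order(n):
    """Order of the alternating group on n letters: n! / 2."""
    return factorial(n) // 2

assert psl2_order(4) == psl2_order(5) == alt_order(5) == 60
assert psl2_order(7) == 168                  # = |A_2(2)| = |PSL_3(2)|
assert psl2_order(9) == alt_order(6) == 360
assert alt_order(8) == 20160                 # = |A_3(2)| = |PSL_4(2)|
# |B_2(3)| = |PSp_4(3)| = 3^4 (3^2 - 1)(3^4 - 1) / 2
assert 3 ** 4 * (3 ** 2 - 1) * (3 ** 4 - 1) // 2 == 25920
```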
(b) In addition there are the following cases in which the Chevalley group just fails to be simple.
The derived group of $B_2(2) \sim 𝒜_6$, order 360.
The derived group of $G_2(2) \sim {}^2A_2(9)$, order 6048.
The derived group of ${}^2G_2(3) \sim A_1(8)$, order 504.
The derived group of ${}^2F_4(2)$.
The indices in the original group are 2, 3, 2, 2, respectively.
Remarks:
(a) The existence of the isomorphisms in (1) and (2) is easy, and in (3) is proved, e.g., in Dieudonné (Can. J. Math. 1949). There also the first case of (b), considered in the form ${B}_{2}\left(2\right)\sim {S}_{6}$ (symmetric group) is proved. (b) It is natural to include the simple groups ${𝒜}_{n}$ in the above comparison since they are the derived groups of the Weyl groups of type ${A}_{n-1}$ and the Weyl groups in a sense form the skeletons of the corresponding Chevalley groups. We would like to point out that the Weyl groups $W\left({E}_{n}\right)$ are also almost simple and are related to earlier groups as follows.
Proposition: We have the isomorphisms:
$$𝒟W(E_6) \sim B_2(3) \sim {}^2A_3(4), \quad 𝒟W(E_7) \sim C_3(2), \quad 𝒟W(E_8)/C \sim D_4(2), \text{ with } C \text{ the center, of order 2.}$$
Proof. The proof is similar to the proof of ${S}_{6}=W\left({A}_{5}\right)\sim {B}_{2}\left(2\right)$ given near the beginning of §10. $\square$
Aside from the cyclic groups of prime order and the groups considered above, only 11 or 12 other finite simple groups are at present (May, 1968) known. We will discuss them briefly.
(a) The five Mathieu groups ${M}_{n}$ $\left(n=11,12,22,23,24\right)\text{.}$ These were discovered by Mathieu about a hundred years ago and put on a firm footing by Witt (Hamburger Abh. 12 (1938)). They arise as highly transitive permutation groups on the indicated numbers of letters. Their orders are:
$|M_{11}| = 7920 = 8 \cdot 9 \cdot 10 \cdot 11$
$|M_{12}| = 95040 = 8 \cdot 9 \cdot 10 \cdot 11 \cdot 12$
$|M_{22}| = 443520 = 48 \cdot 20 \cdot 21 \cdot 22$
$|M_{23}| = 10200960 = 48 \cdot 20 \cdot 21 \cdot 22 \cdot 23$
$|M_{24}| = 244823040 = 48 \cdot 20 \cdot 21 \cdot 22 \cdot 23 \cdot 24$
(b) The first Janko group ${J}_{1}$ discovered by Janko (J. Algebra 3 (1966)) about five years ago. It is a subgroup of ${G}_{2}\left(11\right)$ and can be represented as a permutation group on 266 letters. Its order is
$$|J_1| = 175560 = 11(11+1)(11^3 - 1) = 19 \cdot 20 \cdot 21 \cdot 22 = 55 \cdot 56 \cdot 57.$$
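The agreement of the three factorizations is a pleasant coincidence, easily confirmed (illustration only, not part of the notes):

```python
# The three factorizations of |J_1| quoted above all yield 175560.
from math import prod

assert 11 * (11 + 1) * (11 ** 3 - 1) == 175560
assert prod(range(19, 23)) == 175560   # 19 * 20 * 21 * 22
assert 55 * 56 * 57 == 175560
print(55 * 56 * 57)
```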
The remaining groups were all uncovered last fall, more or less.
(c) The groups ${J}_{2}$ and ${J}_{2 1/2}$ of Janko. The existence of ${J}_{2}$ was put on a firm basis first by Hall and Wales using a machine, and then by Tits in terms of a "geometry." It has a subgroup of index 100 isomorphic to $𝒟{G}_{2}\left(2\right)\sim {}^{2}A_{2}\left(9\right),$ and is itself of index 416 in ${G}_{2}\left(4\right)\text{.}$ The group ${J}_{2 1/2}$ has not yet been put on a firm basis, and it appears that it will take a great deal of work to do so (because it does not seem to have any "large" subgroups), but the evidence for its existence is overwhelming. The orders are:
$$|J_2| = 604800, \qquad |J_{2\,1/2}| = 50232960.$$
(d) The group $H$ of D. Higman and Sims, and the group $H\prime$ of G. Higman. The first group contains ${M}_{22}$ as a subgroup of index 100 and was constructed in terms of the automorphism group of a graph with 100 vertices whose existence depends on properties of Steiner systems. Inspired by this construction, G. Higman then constructed his own group in terms of a very special geometry invented for the occasion. The two groups have the same order, and everyone seems to feel that they are isomorphic, but no one has yet proved this. The order is:
$$|H| = |H'| = 44352000.$$
(e) The (latest) group $S$ of Suzuki. This contains $G_2(4)$ as a subgroup of index 1782, and is constructed in terms of a graph whose existence depends on the imbedding $J_2 \subset G_2(4)$. It possesses an involutory automorphism whose set of fixed points is exactly $J_2$. Its order is:
$|S|=448345497600.$
(f) The group $M$ of McLaughlin. This group is constructed in terms of a graph and contains ${}^{2}A_{3}\left(9\right)$ as a subgroup of index 275. Its order is:
$|M|=898128000.$
Theorem 38: Among all the finite simple groups above (i.e., all that are currently known), the only coincidences in the orders which do not come from isomorphisms are:
(a) ${B}_{n}\left(q\right)$ and ${C}_{n}\left(q\right)$ for $n\ge 3$ and $q$ odd. (b) ${A}_{2}\left(4\right)$ and ${A}_{3}\left(2\right)\sim {𝒜}_{8}\text{.}$ (c) $H$ and $H\prime$ if they aren't isomorphic.
That the groups in (a) have the same order and are not isomorphic has been proved earlier. The orders in (b) are both equal to 20160 by Theorem 25, and the groups are not isomorphic since relative to the normalizer $B$ of a $2\text{-Sylow}$ subgroup the first group has six double cosets and the second has 24. The proof that (a), (b) and (c) represent the only possibilities depends on an exhaustive analysis of the group orders which can not be undertaken here.
## Notes and References
This is a typed excerpt of Lectures on Chevalley groups by Robert Steinberg, Yale University, 1967. Notes prepared by John Faulkner and Robert Wilson. This work was partially supported by Contract ARO-D-336-8230-31-43033.
# How to sort an alphanumeric list
I have been trying to find a package or a method to sort lists of names in an easy manner in TeX/LaTeX. I have tried some of the routines in the xfor package, as well as looked at some of the new macros in LaTeX3 but was not very successful with either of them.
I have come up with the idea of using the MakeIndex program to sort and categorize. It is available with every distribution, and the relevant LaTeX code is very short and easily hackable, being approximately 36 lines long (including the glossary macros, which I can eliminate), plus a few more definitions for styling in the basic book or article class.
Here is a minimal example that sorts a list of names by profession (category).
\documentclass[11pt]{article}
\usepackage{makeidx}
\makeindex
\begin{document}
\renewcommand{\indexname}{Famous and Infamous Sorted People}
%% Importing the .ind file rather than use \printindex
%% so we do not need to redefine the command
%%
\input{indextest.ind}
\begin{verbatim}
\begin{theindex}
\item Lifelong Trainee \TeX nician
\subitem Yiannis Lazarides, 1
\item Linguist
\subitem Noam Chomsky, 1
\end{theindex}
\end{verbatim}
\end{document}
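For context, the `indextest.ind` file that the example inputs is exactly what `makeindex` would generate from `\index` entries in a document body. A hypothetical companion source might look like the following (file name and entry text invented for illustration; the `@` syntax supplies a sort key free of control sequences):

```latex
% indextest.tex (hypothetical): running `makeindex indextest` on the
% .idx file this produces yields an .ind like the one shown above.
\documentclass[11pt]{article}
\usepackage{makeidx}
\makeindex
\begin{document}
Some text about famous and infamous people.%
\index{Lifelong Trainee TeXnician@Lifelong Trainee \TeX nician!Yiannis Lazarides}%
\index{Linguist!Noam Chomsky}
\printindex
\end{document}
```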
Is this a good idea? Are there any packages devoted to alphanumeric sorting? I know it can easily be done with an external script, but I am looking for a TeX/LaTeX solution. Do you think it is a good approach?
• With my LaTeX3 hat on, I'd point out that we've not really needed sorting yet, hence the somewhat limited support. Perhaps you could outline what might be useful, say on the LaTeX-list. Dec 13, 2010 at 9:30
• Do you just want a list or are you also going to reference the names in your document? If so, you might want to have a look at datagidx (part of the datatool bundle). However, as it uses TeX to do the sorting, it's a lot slower than using makeindex. Jan 7, 2014 at 20:12
• For reference, we have added l3sort since that time. This allows sorting various things. Oct 30, 2021 at 17:28
• @BrunoLeFloch Thanks for adding the note. The example in the documentation is based on numeric comparators. Would you kindly add an example based on alphanum, where the order is based in an alphabet clist or sequence? By the way the regex is great in l3. Oct 30, 2021 at 22:44
• @YiannisLazarides Do you mean sorting words in naive lexicographic order of Unicode code points, or a more elaborate code with better support for accents? Oct 30, 2021 at 23:36
I didn't look at Charles's code because I wanted to see how difficult it would be to implement a generic sorting macro that was expandable. It turns out that it's more or less straight-forward to do in a continuation-passing style.
\documentclass{article}
\usepackage{etoolbox}
\makeatletter
% #1 - comparator
% #2 - token list to sort
\newcommand\sort[2]{%
\ifstrempty{#2}
{}% else
{%
\sort@begin#1{}#2\sort@s\sort@begin
}%
}
% helpers
\def\sort@s{\sort@s}
\def\ifsort@s#1{%
\ifx\sort@s#1%
\expandafter\@firstoftwo
\else
\expandafter\@secondoftwo
\fi
}
% #1 - comparator
% #2 - tokens processed so far
% #3 - smallest token so far
% #4 - rest of the list
\def\sort@begin#1#2#3#4\sort@begin{%
\ifsort@s{#4}
{%
\sortend{#3}%
\sort#1{#2}%
}% else
{%
\sort@go#1{#2}{#3}#4\sort@go
}%
}
% #1 - comparator
% #2 - tokens processed so far
% #3 - smallest token so far
% #4 - token under consideration
% #5 - rest of the list
\def\sort@go#1#2#3#4#5\sort@go{%
#1{#3}{#4}{\sort@output#1{#2}{#5}}%
}
% #1 - comparator
% #2 - tokens processed so far
% #3 - rest of the list
% #4 - smaller of the two tokens
% #5 - larger of the two tokens
\def\sort@output#1#2#3#4#5{%
\ifsort@s{#3}
{%
\sortoutput{#4}%
\sort#1{#2{#5}}%
}% else
{%
\sort@begin#1{#2{#5}}{#4}#3\sort@begin
}%
}
\def\sort@numlt#1#2#3{%
\ifnumcomp{#1}<{#2}
{#3{#1}{#2}}% else
{#3{#2}{#1}}%
}
\def\sort@numgt#1#2#3{%
\ifnumcomp{#1}>{#2}
{#3{#1}{#2}}% else
{#3{#2}{#1}}%
}
\def\sort@alpha#1#2#3{%
\ifnumcomp{\pdfstrcmp{#1}{#2}}<{0}
{#3{#1}{#2}}% else
{#3{#2}{#1}}%
}
\newcommand*\sortnumlt{\sort\sort@numlt}
\newcommand*\sortnumgt{\sort\sort@numgt}
\newcommand*\sortalpha{\sort\sort@alpha}
\makeatother
% Change these to change out the sort outputs.
\newcommand*\sortoutput[1]{#1, }
\newcommand*\sortend[1]{#1.}
\begin{document}
\sortnumgt{87632147{55}9{8/2}}
\sortalpha{{Goodbye}{Cruel}{World}}
\renewcommand*\sortoutput[1]{#1}
\renewcommand*\sortend[1]{#1}
\edef\temp{\sortnumlt{87632147{55}9}}
\texttt{\meaning\temp}
\end{document}
I hope the code is readable. I tried to document the arguments to each function. In particular, the first argument to \sort is a comparator control sequence that must take three arguments. The first two are the two elements of the following list to compare and the third is the continuation. Basically, the comparator should expand to either #3{#1}{#2} or #3{#2}{#1} depending on #1 being "less than" #2.
I've implemented three such comparators. The first two compare lists of numbers whereas the third does alphanumeric string comparison using \pdfstrcmp. Since the number comparisons use \ifnumcomp from etoolbox, you can use arbitrary arithmetic expressions for the elements, hence {8/2} in the list.
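Given that contract, adding a new ordering only requires a new comparator. For example, a descending string comparator (my own sketch, untested beyond the obvious, following the same pattern as `\sort@alpha` above):

```latex
\makeatletter
% Descending string order: the "smaller" element is the one that
% compares greater under \pdfstrcmp.
\def\sort@alphagt#1#2#3{%
  \ifnumcomp{\pdfstrcmp{#1}{#2}}>{0}
    {#3{#1}{#2}}% else
    {#3{#2}{#1}}%
}
\newcommand*\sortalphagt{\sort\sort@alphagt}
\makeatother
% \sortalphagt{{Goodbye}{Cruel}{World}} gives: World, Goodbye, Cruel.
```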
Finally, to show that this is expandable (at least it is whenever the comparator, \sortoutput, and \sortend are), \temp is defined using \edef and its meaning is typeset to ensure that it has been set to the appropriate value: macro:->12346778955.
Note that it would also be straight-forward to thread \sortoutput and \sortend through all of these macros so that multiple \sorts could be used in the same expansion. I just didn't think about adding those until I had written all of the rest of the code (more or less).
Note further that this is selection sort and thus takes Θ(n²) time, even in the best case. However, this being TeX and it having to construct the token lists for each argument each time, I think this implementation is actually Θ(n³) time. So don't try it on large lists.
• @TH Impressive. I will give it a try a bit later. Speed is not really a problem with the application I am busy with. Dec 14, 2010 at 15:10
The following certainly works for one list, but I am not certain as to how to extend it to more than one sorted list.
\documentclass[a4paper,12pt]{report}
\usepackage{datatool}
\usepackage[top=2cm, bottom=2cm, left=1cm, right=1cm]{geometry}
\usepackage[spanish]{babel}
\usepackage{amsfonts,amssymb,amsmath}
\newcommand{\sortitem}[2]{%
\DTLnewrow{list}%
\DTLnewdbentry{list}{label}{#1}%
\DTLnewdbentry{list}{description}{#2}%
}
\newenvironment{sortedlist}{%
\DTLifdbexists{list}{\DTLcleardb{list}}{\DTLnewdb{list}}%
}{%
\DTLsort{label}{list}%
\begin{description}%
\DTLforeach*{list}{\theLabel=label,\theDesc=description}{%
\item[\theLabel] \theDesc }%
\end{description}%
}
\begin{document}
\begin{sortedlist}
\sortitem{Leonard Euler}{Mathematician}
\sortitem{Carl Friedrich Gauss}{Mathematician}
\sortitem{August Ferdinand M\"{o}bius}{Mathematician}
\end{sortedlist}
\end{document}
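As for extending this to more than one sorted list, one possible direction (an untested sketch; the macro and environment names here are my own invention) is to parameterize everything over the database name. Whether `\DTLsort` accepts a macro as its database-name argument would need testing:

```latex
% Hypothetical variants of \sortitem and sortedlist taking a database
% name, so several independent sorted lists can coexist.
\newcommand{\sortitemin}[3]{%
  \DTLnewrow{#1}%
  \DTLnewdbentry{#1}{label}{#2}%
  \DTLnewdbentry{#1}{description}{#3}%
}
\newenvironment{sortedlistnamed}[1]{%
  \def\currentsortdb{#1}%
  \DTLifdbexists{#1}{\DTLcleardb{#1}}{\DTLnewdb{#1}}%
}{%
  \DTLsort{label}{\currentsortdb}%
  \begin{description}%
  \DTLforeach*{\currentsortdb}{\theLabel=label,\theDesc=description}{%
    \item[\theLabel] \theDesc}%
  \end{description}%
}
```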
If you are willing to use LuaLaTeX, then it should be possible to port the various language-specific sorting functions from ConTeXt MkIV. For example see
• It would be really nice to see a MWE of this.
– user193767
Jan 28, 2020 at 9:53
A long time ago, I wrote a sorting package for LaTeX, for personal use. I have no idea how to use it or whether it works, but here is the source code:
• @Charles Thanks, code looks very neat, I will give it a try a bit later ... and welcome to the community. Dec 13, 2010 at 6:13
I started using TH's code, and found that it had to be tweaked a bit for my use case. Specifically, I wanted to be able to sort things with \par in them, and wanted the sort order to be based on the value of a \ref. After struggling for a bit, I got something working, and figured it would be worth sharing my efforts here. An example document that uses it looks like this:
\documentclass{article}
\usepackage{sort}
\begin{document}
\newcounter{foo}
\refstepcounter{foo}\label{a}
\refstepcounter{foo}\label{b}
\begin{trivlist}
\sortref{%
{{a}{Text with some proofs about part a

in multiple paragraphs!}}%
{{b}{Text about the other part}}%
}
\end{trivlist}
\end{document}
\usepackage{etoolbox}
\usepackage{refcount}
\makeatletter
% #1 - comparator
% #2 - token list to sort
\newcommand\sort[2]{%
\ifstrempty{#2}
{}% else
{%
\sort@begin#1{}#2\sort@s\sort@begin
}%
}
% helpers
\def\sort@s{\sort@s}
\long\def\ifsort@s#1{%
\ifx\sort@s#1%
\expandafter\@firstoftwo
\else
\expandafter\@secondoftwo
\fi
}
% #1 - comparator
% #2 - tokens processed so far
% #3 - smallest token so far
% #4 - rest of the list
\long\def\sort@begin#1#2#3#4\sort@begin{%
\ifsort@s{#4}
{%
\sortend{#3}%
\sort#1{#2}%
}% else
{%
\sort@go#1{#2}{#3}#4\sort@go
}%
}
% #1 - comparator
% #2 - tokens processed so far
% #3 - smallest token so far
% #4 - token under consideration
% #5 - rest of the list
\long\def\sort@go#1#2#3#4#5\sort@go{%
#1{#3}{#4}{\sort@output#1{#2}{#5}}%
}
% #1 - comparator
% #2 - tokens processed so far
% #3 - rest of the list
% #4 - smaller of the two tokens
% #5 - larger of the two tokens
\long\def\sort@output#1#2#3#4#5{%
\ifsort@s{#3}
{%
\sortoutput{#4}%
\sort#1{#2{#5}}%
}% else
{%
\sort@begin#1{#2{#5}}{#4}#3\sort@begin
}%
}
\def\sort@numlt#1#2#3{%
\ifnumcomp{#1}<{#2}
{#3{#1}{#2}}% else
{#3{#2}{#1}}%
}
\def\sort@numgt#1#2#3{%
\ifnumcomp{#1}>{#2}
{#3{#1}{#2}}% else
{#3{#2}{#1}}%
}
\def\sort@alpha#1#2#3{%
\ifnumcomp{\pdfstrcmp{#1}{#2}}<{0}
{#3{#1}{#2}}% else
{#3{#2}{#1}}%
}
\long\def\fst#1#2{#1}
\long\def\snd#1#2{#2}
\long\def\sort@ref#1#2#3{%
% Since #1 and #2 frequently contain newlines in their \snd part, and
% \getrefnumber is not \long, we must take care to remove any newlines
% *before* supplying an argument to \getrefnumber
\edef\@leftref{\fst#1}%
\edef\@rightref{\fst#2}%
\ifnumcomp{\getrefnumber\@leftref}<{\getrefnumber\@rightref}%
{#3{#1}{#2}}%
{#3{#2}{#1}}%
}
\newcommand*\sortnumlt{\sort\sort@numlt}
\newcommand*\sortnumgt{\sort\sort@numgt}
\newcommand*\sortalpha{\sort\sort@alpha}
\newcommand\sortref{\sort\sort@ref}
% Change these to change out the sort outputs.
\newcommand\sortoutput[1]{%
% As in the definition of \sort@ref, we must take care to remove
% newlines before handing off to \ref
\edef\@refname{\fst#1}%
\item {\bf Case \ref\@refname:} \snd#1%
}
\newcommand\sortend\sortoutput
\makeatother
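Stripped of TeX's token plumbing, the macros above implement a selection sort in continuation-passing style: each pass walks the list keeping the smallest item seen so far, the comparator hands each comparison's (smaller, larger) pair to a continuation, the minimum is emitted, and the rest is re-sorted. A rough Python model of that control flow (names here are illustrative, not part of the package):

```python
def num_lt(a, b, k):
    # mirrors \sort@numlt: hand (smaller, larger) to the continuation k
    return k(a, b) if a < b else k(b, a)

def sort_cps(cmp, items, output):
    # mirrors \sort / \sort@begin / \sort@output: each pass selects the
    # minimum via pairwise comparisons, emits it, and re-sorts the rest
    items = list(items)
    while items:
        smallest, rest = items[0], []
        for tok in items[1:]:
            smallest, larger = cmp(smallest, tok, lambda s, l: (s, l))
            rest.append(larger)
        output(smallest)
        items = rest

out = []
sort_cps(num_lt, [3, 1, 2], out.append)
print(out)  # [1, 2, 3]
```

The TeX version differs only in that the "list" is a brace-group token stream and the continuation is a macro argument, but the comparator protocol (`#3{smaller}{larger}`) is exactly the `k(a, b)` call above.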
With the package dbshow (current version: v1.2), this can be done directly. The code below sorts the list in ascending order, descending order, and multi-level order; the last example shows the independent environment.
\documentclass{article}
\usepackage{dbshow}
\newlength{\boldAcrwidth}
\setlength{\boldAcrwidth}{2cm}
\ExplSyntaxOn
\NewDocumentCommand{\boldAcrExp}{m >{\SplitList{~}}m}{%
\makebox[\boldAcrwidth][l]{\bfseries\textit{\MakeUppercase{#1}}}%
\ProcessList{#2}{\boldAcrFirst}%
\unskip
}
\NewDocumentCommand{\boldAcr}{m m}{
\exp_args:Nxx \boldAcrExp { #1 } { #2 } % need expand first
}
\ExplSyntaxOff
\NewDocumentCommand{\boldAcrFirst}{m}{%
\boldAcrFirstAux#1 % we want a space
}
\NewDocumentCommand{\boldAcrFirstAux}{m}{%
\textbf{\MakeUppercase{#1}}%
}
\dbNewDatabase{acronym}{acronym=str, describe=str}
\dbNewStyle{sort-list}{acronym}{
before-code = {\begin{flushleft}},
after-code = {\end{flushleft}},
item-code = {\boldAcr{\dbuse{acronym}}{\dbuse{describe}} \\},
sort = acronym,
}
\dbNewStyle[sort-list]{sort-list-desc}{acronym}{sort=acronym*}
\dbNewStyle[sort-list]{multi-sort-list}{acronym}{sort={acronym, describe}}
\newcommand\saveAcr[2]{%
\begin{dbitem}{acronym}[acronym=#1, describe=#2]%
\end{dbitem}%
}
\NewDocumentEnvironment { acronym } { m }
{ \dbclear{acronym} } { \dbshow{#1}{acronym} }
\begin{document}
\saveAcr{owasp}{open web application security project}
\saveAcr{dbms}{duxbury bay maritime school}
\saveAcr{dbms}{database management system}
\saveAcr{sqlia}{structured query language injection attack}
\section{Ascending Order}
\dbshow{sort-list}{acronym}
\section{Descending Order}
\dbshow{sort-list-desc}{acronym}
\section{Multi-level Sorting}
\dbshow{multi-sort-list}{acronym}
\section{More list}
% clear the database acronym and use style sort-list to display
% previous records are lost
\begin{acronym}{sort-list}
\saveAcr{aef}{american expeditionary force}
\end{acronym}
\end{document}
## Thursday, May 3, 2012
### Sum of randomly selected numbers
In a class with 10 students, each student is allowed to choose a number from the following list: 100,101,102,300,301,302. The teacher then collects all chosen numbers and calculates the sum. How many ways are there for the sum to be 2012? Two "ways" are considered distinct if there is at least one student who chooses different numbers.
Variant: In a class with $2n+1$ students, each student is allowed to choose a number from the following list: $n, n+1, -n, -n-1$. The teacher then collects all chosen numbers and calculates the sum. How many ways are there for the sum to be zero?
Solution. Suppose the number of students who chose 100, 101, 102, 300, 301, 302 are $a,b,c,d,e,f$ respectively, so we have: $$a+b+c+d+e+f = 10$$ and $$100a + 101b + 102c + 300d + 301e + 302f = 2012$$
Note that we can write 100 as 201-100-1, 101 as 201-100, 102 as 201-100+1, 300 as 201+100-1, 301 as 201+100, and 302 as 201+100+1, so the second equation becomes: $$201(a+b+c+d+e+f) + 100(d+e+f-a-b-c) + (c+f-a-d) = 2012$$ Since $a+b+c+d+e+f = 10$, this reduces to: $$100(d+e+f-a-b-c) + (c+f-a-d) = 2$$
Now, since $|c+f-a-d| \leq 10$ then it must be the case that: $$d+e+f-a-b-c = 0$$ and $$c+f-a-d = 2$$
We also note that when the students choose one of the numbers, they're essentially making two choices: first, they choose if they want to choose small or big numbers (be included in $a+b+c$ or $d+e+f$), and second, they choose if they want their number to end with 0,1, or 2 (be included in $a+d, b+e$ or $c+f$).
Thus, the number of ways to satisfy the first constraint $a+b+c = d+e+f = 5$ is 10C5, which means 5 students choose the small numbers and the rest choose the big numbers.
In order to satisfy $c+f - a - d = 2$, we make the following substitutions: $x = a+d, y = b+e, z = c+f$. Then $x+y+z = 10, z-x = 2$. We divide the cases by the values of $z$. Note that once a student chooses to be included in $x$ or $y$ or $z$, he only needs to choose if he wants a big or small number, since each of $x,y,z$ contains exactly one big and one small number.
If $z > 6$ then $x > 4$, impossible.
If $z = 6$, then $x = 4, y = 0$. There are 10C6 = 210 ways to choose this.
If $z = 5$, then $x = 3, y = 2$, there are 10! / 2! 5! 3! = 2520 ways.
If $z = 4$, then $x = 2, y = 4$, there are 10! / 4! 4! 2! = 3150 ways.
If $z = 3$, then $x = 1, y = 6$, there are 10! / 3! 6! 1! = 840 ways.
If $z = 2$, then $x = 0, y = 8$, there are 10C8 = 45 ways.
If $z < 2$ then $x < 0$, impossible.
So the total number of ways is $${}_{10}C_{5}\,(210+2520+3150+840+45) = {}_{10}C_{5}\cdot 6765 = 1{,}704{,}780$$
Solution to variant
Suppose the number of students who chose $-n-1, n+1, -n, n$ are $a,b,c,d$ respectively, so we have: $$a+b+c+d = 2n+1$$ $$(n+1)(b-a) + n(d-c) = 0$$
Because $n$ and $n+1$ are relatively prime, we must have $n+1 \mid d-c$. But $|d-c|$ can only take values from $0$ to $2n+1$, so $d-c \in \{0,\ n+1,\ -(n+1)\}$.
If $d-c=0$, then $b-a=0$, which means $a+b+c+d$ would be even, impossible.
If $d-c=n+1$, then $b-a = -n$. Combined with $a+b+c+d = 2n+1$ we have $(a,b,c,d) = (n,0,0,n+1)$. (Remember that $a,b,c,d$ have to be non-negative).
This means that $n$ of the students choose $-n-1$ and the other $n+1$ choose $n$. There are $_{2n+1}C_{n}$ ways to do that.
The case of $d-c = -n-1$ is similar, with $(a,b,c,d) = (0,n,n+1,0)$ and $_{2n+1}C_{n}$ ways. All these ways are disjoint, so the total number of ways is $2\cdot{}_{2n+1}C_{n}$.
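Both counts are small enough to check by brute force. A short dynamic-programming script (my own check, not from the original post) tallies the number of choice sequences reaching each possible sum:

```python
from collections import defaultdict
from math import comb

def count_ways(values, students, target):
    """Count ordered choice sequences of the given length summing to target."""
    ways = {0: 1}  # ways[s] = number of sequences so far with sum s
    for _ in range(students):
        nxt = defaultdict(int)
        for s, w in ways.items():
            for v in values:
                nxt[s + v] += w
        ways = dict(nxt)
    return ways.get(target, 0)

# Original problem: 10 students, target sum 2012
print(count_ways([100, 101, 102, 300, 301, 302], 10, 2012))  # 1704780

# Variant for n = 3: 2n+1 students, target 0; closed form is 2 * C(2n+1, n)
n = 3
print(count_ways([n, n + 1, -n, -n - 1], 2 * n + 1, 0), 2 * comb(2 * n + 1, n))
```

The first print reproduces the $1{,}704{,}780$ above, and the second agrees with the variant's closed form for small $n$.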
ID | Date | Author | Type | Category | Subject
2572 | Fri Feb 5 01:04:58 2010 | Sanjit | Update | Adaptive Filtering | OAF at > 5Hz
There is a lot of coherence between the error signal and the PEM channels at 5-100 Hz. We had been applying a 1 Hz low pass filter to all the GUR and RANGER channels for stability. I turned those off and the OAF still works with mu = 0.0025; this will give us some more freedom. Kind of annoying for testing though: it takes about 45 min to adapt!
In any case, there is no significant improvement at high frequencies as compared to our usual OAF performance. Also, the low frequency improvement (see previous e-log) is lost in this set up. I think, we have to adjust the number of taps and channels to do better at high frequencies. Also, delay can be important at these frequencies, needs some testing.
Attachment 1: OAF_04FEB2010_highFreq.png
2575 | Sat Feb 6 00:10:08 2010 | Sanjit | Update | Adaptive Filtering | OAF at > 5Hz
Did some more tests to get better performance at higher frequencies.
Increased # taps to 4000 and reduced downsampling to 4, without changing the AA32 filters, from CORR, EMPH and the matching ADPT channels. But for testing I turned off AA32 from the input PEM channels. So that high frequency still gets blocked at CORR, but the adaptive filters have access to higher frequencies. Once we fix some reasonable downsampling, we should create corresponding AA filters.
I used only two channels, RANGER/GUR2_Y and GUR1_Z, and basically they had only one filter 0.1:0
This set up gave little better performance (more reduction at more frequencies), at some point even the 16HZ peak was reduced by a factor of 3. The 24Hz peak was a bit unstable, but became stable after I removed the Notch24 filters from PEM channels, to ensure that OAF is aware of those lines. There was some improvement also at the 24Hz peak.
6075 | Tue Dec 6 00:58:34 2011 | Den | Update | WienerFiltering | OAF current goal
After reducing the digital noise I did offline Wiener filtering to see how good should be online filter. I looked at the MC_F and GUR1_X and GUR1_Y signals. Here are the results of the filtering. The coherence is plotted for MC_F and GUR1_X signals.
We can see the psd reduction of the MC_F below 1 Hz and at resonances. Below 0.3 Hz some other noises are present in the MC_F. Probably tilt becomes important here.
OAF is ready to be tested. I added AA and AI filters and also a highpass filter at 0.1 Hz. OAF workes, MC stays at lock. I looked at the psd of MC_F and filter output. They are comparable, filter output adapts for MC_F in ~10 minutes but MC_F does not go down too much. Determing the right gain I unlocked the MC, while Kiwamu was measuring something. Sorry about that. I'll continue tomorrow during the daytime.
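For anyone reproducing the offline step: Wiener filtering a target (here MC_F) against witness channels (GUR1_X/Y) amounts to solving a least-squares problem for an FIR filter. A toy version with synthetic data (an assumed setup for illustration, not the actual 40m channels or filtering code):

```python
import numpy as np

# Synthetic "witness" (e.g. a seismometer) coupling into a "target" (e.g. MC_F)
rng = np.random.default_rng(0)
n, taps = 20000, 32
witness = rng.standard_normal(n)
h_true = np.exp(-np.arange(taps) / 5.0)          # unknown coupling (assumed)
target = np.convolve(witness, h_true)[:n] + 0.1 * rng.standard_normal(n)

# Offline Wiener filter = least-squares FIR fit of target on past witness samples
X = np.column_stack([np.roll(witness, k) for k in range(taps)])
X[:taps, :] = 0                                   # drop wrapped-around samples
h, *_ = np.linalg.lstsq(X, target, rcond=None)
residual = target - X @ h
print(residual.var() / target.var())              # small: most power subtracted
```

The ratio printed at the end is the analogue of the PSD reduction mentioned above: coherent witness power is removed and only the uncorrelated noise remains.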
2548 | Tue Jan 26 19:51:44 2010 | Sanjit, rana | Update | Adaptive Filtering | OAF details
We turned on the OAF again to make sure it works. We got it to work well with the Ranger as well as the Guralp channels. The previous problem with the ACC is that Sanjit and Matt were using the "X" channels, which are aligned along the "Y" arm. Another casualty of our ridiculous and nonsensical coordinate system. Long live the Right Hand Rule!!
The changes that were made are:
• use of RANGER channel (with ACC_MC1_X and/or ACC_MC2_X)
• mu = 0.01, tau = 1.0e-6, ntaps = 2000, nDown = 16
• nDelay = 5 and nDelay = 7 both work (may not be so sensitive on delay at low frequencies)
• Main changes: filter bank on the PEM channels - ASS_TOP_PEM_## filters: 0.1:0, 1:, Notch24, AA32, gain 1
• Added the AI800 filter for upsampling in MC1 (should not matter)
Other parameters which were kept at usual setting:
• CORR: AI32, gain = 1
• EMPH: 0.001:0, AA32, gain = 1
• ERR_MCL: no filters, gain = 1
• SUS_MC1: no filter, gain = 1
• PEM Matrix: All zero except: (24,1), (15,2), (18,3)
• ADAPT path filter: union of CORR and EMPH filters, gain 1
• XYCOM switches # 16-19 (last four on the right) OFF
Screenshots are attached.
Burt snapshot is kept as: /cvs/cds/caltech/scripts/OAF/snaps/ass_burt_100126_211330.snap
taken using the script: /cvs/cds/caltech/scripts/OAF/saveOAF
we should put this in ASS screen.
ERROR Detected in filter ASS_TOP_PEM_24 (RANGER): 1: was actually typed as a 1Hz high pass filter!
(Correcting this one seems to spoil the adaptation)
Possibly this makes sense, we may not want to block witness signals in the 0.1-20 Hz range.
11:40 PM: Leaving the lab with the OAF running on 5 PEM channels (Ranger + Guralp 1&2 Y & Z). There's a terminal open on op440m which will disable the OAF in ~2.8 hours. Feel free to disable sooner if you need the MC/IFO.
Attachment 1: C1ASS_TOP.png
Attachment 2: C1SUS_SRM_XYCOM1.png
Attachment 3: Untitled.png
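The mu / tau / ntaps parameters above are the knobs of a leaky LMS adaptive filter. As a point of reference, here is a minimal sketch of that update rule (the textbook form, not the actual ASS/OAF front-end code):

```python
import numpy as np

def lms_predict(witness, error_signal, mu=0.01, tau=1e-6, ntaps=2000):
    """Leaky LMS: predict error_signal from past witness samples.

    mu is the adaptation step, tau the leak (weight decay), and ntaps the
    FIR length -- the same roles these parameters play in the OAF.
    """
    w = np.zeros(ntaps)
    out = np.zeros(len(error_signal))
    for i in range(ntaps, len(witness)):
        x = witness[i - ntaps:i][::-1]   # most recent past samples first
        out[i] = w @ x                   # current correction estimate
        e = error_signal[i] - out[i]     # residual after correction
        w = (1 - tau) * w + mu * e * x   # leaky gradient-descent update
    return out
```

In the real system the correction is fed back to the suspension; in this sketch the subtracted residual is simply `error_signal - out`. The slow "45 min to adapt" behaviour corresponds to a small mu.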
2555 | Mon Feb 1 18:31:00 2010 | Sanjit | Update | Adaptive Filtering | OAF details
I tried downsampling value 32 (instead of 16), to see if it has any effect on OAF. Last week I encountered some stability issue - adaptation started to work, but the mode cleaner was suddenly unlocked, it could be due to some other effect too.
One point to note is that different downsampling did not have any effect on the CPU meter (I tried clicking the "RESET" button few times, but no change).
2557 | Mon Feb 1 21:51:12 2010 | Sanjit | Update | Adaptive Filtering | OAF details
I tried some combination of PEM channels and filters to improve OAF performance at other frequencies, where we do not have any improvement so far. There is progress, but still no success.
Here are the main things I tried:
For the ACC channels, I replaced the 0.1 Hz high pass filters with 3 Hz high pass filters and turned off the 1: filter.
Then I tried to incorporate the Z ACC/GUR channels, with some reasonable combination of the others.
The Z axis Guralp and Accelerometers were making OAF unstable, so I put a 0.1 gain in all four of those.
Following the PEM noise curves Rana has put up, we should probably use
• two ACC_Y channels (3:0, Notch24, AA32)
• two GUR_Z channels (filters: 0.1:0, 1:, AA32, gain 0.1)
• one RANGER_Y, just because it works (0.1:0, 1:, Notch24, AA32)
In the end I tried this combination, it was stable after I reduced the GUR_Z gain, but looked very similar to what we got before, no improvement at 5Hz or 0.5Hz. But there was a stable hint of better performance at > 40Hz.
Possibly we need to increase the GUR_Z gain (but not 1) and try to use ACC_Z channels also. Since we can not handle many channels, possibly using one GUR_Z and one ACC_Z would be worth checking.
5572 | Wed Sep 28 22:30:01 2011 | Jenne | Update | Adaptive Filtering | OAF is disabled
I am leaving the OAF disabled, so there should be nothing that goes to the suspensions from the OAF.
Disabled for the OAF means all the outputs are multiplied by 0 just before the signals are sent back over to the LSC system to be summed in and sent to the suspensions. So 0 times anything should mean that the OAF isn't injecting signals.
In other news, while Mirko was trying to figure out the c-code, I began making a new OAF screen. It is still in progress, but it's in c1oaf/master/C1OAF_OVERVIEW.adl If you open it up, you can see for yourself that the OAF is disabled. (Eventually I'll put an enable/disable button on the LSC screen also, but that hasn't happened yet.)
6641 | Fri May 11 17:21:56 2012 | Jenne | Update | IOO | OAF left enabled - MC unlocked for more than an hour
No leaving the OAF running until you're sure (sure-sure, not kind of sure, not pretty sure, but I've enabled guardians to make sure nothing bad can happen, and I've been sitting here watching it for 24+ hours and it is fine) that it works okay.
OAF (both adaptive and static paths) were left enabled, which was kicking MC2 a lot. Not quite enough that the watchdog tripped, but close. The LSCPOS output for MC2 was in the 10's or 100's of thousands of counts. Not okay.
This brings up the point though, which I hadn't actively thought through before, that we need an OAF watchdog. An OAF ogre? But a benevolent ogre. If the OAF gets out of control, its output should be shut off. Eventually we can make it more sophisticated, so that it resets the adaptive filter, and lets things start over, or something.
But until we have a reliable OAF ogre, no leaving the adaptive path enabled if you're not sitting in the control room. The static path should be fine, since it can't get out of control on its own.
Especially no leaving things like this enabled without a "I'm leaving this enabled, I'll be back in a few hours to check on it" elog!
5886 | Mon Nov 14 12:16:41 2011 | Jenne | Update | Computers | OAF model died for unknown reason
I am meditating on the OAF, and had it running and calculating things. I had the outputs disabled so I could take reference traces in DTT, but the Adapt block was calculating for MCL. At some point, all the numbers froze, and the CPU meter had gone up to ~256ms. Usually it's around ~70 or so for the configuration I had (2 witness sensors and one degree of freedom enabled....no c-code calculations on any other signals). The "alive" heartbeat was also frozen.
I ssh'ed into c1lsc, ran ./startc1oaf in the scripts directory, and it came back just fine.
Anyhow, I don't know why it got funny, but I wanted to record the event for posterity. I'm back to OAFing now.
6626 | Tue May 8 17:48:50 2012 | Jenne | Update | CDS | OAF model not seeing MCL correctly
Den noticed this, and will write more later, I just wanted to sum up what Alex said / did while he was here a few minutes ago....
Errors are probably really happening.... c1oaf computer status 4-bit thing: GRGG. The Red bit is indicating receiving errors. Probably the oaf model is doing a sample-and-hold thing, sampling every time (~1 or 2 times per sec) it gets a successful receive, and then holding that value until it gets another successful receive.
Den is adding EPICS channels to record the ERR out of the PCIE dolphin memory CDS_PART, so that we can see what the error is, not just that one happened.
Alex restarted oaf model: sudo rmmod c1oaf.ko, sudo insmod c1oaf.ko . Clicked "diag reset" on oaf cds screen several times, nothing changed. Restarted c1oaf again, same rmmod, insmod commands.
Den, Alex and I went into the IFO room, and looked at the LSC computer, SUS computer, SUS I/O chassis, LSC I/O chassis and the dolphin switch that is on the back of the rack, behind the SUS IO chassis. All were blinking happily, none showed symptoms of errors.
Alex restarted the IOP process: sudo rmmod c1x04, sudo insmod c1x04. Chans on dataviewer still bad, so this didn't help, i.e. it wasn't just a synchronization problem. oaf status: RRGG. lsc status: RGGG. ass status: RGGG.
sudo insmod c1lsc.ko, sudo insmod c1ass.ko, sudo insmod c1oaf.ko . oaf status: GRGG. lsc status: GGGG. ass status: GGGG. This means probably lsc needs to send something to oaf, so that works now that lsc is restarted, although oaf still not receiving happily.
Alex left to go talk to Rolf again, because he's still confused.
Comment, while writing elog later: c1rfm status is RRRG, c1sus status is RRGG, c1oaf status is GRGG, both c1scy and c1scx are RGRG. All others are GGGG.
6632 | Wed May 9 10:46:54 2012 | Den | Update | CDS | OAF model not seeing MCL correctly
Quote: Den noticed this, and will write more later, I just wanted to sum up what Alex said / did while he was here a few minutes ago....
From my point of view, during the rfm -> oaf transmission through Dolphin we lose a significant part of the signal. To check this I've created a MEDM screen to monitor the transmission errors in the OAF model. It shows how many errors occur per second. For the MCL channel this number turned out to be 2046 +/- 1. This makes sense to me as the sampling rate is 2048 Hz => we actually receive only 1-3 data points per second. We can see this in the dataviewer.
C1:OAF-MCL_IN follows C1:IOO-MC_F in the sense that the scales of the 2 signals are the same in 2 states: MC locked and unlocked. It seems that we lose 2046 out of 2048 points per second.
2012 | Mon Sep 28 11:52:23 2009 | Jenne | Update | Treasure | OAF screen added to the screenshots webpage
I used Kakeru's instructions in elog 1221 to add the C1OAF screen (still called C1ASS_TOP) to the medm screenshots webpage. The tricky part of this is figuring out that the file that needs editing is in fact in /cvs/cds/projects/statScreen, not /cvs/cds/caltech/statScreen, as claimed in the entry.
1985 | Fri Sep 11 17:11:15 2009 | Sanjit | Update | ASS | OAF: progress made
[Jenne & Sanjit]
Good news: We could successfully send filtered output to MC1 @ SUS.
We used 7 channels (different combinations of the 3 seismometer and 6 accelerometer channels).
We tried some values of \mu (0.001-0.005) & gain on SUS_MC1_POSITION:MCL and C1ASS_TOP_SUS_MC1 (0.1-1).
C1:ASS-TOP_SUS_MC1_INMON is huge (soon goes up to a few times 10000), so ~0.1 gains at two places bring it down to a reasonable value.
Bad news: no difference between reference and filtered IOO-MC_L power spectra so far.
Plan of action: figure out the right values of the parameters (\mu, \tau, different gains, and may be some delays), to make some improvement to the spectra.
** Rana: there's no reason to adjust any of the MCL gains. We are not supposed to be a part of the adaptive algorithm.
10708 | Thu Nov 13 01:03:28 2014 | rana, jenne | Update | SUS | OL updates on ITMs and ETMs
We copied the new SRM filters over onto the OL banks for the ITMs and ETMs. We then adjusted the gain to be 3x lower than the gain at which it has a high frequency oscillation. This is the same recipe used for the SRM OL tuning.
Before this tune up, we also set the damping gains of the 4 arm cavity mirrors to give step response Q's of ~5 for all DOF and ~7-10 for SIDE.
12568 | Tue Oct 18 18:56:57 2016 | gautam | Update | General | OM5 rotated to bypass OMC, green scatter is from window to PSL table
[ericq, lydia, gautam]
• We started today by checking leveling of ITMY table, all was okay on that front after the adjustment done yesterday. Before closing up, we will have detailed pictures of the current in vacuum layout
• We then checked centering on OMs 1 and 2 (after having dither aligned the arms), nothing had drifted significantly from yesterday and we are still well centered on both these OMs
• We then moved to the BS/PRM chamber and checked the leveling, even though nothing was touched on this table. Like in the OMC chamber, it is difficult to check the leveling here because of layout constraints, but I verified that the table was pretty close to being level using the small (clean) spirit level in two perpendicular directions
• Beam centering was checked on OMs 3 and 4 and verified to be okay. Clearance of beam from OM4 towards the OMC chamber was checked at two potential clipping points - near the green steering mirror and near tip-tilt 2. Clearance at both locations was deemed satisfactory so we moved onto the OMC chamber
• We decided to go ahead and rotate OM5 to send the beam directly to OM6 and bypass the partially transmissive mirror meant to send part of the AS beam to the OMC
• In order to accommodate the new path, I had to remove a razor beam dump on the OMC setup, and translate OM5 back a little (see Attachment #1), but we have tried to maintain ~45 degree AOI on both OMs 5 and 6
• Beam was centered on OM6 by adjusting the position of OM5. We initially fiddled around with the pitch and yaw knobs of OM4 to try and center the beam on OM5, but it was decided that it was better just to move OM5 rather than mess around on the BS/PRM chamber and introduce potential additional scatter/clipping
• OMC table leveling was checked and verified to not have been significantly affected by todays work
• It was necessary to loosen the fork and rotate OM6 to extract the AS beam from the vacuum chambers onto the AP table
• AS beam is now on the camera, and looks nice and round, no evidence of any clipping. Some centering on in air lenses and mirrors on the AP table remains to be done. We are now pretty well centered on all 6 OMs and should have more power at the AS port given that we are now getting light previously routed to the OMC out as well. A quantitative measure of how much more light we have now will have to be done after pumping down and turning the PSL power back up
• I didn't see any evidence of back-scattered light from the window even though there were hints of this previously (sadly the same can't be said about the green). I will check once again tomorrow, but this doesn't look like a major problem at the moment
Lydia and I investigated the extra green beam situation. Here are our findings.
1. There appear to be 3 ghost beams in addition to the main beam. These ghosts appeared when we locked the X green and Y green individually, which led us to conclude that whatever is causing this behaviour is located downstream of the periscope on the BS/PRM chamber
2. I then went into the BS/PRM chamber and investigated the spot on the lower periscope mirror. It isn't perfectly centered, but it isn't close to clipping on any edge, and the beam leaving the upper mirror on the periscope looks clean as well (only the X-arm green was used for this, and subsequent checks). The periscope mirror looks a bit dusty and scatters rather a lot which isn't ideal...
3. There are two steering mirrors on the IMC table which we do not have access to during this vent. But I looked at the beam coming into the OMC chamber and it looks fine; no ghosts are visible when letting the main beam pass through a hole in one of our large clean IR viewing cards - and the angular separation of these ghosts seen on the PSL table suggests that, if they existed prior to the OMC chamber, we would see them on the card...
4. The beam hits the final steering mirror which sends it out onto the PSL table on the OMC chamber cleanly - the spot leaving the mirror looks clean. However, there are two reflections from the two surfaces of the window that come back into the OMC chamber. Space constraints did not permit me to check what surfaces these scatter off and make it back out to the PSL table as ghosts, but this can be checked again tomorrow.
I can't think of an easy fix for this - the layout on the OMC chamber is pretty crowded, and potential places to install a beam dump are close to the AS and IMC REFL beam paths (see Attachment #1). Perhaps Steve can suggest the best, least invasive way to do this. I will also try and nail down more accurately the origin of these spots tomorrow.
Light doors are back on for the night. I re-ran the dithers, and centered the oplevs for all the test-masses + BS. I am leaving the PSL shutter closed for the night
Attachment 1: OMCchamber.pdf
Attachment 2: greenGhosts.JPG
Attachment 3: IMG_3322.JPG
Attachment 4: IMG_3326.JPG
19 | Fri Oct 26 17:34:43 2007 | waldman | Other | OMC | OMC + earthquake stops
[Chub, chris, Pinkesh, Sam]
Last night we hung the OMC for the first time and came up with a bunch of pictures and some problems. Today we address some of the problems and, of course, make new problems. We replaced the flat slotted disks with the fitted slotted disks that are made to fit into the counterbore of the breadboard. This changed the balance slightly and required a more symmetric distribution of mass. It probably did not change the total mass very much. We did find that the amount of cable hanging down strongly affected the breadboard balance and may also have contributed to the changing balance.
We also attached earthquake stops and ran into a few problems:
• The bottom plate of the EQ stops is too thick so that it bumps into the tombstones
• The vertical member on the "waist" EQ stops is too close to the breadboard, possibly interfering with the REFL beam
• The "waist" EQ stops are made from a thin plate that doesn't have enough thickness to mount helicoils in
• Helicoils weren't loaded in the correct bottom EQ stops
• The DCPD cable loops over the end EQ stop looking nasty but not actually making contact
However, with a little bit of jimmying, the EQ stops are arrayed at all points within a few mm of the breadboard. Meanwhile, Chub has cabled up all the satellite modules and DCPD modules and Pinkesh is working on getting data into the digital system so we can start playing games. Tonight, I intend to mount a laser in Rana's lab and fiber couple a beam into the 056 room so we can start testing the suspended OMC.
99 | Wed Nov 14 07:48:38 2007 | norna | Omnistructure | OMC | OMC Cable dressing
[Snipped from an email]
1) Last Friday Pinkesh and I set the OMC up with only the top three OSEMs and took a vertical transfer function. We had removed the other OSEMs due to difficulty of aligning all OSEMs with the weight of the bench etc. bringing the top mass lower than the tablecloth can accommodate. See attached TF. Clearly there are extra peaks (we only expect two with a zero in between) and my belief is that at least some of them are coupling of other degrees of freedom caused by the electrical wiring. Pinkesh and I also noticed the difficulty of maintaining alignment if cables got touched and moved around. So.....
2) Yesterday Dennis and I took a look at how much moving a cable bundle around (with the PEEK shielding) changed the DC alignment. In a not too precise experiment (using a HeNe laser reflecting off the bench onto a surface ~ 1 metre away) we saw that we could reposition the beam one or two mm in yaw and pitch. This corresponds to ~ one or two mrad, which is ~ the range of the OSEM DC alignment. We discussed the possibility of removing the cabling from the middle mass, removing the PEEK and taking it from the bench directly to the structure above. I asked Chub if he could make an equivalent bundle of wires as those from the two preamps to see what happens if we repeat the "moving bundle" experiment. So...
3) Today Chub removed the cabling going to the preamps and we replaced it with a mock-up of the wire bundle going directly from the preamps to the structure above. See attached picture. The wires are only attached to the preamp boxes weighted down with masses but the bundle is clamped at the top. We repeated the "wiggle the bundle" test and couldn't see any apparent movement (so maybe it is at most sub-mm). The cable bundle feels softer.
The next thing Chub did was to remove the second bundle (from photodiodes, heater, pzt) from its attachment to the middle mass and strip off the PEEK. It is now also going to the top of the structure directly. The whole suspension now appears freer. We discussed with Dennis the "dressing" of the wires. There are some minor difficulties about how to take wires from the bright side to the dark side, but in general it looks like the wires forming the second "bundle" could be brought to the "terminal block" mounted on the dark side and from there looped up to the top of the structure. We would have to try all this of course to see the wiring doesn't get in the way of other things (e.g. the L and R OSEMs). However this might be the way forward. So...
4) Tomorrow Pinkesh and I will check the alignment and then repeat the vertical transfer function measurement with the two bundles as they are going from bench to top of structure. We might even do a horizontal one if the middle mass is now within range of the tablecloth.
We can then remove preamp cables completely and lay the second bundle of cables on the optical bench and repeat the TFs.
The next thing will be to weigh the bench plus cables. This will allow us to
a) work out what counterbalance weights are needed - and then get them manufactured
b) firm up on how to handle the extra mass in terms of getting the masses at the correct height.
And in parallel Chub will work on the revised layout of cabling.
Looking a little further ahead, we can also get some stiffness measurements made on the revised bundle design (using Bob's method, which Alejandro also used) and fold them into Dennis's model to get a sanity check on the isolation.
I think that's it for now. Comments etc are of course welcome.
Norna
Attachment 1: OMC-11-13-07_011.jpg
Attachment 2: VerticalTrans.pdf
2220 | Mon Nov 9 18:27:30 2009 | Alberto | Frogs | Computers | OMC DCPD Interface Box Disconnected from the Power Supply
This afternoon I inadvertently disconnected one of the power cables coming from the power supply on the floor next to the OMC cabinet and going to the DCPD Interface Box.
Rob reconnected the cable as it was before.
14120 | Tue Jul 31 22:50:18 2018 | aaron | Update | OMC | OMC Expected Refl Signal
I learned a lot about lasers this week from Siegman. Here are some plots that show the expected reflectivity off of the OMC for various mode matching cases.
The main equation to know is 11.29 in Siegman, the total reflection coefficient going into the cavity:
$R=r-\frac{t^2}{r}\frac{g(\omega)}{1-g(\omega)}$
Where r is the mirror reflectivity (assumed all mirrors have the same reflectivity), t is the transmissivity, and g is the complex round-trip gain, eq 11.18
$g(\omega)=r_1r_2(r_3...)e^{-i\phi}e^{-\alpha_0p}$
The second exponential is the loss; in Siegman the \alpha_0 is some absorption coefficient and p is the total round trip length, so the product is just the total loss in a round trip, which I take to be 4*the loss on a single optic (50 ppm each). \phi is the total round trip phase accumulation, which is 2\pi*detuning(Hz)/FSR. The parameters for the cavity can be found on the wiki.
I've added the ipynb to my personal git, but I can put it elsewhere if there is somewhere more appropriate. I think this is all OK, but let me know if something is not quite right.
Attachment 1: omcRefl.pdf
2221 Mon Nov 9 18:32:38 2009 robUpdateComputersOMC FE hosed
It won't start--it just sits at Waiting for EPICS BURT, even though the EPICS is running and BURTed.
[controls@c1omc c1omc]$ sudo ./omcfe.rtl
cpu clock 2388127
Initializing PCI Modules
3 PCI cards found
***************************************************************************
1 ADC cards found
ADC 0 is a GSC_16AI64SSA module
  Channels = 64
  Firmware Rev = 3
***************************************************************************
1 DAC cards found
DAC 0 is a GSC_16AO16 module
  Channels = 16
  Filters = None
  Output Type = Differential
  Firmware Rev = 1
***************************************************************************
0 DIO cards found
***************************************************************************
1 RFM cards found
RFM 160 is a VMIC_5565 module with Node ID 130
***************************************************************************
Initializing space for daqLib buffers
Initializing Network
Waiting for EPICS BURT
2222 Mon Nov 9 19:04:23 2009 robUpdateComputersOMC FE hosed
Quote: It won't start--it just sits at Waiting for EPICS BURT, even though the EPICS is running and BURTed. [console output repeated above]
From looking at the recorded data, it looks like the c1omc started going funny on the afternoon of Nov 5th, perhaps as a side-effect of the Megatron hijinks last week.
It works when megatron is shutdown.
2224 Mon Nov 9 19:44:38 2009 rob, ranaUpdateComputersOMC FE hosed
We found that someone had set the name of megatron to scipe11. This is the same name as the existing c1aux in the op440m /etc/hosts file.
We did a /sbin/shutdown on megatron and the OMC now boots.
Please: check to see that things are working right after playing with megatron or else this will sabotage the DR locking and diagnostics.
14365 Tue Dec 18 18:13:32 2018 aaronUpdateOMCOMC L HV Piezo driver tests (again)
I tested the OMC-L HV driver box again, and made the following observations:
• Drove the HV diff pins (2,7) with a 5V triangle wave
• Observed that with a ~0.4V offset on the drive, the HV output (measured directly with a 10x probe) has a 0-(almost)200V triangle wave (for 200V HV in), saturating near 200V and near 0V somewhat before reaching the full range of the triangle
• The HV mon gives the same answer as measuring the HV output directly, and is reduced 100x compared to the HV output.
• At 1Hz and above, the rolloff of the low pass still attenuates the drive a bit, and we don't reach the full range.
• Drove the HV dither pins (1,6) with a 100mV to 10V triangle wave, around 15kHz
• Even at 10V, the dithering is near the noise of the mon channel, so while I could see a slight peak changing on the FFT near the dither frequency, I couldn't directly observe this on a scope using the mon channel
• However, measuring the HV directly I do see the dither applied on top of the HV signal. The amplitude of the dither is the same on the HV output as on the dither drive.
[gautam, aaron]
We searched for blips while nominally scanning the OMC length.
We sent a 0.1Hz, 10Vpp triangle wave to the OMC piezo drive diff channels, so the piezo length is seeing a slow triangle wave from 0-200V.
Then, we applied a ~15kHz dither to the OMC length. This dither is added directly onto the HV signal, so the amplitude of the dither at the OMC is the same as the amplitude of the dither into the HV driver.
We monitored the OMC REFL signal (where we saw no blips yesterday) and mixed it with the 15kHz dither signal to get an error signal. Gautam found a Pomona box with a low pass filter, so we also low-passed the mixed signal to get rid of some unidentified high frequency noise we were seeing (possibly a ground loop at the function generator? It was present with the box off, but gone with the AC line unplugged). [So we made our own lock-in amplifier.] Photo attached.
We tested the transfer function of the LP and, finding its corner at 100kHz rather than the advertised 10kHz, we opened the box, removed a resistor to move the 3dB point back to 10kHz, and confirmed this by measuring the TF.
We didn't see flashes of error signal in the mixed reflection either, so we suspect that either the PZT is not actuating on the OMC or the alignment is bad. Based on what appears to be the shimmering of far-misaligned fringes on the AS camera, Aaron's suspicion from aligning the cavity with the card, and the lack of flashes, we suspect the alignment. To avoid being stymied by a malfunctioning PZT, we can scan the laser frequency next time rather than the PZT length.
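A toy numerical version of that homemade lock-in (the signal model, sample rate, noise level, and averaging window are all made-up illustration values, not the actual measurement parameters):

```python
import numpy as np

# Toy model of the lock-in scheme described above. The signal model, sample
# rate, noise level, and averaging window are made-up illustration values,
# not the actual measurement parameters.
rng = np.random.default_rng(0)
fs = 262144                        # sample rate, Hz (assumed)
f_dither = 15e3                    # dither frequency, Hz
t = np.arange(0, 0.1, 1 / fs)

# Fake REFL signal: a component at the dither frequency whose amplitude
# plays the role of the length error signal, plus broadband noise.
error = 0.3
refl = error * np.sin(2 * np.pi * f_dither * t) + 0.05 * rng.standard_normal(t.size)

reference = np.sin(2 * np.pi * f_dither * t)
mixed = refl * reference                       # the mixer

# A moving average stands in for the analog low pass (the Pomona box).
N = 1024
demod = np.convolve(mixed, np.ones(N) / N, mode="same")
# In the quiet middle of the record, demod sits near error/2 = 0.15.
```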
Attachment 1: IMG_4576_copy.jpg
8849 Mon Jul 15 16:44:46 2013 AlexUpdateOMCOMC North Safety
[Eric Alex]
We are planning on testing our laser module soon, so we have added aluminum foil and a safety announcement to the door of OMC North. The safety announcement is as pictured in the attachment.
Attachment 1: photo_2_(1).JPG
60 Sun Nov 4 23:22:50 2007 waldmanUpdateOMCOMC PZT and driver response functions
I wrote a big long elog and then my browser hung up, so you get a less detailed entry. I used Pinkesh's calibration of the PZT (0.9 V/nm) to calibrate the PDH error signal, then took the following data on the PZT and PZT driver response functions:
• Figure 1: PZT dither path. Most of the features in this plot are understood: there is a 2kHz high pass filter in the PZT drive, which is otherwise flat. The resonance features above 5 kHz are believed to be the tombstones. I don't understand the extra motion from 1-2 kHz.
• Figure 2: PZT dither path zoom in. Since I want to dither the PZT to get an error signal, it helps to know where to dither. The ADC anti-aliasing filter is a 3rd order Butterworth at 10 kHz, so I looked for nice flat places below 10 kHz and settled on 8 kHz as relatively harmless.
• Figure 3: PZT LSC path. This path has a 1^2:10^2 de-whitening stage in the hardware which hasn't been digitally compensated for. You can see its effect between 10 and 40 Hz. The LSC path also has a 160 Hz low pass, which is visible causing a 1/f slope between 200 and 500 Hz. I have no idea what the 1 kHz resonant feature is, though I am inclined to point to the PDH loop since that is pretty close to the UGF and there is much gain peaking at that frequency.
Attachment 1: 071103DitherShape.png
Attachment 2: 071103DitherZoom.png
Attachment 3: 071103LSCShape.png
Attachment 4: 071103DitherShape.pdf
Attachment 5: 071103DitherZoom.pdf
Attachment 6: 071103LSCShape.pdf
Attachment 7: 071103LoopShape.pdf
5 Fri Oct 19 16:11:38 2007 pkpOtherOMCOMC PZT response
Sam and I locked the laser to the OMC cavity and looked at the error signal as a function of the voltage applied to the OMC PZT.
Here are two plots showing the response as a function of frequency from 1 kHz to 100 kHz and another high-res response in the region of 4.5 kHz to 10 kHz.
Attachment 1: fullspec.jpg
Attachment 2: 4.5to10.jpg
Attachment 3: 4.5to10.pdf
Attachment 4: fullres.pdf
6 Sat Oct 20 11:54:13 2007 waldmanOtherOMCOMC and OMC-SUS work
[Rich, Chub, Pinkesh, Chris, Sam]
Friday the 18th was a busy day in OMC land. Both DCPDs were mounted to the glass breadboard and the OMC-SUS structure was rebuilt to the point that an aluminum dummy mass is hanging, unbalanced. The OSEMs have not been put on the table cloth yet, but everything is hanging free. As for the DCPDs, if you recall, one beam is 3mm off center from the DCPD tombstone. Fortunately, one DCPD is nearly 3mm off center from the case in the right direction, so the errors nearly cancel. The DCPD is too high, so the beam isn't quite centered, but they're close. We'll get photos of the beam positions someday. Also, the DC gain between the two PDs is, at first glance, different by 15%: DCPD1, the one seen in transmission, has 315 mV of signal while DCPD2 has 280 mV. Not sure why; it could be beam alignment, tolerances in the preamp, the angle of incidence on the diode, or the QE of the diodes. The glass cans have *not* been removed.
14819 Wed Jul 31 09:41:12 2019 gautamUpdateBHDOMC cavity geometry
Summary:
We need to determine the geometry (= round-trip length and RoC of curved mirrors) of the OMC cavities for the 40m BHD experiment. Sticking to the aLIGO design of a 4-mirror bowtie cavity with 2 flat mirrors and 2 curved mirrors, with a ~4deg angle of incidence, we need to modify the parameters for the 40m slightly on account of our different modulation frequencies. I've set up some infrastructure to do this analytically - even if we end up doing this with Finesse, it is useful to have an analytic calculation to validate against (also not sure if Finesse can calculate HOMs up to order 20 in a reasonable time, I've only seen maxtem 8).
Attachment #1: Heatmap of the OMC transmission for the following fields:
• Carrier TEM00 is excluded, but HOMs up to m+n=20 included for both the horizontal and vertical modes of the cavity.
• f1 and f2 upper and lower sidebands, up to m+n=20 HOMs for both the horizontal and vertical modes of the cavity, including TEM00.
• Power law decay assumed for the HOM content incident on the OMC - this will need to be refined
• The white region is where the cavity isn't geometrically stable.
• Green dashed line indicates a possible operating point, white dashed line indicates the aLIGO OMC operating point. On the basis of this modeling, we would benefit from choosing a better operating point than the aLIGO OMC geometric parameters.
Algorithm:
1. Compute the round-trip Gouy phase, $\phi_{\mathrm{gouy}}$, for the cavity.
2. With the carrier TEM00 mode resonant, compute the round-trip propagation phase, $\phi_{\mathrm{prop}} = \frac{2 \pi f_{\mathrm{offset}} L_{\mathrm{rt}}}{c}$, and the round-trip Gouy phase, $\phi_{\mathrm{G}} = (m+n)\phi_{\mathrm{gouy}}$ for the $\mathrm{TEM}_{mn}$ mode of the field, with $f_{\mathrm{offset}}$ specifying the offset from the carrier frequency (positive for the upper sideband, negative for the lower sideband). For the aLIGO cavity geometry, the 40m modulation sidebands acquire ~20% more propagation phase than the aLIGO modulation sidebands.
3. Compute the OMC transmission for this round-trip phase (propagation + Gouy).
4. Multiply the incident mode power (depending on the power law model assumed) by the cavity transmission.
5. Sum all the fields.
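A minimal numerical sketch of steps 1-5 for a single field, using the standard Airy transmission; all parameter values (round-trip length, finesse, Gouy phase, mode power law, sideband frequency) are placeholders, not the design values:

```python
import numpy as np

# Numerical sketch of steps 1-5 for a single field, using the standard Airy
# transmission for a cavity of finesse F. All parameter values below are
# placeholder assumptions, not the actual design values.
c = 299792458.0
L_rt = 1.13                      # round-trip length, m (assumed)
finesse = 400                    # cavity finesse (assumed)
gouy = 0.25 * 2 * np.pi          # round-trip Gouy phase per mode order, rad (assumed)

def transmission(phi):
    """Cavity power transmission vs. total round-trip phase (Airy function)."""
    return 1.0 / (1.0 + (2 * finesse / np.pi) ** 2 * np.sin(phi / 2) ** 2)

def mode_transmission(f_offset, m, n):
    phi_prop = 2 * np.pi * f_offset * L_rt / c       # step 2: propagation phase
    phi_gouy = (m + n) * gouy                        # step 2: Gouy phase
    return transmission(phi_prop + phi_gouy)         # step 3

# Steps 4-5: weight each HOM by an assumed power-law content and sum.
f_offset = 55e6                                      # sideband offset, Hz (assumed)
total = sum(0.1 ** (m + n) * mode_transmission(f_offset, m, n)
            for m in range(21) for n in range(21))
```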
Next steps:
1. Refine the incident mode content (and power) assumption. Right now, I have not accounted for the fact that the f2 sideband is resonant inside the SRC while the f1 sideband is not. Can we somehow measure this for the 40m? I don't see an easy way as it's probably power dependent?
2. Make plots for the projection along the slices indicated by the dashed lines - which HOMs are close to resonating? Might give us some insight.
3. What is the requirement on transmitted power w.r.t. shot noise? i.e. the colorbar needs to be translated to dBVac.
4. If we were being really fancy, we could simultaneously also optimize for the cavity finesse and angle of incidence as well.
5. Question for Koji: how is the aLIGO OMC angle of incidence of ~4 degrees chosen? Presumably we want it to be as small as possible to minimize astigmatism, and also, we want the geometric layout on the OMC breadboard to be easy to work with, but was there a quantitative metric? Koji points out that the backscatter is also expected to get worse with smaller angles of incidence.
The code used for the ABCD matrix calcs have been uploaded to the BHD modeling GIT (but not the one for making this plot, yet, I need to clean it up a bit). Some design considerations have also been added to our laundry list on the 40m wiki.
Attachment 1: paramSpaceHeatMap.pdf
14821 Wed Jul 31 17:57:35 2019 KojiUpdateBHDOMC cavity geometry
4 deg is not a number optimized against some criteria; it was chosen to keep the cavity width short, at 0.1 m. The justification of 4 deg is found in Sections 3 and 4 of T1000276, on page 4.
Quote: Question for Koji: how is the aLIGO OMC angle of incidence of ~4 degrees chosen? Presumably we want it to be as small as possible to minimize astigmatism, and also, we want the geometric layout on the OMC breadboard to be easy to work with, but was there a quantitative metric? Koji points out that the backscatter is also expected to get worse with smaller angles of incidence.
14854 Fri Aug 23 10:01:14 2019 gautamUpdateBHDOMC cavity geometry - some more modeling
Summary:
I did some more investigation of what the appropriate cavity geometry would be for the OMC. Unsurprisingly, depending on the incident mode content, the preferred operating point changes. So how do we choose what the "correct" model is? Is it accurate to model the output beam HOM content from NPROs (is this purely determined by the geometry of the lasing cavity?), which we can then propagate through the PMC, IMC, and CARM cavities? This modeling will be written up in the design document shortly.
*Colorbar label errata - instead of 1 W on BS, it should read 1 W on PRM. The heatmaps take a while to generate, so I'll fix that in a bit.
Update 230pm PDT: I realize there are some problems with these plots. The critically coupled f2 sideband getting transmitted through the T=10% SRM should have significantly more power than the transmission through a T=100ppm optic. For similar modulation depth (which we have), I think it is indeed true that there will be x1000 more f2 power than f1 power for both the IFO AS beam and the LO pickoff through the PRC. But if the LO is picked off elsewhere, we have to do the numbers again.
Details:
Attachment #1: Two candidate models. The first follows the power law assumption of G1201111, while in the second, I preserved the same scaling, but for the f1 sideband, I set the DC level by assuming a PRG of 45, modulation depth of 0.18, and 100 ppm pickoff from the PRC such that we get 50 mW of carrier light (to act as a local oscillator) for 10 W incident on the back of PRM. Is this a reasonable assumption?
Attachment #2: Heatmaps of the OMC transmission, assuming (i) 0 contrast defect light in the carrier TEM00 mode, (ii) PRG=45 and (iii) 1 W incident on the back of PRM. The color bar limits are preserved for both plots, so the "dark" areas of the plot, which indicate candidate operating points, are darker in the left-hand plot. Obviously, when there is more f1 power incident on the OMC, more of it is transmitted. But my point is that the "best operating point(s)" in both plots are different.
Why is this model refinement necessary? In the aLIGO OMC design, an assumption was made that the light level of the f1 sideband is 1/1000th that of the f2 sideband in the interferometer AS beam. This is justified as the RC lengths are chosen such that the f2 sideband is critically coupled to the AS port, but the f1 is not (it is not quite anti-resonant either). For the BHD application, this assumption is no longer true, as long as the LO beam is picked off after the RF sidebands are applied. There will be significant f1 content as well, and so the mode content of the f1 field is critical in determining the OMC filtering performance.
Attachment 1: modeContentComparison.pdf
Attachment 2: OMCtransComparison.pdf
14350 Thu Dec 13 10:03:07 2018 ChubUpdateGeneralOMC chamber
Bob, Aaron, and I removed the door from the OMC chamber this morning. Everything went well.
14332 Thu Dec 6 11:16:28 2018 aaronUpdateOMCOMC channels
I need to hookup +/- 24 V supplies to the OMC whitening/dewhitening boxes that have been added to 1X2.
There are trailing +24V fuse slots, so I will extend that row to leave the same number of slots open.
While removing one +24V wire to add to the daisy chain, I let the wire brush an exposed conductor on the ground side, causing a spark. FSS_PCDRIVE and FSS_FAST are at different levels than before this spark. The 24V sorensens have the same currents as before according to the labels. Gautam advised me to remove the final fuse in the daisy chain before adding additional links.
gautam: we peeled off some outdated labels from the Sorensens in 1X1 such that each unit now has only 1 label visible reflecting the voltage and current. Aaron will post a photo after his work.
14338 Mon Dec 10 12:29:05 2018 aaronUpdateOMCOMC channels
I kept having trouble keeping the power LEDs on the dewhitening board 'on'. I did the following:
1. I noticed that the dewhitening board was drawing a lot of current (>500mA), so I initially thought that the indicators were just turning on until I blew the fuse. I couldn't find the electronics diagrams for this board, so I was using analogous boards' diagrams and wasn't sure how much current to expect it to draw. I swapped in 1A fuses (only for the electronics I was adding to the system).
2. Now the +24V indicator on the dewhitening board wasn't turning on, and the -24V supply was alternatively drawing ~500mA and 0mA in a ~1Hz square wave. Thinking I could be dropping voltage along the path to the board, I swapped out the cables leading to the whitening/dewhitening boards with 16AWG (was 18AWG). This didn't seem to help.
3. Since the whitening board seemed to be consistently powered on, I removed the dewhitening board to see if there was a problem with it. Indeed, I'd burned out the +24V supply electronics--two resistors were broken entirely, and the board near the voltage regulator had been visibly heated.
1. I identified that the resistors were 1Ohm, and replaced them (though I couldn't find 1Ohm surface mount resistors). I also replaced the voltage regulator in case it was broken. I couldn't find the exact model, so I replaced the LM2940CT-12 with an LM7812, which I think is the newer 12V regulator.
2. Though this replacement seemed to work when the power board was disconnected from the dewhitening board, connecting to the dewhitening board again resulted in a lot of current draw.
3. I depowered the board and decided to take a different approach (see)
I noticed that the +/-15V currents are slightly higher than the labels, but didn't notice whether they were already different before I began this work.
I also noticed one pair of wires in the area of 1X1 I was working that wasn't attached to power (or anything). I didn't know what it was for, so I've attached a picture.
Attachment 1: 52DE723A-02A4-4C62-879B-7B0070AE8A00.jpeg
Attachment 2: 545E5512-D003-408B-9F00-55F985966A16.jpeg
Attachment 3: DFF34976-CC49-4E4F-BFD1-A197E2072A32.jpeg
14340 Mon Dec 10 19:47:06 2018 aaronUpdateOMCOMC channels
Taking another look at the datasheet, I don't think LM7812 is an appropriate replacement and I think the LM2940CT-12 is supposed to supply 1A, so it's possible the problem actually is on the power board, not on the dewhitening board. The board takes +/- 15V, not +/- 24...
Quote: I identified that the resistors were 1Ohm, and replaced them (though I couldn't find 1Ohm surface mount resistors). I also replaced the voltage regulator in case it was broken. I couldn't find the exact model, so I replaced the LM2940CT-12 with an LM7812, which I think is the newer 12V regulator.
14341 Tue Dec 11 13:42:44 2018 KojiUpdateOMCOMC channels
FYI:
D050368 Anti-Imaging Chassis
https://dcc.ligo.org/LIGO-D050368
https://labcit.ligo.caltech.edu/~tetzel/files/equip
D050368 Adl SUS/SEI Anti-Image filter board
S/N 100-102 Assembled by screaming circuits. Begin testing 4/3/06
S/N xxx Mohana returned it to the shop. No S/N or traveler. Put in shop inventory 4/24/06
S/N 103 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29
S/N 104-106 Rev 01. Returned from Screaming circuits 7/10/06. complete except for C28, C29 Needs DRV-135’s installed
S/N 107-111 Rev 02 (32768 Hz) Back from assembly 7/14/06
S/N 112-113 Rev 03 (65536 Hz) assembled into chassis and waiting for test 1/29/07
S/N 114 Rev 03 (65536 Hz) assembled and ready for test 020507
D050512 RBS Interface Chassis Power Supply Board (Just an entry. There is no file)
https://dcc.ligo.org/LIGO-D050512
RBS Interface Chassis Power Board D050512-00
14342 Tue Dec 11 13:48:04 2018 aaronUpdateOMCOMC channels
Koji gave me some tips on testing this board that I wanted to write down, notes probably a bit intermingled with my thoughts. Thanks Koji, also for the DCC and equipment logging!
• Test the power and AI boards separately with an external supply, ramping the voltage up slowly for each.
• If it seems the AI board is actually drawing too much current, may need to check its TPs for where a problem might be
• If it's really extensive may use an IR camera to see what elements are getting too hot
• Testing in segments will prevent breaking more components
• Check the regulator that I've replaced
• The 1 Ohm resistors may have been acting as extra 1A fuses. I need to make sure the resistors I've used to replace them are rated for >1W, if this is the case.
• Can check the resistance between +-12V and Gnd inputs on the AI board, if there is a short drawing too much current it may show up there.
• The 7812 may be an appropriate regulator, but the input voltage may need to be somewhat higher than with the low drop regulator that was used before.
• I want to double check the diagram on the DCC
14354 Thu Dec 13 22:24:21 2018 aaronUpdateOMCOMC channels
I completed testing of the AI board mentioned above. In addition to the blown fuse, there were two problems:
• There was a large drop of solder splattered on some of the ch 1 ICs, which is why we couldn't maintain any voltage. I removed the solder.
• The +12V wire from the power board to the AI board was loose, so I removed and replaced that crimp connection.
After this, I tested the TF of all channels. For the most part, I found the expected 3rd order ~7500Hz cheby with notches at ~16kHz and 32kHz. However, some of the channels had shallower or deeper notches. By ~32kHz, I was below the resolution on the spectrum analyzer. Perhaps I just have nonideal settings? I'll attach a few representative examples.
I reinstalled the chassis at 1X2, but haven't connected power.
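For comparison with those measurements, here is a sketch of the expected low-pass shape using scipy; the 0.5 dB passband ripple is an assumed value, and the ~16/32 kHz notch stages are not modeled:

```python
import numpy as np
from scipy import signal

# Model of the expected anti-imaging low-pass shape: a 3rd-order Chebyshev
# low pass at ~7.5 kHz. The 0.5 dB passband ripple is an assumed value, and
# the ~16/32 kHz notch stages are not modeled here.
rp = 0.5                                              # passband ripple, dB (assumed)
b, a = signal.cheby1(3, rp, 2 * np.pi * 7500, btype="low", analog=True)

# Evaluate the analog response at a few spot frequencies.
freqs_hz = np.array([100.0, 7500.0, 75000.0])
w, h = signal.freqs(b, a, worN=2 * np.pi * freqs_hz)
mag = np.abs(h)
# mag[0] (passband) is ~1; mag[2] (a decade above the corner) is strongly
# attenuated by the 3rd-order rolloff.
```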
86 Fri Nov 9 00:01:24 2007 waldmanOmnistructureOMCOMC mechanical resonances (Tap tap tappy tap)
[Pinkesh, Aidan, Sam]
We did a tap-tap-tappy-tap test of the OMC to try to find its resonances. We looked at some combination of the PDH error signal and the DCPD signal in a couple of different noise configurations. The data included below shows tapping of the major tombstone objects as well as the breadboard. I don't see any strong evidence of resonances below the very sharp resonance at 1300 Hz (which I interpret as the diving board mode of the breadboard). If I get free, I'll post some plots of the different breadboard resonances you can excite by tapping in different places.
(The "normalized" tapping response is abs(tap - reference)./reference.)
Attachment 1: Fig1.png
Attachment 2: Fig2.png
Attachment 3: Fig4.png
Attachment 4: Fig2.pdf
Attachment 5: Fig1.pdf
Attachment 6: Fig4.pdf
Attachment 7: ResonanceData.zip
14055 Thu Jul 12 11:13:39 2018 gautamUpdateGeneralOMC revival
Aaron and I are going to do the checkout of the OMC electronics outside vacuum today. At some point, we will also want to run a c1omc model to integrate with rtcds. Barring objections, I will set up this model on one of the spare cores on the physical machine c1ioo tomorrow.
14056 Thu Jul 12 12:26:39 2018 aaronUpdateGeneralOMC revival
We found a diagram describing the DC Readout wiring scheme on the wiki page for DC readout (THIS DIAGRAM LIED TO US). The wiring scheme is in D060096 on the old DCC.
Following this scheme for the OMC PZT Driver, we measured the capacitance across pins 1 and 14 on the driver end of the cable nominally going to the PZT (so we measured the capacitance of the cable and PZT) at 0.5nF. Gautam thought this seemed a bit low, and indeed a back of the envelope calculation says that the cable capacitance is enough to explain this entire capacitance.
Gautam has gone in to open up the HV driver box and check that the pinout diagram was correct. We could identify the PZT from Gautam's photos from vent 79, but couldn't tell if the wires were connected, so this may be something to check during the vent.
UPDATE:
Turns out the output was pins 13 and 25, we measured the capacitance again and got 209nF, which makes a lot more sense.
14163 Tue Aug 14 23:14:24 2018 aaronUpdateOMCOMC scanning/aligning script
I made a script to scan the OMC length at each setpoint for the two TTs steering into the OMC. It is currently located on nodus at /users/aaron/OMC/scripts/OMC_lockScan.py.
I haven't tested it and used some ez.write syntax that I hadn't used before, so I'll have to double check it.
My other qualm is that I start with all PZTs set at 0, and step around alternating +/- values on each PZT at the same magnitude (for example, at some value of PZT1_PIT, PZT1_YAW, PZT2_PIT, I'll scan PZT2_YAW=1, then PZT2_YAW=-1, then PZT2_YAW=2). If there's strong hysteresis in the PZTs, this might be a problem.
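For reference, the alternating +/- stepping order described above can be sketched like this, with the actual EPICS writes (ez.write) left out; the channel handling is omitted here:

```python
# Sketch of the alternating +/- stepping order described above, separated
# from the EPICS writes so the ordering itself can be checked. In the real
# script each yielded value would be written to a PZT channel (e.g. via
# ez.write, channel names omitted here) before scanning the OMC length.

def alternating_steps(max_mag):
    """Yield setpoints in the order 0, +1, -1, +2, -2, ... up to +/-max_mag."""
    yield 0
    for k in range(1, max_mag + 1):
        yield k
        yield -k

setpoints = list(alternating_steps(2))   # [0, 1, -1, 2, -2]
```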
14312 Tue Nov 20 20:33:11 2018 aaronUpdateOMCOMC scanning/aligning script
I finished running the cabling for the OMC, which involved running 7x 50ft DB9 cables from the OMC_NORTH rack to the 1X2 rack, laying cables over others on the tray. I tried not to move other cables to the extent I could, and I didn't run the new cables under any old cables. I attach a sketch diagram of where these cables are going, not inclusive of the entire DAC/ADC signal path.
I also had to open up the AA board (D050387, D050374), because it had an IPC connector rather than the DB37 that I needed to connect. The DAC sends signals to a breakout board that is in use (D080302) and had a DB37 output free (though note this carries only 4 DAC channels). I opened up the AA board and it had two IPC 40s connected to an adapter to the final IPC 70 output. I replaced the IPC40 connectors with DB37 breakouts, and made a new slot (I couldn't find a DB37 punch, so this is not great...) on the front panel for one of them, so I can attach it to the breakout board.
I noticed there were many unused wires, so I had to confirm that I had the wiring correct (still haven't confirmed by driving the channels, but will do). There was no DCC for D080302, but I grabbed the diagrams for the whitening boards it was connected to (D020432) and for the AA board I was opening up as well as checked out elog 8814, and I think I got it. I'll confirm this manually and make a diagram if it's not fake news.
Attachment 1: pathwaysketch.pdf
Attachment 2: IMG_0094.JPG
Attachment 3: IMG_0097.JPG
14317 Mon Nov 26 15:43:16 2018 aaronUpdateOMCOMC scanning/aligning script
I've started testing the OMC channels I'll use.
I needed to update the model, because I was getting "Unable to setup testpoint" errors for the DAC channels that I had created earlier, and didn't have any ADC channels yet defined. I attach a screenshot of the new model. I ran
rtcds make c1omc
rtcds install c1omc
rtcds start c1omc.
without errors.
Attachment 1: c1omc.png
2277 Mon Nov 16 17:35:59 2009 kiwamuUpdateLSCOMC-LSC timing get synchronized !
An interesting thing happened in the OMC-LSC timing clock.
### Right now the clock of the OMC and the LSC are completely synchronized.
The trend data is shown below. In the first two measurements (Oct. 27 and Nov. 1), the LSC had a constant retarded time of 3*Ts (~92 usec).
In the last measurement, on Nov. 15, the number of shifts goes to zero, meaning there is no longer any retarded time.
The variance between the two signals also goes to zero, so I conclude the OMC and the LSC are now completely synchronized.
The measurement on Nov. 8 is somehow meaningless; I guess it did not run correctly due to some influence from megatron(?)
2145 Mon Oct 26 18:49:18 2009 kiwamuUpdateLSCOMC-LSC timing issue
According to my measurements, I conclude that the LSC signal is retarded from the OMC signal by a constant delay of 92 usec.
This means there is no timing jitter between them; only a constant time delay exists.
(Timing jitter)
Let's begin with basics.
If you measure the same signal at the OMC side and the LSC side, there can be some time delay between them. It can be described as follows,
where tau_0 is the time delay. If tau_0 is not constant, it shows up as timing-jitter noise.
(method)
I injected a sine wave at 200.03 Hz into OMC-LSC_DRIVE_EXC. Then, using get_data, I measured the signals at 'OMC-LSC_DRIVE_OUT' and 'LSC-DARM_ERR', where the excitation signal comes out.
In the ideal case the two signals are completely identical.
To find the delay, I calculated the difference between these signals using the method described by Waldman, which uses the following expression.
Here tau_s is an artificial delay, which can be adjusted in the offline data.
By shifting tau_s we can easily find the minimum of the RMS, and at that point tau_0 = tau_s.
This is the principle of the method for measuring the delay. In my measurement I set T = 1 sec and repeated the calculation every 1 sec over 1 min.
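The shift-and-minimize search can be sketched numerically like this (the 32 kHz rate matches the text, but the signal model and the 3-sample delay are illustrative; this is not the actual analysis code):

```python
import numpy as np

# Sketch of the delay-finding method above: inject a sine, shift one record
# by an integer number of samples (the artificial delay tau_s), and find the
# shift that minimizes the RMS difference. The 3-sample delay is illustrative.
fs = 32768                         # 32 kHz sampling, Ts = 1/fs
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * 200.03 * t)

omc = sig
lsc = np.roll(sig, 3)              # LSC retarded by 3 samples (3*Ts ~ 92 usec)

def rms_at_shift(a, b, n):
    """RMS of a - (b advanced by n samples)."""
    d = a[: a.size - n] - b[n:]
    return np.sqrt(np.mean(d ** 2))

best = min(range(10), key=lambda n: rms_at_shift(omc, lsc, n))
# best recovers the 3-sample delay; the minimum RMS there is exactly zero.
```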
(results)
The attachment shows the obtained results. The top panel shows the minimum RMS sampled every 1 sec, and the bottom shows the delay in terms of the number of shifts.
One shift corresponds to Ts (= 1/32 kHz). All of the data match a shift of 3, and all of the minimum RMS values are exactly zero.
Therefore I conclude that the LSC signal is retarded from the OMC signal by exactly 3*Ts, with no timing jitter.
Attachment 3: OMC_LSC60sec.png
14 Thu Oct 25 17:52:45 2007 waldmanOtherOMCOMCs with QPDs
[Rich, Chub, Pinkesh, Sam]
Yesterday we got the QPD, OTAS, and PZT cabling harness integrated with the OMC. We found a few things out, not all of them good. The QPDs went on no problem and could be fairly well aligned by hand. We "aligned" them by looking at all four channels of the QPD on the scope and seeing that there is signal. Since the beam is omega = 0.5 mm, this is a reasonable adjustment. We then connected the OTAS connector to the OTAS and found that the heater on the OTAS was bonded on about 30 degrees rotated from its intended position. This rotated the connector into the beam and caused a visible amount of scattering. This wasn't really a disaster until I removed the connector from the heater and broke the heater off of the aluminum parts of the OTAS. Two steps backwards, one step forward. After the OMC, OMC-SUS integration test we will re-bond the heater to the aluminum using VacSeal. In the meantime, the OMC has been moved to Bridge 056 for integration with the OMC-SUS. More on that as we make progress.
11731 Wed Nov 4 16:12:07 2015 ericqUpdateCDSOPTIMUS
There is a new machine on the martian network: 32 cores and 128GB of RAM. Probably this is more useful for intensive number crunching in MATLAB or whatever as opposed to IFO control. I've set up some of the LIGO analysis tools on it as well.
A successor to Megatron, I dub it: OPTIMUS
294 Sat Feb 2 14:11:27 2008 JohnSummaryComputersOPTLEVmaster screen
I changed the layout of the optlev master screen. The old version is /cvs/cds/caltech/medm/old/C1ASC_OPTLEVmaster080202.adl
6375 Wed Mar 7 16:32:09 2012 keikoUpdateLSCOSA
I swapped the OSA at the PSL with the OSA at REFL, because the PSL OSA had a better resolution, so the better one is now at REFL. The ND filter (ND3) which was in the path to the REFL OSA was replaced by two BSs, because it was producing dirty multiple spots after transmission.
6390 Fri Mar 9 10:44:57 2012 steve Update RF System OSA
Optical spectrum analyzers like the one in the Attachment, made by Coherent, Melles Griot / CVI, and Spectral Products, are all discontinued.
The 40m has Coherent model C240 analyzers with controller C251. Their finesse was measured in 2004: sn205408 F302, sn205409 F396.
Jenne borrowed Jan's Melles Griot model 13SAE006; Peter King has the same model. FSR 300 MHz, finesse 200 minimum.
Attachment 1: OSA.pdf
# Proton
Proton is Vespa's search core. Proton maintains disk and memory structures for documents. As the data is dynamic, these structures are periodically optimized by maintenance jobs, and the resource footprint of these background jobs is mainly controlled by the concurrency setting.
## Internal structure
The internal structure of a proton node:
The search node consists of a bucket management system which sends requests to a set of document databases, each of which consists of three sub-databases.
### Bucket management
When the node starts up, it first needs to get an overview of which documents, and hence buckets, it has. Once it knows this, it is in initializing mode: it can handle load, but the distributors do not yet know the bucket metadata for all buckets, and thus cannot know whether buckets are consistent with copies on other nodes. Once metadata for all buckets is known, the content node transitions from the initializing to the up state. As the distributors want quick access to bucket metadata, the node keeps an in-memory bucket database to serve these requests efficiently.
Bucket management implements elasticity support in terms of the SPI. Operations are ordered by priority, and only one operation per bucket can be in flight at a time. Below bucket management sits the persistence engine, which implements the SPI in terms of Vespa search. The persistence engine reads the document type from the document id and dispatches requests to the correct document database.
### Document database
Each document database is responsible for a single document type. It has a component called FeedHandler which takes care of incoming documents, updates, and remove requests. All requests are first written to a transaction log, then handed to the appropriate sub-database, based on the request type.
### Sub-databases
There are three types of sub-databases, each with its own document meta store and document store. The document meta store holds a map from the document id to a local id. This local id is used to address the document in the document store. The document meta store also maintains information on the state of the buckets that are present in the sub-database.
The sub-databases are maintained by the index maintainer. The document distribution changes as the system is resized. When the number of nodes in the system changes, the index maintainer will move documents between the Ready and Not Ready sub-databases to reflect the new distribution. When an entry in the Removed sub-database gets old it is purged. The sub-databases are:
Not Ready Holds the redundant documents that are not searchable, i.e. the not-ready documents. Documents that are not ready are only stored, not indexed. It takes some processing to move them from this state to the ready state.
Ready Maintains an index of all ready documents and keeps them searchable. One of the ready copies is active while the rest are not active:
Active There should always be exactly one active copy of each document in the system, though intermittently there may be more. These documents produce results when queries are evaluated. The ready copies that are not active are indexed but will not produce results. By being indexed, they are ready to take over immediately if the node holding the active copy becomes unavailable. Read more in searchable-copies.
Removed Keeps track of documents that have been removed. The id and timestamp for each document are kept. This information is used when buckets from two nodes are merged. If the removed document exists on another node but with a different timestamp, the most recent entry prevails.
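The timestamp rule used when merging Removed entries from two nodes can be sketched as follows (a minimal illustration with hypothetical names, not Vespa's actual implementation):

```python
def merge_removed_entries(local, remote):
    """Merge two maps of document id -> (timestamp, is_removed).

    For each document present on both nodes, the entry with the
    most recent timestamp prevails, as described above.
    """
    merged = dict(local)
    for doc_id, (ts, removed) in remote.items():
        if doc_id not in merged or ts > merged[doc_id][0]:
            merged[doc_id] = (ts, removed)
    return merged
```

The same rule is applied symmetrically on both nodes, so after a merge both hold identical entries.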
## Transaction log
Content nodes have a transaction log to persist mutating operations. The transaction log persists operations by file append. Having a transaction log simplifies proton's in-memory index structures and enables steady-state high performance, read more below.
All operations are written and synced to the transaction log. This is sequential (not random) IO, but it might impact overall feed performance when running on NAS-attached storage, where the sync operation has a much higher cost than on locally attached storage (e.g. SSD). See sync-transactionlog.
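The append-and-sync pattern can be illustrated with a minimal sketch (a hypothetical class, not proton's implementation; proton's actual log is a binary C++ component):

```python
import json
import os

class TransactionLog:
    """Append-only log: each mutating operation is serialized,
    appended, and fsynced before it is acknowledged."""

    def __init__(self, path):
        self._file = open(path, "a+", encoding="utf-8")

    def append(self, operation):
        self._file.write(json.dumps(operation) + "\n")
        self._file.flush()
        os.fsync(self._file.fileno())  # the costly step on NAS storage

    def replay(self):
        """Re-read all persisted operations, e.g. on restart."""
        self._file.seek(0)
        return [json.loads(line) for line in self._file]
```

Because every append is fsynced before acknowledgement, in-memory index structures can be rebuilt from the log after a crash, which is what allows them to stay simple.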
By default, proton flushes components like attribute vectors and the memory index on shutdown, for quicker startup after scheduled restarts.
## Index
Index fields are string fields, used for text search. Other field types are attributes and summary fields.
Proton stores position information in text indices by default, for proximity relevance - posocc (below). All occurrences of a term are stored in the posting list, along with their positions. This provides superior ranking features, but is somewhat more expensive than storing a single occurrence per document. For most applications it is the correct tradeoff, since most of the cost is usually elsewhere and relevance is valuable.
Applications that only need occurrence information for filtering can use rank: filter to optimize performance, using only boolocc-files (below).
The memory index has a dictionary per index field. This contains all unique words in that field with mapping to posting lists with position information. The position information is used during ranking, see nativeRank for details on how a text match score is calculated.
The disk index stores the content of each index field in separate folders. Each folder contains:
• Dictionary. Files: dictionary.pdat, dictionary.spdat, dictionary.ssdat.
• Compressed posting lists with position information. File: posocc.dat.compressed.
• Posting lists with only occurrence information (bitvector). These are generated for common words. Files: boolocc.bdat, boolocc.idx.
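A per-field dictionary mapping each unique word to a position-aware posting list, as in the memory index described above, might look like this (an illustrative sketch only, not proton's C++ data structures):

```python
from collections import defaultdict

class MemoryIndex:
    """Per-field dictionary: each unique word maps to a posting
    list of doc_id -> [positions] entries."""

    def __init__(self):
        # field -> word -> {doc_id: [positions]}
        self._fields = defaultdict(lambda: defaultdict(dict))

    def index_document(self, doc_id, field, text):
        # Record every occurrence of every word with its position,
        # which is what enables proximity ranking.
        for pos, word in enumerate(text.lower().split()):
            self._fields[field][word].setdefault(doc_id, []).append(pos)

    def postings(self, field, word):
        return self._fields[field].get(word, {})
```

A boolocc-style filter index would store only the set of doc ids per word, dropping the position lists.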
## Retrieving documents
Retrieving documents is done by specifying an id to get, or by using a selection expression to visit a range of documents - refer to the Document API. Overview:
Get When the content node receives a get request, it scans through all the document databases, and for each one it checks all three sub-databases. Once the document is found, the scan stops and the document is returned. If the document is found in a Ready sub-database, the document retriever applies any changes stored in the attributes before returning the document.
Visit A visit request creates an iterator over each candidate bucket. This iterator retrieves matching documents from all sub-databases of all document databases. As for get, attribute values are applied to document fields in the Ready sub-database.
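The scan a get request performs over document databases and sub-databases can be sketched as follows (hypothetical structures, not Vespa's API):

```python
# Sub-databases in the order a scan might visit them.
SUB_DBS = ("ready", "not_ready", "removed")

def get_document(document_databases, doc_id):
    """Scan every document database and each of its three
    sub-databases; stop at the first hit, as described above."""
    for db in document_databases:
        for sub in SUB_DBS:
            doc = db[sub].get(doc_id)
            if doc is not None:
                return doc
    return None
```

The early return is the key property: as soon as any sub-database holds the document, no further databases are scanned.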
## Queries
Queries have a separate pathway through the system: they do not use the distributor, nor do they have anything to do with the SPI. Queries are orthogonal to the elasticity set up by the storage and retrieval described above. How queries move through the system:
A query enters the system through the QR-server (query rewrite server) in the Vespa Container. The QR-server issues one query per document type to the search nodes:
Container The Container knows all the document types and rewrites queries as a collection of queries, one for each type. Queries may have a restrict parameter, in which case the container sends the query only to the specified document types. It sends the query to content nodes and collects partial results. It pings all content nodes every second to know whether they are alive, and keeps open TCP connections to each one. If a node goes down, the elastic system will make the documents available on other nodes. The match engine receives queries and routes them to the right document database based on the document type. The query is passed to the Ready sub-database, where the searchable documents are. Based on information stored in the document meta store, the query is augmented with a blocklist that ensures only active documents are matched.
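The per-document-type fan-out with an optional restrict parameter can be sketched as (a generic illustration, not the container's actual query rewriting):

```python
def fan_out(query, document_types, restrict=None):
    """Rewrite one query into one query per document type,
    honoring an optional restrict list, as described above."""
    targets = [t for t in document_types
               if restrict is None or t in restrict]
    return [{"type": t, "query": query} for t in targets]
```

Each rewritten query is then sent to the content nodes, and the partial results are merged back in the container.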
## Table of Contents
### On the Complexity of Many Faces in Arrangements of Pseudo-Segments and Circles
We obtain improved bounds on the complexity of many faces in an arrangement of pseudo-segments, circles, or unit circles. The bounds are worst-case optimal for unit circles; they are also worst-case optimal for the case of pseudo-segments except when the number of faces is very small, in which case our upper bound is a polylogarithmic factor away from the best-known lower bound. For general circles, the bounds nearly coincide with the best-known bounds for the number of incidences between points and circles.
Pankaj K. Agarwal, Boris Aronov, Micha Sharir
### Polyhedral Cones of Magic Cubes and Squares
Using computational algebraic geometry techniques and Hilbert bases of polyhedral cones we derive explicit formulas and generating functions for the number of magic squares and magic cubes.
Maya Ahmed, Jesús De Loera, Raymond Hemmecke
### Congruent Dudeney Dissections of Triangles and Convex Quadrilaterals – All Hinge Points Interior to the Sides of the Polygons
Let α and β be polygons with the same area. A Dudeney dissection of α to β is a partition of α into parts which can be reassembled to produce β in the following way. Hinge the parts of α like a chain along the perimeter of α, then fix one of the parts to form β with the perimeter of α going into its interior and with its perimeter consisting of the dissection lines in the interior of α, without turning the pieces over. In this paper we discuss a special type of Dudeney dissection of triangles and convex quadrilaterals in which α is congruent to β, and call it a congruent Dudeney dissection. In particular, we consider the case where all hinge points are interior to the sides of the polygons α and β. For this case, we determine all triangles and convex quadrilaterals which have congruent Dudeney dissections.
Jin Akiyama, Gisaku Nakamura
### Computing the Hausdorff Distance of Geometric Patterns and Shapes
A very natural distance measure for comparing shapes and patterns is the Hausdorff distance. In this article we develop algorithms for computing the Hausdorff distance in a very general case in which geometric objects are represented by finite collections of k-dimensional simplices in d-dimensional space. The algorithms are polynomial in the size of the input, assuming d is a constant. In addition, we present more efficient algorithms for special cases like sets of points, or line segments, or triangulated surfaces in three dimensions.
Helmut Alt, Peter Braß, Michael Godau, Christian Knauer, Carola Wenk
### A Sum of Squares Theorem for Visibility Complexes and Applications
We present a new method to implement in constant amortized time the flip operation of the so-called Greedy Flip Algorithm, an optimal algorithm to compute the visibility complex of a collection of pairwise disjoint bounded convex sets of constant complexity (disks). The method uses simple data structures and only the left-turn predicate for disks; it relies, among other things, on a sum of squares like theorem for visibility complexes stated and proved in this paper. (The sum of squares theorem for a simple arrangement of lines states that the average value of the square of the number of vertices of a face of the arrangement is bounded by a constant.)
Pierre Angelier, Michel Pocchiola
### On the Reflexivity of Point Sets
We introduce a new measure for planar point sets S that captures the combinatorial distance of S from being a convex set: the reflexivity ρ(S) of S is given by the smallest number of reflex vertices in a simple polygonalization of S. We prove combinatorial bounds on the reflexivity of point sets and study some closely related quantities, including the convex cover number k_c(S) of a planar point set, which is the smallest number of convex chains that cover S, and the convex partition number k_p(S), which is given by the smallest number of convex chains with pairwise-disjoint convex hulls that cover S.
Esther M. Arkin, Joseph S. B. Mitchell, Sándor P. Fekete, Ferran Hurtado, Marc Noy, Vera Sacristán, Saurabh Sethia
### Geometric Permutations of Large Families of Translates
Let F be a finite family of disjoint translates of a compact convex set K in R2, and let l be an ordered line meeting each of the sets. Then l induces in the obvious way a total order on F. It is known that, up to reversals, at most three different orders can be induced on a given F as l varies. It is also known that the families are of six different types, according to the number of orders and their interrelations. In this paper we study these types closely, focusing on their relations to the given set K, and on what happens as |F| →∞.
Andrei Asinowski, Meir Katchalski, Andreas Holmsen, Helge Tverberg
### Integer Points in Rotating Convex Bodies
Let K be a planar convex body symmetric about the origin. We define P(K) as the probability that τK ∩ Z² ≠ {0}, where τ ∈ SO(2) is a random rotation around the origin and Z² denotes the integer lattice, and we let P(υ) = inf{P(K) : vol(K) = υ}. By Minkowski's theorem, P(υ) = 1 for υ > 4, and P(υ) = 0 for υ < π. We describe the behavior of P(υ) in the intervals [π, π + ε0] and [4 − ε0, 4] for a small positive constant ε0.
Imre Bárány, Jiří Matoušek
### Complex Matroids Phirotopes and Their Realizations in Rank 2
The motivation for this article comes from the desire to link two seemingly incompatible worlds: oriented matroids and dynamic geometry. Oriented matroids have proven to be a perfect tool for dealing with sidedness information in geometric configurations (for instance for the computation of convex hulls). Dynamic geometry deals with elementary geometric constructions in which moving certain free elements controls the motion of constructively dependent elements. In this field the introduction of complex coordinates has turned out to be a "key technology" for achieving a consistent continuous movement of the dependent elements. The additional freedom of an ambient complex space makes it possible to bypass disturbing singularities. Unfortunately, complex coordinates seem to make it impossible to use oriented matroids, which are heavily based on real numbers.
Alexander Below, Vanessa Krummeck, Jürgen Richter-Gebert
### Covering the Sphere by Equal Spherical Balls
We show that for any acute ϕ there exists a covering of S^d by spherical balls of radius ϕ such that no point is covered more than 400d ln d times. It follows that the density is of order at most d ln d, and even at most d ln ln d if the number of balls is polynomial in d. If the number of equal spherical balls is d + 3, then we determine the optimal arrangement. At the end, we describe how ours and other people's results yield estimates for the largest origin-centred Euclidean ball contained in the convex hull of N points chosen from the sphere.
Károly Böröczky, Gergely Wintsche
### Lower Bounds for High Dimensional Nearest Neighbor Search and Related Problems
In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [37, 38, 40] show that in some sense it is possible to avoid the curse of dimensionality for the approximate nearest neighbor search problem. But must the exact nearest neighbor search problem suffer this curse? We provide some evidence in support of the curse. Specifically we investigate the exact nearest neighbor search problem and the related problem of exact partial match within the asymmetric communication model first used by Miltersen [43] to study data structure problems. We derive non-trivial asymptotic lower bounds for the exact problem that stand in contrast to known algorithms for approximate nearest neighbor search.
Allan Borodin, Rafail Ostrovsky, Yuval Rabani
### A Turán-type Extremal Theory of Convex Geometric Graphs
We study Turán-type extremal questions for graphs with an additional cyclic ordering of the vertices, i.e. for convex geometric graphs. If a suitably defined chromatic number of the excluded subgraph is bigger than two, then the results on convex geometric graphs resemble very much the classical results from the Turán theory. On the other hand, in the bipartite case we show some surprising differences, in particular for trees and forests. For example, the Turán function of some convex geometric forests is of the order Θ(n log n), a growth rate that does not occur in the graph Turán theory. We also obtain still another proof of Füredi's O(n log n) bound on the number of unit distances in a convex n-gon, together with a lower bound showing the limits of this model. The exact growth of the Turán function for several infinite classes of convex geometric graphs is also determined.
Peter Brass, Gyula Károlyi, Pavel Valtr
### On the Inapproximability of Polynomial-programming, the Geometry of Stable Sets, and the Power of Relaxation
The present paper introduces the geometric rank as a measure for the quality of relaxations of certain combinatorial optimization problems in the realm of polyhedral combinatorics. In particular, this notion establishes a tight relation between the maximum stable set problem from combinatorial optimization, polynomial programming from integer nonlinear programming, and norm maximization, a basic problem from convex maximization and computational convexity.
Andreas Brieden, Peter Gritzmann
### A Lower Bound on the Complexity of Approximate Nearest-Neighbor Searching on the Hamming Cube
We consider the nearest-neighbor problem over the d-cube: given a collection of points in {0, 1}^d, find the one nearest to a query point (in the L1 sense). We establish a lower bound of Ω(log log d / log log log d) on the worst-case query time. This result holds in the cell probe model with (any amount of) polynomial storage and word size d^{O(1)}. The same lower bound holds for the approximate version of the problem, where the answer may be any point farther than the nearest neighbor by a factor as large as $$2^{\left\lfloor {(\log d)^{1 - \varepsilon } } \right\rfloor }$$, for any fixed ε > 0.
Amit Chakrabarti, Bernard Chazelle, Benjamin Gum, Alexey Lvov
### Detecting Undersampling in Surface Reconstruction
Current surface reconstruction algorithms perform satisfactorily on well sampled, smooth surfaces without boundaries. However, these algorithms face difficulties with undersampling. Cases of undersampling are prevalent in real data since often these data sample a part of the boundary of an object, or are derived from a surface with high curvature or non-smoothness. In this paper we present an algorithm to detect the boundaries where dense sampling stops and undersampling begins. This information can be used to reconstruct surfaces with boundaries, and also to localize small and sharp features where usually undersampling happens. We report the effectiveness of the algorithm with a number of experimental results. Theoretically, we justify the algorithm with some mild assumptions that are valid for most practical data.
Tamal K. Dey, Joachim Giesen
### A Survey of the Hadwiger-Debrunner (p, q)-problem
At the annual meeting of the Swiss Mathematical Society held in September 1956 in Basel, H. Hadwiger presented the following theorem. It was published one year later in a joint paper with H. Debrunner, his colleague at the University of Bern.
Jürgen Eckhoff
### Surface Reconstruction by Wrapping Finite Sets in Space
Given a finite point set in ℝ³, the surface reconstruction problem asks for a surface that passes through many but not necessarily all points. We describe an unambiguous definition of such a surface in geometric and topological terms, and sketch a fast algorithm for constructing it. Our solution overcomes past limitations to special point distributions and heuristic design decisions.
Herbert Edelsbrunner
### Infeasibility of Systems of Halfspaces
An oriented hyperplane is a hyperplane with designated good and bad sides. The infeasibility of a cell in an arrangement $\overrightarrow{\mathcal{A}}$ of oriented hyperplanes is the number of hyperplanes with this cell on the bad side. With MinInf($\overrightarrow{\mathcal{A}}$) we denote the minimum infeasibility of a cell in the arrangement. A subset of hyperplanes of $\overrightarrow{\mathcal{A}}$ is called an infeasible subsystem if every cell in the induced subarrangement has positive infeasibility. With MaxDis($\overrightarrow{\mathcal{A}}$) we denote the maximal number of disjoint infeasible subsystems of $\overrightarrow{\mathcal{A}}$. For every arrangement $\overrightarrow{\mathcal{A}}$ of oriented hyperplanes, MinInf($\overrightarrow{\mathcal{A}}$) ≥ MaxDis($\overrightarrow{\mathcal{A}}$). In this paper we investigate bounds for the ratio of the LHS over the RHS in the above inequality. The main contribution is a detailed discussion of the problem in the case d = 2, i.e., for 2-dimensional arrangements. We prove that MinInf($\overrightarrow{\mathcal{A}}$) ≤ 2·MaxDis($\overrightarrow{\mathcal{A}}$) in this case. An example shows that the factor 2 is best possible. If an arrangement $\overrightarrow{\mathcal{A}}$ of n lines contains a cell of infeasibility n, then the factor can be improved to 3/2, which is again best possible. We also consider the problem for arrangements of pseudolines in the Euclidean plane and show that the factor of 2 suffices in this more general situation.
Stefan Felsner, Nicole Morawe
### Combinatorial Generation of Small Point Configurations and Hyperplane Arrangements
A recent progress on the complete enumeration of oriented matroids enables us to generate all combinatorial types of small point configurations and hyperplane arrangements in general dimension, including degenerate ones. This extends a number of former works which concentrated on the non-degenerate case and are usually limited to dimension 2 or 3. Our initial study on the complete list for small cases has shown its potential in resolving geometric conjectures.
Lukas Finschi, Komei Fukuda
### Relative Closure and the Complexity of Pfaffian Elimination
We introduce the "relative closure" operation on one-parametric families of semi-Pfaffian sets. We show that finite unions of sets obtained with this operation ("limit sets") constitute a structure, i.e., a Boolean algebra closed under projections. Any Pfaffian expression, i.e., an expression with Boolean operations, quantifiers, equations and inequalities between Pfaffian functions, defines a limit set. The structure of limit sets is effectively o-minimal: there is an upper bound on the complexity of a limit set defined by a Pfaffian expression, in terms of the complexities of the expression and the Pfaffian functions in it.
Andrei Gabrielov
### Are Your Polyhedra the Same as My Polyhedra?
"Polyhedron" means different things to different people. There is very little in common between the meaning of the word in topology and in geometry. But even if we confine attention to geometry of the 3-dimensional Euclidean space (as we shall do from now on), "polyhedron" can mean either a solid (as in "Platonic solids", convex polyhedron, and other contexts), or a surface (such as the polyhedral models constructed from cardboard using "nets", which were introduced by Albrecht Dürer [[17]] in 1525, or, in a more modern version, by Aleksandrov [[1]]), or the 1-dimensional complex consisting of points ("vertices") and line-segments ("edges") organized in a suitable way into polygons ("faces") subject to certain restrictions ("skeletal polyhedra", diagrams of which were first presented by Luca Pacioli [[44]] in 1498 and attributed to Leonardo da Vinci). The last alternative is the least usual one, but it is close to what seems to be the most useful approach to the theory of general polyhedra. Indeed, it does not restrict faces to be planar, and it makes it possible to retrieve the other characterizations in circumstances in which they reasonably apply: if the faces of a "surface" polyhedron are simple polygons, in most cases the polyhedron is unambiguously determined by the boundary circuits of the faces. And if the polyhedron itself is without self-intersections, then the "solid" can be found from the faces. These reasons, as well as some others, seem to warrant the choice of our approach.
Branko Grünbaum
### Some Algorithms Arising in the Proof of the Kepler Conjecture
By any account, the 1998 proof of the Kepler conjecture is complex. The thesis underlying this article is that the proof is complex because it is highly under-automated. Throughout that proof, manual procedures are used where automated ones would have been better suited. This paper gives a series of nonlinear optimization algorithms and shows how a systematic application of these algorithms would bring substantial simplifications to the original proof.
Thomas C. Hales
### The Minimal Number of Triangles Needed to Span a Polygon Embedded in ℝ^d
Given a closed polygon P having n edges, embedded in ℝ^d, we give upper and lower bounds for the minimal number of triangles t needed to form a triangulated PL surface embedded in ℝ^d having P as its geometric boundary. More generally we obtain such bounds for a triangulated (locally flat) PL surface having P as its boundary which is immersed in ℝ^d and whose interior is disjoint from P. The most interesting case is dimension 3, where the polygon may be knotted. We use the Seifert surface construction to show that for any polygon embedded in ℝ³ there exists an embedded orientable triangulated PL surface having at most 7n² triangles, whose boundary is a subdivision of P. We complement this with a construction of families of polygons with n vertices for which any such embedded surface requires at least (1/2)n² − O(n) triangles. We also exhibit families of polygons in ℝ³ for which Ω(n²) triangles are required in any immersed PL surface of the above kind. In contrast, in dimension 2 and in dimensions d ≥ 5 there always exists an embedded locally flat PL disk having P as boundary that contains at most n triangles. In dimension 4 there always exists an immersed locally flat PL disk of the above kind that contains at most 3n triangles. An unresolved case is that of embedded PL surfaces in dimension 4, where we establish only an O(n²) upper bound. These results can be viewed as providing qualitative discrete analogues of the isoperimetric inequality for piecewise linear (PL) manifolds. In dimension 3 they imply that the (asymptotic) discrete isoperimetric constant lies between 1/2 and 7.
Joel Hass, Jeffrey C. Lagarias
### Jacobi Decomposition and Eigenvalues of Symmetric Matrices
We show that every n-dimensional orthogonal matrix can be factored into O(n²) Jacobi rotations (also called Givens rotations in the literature). It is well known that the Jacobi method, which constructs the eigen-decomposition of a symmetric matrix through a sequence of Jacobi rotations, is slower than the eigenvalue algorithms currently used in practice, but is capable of computing eigenvalues, particularly tiny ones, to a high relative accuracy. The above decomposition, which to the best of our knowledge is new, shows that the infinite-precision nondeterministic Jacobi method (in which an oracle is invoked to obtain the correct rotation angle in each Jacobi rotation) can construct the eigen-decomposition with O(n²) Jacobi rotations. The complexity of the nondeterministic Jacobi algorithm motivates the efforts to narrow the gap between the complexity of the known Jacobi algorithms and that of the nondeterministic version. Speeding up the Jacobi algorithm while retaining its excellent numerical properties would be of considerable interest. In that direction, we describe, as an example, a variant of the Jacobi method in which the rotation angle for each Jacobi rotation is computed (in closed form) through 1-dimensional optimization. We also show that the computation of the closed-form optimal solution of the 1-dimensional problems guarantees convergence of the new method.
W. He, N. Prabhu
### Discrete Geometry on Red and Blue Points in the Plane — A Survey —
In this paper, we give a short survey on discrete geometry on red and blue points in the plane, most of whose results were obtained in the past decade. We consider balanced subdivision problems, geometric graph problems, graph embedding problems, Gallai-type problems and others.
Atsushi Kaneko, M. Kano
### Configurations with Rational Angles and Trigonometric Diophantine Equations
A subset E of the plane is said to be a configuration with rational angles (CRA) if the angle determined by any three points of E is rational when measured in degrees. We prove that there is a constant C such that whenever a CRA has more than C points, then it can be covered either by a circle and its center or by a pair of points and their bisecting line. The proof is based on the description of all rational solutions of the equation $$\sin \pi p_1 \cdot \sin \pi p_2 \cdot \sin \pi p_3 = \sin \pi q_1 \cdot \sin \pi q_2 \cdot \sin \pi q_3 .$$
M. Laczkovich
### Reconstructing Sets From Interpoint Distances
Which point sets realize a given distance multiset? Interesting cases include the "turnpike problem" where the points lie on a line, the "beltway problem" where the points lie on a loop, and multidimensional versions. We are interested both in the algorithmic problem of determining such point sets for a given collection of distances and in the combinatorial problem of finding bounds on the maximum number of different solutions. These problems have applications in genetics and crystallography. We give an extensive survey and bibliography in an effort to connect the independent efforts of previous researchers, and present many new results. In particular, we give improved combinatorial bounds for the turnpike and beltway problems. We present a pseudo-polynomial time algorithm as well as a practical O(2^n n log n)-time algorithm that find all solutions to the turnpike problem, and show that certain other variants of the problem are NP-hard. We conclude with a list of open problems.
Paul Lemke, Steven S. Skiena, Warren D. Smith
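The turnpike reconstruction described above can be illustrated with the classic largest-distance backtracking sketch (a generic textbook approach, not necessarily the algorithm the authors analyze):

```python
from collections import Counter

def _candidate_need(dists, points, x):
    """Distances a candidate point x would consume against placed points."""
    need = Counter(abs(x - p) for p in points)
    ok = all(dists[d] >= c for d, c in need.items())
    return ok, need

def _place(dists, points, width):
    if not dists:
        return True
    d = max(dists)
    # The largest unexplained distance must be from 0 or from width.
    for x in (d, width - d):
        ok, need = _candidate_need(dists, points, x)
        if ok:
            dists -= need          # consume the distances x explains
            points.add(x)
            if _place(dists, points, width):
                return True
            points.remove(x)       # backtrack
            dists += need
    return False

def turnpike(distances):
    """Reconstruct points on a line from their multiset of pairwise
    distances, or return None if no realization exists."""
    dists = Counter(distances)
    width = max(dists)
    dists -= Counter({width: 1})
    points = {0, width}
    return sorted(points) if _place(dists, points, width) else None
```

The returned set is one realization; its mirror image about width/2 realizes the same distance multiset.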
### Dense Packings of Congruent Circles in Rectangles with a Variable Aspect Ratio
We use computational experiments to find the rectangles of minimum area into which a given number n of non-overlapping congruent circles can be packed. No assumption is made on the shape of the rectangles. Most of the packings found have the usual regular square or hexagonal pattern. However, for 1495 values of n in the tested range n ≤ 5000, specifically for n = 49, 61, 79, 97, 107, …, 4999, we prove that the optimum cannot possibly be achieved by such regular arrangements. The evidence suggests that the limiting height-to-width ratio of rectangles containing an optimal hexagonal packing of circles tends to $$2 - \sqrt 3$$ as n → ∞, if the limit exists.
Boris D. Lubachevsky, Ronald Graham
### Colorings and Homomorphisms of Minor Closed Classes
We relate the acyclic (and star) chromatic number of a graph to the chromatic number of its minors, and as a consequence we show that the set of all triangle-free planar graphs is homomorphism bounded by a triangle-free graph. This solves a problem posed in [[15]]. It also improves the best known bound for the star chromatic number of planar graphs from 80 to 30. Our method generalizes to all minor-closed classes and puts the Hadwiger conjecture in yet another context.
Jaroslav Nešetřil, Patrice Ossona de Mendez
### Conflict-free Colorings
A coloring of the elements of a planar point set P is said to be conflict-free if, for every closed disk D whose intersection with P is nonempty, there is a color that occurs in D ∩ P precisely once. We solve a problem of Even, Lotker, Ron, and Smorodinsky by showing that any conflict-free coloring of every set of n points in the plane uses at least c log n colors, for an absolute constant c > 0. Moreover, the same assertion is true for homothetic copies of any convex body D in place of a disk.
János Pach, Géza Tóth
### New Complexity Bounds for Cylindrical Decompositions of Sub-Pfaffian Sets
The Tarski-Seidenberg principle plays a key role in real algebraic geometry and its applications. It is also constructive, and some efficient quantifier elimination algorithms have appeared recently. However, the principle fails for first-order theories involving certain real analytic functions (e.g., an exponential function). In this case a weaker statement is sometimes true: it is possible to eliminate one sort of quantifiers (either ∀ or ∃). We construct an algorithm for a cylindrical cell decomposition of a closed cube In ⊂ Rn compatible with a semianalytic subset S ⊂ In, defined by analytic functions from a certain broad finitely defined class (Pfaffian functions), modulo an oracle for deciding emptiness of such sets. In particular, the algorithm is able to eliminate one sort of quantifiers from a first-order formula. The complexity of the algorithm and the bounds on the output are doubly exponential in O(n2).
Savvas Pericleous, Nicolai Vorobjov
### Note on the Chromatic Number of the Space
The chromatic number of the space is the minimum number of colors needed to color the points of the space so that every two points unit distance apart have different colors. We show that this number is at most 15, improving the best known previous bound of 18.
### Expansive Motions and the Polytope of Pointed Pseudo-Triangulations
We introduce the polytope of pointed pseudo-triangulations of a point set in the plane, defined as the polytope of infinitesimal expansive motions of the points subject to certain constraints on the increase of their distances. Its 1-skeleton is the graph whose vertices are the pointed pseudo-triangulations of the point set and whose edges are flips of interior pseudo-triangulation edges.
Günter Rote, Francisco Santos, Ileana Streinu
### Some Recent Quantitative and Algorithmic Results in Real Algebraic Geometry
This paper offers a short survey of two topics. In the first section we describe a new bound on the Betti numbers of semi-algebraic sets whose proof relies on the Thom-Milnor bound for algebraic sets and the Mayer-Vietoris long exact sequence. The second section describes the best complexity results and practical implementations currently available for finding real solutions of systems of polynomial equations and inequalities. In both sections, the consideration of critical points will play a key role.
Marie-Françoise Roy
### A Discrete Isoperimetric Inequality and Its Application to Sphere Packings
We consider finite packings of equal spheres in Euclidean 3–space E3. The convex hull of the sphere centers is the packing polytope. In the first part of the paper we prove a tight inequality between the surface area of the packing polytope and the number of sphere centers on its boundary, and investigate in particular the equality cases. The inequality follows from a more general inequality for cell complexes on packing polytopes.
Peter Scholl, Achill Schürmann, Jörg M. Wills
### On the Number of Maximal Regular Simplices Determined by n Points in Rd
A set V = {x1, …, xn} of n distinct points in Euclidean d-space ℝd determines $$\binom{n}{2}$$ distances ∥xj − xi∥ (1 ≤ i < j ≤ n). Some of these distances may be equal. Many questions concerning the distribution of these distances have been asked (and, at least partially, answered). E.g., what is the smallest possible number of distinct distances, as a function of d and n? How often can a particular distance (say, one) occur and, in particular, how often can the largest (resp., the smallest) distance occur?
Zvi Schur, Micha A. Perles, Horst Martini, Yaakov S. Kupitz
### Balanced Lines, Halving Triangles, and the Generalized Lower Bound Theorem
A recent result by Pach and Pinchasi on so-called balanced lines of a finite two-colored point set in the plane is related to other facts on halving triangles in 3-space and to a special case of the Generalized Lower Bound Theorem for convex polytopes.
Micha Sharir, Emo Welzl
### Quantizing Using Lattice Intersections
The usual quantizer based on an n-dimensional lattice Λ maps a point x ∈ ℝn to a closest lattice point. Suppose Λ is the intersection of lattices Λ1, …, Λr. Then one may instead combine the information obtained by simultaneously quantizing x with respect to each of the Λi. This corresponds to decomposing ℝn into a honeycomb of cells which are the intersections of the Voronoi cells for the Λi, and identifying the cell to which x belongs. This paper shows how to write several standard lattices (the face-centered and body-centered cubic lattices, the root lattices D4, E6*, E8, the Coxeter-Todd, Barnes-Wall and Leech lattices, etc.) in a canonical way as intersections of a small number of simpler, decomposable, lattices. The cells of the honeycombs are given explicitly and the mean squared quantizing error calculated in the cases when the intersection lattice is the face-centered or body-centered cubic lattice or the lattice D4.
N. J. A. Sloane, B. Beferull-Lozano
### Note on a Generalization of Roth’s Theorem
We give a simple proof that, for sufficiently large N, every subset of [N]2 of size at least δN2 contains three points of the form (a, b), (a + d, b), (a, b + d).
József Solymosi
### Arrangements, Equivariant Maps and Partitions of Measures by k-Fans
We study topological and combinatorial structures that arise in the problem of finding α-partitions of m spherical measures by k-fans. An elegant construction of I. Bárány and J. Matoušek, [[3]], [[4]], shows that this problem can be reduced to the question whether there exists a G-equivariant map f: V2(ℝ3) → VP \ ∩AP where G is a subgroup of a dihedral group $$\mathbb{D}_{2n}$$ while the target space VP\∩AP is the complement of a G-invariant, linear subspace arrangement AP. We demonstrate that in many cases the relevant topological obstructions for the existence of these equivariant maps can be computed by a variety of geometric, combinatorial and algebraic ideas.
Siniša T. Vrećica, Rade T. Živaljević
### Qualitative Infinite Version of Erdős’ Problem About Empty Polygons
The well-known problem of Erdős-Szekeres in Discrete Geometry is about realizations of convex polygons P with a given number of vertices in a finite set F ⊂ ℝ2, which means that V (P), the vertex set of P, is a subset of F [[5]],[[6]]. See the very good survey [[11]].
Tudor Zamfirescu
|
|
# Herakles
Herakles

Herakles in a Greek vase painting, ca. 460–450 BC

Keeper of the Gates on Mount Olympus
God of heroes, athletes, and divine protector of humankind

Abode: Greece, later Mount Olympus
Symbols: Lion skin, club, bow and arrows
Consort: Hebe, goddess of youth
Parents: Zeus and Alkmene
Siblings: Iphikles (mortal brother)
Children: Therimachus, Creontiades, and Deicoon (by Megara); Hyllus (by Deianeira); Alexiares and Anicetus (by Hebe)
Herakles (Latin: Hercules) was a divine hero in Greek mythology. He was the greatest of the Greek heroes. He was the son of the god Zeus and the mortal woman Alkmene. Herakles was strong and courageous—even as a baby. When he was a young man, he dressed in a lion's skin, carried a club of olive wood, and was an expert with the bow and arrow.
Hera, the Queen of the Gods, hated Herakles because he was the son of her husband and a mortal. She designed some difficult tasks for Herakles to do and hoped they would kill him. These tasks are called The Labors of Herakles. He finished them all with success and Zeus gave him immortality. He had many adventures. When he died, he went to live with the gods on Mount Olympus.
Herakles was worshipped throughout the Greek world. He was popular among athletes because he was the god of gymnasiums and wrestling schools. He started the Ancient Olympic Games and marked out the length of the Olympic stadium. The Romans called him Hercules. He was the subject of much ancient and modern art, and even of movies like Walt Disney's Hercules.
## Birth and childhood
Zeus was the greatest of all Greek gods. He lusted after a beautiful mortal woman named Alkmene and, disguised as her husband Amphitryon, slept with her. Ten months later, she gave birth to a son and named him Alkides. He would one day be called Herakles. They lived in Thebes.
Zeus' wife Hera was angry with her husband because he had become the father of a child outside their marriage. She hated Herakles and looked for ways to hurt him. The goddess Athena felt sorry for the baby and tricked Hera into nursing him. Drinking the milk of a goddess was one step on the road to immortality. Hera hated Herakles more than ever. She sent two snakes to kill him, but the baby Herakles killed the snakes with his hands. This was his first great act of courage and physical strength.
## Teenager
Herakles became a strong teenager. He learned to use weapons and to drive a chariot. One day he killed his music teacher Linus because the man had tried to whip him. Herakles was charged with murder, but said he had acted in self defense. He was freed. People feared him though, so he was sent far out of town to work on a farm. Herakles became stronger with the hard work. He was seven feet tall. He was eighteen when he left the farm.
## Lion of Kithaeron
Herakles was eighteen when he hunted the large and powerful Lion of Kithaeron. This lion was killing cows in a land near Thebes. The hunt lasted fifty days and ended when Herakles smashed the lion's skull with a club of olive wood. This club is seen in pictures of Herakles. He dressed in the lion's skin.
Herakles slept in King Thespius' palace while the hunt progressed. Thespius wanted grandchildren by the hero, so each night he sent one of his fifty daughters to Herakles. The one daughter who refused him became a priestess in his shrine.[1]
Herakles was going back to Thebes when he met the heralds of King Erginus. They were on their way to Thebes to collect tribute. They treated Herakles with contempt. Herakles cut off their ears, noses, and hands. Erginus made war on Thebes, but was defeated and killed by Herakles. For saving Thebes, King Kreon gave his daughter Megara in marriage to Herakles.[2]
## Madness, murder, and The Labors of Herakles
The Cretan Bull (Metope from the Temple of Zeus at Olympia)
Hera could not rest easily because Herakles was becoming more and more famous. He was loved by everyone. Her anger and hatred made her look foolish. She tricked Herakles into thinking his sons were his enemies and, insane with anger, he murdered them.
When he came to his senses, he was overcome with grief. He ran from other people and lived for a time in exile. He looked for advice from the Oracle of Delphi. The priestess sent him to serve King Eurystheus, King of Tiryns in Mycenae. In this way, she said, he would be washed clean of his crimes.
Eurystheus was a dull and bad man. Herakles hated him. Eurystheus set some tasks for Herakles to do. These tasks came to be called "The Labors of Herakles". It was said that Hera designed them. She hoped the tasks would kill him. Zeus would grant Herakles immortality with the successful completion of the Labors.
## Death
Herakles was married to a beautiful woman named Deianeira. They lived in Trachis and had a son named Hyllus. In a war with a neighboring city, Herakles made the king's daughter Iole his captive. Herakles had met her sometime in the past and had fallen in love with her. Deianeira was jealous, and used a mix of blood and semen from the centaur Nessus to get her husband back. She put the mix on a shirt and sent it to Herakles. He put the shirt on. Unknown to Deianeira, the mix was really a poison and burned Herakles' skin and flesh. Deianeira learned of this and killed herself. Herakles died in great pain. Before he died, he ordered Hyllus to marry Iole. Herakles' body was set on fire at the funeral ceremonies. His ghost fell to the underworld while his immortal part rose to Mount Olympus. The gods welcomed him, even Hera. He married Hebe, his fourth wife, and became the father of two sons.[3][4] According to Homer's Odyssey, Herakles became the porter (keeper of the gates) on Mount Olympus.[5]
## Notes
1. Graves 1960, pp. 457–58
2. Moroz 1997, pp. 10–11
3. Kerényi 1959, pp. 201–04
4. Moroz 1997, pp. 115–18
5. Graves 1960, p. 565
• Graves, Robert (1955, 1960), The Greek Myths, London; New York: Penguin Books.
• Kerényi, C. (1959), The Heroes of the Greeks, London: Thames and Hudson.
• Moroz, Georges (1997), Hercules, New York: Bantam Doubleday Dell.
## Other websites
Media related to Herakles at Wikimedia Commons
|
|
# Collection

Música » P!nk

## This Is How It Goes Down

13 scrobbles

Tracks (13)

Track | Album | Duration | Date
This Is How It Goes Down | | 3:20 | 21 Jun 2014, 1:20
This Is How It Goes Down | | 3:20 | 22 Apr 2014, 18:47
This Is How It Goes Down | | 3:20 | 26 Dec 2013, 2:51
This Is How It Goes Down | | 3:20 | 18 Oct 2013, 14:43
This Is How It Goes Down | | 3:20 | 11 Oct 2013, 1:04
This Is How It Goes Down | | 3:20 | 7 Oct 2013, 23:13
This Is How It Goes Down | | 3:20 | 7 Oct 2013, 23:09
This Is How It Goes Down | | 3:20 | 8 Sep 2013, 23:52
This Is How It Goes Down | | 3:20 | 28 Mar 2012, 13:15
This Is How It Goes Down | | 3:20 | 18 Mar 2012, 5:49
This Is How It Goes Down | | 3:20 | 4 Mar 2012, 2:40
This Is How It Goes Down | | 3:20 | 2 Oct 2009, 18:20
This Is How It Goes Down | | 3:20 | 8 Jun 2009, 4:44
|
|
# Why are multiples and submultiples of SI units required?
## 1 Answer
Sometimes, the size of the SI unit is either too small or too big to measure a certain quantity. For example, a metre is too small a unit to measure the distance between two cities and too big a unit to measure the thickness of a wire. Hence, multiples and submultiples of units are required. Multiples are factors used to create larger forms whereas submultiples are factors used to create smaller forms of the SI units. For example, a centimetre is a submultiple and kilometre is a multiple of a metre.
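The prefix factors behind this answer are just powers of ten, which a short sketch can make concrete (illustrative only; just a few of the standard SI prefixes are shown):

```python
# Multiples and submultiples of the metre as powers of ten
PREFIX_FACTORS = {
    "kilo": 1e3,    # multiple: kilometre
    "": 1.0,        # the base unit itself: metre
    "centi": 1e-2,  # submultiple: centimetre
    "milli": 1e-3,  # submultiple: millimetre
}

def to_metres(value, prefix):
    """Convert a value expressed with an SI prefix into metres."""
    return value * PREFIX_FACTORS[prefix]

# The distance between two cities fits kilometres; a wire's thickness fits centimetres.
distance = to_metres(3, "kilo")      # 3 km expressed in the base unit
thickness = to_metres(250, "centi")  # 250 cm expressed in the base unit
```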
|
|
# What is the domain and range of f(x)= 4?
Aug 16, 2015
Domain: $\left(- \infty , + \infty\right)$
Range: $\left\{4\right\}$
#### Explanation:
You're dealing with a constant function for which the output, i.e. the value of the function, is always constant regardless of the input, i.e. the value of $x$.
In your case, the function is defined for any value of $x \in \mathbb{R}$, so its domain will be $\left(- \infty , + \infty\right)$.
Furthermore, for any value of $x \in \mathbb{R}$, the function is always equal to $4$. This means that the range of the function will be that one value, $\left\{4\right\}$.
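As a quick numerical check (a sketch, not part of the original answer): sampling the function at widely spread inputs always returns $4$, so the set of outputs collapses to a single value, which is exactly the range.

```python
def f(x):
    """Constant function: the output is 4 regardless of the input."""
    return 4

# Sample the function over a spread of inputs; the set of outputs is the range.
outputs = {f(x) for x in (-1000, -3.5, 0, 2, 1e9)}
```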
|
|
# Medical Model Archive (MMAR)
The MMAR (Medical Model ARchive) defines a standard structure for organizing all artifacts produced during the model development life cycle.
NVIDIA has example MMARs for liver, spleen, and others that can be found at NGC.
You can experiment with different configurations for the same MMAR. Create a new MMAR by cloning from an existing MMAR by using the cp -r OS command.
MMAR defines the standard structure for storing artifacts (files) needed and produced by the model development workflow (training, validation, inference, etc.). The MMAR includes all the information about the model, including the configurations and scripts, to provide a workspace for all model development tasks:
ROOT
    config
        config_train.json
        config_finetune.json
        config_inference.json
        config_validation.json
        config_validation_ckpt.json
        environment.json
    commands
        set_env.sh
        train.sh                      train with single GPU
        train_multi_gpu.sh            train with 2 GPUs
        finetune.sh                   transfer learning with CKPT
        infer.sh                      inference with TS model
        validate.sh                   validate with TS model
        validate_ckpt.sh              validate with CKPT
        validate_multi_gpu.sh         validate with TS model on 2 GPUs
        validate_multi_gpu_ckpt.sh    validate with CKPT on 2 GPUs
        export.sh                     export CKPT to TS model
    resources
        log.config
        ...
    docs
        ...
    models
        model.pt
        model.ts
        final_model.pt
    eval
        all evaluation outputs: segmentation / classification results
        metrics reports, etc.
Note
“CKPT” means regular PyTorch “.pt” weights, “TS” means PyTorch TorchScript model
## Model training and validation configurations
Clara workflows are made of different types of components. For each type, there are usually multiple choices. To put together a workflow, you specify and configure the components to be used.
Clara offers training and validation workflows. Workflow configurations are defined in JSON files: for example config_train.json for training workflows and config_validation.json for validation workflows.
### Training configuration
The training config file config_train.json defines the configuration of the training workflow. The config contains three sections: global variables, train, and validate.
You can define global variables in the configuration JSON file. These variables can be overwritten through environment.json and then the shell command or command line at the highest priority.
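The override priority described above can be sketched as a layered dictionary merge, where later layers win (an illustrative sketch only; the function and variable names here are hypothetical, not the actual medl implementation):

```python
def resolve_vars(config_vars, env_vars, cli_vars):
    """Merge variable layers: config JSON < environment.json < command line."""
    merged = dict(config_vars)
    merged.update(env_vars)   # environment.json overwrites the config defaults
    merged.update(cli_vars)   # command line (--set) has the highest priority
    return merged

# Example: epochs defined in config_train.json, overridden via --set epochs=1260
resolved = resolve_vars(
    {"epochs": 100, "multi_gpu": False},  # global variables in the config JSON
    {"DATA_ROOT": "/data"},               # environment.json
    {"epochs": 1260},                     # command-line override
)
```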
For an example to reference, see the config_train.json of the clara_pt_spleen_ct_segmentation MMAR.
The “train” section defines the components for the training process, including “loss”, “optimizer”, “lr_scheduler”, “model”, “pre_transforms”, “dataset”, “dataloader”, “inferer”, “handlers”, “post_transforms”, “metrics” and “trainer”. Each component is constructed by providing the component’s class “name” and the init arguments “args”.
Similarly, the “validate” section defines the components for validation process, including “pre_transforms”, “dataset”, “dataloader”, “inferer”, “handlers”, “post_transforms”, “metrics”, and “evaluator”. Each component is constructed the same way by providing the component class “name” and the corresponding init arguments “args”.
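The name/args construction pattern used by both sections can be sketched as a lookup in a class registry followed by keyword-argument instantiation (a minimal sketch; the registry and the class shown are hypothetical stand-ins, not Clara's or MONAI's actual loader or classes):

```python
class DiceLoss:
    """Hypothetical stand-in for a loss component."""
    def __init__(self, squared_pred=False):
        self.squared_pred = squared_pred

# Hypothetical registry mapping component "name" strings to classes
REGISTRY = {"DiceLoss": DiceLoss}

def build_component(spec):
    """Construct a workflow component from a {"name": ..., "args": ...} entry."""
    cls = REGISTRY[spec["name"]]
    return cls(**spec.get("args", {}))

loss = build_component({"name": "DiceLoss", "args": {"squared_pred": True}})
```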
Starting with Clara v4.0 (as compared to v3.X), almost all components come from the open source project MONAI. All the MONAI API docstrings are available at: https://docs.monai.io/en/latest/.
The training and validation workflows are based on MONAI workflows, which are in turn built on Ignite.
## MMAR Commands
The provided commands perform model development work based on the configurations in the config folder. The only command you may need to change is set_env.sh, where you can set the PYTHONPATH to the proper value.
You don’t need to change any other commands for default behavior, but you can and should study them to understand how they are defined.
Note
Please see the commands in the example MMARs for the most up to date features and settings that can be used.
### train.sh
This command is used to do basic single-gpu training from scratch. When finished, you should see the following files in the “models” folder:
• model.pt - the best model obtained
• final_model.pt - the final model when the training is done. It is usually NOT the best model.
• Event files - these are tensorboard events that you can view with tensorboard.
#### Example: train.sh
1  #!/usr/bin/env bash
2  my_dir="$(dirname "$0")"
3  . $my_dir/set_env.sh
4  echo "MMAR_ROOT set to $MMAR_ROOT"
5
6  CONFIG_FILE=config/config_train.json
7  ENVIRONMENT_FILE=config/environment.json
8  python3 -u -m medl.apps.train \
9      -m $MMAR_ROOT \
10     -c $CONFIG_FILE \
11     -e $ENVIRONMENT_FILE \
12     --set \
13     DATASET_JSON=$MMAR_ROOT/config/dataset_0.json \
14     epochs=1260 \
15     multi_gpu=false \
16     ${additional_options}

Note

Line numbers are not part of the command.

#### Explanation: train.sh

Line 1: this is a bash script
Lines 2 to 3: resolve and set the absolute directory path for MMAR_ROOT
Line 6: set the training config file
Line 7: set the environment file that defines commonly used variables such as DATA_ROOT.
Lines 8 to 16: invoke the training program.
Lines 9 to 11: set the arguments required by the training program
Line 12: the --set directive allows certain training parameters to be overwritten
Line 13: set the DATASET_JSON to use the dataset_0.json in the “config” folder of the MMAR. This overwrites the DATASET_JSON defined in the environment.json file.
Lines 14 to 16: overwrite the training variables as defined in config_train.json

### train_multi_gpu.sh

#### Example: train_multi_gpu.sh

1  #!/usr/bin/env bash
2
3  my_dir="$(dirname "$0")"
4  . $my_dir/set_env.sh
5
6  echo "MMAR_ROOT set to $MMAR_ROOT"
7  additional_options="$*"
8
9  CONFIG_FILE=config/config_train.json
10 ENVIRONMENT_FILE=config/environment.json
11
12 python -m torch.distributed.launch --nproc_per_node=2 --nnodes=1 --node_rank=0 \
14     -m medl.apps.train \
15     -m $MMAR_ROOT \
16     -c $CONFIG_FILE \
17     -e $ENVIRONMENT_FILE \
18     --write_train_stats \
19     --set \
20     print_conf=True \
21     epochs=1260 \
22     multi_gpu=True \
23     ${additional_options}
#### Explanation: train_multi_gpu.sh
This command uses torch.distributed to launch multiple gpu training.
### finetune.sh
This command is used to continue training from previous model. Before running this command, you must have a previous model in the models folder. The output of this command is the same as train.sh.
#### Example: finetune.sh
1  #!/usr/bin/env bash
2  my_dir="$(dirname "$0")"
3  . $my_dir/set_env.sh
4  echo "MMAR_ROOT set to $MMAR_ROOT"
5  additional_options="$*"
6
7  CONFIG_FILE=config/config_finetune.json
8  ENVIRONMENT_FILE=config/environment.json
9
11 python3 -u -m medl.apps.train \
12     -m $MMAR_ROOT \
13     -c $CONFIG_FILE \
14     -e $ENVIRONMENT_FILE \
15     --write_train_stats \
16     --set \
17     print_conf=True \
18     epochs=1260 \
19     multi_gpu=False \
20     ${additional_options}

### export.sh

Export a trained model checkpoint to frozen graphs. Two frozen graphs will be generated into the models folder.

Note

Training must have been done before you run this command.

• model.fzn.pb - the regular frozen graph
• model.trt.pb - TRT-optimized frozen graph

#### Example: export.sh

1  #!/usr/bin/env bash
2  my_dir="$(dirname "$(readlink -f "$0")")"
3  . $my_dir/set_env.sh
4
5  CONFIG_FILE=config/config_train.json
6  ENVIRONMENT_FILE=config/environment.json
7  export CKPT_DIR=$MMAR_ROOT/models
8
9  INPUT_CKPT="${MMAR_ROOT}/models/model.pt"
10 OUTPUT_CKPT="${MMAR_ROOT}/models/model.ts"
6  python3 -u -m medl.apps.export \
7      -m $MMAR_ROOT \
8      -c $CONFIG_FILE \
9      -e $ENVIRONMENT_FILE \
10     --model_path "${INPUT_CKPT}" \
11     --output_path "${OUTPUT_CKPT}" \
12     --input_shape "[1, 1, 160, 160, 160]"

#### Explanation: export.sh

Lines 6 to 12: invoke the export program.

### infer.sh

Perform inference against the model, based on the configuration of config_validation.json in the config folder. Inference output is saved in the eval folder.

#### Example: infer.sh

1  #!/usr/bin/env bash
2  my_dir="$(dirname "$0")"
3  . $my_dir/set_env.sh
4  echo "MMAR_ROOT set to $MMAR_ROOT"
5
6  CONFIG_FILE=config/config_validation.json
7  ENVIRONMENT_FILE=config/environment.json
8  python3 -u -m medl.apps.evaluate \
9      -m $MMAR_ROOT \
10     -c $CONFIG_FILE \
11     -e $ENVIRONMENT_FILE \
12     --set \
13     DATASET_JSON=$MMAR_ROOT/config/dataset_0.json \
14     output_infer_result=true \
15     do_validation=false

#### Explanation: infer.sh

Line 1: this is a bash script
Lines 2 to 3: resolve and set the absolute directory path for MMAR_ROOT
Line 6: set the validation config file
Line 7: set the environment file that defines commonly used variables such as DATA_ROOT.
Lines 8 to 15: invokes the evaluate program.
Lines 9 to 11: set the arguments required by the program
Line 12: the --set directive allows certain parameters to be overwritten
Line 13: set the DATASET_JSON to use the dataset_0.json in the “config” folder of the MMAR. This overwrites the DATASET_JSON defined in the environment.json file.

### validate.sh

Perform validation against the model, based on the configuration of config_validation.json in the config folder. Validation output is saved in the eval folder.

#### Example: validate.sh

1  #!/usr/bin/env bash
2  my_dir="$(dirname "$0")"
3  . $my_dir/set_env.sh
4  echo "MMAR_ROOT set to $MMAR_ROOT"
5
6  CONFIG_FILE=config/config_validation.json
7  ENVIRONMENT_FILE=config/environment.json
8  python3 -u -m medl.apps.evaluate \
9      -m $MMAR_ROOT \
10     -c $CONFIG_FILE \
11     -e $ENVIRONMENT_FILE
#### Explanation: validate.sh
This command is very similar to infer.sh. The only differences are the --set overrides: infer.sh additionally sets output_infer_result=true and do_validation=false, while validate.sh keeps the defaults from config_validation.json.
Note
infer.sh and validate.sh use the same evaluate program but different configurations.
## Configuration
The JSON files in the config folder define configurations of workflow tasks (training, inference, and validation).
### config_train.json
This file defines components that make up the training workflow. It is used by all four training commands (single-gpu training and finetuning, multi-gpu training and finetuning). See Training configuration for details.
### config_validation.json
This file defines the configuration that is used for both validate.sh and infer.sh. The only difference between the two commands is the setting of the do_validation and output_infer_result options. See Validation configuration for details on the configuration file for validation.
### environment.json
This file defines the common parameters for all model work. The most important are DATA_ROOT and DATASET_JSON.
• DATA_ROOT specifies the directory that contains the training data.
• DATASET_JSON specifies the config file that contains the default training data split (usually dataset_0.json).
Note
Since MMAR does not contain training data, you must ensure that these two parameters are set to the right value. Do not change any other parameters.
#### Example: environment.json
{
    "MMAR_EVAL_OUTPUT_PATH": "eval",
    "MMAR_CKPT_DIR": "models",
    "MMAR_CKPT": "models/model.pt",
    "MMAR_TORCHSCRIPT": "models/model.ts"
}
Variable | Description
DATA_ROOT | The location of training data
DATASET_JSON | The data split config file
 | The task type of the training: segmentation or classification
MMAR_EVAL_OUTPUT_PATH | Directory for saving evaluation (validate or infer) results. Always the “eval” folder in the MMAR
MMAR_CKPT_DIR | Directory for saving training results. Always the “models” folder in the MMAR
### Data split config file
This is a JSON file that defines the data split used for training and validation. For classification models, this file is usually named “plco.json”; for other models, it is usually named “dataset_0.json”.
The following is dataset_0.json of the model segmentation_ct_spleen:
{
"description": "Spleen Segmentation",
"labels": {
"0": "background",
"1": "spleen"
},
"licence": "CC-BY-SA 4.0",
"modality": {
"0": "CT"
},
"name": "Spleen",
"numTest": 20,
"numTraining": 41,
"reference": "Memorial Sloan Kettering Cancer Center",
"release": "1.0 06/08/2018",
"tensorImageSize": "3D",
"training": [
{
"image": "imagesTr/spleen_29.nii.gz",
"label": "labelsTr/spleen_29.nii.gz"
},
… <<more data here>>…..
{
"image": "imagesTr/spleen_49.nii.gz",
"label": "labelsTr/spleen_49.nii.gz"
}
],
"validation": [
{
"image": "imagesTr/spleen_19.nii.gz",
"label": "labelsTr/spleen_19.nii.gz"
},
… <<more data here>>…..
{
"image": "imagesTr/spleen_9.nii.gz",
"label": "labelsTr/spleen_9.nii.gz"
}
]
}
There is a lot of information in this file, but the only sections needed by the training and validation programs are the “training” and “validation” sections, which define sample/label pairs of data items for training and validation respectively.
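Loading just those two sections can be sketched as follows (illustrative only; the real loading code is part of the training program, and the inline JSON below is an abbreviated stand-in for a dataset_0.json file):

```python
import json

# Abbreviated stand-in for dataset_0.json
dataset_json = """
{
  "name": "Spleen",
  "training":   [{"image": "imagesTr/spleen_29.nii.gz", "label": "labelsTr/spleen_29.nii.gz"}],
  "validation": [{"image": "imagesTr/spleen_19.nii.gz", "label": "labelsTr/spleen_19.nii.gz"}]
}
"""

data = json.loads(dataset_json)
# Only the "training" and "validation" sections are needed by the programs:
# each entry is a sample/label pair of data items.
train_pairs = [(d["image"], d["label"]) for d in data["training"]]
val_pairs = [(d["image"], d["label"]) for d in data["validation"]]
```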
## Using model.pt or final_model.pt
In the models folder, model.pt is the best model resulting from training. final_model.pt is created when the training is finished normally. final_model is a snapshot of the model at the last moment. It is usually not the best model that can be obtained. Both model.pt and final_model.pt can be used for further training or fine-tuning. Here are two typical use cases:
• Continued training: Use the final_model.pt as the starting point for fine-tuning if you think the model has not converged because the number of epochs was not set high enough.
• Transfer learning: Use the model.pt as the starting point for fine-tuning on a different dataset, which may be your own dataset, to obtain the model that is best for your data. This is also called adaptation.
## Cloning MMAR
A MMAR is a self-contained workspace for model development work. If you want to experiment with different configurations for the same MMAR, you should create a new MMAR by cloning from an existing MMAR.
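Cloning an MMAR is a plain recursive directory copy, the equivalent of cp -r (a sketch; the paths below are hypothetical stand-ins built in a temporary directory):

```python
import os
import shutil
import tempfile

# Build a tiny stand-in MMAR to clone (in practice this is an existing MMAR directory).
src = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "config"))
with open(os.path.join(src, "config", "config_train.json"), "w") as f:
    f.write("{}")

# Clone it into a fresh workspace for a new experiment.
dst = os.path.join(tempfile.mkdtemp(), "my_experiment_mmar")
shutil.copytree(src, dst)  # equivalent of: cp -r <src> <dst>
```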
|
|
# Volume Of Prisms Notes
See how well you know how to to calculate the volume of a prism or a pyramid. Model S was recorded in a large recording studio, using high end microphones to capture the smallest details of this giant instrument. Volume Rectangular Prisms. A water tank is a cylinder with radius 40 cm and depth 150 cm It is filled at the rate of 0. Loading Unsubscribe from Lindsay Perro? Lesson 8. In this lesson, we find the volume of rectangular prisms with fractional edge lengths. Note: Finding the volume of a rectangular prism isn't so bad, especially if you already know the length, width, and height. Day 1: Volume of Prisms. Volume ·Pyramids, Cones & Spheres. ) What is the volume left in the cylinder after the shaded cone region is removed? a. Use the Volume of Rectangular Prisms practice worksheet (M-5-1-1_Volume of Rectangular Prisms and KEY. Height - _____. Note: For some mathematicians regular prism means the same as right regular prism. A square prism has a height of 15 and its volume is equal to that of the triangular prism. Furthermore, do you want the volume to include attic space? Look at the picture below. Unit 17 Topic Pages Area and Volume of Prisms G. 11C and G11D apply the formulas for the total and lateral area and volume of three-dimensional figures to solve problems using appropriate units of measure G. 4 Volume of a Prisms Notes Answers. Term Definition Example. (Note: we have an Area Calculation Tool) Volume of a Prism. The volume of a prism is the product of the area of the base and the distance between the two base faces, or the height (in the case of a non-right prism, note that this means the perpendicular distance). Cubic unit - space occupied by a cube of unit measure. Solving problems involving volume and surface area. 375 cubic meters as the volume of the walls. a right rectangular prism with length 14 cm width. A cube that has a face area of 25 square inches. 
Not all solids can be thought of as solids of revolution and, in fact, not all solids of revolution can be easily dealt with using the methods from the previous two sections. In the examples above (and often in geometry in general), small "b" is the length of the base of a 2-D shape. We find that: Volume = Area of Base x Height. Please note that the amount of color variegation varies from skein to skein. Step 2: Volume of given prism V = $\frac{1}{2}$ × 13 × 11 × 15 = 1072. In this section, we will consider the volume of a cube, cuboid, cylinder and triangular prism. (Day 1) Prisms, Cylinders, Cones, Pyramids For homework please watch the videos below. b) Check your prediction by calculating the volume of the original and the volume of the new prism. Mathematics (Linear) – 1MA0 VOLUME OF PRISM Materials required for examination Items included with question papers Ruler graduated in centimetres and Nil millimetres, protractor, compasses, pen, HB pencil, eraser. 1 The volume of the right triangular prism is 35. ) A rectangular prism has the dimensions shown below. I did with 60 cubic units and gave the student groups some hands on “help” – 60 unifix cubes so they could build their different prisms and count the “outside squares” for surface area if the formula was “too much”. 4cm 12cm 3cm Find the volume and surface area. Volume of prisms. Volume of Rectangular Prisms The volume of a solid is the measure of space occupied by it. ! When calculating volume use the formula method 1. 52 and 214 Students calculate the volumes of prisms. 4 Volume of Prisms and Cylinders. Note that some volumes are surrounded by padding that obstructs the view of the volume until the transfer function is appropriately manipulated. possible prisms are: 1 x 2 x 12, 2 x 2 x 6, 2 x 3 x 4). Geometry Notes Volume and Surface Area Page 4 of 19 Example 2: Find the volume and surface area of the figure below 12 in 10 in Solution: This figure is a cylinder. Measured in cubic units. 
SOL objectives: a) investigate and solve practical problems involving volume and surface area of prisms, cylinders, cones, and pyramids; and b) describe how changing one measured attribute of a figure affects the volume and surface area. The volume V of a rectangular prism is given by V = lwh, where l is the length of the prism, w is its width, and h is its height. Volume of a prism: V = Bh, where B is the area of the base and h is the height. Many times this formula will use the height of the prism, or depth (d), rather than the length (l), though you may see either abbreviation. Yet a prism can be any stack of shapes, not just rectangles. Course 1, Chapter 10 (Volume and Surface Area), Lesson 3 homework practice: find the surface area of each rectangular prism. I put a QR code reader on many of the homework assignments this unit because I wanted students to check to see if they had the correct variables before plugging them into the equations. Exercise 7) a given rectangular prism has a volume of 455 ft³. Related topics: nets of 3-dimensional shapes; introduction to surface area of 3-dimensional shapes. Activity: ask each group to determine the surface area of their prism by thinking about the rectangular faces of the prism. Included in this product: Surface Area of Triangular Prisms guided notes and practice page.
Instructions: find the volume of each 3-D shape (prism). An online calculator can compute the volume of geometric solids including a capsule, cone, frustum, cube, cylinder, hemisphere, pyramid, rectangular prism, sphere and spherical cap. Comparing a square prism with a pyramid of the same base and height shows that three pyramids exactly fill the prism, so we can conclude that the volume of a pyramid is one-third that of the matching prism. DEFINE: a prism is a solid object with identical ends, flat faces, and the same cross-section along its length. Read these lessons on the volume of prisms and volume of pyramids if you need additional help. This colour-in sheet works extremely well as a resource to give to the pupils while going through teaching slides, as they can colour, doodle or make extra notes. Exercise: the radius of a cylinder's base is 23 mm; answer each question. Find the volume of a triangular prism (David's favorite candy bar, a Toblerone, is one). The volume of a prism is the area of one end times the length of the prism. Problem 1: find the volume of the prism shown below. VOLUME of Rectangular Prisms: volume is the measure of space occupied by a solid region. A triangular prism whose length is l units, and whose triangular cross-section has base b units and height h units, has a volume of V = ½bhl cubic units (Example 28 finds the volume of the triangular prism shown in the diagram). b) Describe how changing one measured attribute of a figure affects the volume and surface area. You will need to take notes on the videos you watch. 11-4 Volume of Prisms & Cylinders.
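The one-third relationship between a pyramid and its matching prism can be checked numerically; a minimal sketch, with a function name of my own choosing:

```python
def pyramid_volume(base_area, height):
    """Three congruent pyramids exactly fill the matching prism,
    so V_pyramid = (B * h) / 3."""
    return base_area * height / 3

# A square prism with base area 36 and height 6 has volume 36 * 6 = 216,
# so the matching pyramid holds 216 / 3 = 72 cubic units.
```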
Be sure that all of the measurements are in the same unit before computing the volume. Worksheets in this set include: surface area, volume of rectangular prisms, surface areas of prisms, and volume and surface area, with answer keys. If a square has one side of 4 inches, the volume of the cube built on it is 4 × 4 × 4 = 64 cubic inches. We find that: Volume = Area of Base × Height; for a circular base, area = πr². The prism below actually has dimensions 2 inches by 1 inch by 1½ inches. Volume of Prisms, general formula: mathematicians have found that for any prism with two identical ends, the volume is always the area of the end multiplied by how long or high the prism is. Mensuration is the branch of mathematics which deals with the study of different geometrical shapes, their areas and volumes; in the broadest sense, it is all about the process of measurement. To find a room's volume, first calculate the area of the floor, and then multiply that by the height. The triangular bases of a triangular prism are joined by lateral faces and are parallel to each other. This worksheet gives kids practice in calculating volume, a basic skill for geometry and upper-level math: find the area of the base and multiply by the height. Note: units of volume are cubic. The Right Rectangular Prism and Right Pyramid: volume of a rectangular prism = area of base × height, V = lwh; predict how many full pyramids are needed to fill the prism.
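A cylinder is simply a prism with a circular base, so the same base-times-height rule applies; a small sketch (the function name is mine):

```python
import math

def cylinder_volume(radius, height):
    """Base area is pi * r**2, so V = pi * r**2 * h."""
    return math.pi * radius ** 2 * height

# The water tank in these notes (radius 40 cm, depth 150 cm) holds
# pi * 40**2 * 150 = 240000 * pi, roughly 754,000 cubic centimetres.
```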
On the third maze, students are given the volume and need to find either the height or width of a missing side. 10B: determine and describe how changes in the linear dimensions of a shape affect its perimeter, area, surface area, or volume. A cube is a special case where l = w = h. In this lesson you will learn how to find the volume of a rectangular prism by filling it with unit cubes. January 14 lesson: Volume of Prisms & Pyramids (lesson notes and homework). Depending on the figures to be solved for, a rectangular prism calculator uses formulas for the volume (V), surface area (A) and diagonal (d). Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms. Students also recognize volume as additive, finding the volume of arbitrary solids by adding the volumes of components that are cubes or right rectangular prisms. Example: what is the volume of a prism where the base area is 25 m² and which is 12 m long? Volume = Area × Length = 25 × 12 = 300 m³. Discover how to calculate the volume of a rectangular prism. Legault, Minnesota Literacy Council, 2014, Mathematical Reasoning Lesson 47 notes (Volume and Surface Area of Prisms and Cylinders): the surface area of a figure is defined as the sum of the areas of the exposed sides of an object.
Volume of Prisms (suggested time: 75 minutes). What's important in this lesson: you will understand that the volume of a prism can be represented as the product of the area of its base and its height. Check your answers seem right. These worksheets are printable PDF files. Cones: Volume = ⅓ × area of the base × height, V = ⅓πr²h; surface S = πr² + πrs, where s is the slant height. Since each side of a square is the same, the volume of a cube can simply be written as the length of one side cubed. I put a QR code reader on many of the homework assignments this unit so students could check that they had the correct variables before plugging them into the equations. Volume of a Prism Formula, Cylinders: what shape are the base(s) of a cylinder? In this picture the volume is in two pieces: a basic box for the first floor and a prism on top. Please read the guidance notes on the volume of a cone, a cuboid, a cylinder, a prism and a pyramid. Guided practice: check the odd answers in the back of your book. I chose to do this because I wanted to assess prior understanding and hook students in for today's learning. Volume of a Triangular Prism: find the volume of the triangular prism. Example 1: find the volume of each prism. Units: note that units are shown for convenience but do not affect the calculations. From the Pythagorean theorem, a² = 9 + 16 = 25, so a = 5 (a 3-4-5 right triangle). Students calculate the volume and surface area of rectangular prisms.
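The cone formulas in these notes (volume ⅓πr²h, surface πr² + πrs) can be sketched as follows; the slant height s comes from the Pythagorean theorem, just like the a² = 9 + 16 example above. Function names are my own:

```python
import math

def cone_volume(radius, height):
    """A cone holds one third of the matching cylinder: V = (1/3) * pi * r**2 * h."""
    return math.pi * radius ** 2 * height / 3

def cone_surface_area(radius, height):
    """S = pi*r**2 (base) + pi*r*s (lateral), with slant height s = sqrt(r**2 + h**2)."""
    slant = math.hypot(radius, height)
    return math.pi * radius ** 2 + math.pi * radius * slant
```

This is also why, as the notes joke later, a cylinder of ice cream holds three times as much as the matching cone.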
Find the volume of the triangular prism: with a triangular base 7 cm wide and 10 cm tall and a prism height of 8 cm, V = Bh = (area of base) × (height of prism); note that the bases are the top and bottom of the piece of cake, so V = (½ × 7 × 10) × 8 = 280 cm³. Volume of a prism or cylinder = area of the base × height: Volume = Bh. The surface area of the triangular prism is the sum total of the areas of its bases and its lateral faces; for example, a prism on a 3-4-5 right-triangle base with length 7 has surface area 2 × 6 + (3 + 4 + 5) × 7 = 96 cm². Given the volume, you can also work backwards to find the length of the prism. Note: units of volume are cubic. Exercise: a triangular prism has a base that is a right triangle with shorter sides measuring 6 cm and 8 cm. Two rectangular prisms have the same base area. The 2 pages of homework include 6 prisms (2 rectangular and 4 triangular) whose volume students must find. The volume of a house is based on its size and shape; a prism can be any stack of shapes. Round to the nearest tenth, if necessary. A prism is named by the shape of its base; a prism is a polyhedron with two congruent parallel faces called bases. In the notes section of your notebook, draw the figure and then calculate the volume. G.GMD.1: give an informal argument for the formulas for the circumference of a circle, the area of a circle, and the volume of a cylinder, pyramid, and cone. Begin a discussion on how to find the volume of a rectangular prism.
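The two worked triangular-prism numbers above (280 cm³ and 96 cm²) can be reproduced with a short sketch; the helper names are my own:

```python
def triangular_prism_volume(tri_base, tri_height, length):
    """V = (1/2 * b * h) * length: area of the triangular end times the prism length."""
    return 0.5 * tri_base * tri_height * length

def right_tri_prism_surface(a, b, c, length):
    """Two triangular ends (legs a, b of a right triangle) plus three
    rectangular faces whose total area is perimeter * length."""
    return 2 * (0.5 * a * b) + (a + b + c) * length

# triangular_prism_volume(7, 10, 8) -> 280.0   (the cake example)
# right_tri_prism_surface(3, 4, 5, 7) -> 96.0  (the 3-4-5 example)
```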
Exercises: find the volume of a 5 ft × 2 ft × 5 ft prism; find the total volume of the figure to the nearest tenth. Cylinder example: V = πr²h = π(9)²(6) = 486π ≈ 1526.8. A prism is a 3-D shape that has a constant cross-section along its length; more formally, a prism is a polyhedron with an n-sided polygonal base, a translated copy (not in the same plane as the first), and n other faces (necessarily all parallelograms) joining corresponding sides of the two bases. Work out its volume in cubic centimetres. Learn to find the volume of prisms and pyramids: any three-dimensional figure can be filled, and the volume V of a prism is the area of its base B times its height h. SOL 8.7: a) describe and determine the volume and surface area of rectangular prisms and cylinders; and b) solve problems, including practical problems, involving the volume and surface area of rectangular prisms and cylinders. 3-D figures vocabulary: a solid is a 3-D figure that encloses a space; a polyhedron is a solid whose faces are all polygons; a face is a flat surface of a polyhedron; an edge is a line where two faces meet; a vertex is a point or "corner"; a prism is a polyhedron with two parallel, identical bases. Since a cone holds only one third of the matching cylinder: in future, order your ice creams in cylinders, not cones, and you get three times as much! Determine the surface area and volume of prisms, pyramids and cylinders; describe the solid and find its volume in terms of π.
Volume of Cylinders worksheet, due Wednesday. Notes will be checked for a daily grade next class; your notes need to include definitions of surface area, volume, slant height, base, edge, face and lateral face, plus drawings of a pyramid, prism and cylinder. Augment practice with this unit of PDF worksheets on finding the volume of a cube, with problems presented as shapes and in word format, and side lengths involving integers, decimals and fractions. In Exercise 2, how much greater is the volume of the prism than the pyramid? Example: a 9 cm × 8 cm × 15 cm prism has volume 9 × 8 × 15 = 1,080 cm³. The volume of any prism can be found by calculating the area of the base, whether a triangle (½bh), a circle (πr²) or another shape, and multiplying it by the height of the prism. Note: units of volume are cubic. Lesson (authors: Whitney Parsons & Emilee Conard, Volume of Rectangular Prisms, 5 October 2012): instruct students to take notes about the characteristics of a rectangular prism, the formula, etc. Find the volume of an oblique hexagonal prism if the height is 6.4 centimeters and the base area is as given. Learn to find volumes of rectangular and triangular prisms using formulas. Volume of a composite solid: you can use the formula for the volume of a prism to find the volume of a composite figure that is made up of prisms. Cavalieri's principle states that if two solids have the same height and the same cross-sectional area at every level, they have the same volume. Volume and word problems. This is called a hexagonal prism because its cross-section is a hexagon. Relate exponential notation to volume, e.g. a cube of side s has volume s³. Volume of Prisms & Cylinders, Example 2: find the volume.
Notes, Surface Area: how do you find the surface area of 3-dimensional figures? You can use models or formulas to find the surface area of prisms, cylinders, and other 3-dimensional figures. For both prisms and cylinders there is a formula to calculate volume. You will need to write down in your maths books the different prisms you make and the volume of them. Volume of a Prism or Cylinder. VOCABULARY: volume. POSTULATE 27 (Volume of a Cube): the volume of a cube is the cube of the length of its side. Bag A and Bag B: compare the volumes of the two popcorn bags. Volume is the number of cubic units needed to fill a given space (cubic inches can also be written in³). I also briefly discuss how to draw cross sections of prisms. Find the volume of the prism. As usual, each worksheet comes with an answer key and is completely customizable. SURFACE AREA AND VOLUME NOTES PACKET, identifying prisms and pyramids: a prism is a solid with two parallel congruent bases joined by faces that are parallelograms. SA and Volume of a Pyramid notes. Exercise: a 3 yd × 5 yd × 9 yd prism. The volume of a triangular prism can be found by multiplying the area of its triangular base by its length. Cubic units and volume of a rectangular prism: volume (how much space something takes up) is measured in cubic units, which are simply little cubes. Related SOL: 7. Topics: prisms, pyramids, cylinders; prisms and cylinders.
UNIT 8 – Volume and Surface Area. Note at your table to number your figure. Build a rectangular prism that is 2 cubes on one side, 3 cubes on another, and 5 cubes on the third side; its volume is 2 × 3 × 5 = 30 cubic units. Volume of Prisms interactive notebook notes, covering the square prism, rectangular prism and triangular prism. Find the base area of each triangular prism. The volume of a prism = base area × length; you will use this knowledge to calculate the volume of various prisms. Volume Cubed #3. Questions: what is the perimeter of the square base? The figure shows a section of a cylinder with a 300° central angle. What is the volume of the figure below, which is composed of two cubes with side lengths of 6 units? (2 × 6³ = 432 cubic units.) Objective: to provide experiences with using a formula for the volume of rectangular prisms. Multiply the triangular area by the height of the prism to find the volume. Congruent figures have equivalent volume. This is specific to triangular-based prisms!
This is specific to triangular-based prisms! If you ever forget the formula, you can still use V = Bh and remember that the area of the base is just the area of a triangle. If the prism has a length of 5 ft, find its height. Worked example: for a right rectangular prism, substitute 12 for l, 10 for w, and 8 for h: V = 12 × 10 × 8 = 960 cm³; a cube with edge length 10 cm has volume 10³ = 1,000 cm³. Introduction to volume: to find the volume of a rectangular prism, multiply the length by the width by the height. Know your metric conversions (mm to cm, cm to m and m to km), and understand that formulas exist to assist us in finding the volume of rectangular prisms. VOLUME of PRISMS and PYRAMIDS: volume is the amount of space an object occupies, measured in cubed units. Start studying the notes on 12-2, Surface Area of Prisms and Cylinders. There are 150 one-inch washers in a box; use volume to reason about the packing. Day 3: Volume of Pyramids (Volume of Cylinders notes; Volume of Prisms and Cylinders exit ticket, a Google Form in Classroom). Find the total volume of the figure to the nearest tenth.
Lesson 23: The Volume of a Right Prism. Student outcomes: students use the known formula for the volume of a right rectangular prism (length × width × height). For a cylinder, the area of the base is πr². The volume of a prism is the area of the base (B) multiplied by the height between the bases (h), which we can just write as B × h. A shipping container is in the shape of a right rectangular prism. Calculate the volume of the rectangular prism; orthographic drawings show the front, top and side of a rectangular prism. NOTES, Volume of Rectangular Prisms: volume is the number of cubic units needed to fill a given space. Postulate 27 (Volume of a Cube), Postulate 28 (Volume Congruence) and Postulate 29 (Volume Addition). A volume of 24 cubic units can come from prisms such as 1 × 1 × 24 or 2 × 2 × 6. A grain elevator is 6 feet tall, 5 feet long, and 5 feet deep and can be filled completely with grain; its volume is 6 × 5 × 5 = 150 cubic feet. We find that Volume = Area of Base × Height = (l × w) × h. Pro tip: don't just multiply all the numbers together; identify the base area first. Compare this formula with the formula for the volume of a prism. Similar-prisms problem: the first prism has height 5 and volume 100; a side on the base of the first prism has length 2, and the corresponding side on the base of the second prism has length 3; use the scale factor to find the second volume. The base of a prism may be a rectangle (l × w), a square (s²), a triangle (½bh), or a circle (πr²).
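The list of base shapes above (rectangle, square, triangle, circle) suggests one prism-volume helper fed by small base-area functions; a sketch under those assumptions, with names of my own:

```python
import math

def rectangle_area(l, w):
    return l * w

def square_area(s):
    return s * s

def triangle_area(b, h):
    return 0.5 * b * h

def circle_area(r):
    return math.pi * r * r

def prism_volume(base_area, height):
    """Any prism (or cylinder): V = B * h."""
    return base_area * height

# The grain elevator above: a 5 ft x 5 ft square base, 6 ft tall.
# prism_volume(square_area(5), 6) -> 150 cubic feet
```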
Surface area of a pyramid: add the area of the base to the sum of the areas of all of the triangular faces. Volume: the number of cubic units needed to fill a given space; a box has a volume of length × width × height (V = lwh). Example: a hexagonal right prism whose base is inscribed in a circle of radius 2 m is cut by a plane inclined at an angle of 45° to the horizontal. How-to topics: get the volume of rectangular prisms; find the volume of a rectangular prism quickly; find the volume of prisms and cylinders; find the area and volume of a hemisphere. Exercise 2) Volume = 27 cubic cm. We'll be taking this formula apart further to use the formula V = area of base × height. A rectangular prism has the dimensions shown below. The naming according to the orientation of the axis is used specifically for right prisms. A cube has edges of a given length. A cone-shaped container has radius 3 cm and height 8 cm, so its volume is ⅓π(3²)(8) = 24π ≈ 75.4 cm³. A rectangular-prism-shaped box has a length of 14 inches, a width of 18 inches and a height of 2 feet; convert the height to 24 inches before multiplying. LATERAL AREA: the sum of the areas of the lateral faces. Volume is measured in cubic units such as cm³ and in³. The top and bottom (the bases) of a prism are parallel, identical polygons. The teacher will distribute Attachment #1, the Discovering Volume problem set, to the students. The volume is 12½ cubic units.
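The 14 in × 18 in × 2 ft box above is exactly why the "same unit" rule matters; a sketch of the conversion step (names are mine):

```python
INCHES_PER_FOOT = 12

def box_volume_cubic_inches(length_in, width_in, height_ft):
    """Convert the mixed-unit height to inches before multiplying."""
    return length_in * width_in * (height_ft * INCHES_PER_FOOT)

# box_volume_cubic_inches(14, 18, 2) -> 6048 cubic inches
```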
Find the volume of each shape; you may select the units of measurement for each problem. Determine, through investigation (e.g., decomposing right prisms; stacking congruent layers of concrete materials to form a right prism), the relationship between the height, the area of the base, and the volume of right prisms with simple polygonal bases. Worked example: Volume = l × w × h = 3 cm × 5 cm × 12 cm = 180 cm³. Volume of a prism: V = Bh; replacing B with 24 and taking the height of the prism as 9 gives V = 216. The volume of a cylinder is h × πr². Volume: the quantity it takes to fill a three-dimensional object. Find the volume of the prism. Find the volume of the triangular prism shown. Real-life application: a movie theater designs two bags, each to hold 96 cubic inches of popcorn; a related exercise uses a 12 m × 9 m × 5 m solid.
As you have done in the past, we will fill our prism with unit cubes. Volume: prism, cylinder, sphere. Warm-up: find the area of a regular decagon with a radius of 12 cm. What are the areas of the cubes drawn below? Make sure you write the units. Write down the formula for finding the volume of a triangular prism. Since volume is 3-D, it is measured in cubic units. Find the surface area of a cube given the length of a single edge. Units: note that units are shown for convenience but do not affect the calculations. Step 2, surface area: next we are going to find the surface area of this same cube. This is a colour-me-in sheet (doodle notes) on the volume of non-prisms, including cones, pyramids, frustums, spheres and hemispheres. Thus, the volume of this prism is 60 cubic inches. Homework: p. 199 #9, #11, #14. By Cavalieri's principle, the volume of the prism is V = Bh, so the volume of the corresponding cylinder is also V = Bh = πr²h. Two right prisms have similar bases; what are the dimensions of the square base? Surface Area and Volume of Prisms & Cylinders: in each prism, identify a base, a face, an edge, and a vertex. Find the volume of the prism. Round to the nearest tenth, if necessary.
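For the surface-area exercises above, the standard net-based formulas sum the faces in pairs; a sketch (function names are my own):

```python
def rect_prism_surface_area(l, w, h):
    """Three pairs of congruent faces: top/bottom (l*w), front/back (l*h),
    and the two sides (w*h)."""
    return 2 * (l * w + l * h + w * h)

def cube_surface_area(s):
    """Six identical square faces: SA = 6 * s**2."""
    return 6 * s * s
```

For example, a 2 × 3 × 5 prism has surface area 2 × (6 + 10 + 15) = 62 square units.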
Volume = area of base × height (expectation 6m40). Determine, through investigation using a variety of tools and strategies, the surface area of rectangular and triangular prisms. When a solid is not a right solid, use Cavalieri's principle to find the volume. Calculate the volume of these prisms and cubes, and find the surface area of each prism, including the formula used. Note that we can't determine the actual surface area from the volume alone; the ratio of surface area to volume is not consistent across prisms. The tank has been emptied so it can be painted. Types of prism. Allie has two aquariums connected by a small square prism. Total surface area and volume of a triangular prism. Determine that the volume of a parallelogram-based prism can be calculated using the formula Volume = area of the base × height. Volume is measured in cubic units. Theorem 12.6: Volume of a Prism. The volume of a pyramid is given by the formula V = ⅓Bh. A total of 21 unit-fraction cubes fit into the rectangular prism. Volume: once we know the area of the base, this is then multiplied by the height to determine the volume of the prism. Guided practice. Regular prism. To calculate the volume of a rectangular prism, all you need to do is take the product of the length, width, and height. Find the volume of each cylinder.
Note that the cross-section is always congruent to the bases; that is, it always has the same shape and size. Section 10.3, Volume of Prisms and Cylinders: for level 10 we will be doing things a bit differently. Exercises: 1) a solid with measurements 7 km and 8 km, volume 1,407.4 km³; 2) a solid with measurements 4 ft and 3 ft. Candle problem: (iv) volume of the prism = face area × length = 4600√2 cm³; (v) divide the volume of the prism by 50 to get the volume of each candle. The formula for the volume of a prism is the area of the base times the height, V = Bh; a cube is a special rectangular prism, and this formula can be used to find the volume of any prism. Exercise: a 4 ft × 4 ft × 5 ft prism. Volume and word problems: the bottom layer, or base, has 4 × 3 = 12 cubes. Notes on Volume of Rectangular Prisms, CCGPS frameworks: Domino Sugar Cubes spotlight task; Packaging Graduation Programs; Glencoe CCSS Math (2013), rectangular prisms only, pages 731–746 and 758 (omit all triangular prisms). Create a problem based on the volume.
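Several exercises in these notes give the volume and ask for a missing measurement; rearranging V = Bh covers all of them. A sketch, with names of my own:

```python
def base_area_from_volume(volume, height):
    """From V = B * h: B = V / h."""
    return volume / height

def height_from_volume(volume, base_area):
    """From V = B * h: h = V / B."""
    return volume / base_area

# A prism of length 10 cm and volume 80 cm^3 has base area 80 / 10 = 8 cm^2.
```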
Recommended Videos Detailed Description for All Surface Area & Volume Worksheets. Example 3: Find the volume. Volume of Prisms Definition of a prism - A prism is a polyhedron that has two parallel bases. Math 6/7 NOTES (10. The length and the volume of the prism are respectively equal to 10 cm and 80 cm 3. volume Postulate 27 Volume of a Cube Postulate Postulate 28 Volume Congruence Postulate Postulate 29 Volume Addition Postulate Theorem 12. Independent Practice. A)= perimeter of the base* height of prism. Practice: Volume of rectangular prisms. This is an educational blog for my students. Ask each group to determine the surface area of their prism by thinking about the rectangular faces of the prism. Calculate the volume of the rectangular prism. notebook 9 March 10, 2015 Farmer John needs to fill up his grain silo. I put a QR code reader on many of the homework assignments this unit because I wanted students to check to see if they had the correct variables before plugging them into the equations. Find the area of the base and multiply by the height. Notes Surface Area How Do You Find the Surface Area of 3-Dimensional Figures? You can use models or formulas to find the surface area of prisms, cylinders, and other 3-dimensional figures. Volume of Cylinders Notes; Volume of Prisms and Cylinders Exit Ticket (Google Form in Classroom) Day 3: Volume of Pyramids. 18_1 volume prism cylinder. 12 cm Volume =lxwxh Volume = 3cm x 5cm x 12 cm S cm 180 cm3 Volume = 3 cm Formula A When diameter is given. Try our free exercises to build knowledge and confidence. Find the surface area of each prism. a2 b2 c2 Pythagorean Theorem a2 28 217 b 8, c 17 a2 64 289 Multiply. 8th grade Math Study Guide Your test on Volume of rectangular prisms, cylinders, rectangular pyramids, cones, and spheres will be on _____. Example 13: Find the approximate surface area in square inches. Volume with fractions. On a professional note, it has helped me. 
Use the Lesson 1 Exit Ticket ( M-5-1-1_Lesson 1 Exit Ticket and KEY. Two rectangular prisms have the same base area. Note: Finding the volume of a rectangular prism isn't so bad, especially if you already know the length, width, and height. 2 Given the formula, determine the lateral area, surface area, and volume of prisms , pyramids, spheres, cylinders, and cones. The units are in place to give an indication of the order of the results such. SURFACE AREA AND VOLUME OF PRISMS AND CYLINDERS NOTES 10. The height of the prism is 8 inches. In this video I go over how to find the volume of any prism using one primary formula. Prisms and Cylinders Surface Area Worksheets. 2 Surface Area of Prisms and Cylinders 12. 1) Volume = __8 cubic ft. Notes Surface Area Of Rectangular Prisms Cubes Answer Key. Since the base is a triangle, the first thing we gotta do is find the area of that triangle. The length, width, and height of a cube are equal. Volume of Cylinders Worksheet - due Wed. 3) Find the volume. The length and the volume of the prism are respectively equal to 10 cm and 80 cm 3. Calculate the volume of prisms with the following measurements: l = 7 m, b = 6 m, h = 6 m l = 55 cm, b = 10 cm, h = 20 cm Surface of base = 48 m 2, h = 4 m. Worksheets are Volume of rectangular prisms, Surface areas of prisms, Volume and surface area of rectangular prisms and cylinders, Geometry 7 3 surface area, Surface area, Answer key volume of rectangular prisms, Geometric nets pack, Surface area word problems. Find the volume of a triangular prism with a height of 12 cm and a triangular base with a base length of 7. A Prism that has 2 parallel rectangular bases and 4 rectangular faces is a Rectangular Prism. Triangular Prisms. , find the height of the prism. Find the volume of prisms and cylinders CCSS. Notes: 8 - 4 Volume of Prisms: Tues. Here is the question; The base of a prism is a regular pentagon with sides 5 feet long and an apothem of 3. 
For example, this hexagonal prism has the same hexagonal cross-section throughout its length. Note: If students will be using the Compute mode for finding only the volume of the prisms and not the Surface Area and Slant Height, show the students the pop-up box that will appear indicating that the Surface Area input is incorrect. In the examples above (and often in geometry in general), small "b" is the length of the base of a 2-D shape. These worksheets are printable pdf files. Example 3: Figure 6 is an isosceles trapezoidal right prism. Find the height of a rectangular prism with a length of 3 meters, width of 1. Mod 3 Unit 7 Lesson 9 The Volume of Prisms and Cylindaers and Cavalieri's Principle Class Notes. the dimensions of 3-D rectangular prisms (length, width, height) how to convert between different units of measurement eg. 4 centimeters and the base area is 17. I put a QR code reader on many of the homework assignments this unit because I wanted students to check to see if they had the correct variables before plugging them into the equations. As usual, each worksheets comes with an answer key and is completely customizable. You will need to take notes on the videos you watch. of sha e #2= 9 cm 8 cm 15 cm > V loco. a 15 Take the square root of each side. The examples include 2 rectangular prisms and 4 triangular prisms (2 right and 2 other). I did with 60 cubic units and gave the student groups some hands on “help” – 60 unifix cubes so they could build their different prisms and count the “outside squares” for surface area if the formula was “too much”. Note the prisms and pyramids have the same length, width, and height. Students are asked to solve advanced problems using the formulas for the lateral area, total area, and volume of a prism. Instructions: Find the volume of each 3-D shape (prism): 1. So the prism's volume is 60 cubic units. Show Hide. The dual of a right prism is a bipyramid. A total of 77 unit fraction cubes fit into the rectangular prism. 
**The formula for the volume of a rectangular prism is:**. = 2 × 6 + (3 + 4 + 5) × 7 = 96 cm 2. 4 Volumes of Prisms and Cylinders Volume # of cubic units contained in the interior of a solid Volume of a Prism V = Bh B = Area of the base h = height of figure. 199 #9, #11, #14. The naming according to orientation of the axis is specifically used for right prism. com The volume of the prism is because the volume of the prism is times the volume of the pyramid. Since volume is the number of cubes that fill a space, it makes sense that each of the rectangular prisms has a volume of 12. A rectangular prism has a base that is 15 cm by 12 cm and a height of 20 cm. EXAMPLEXAMPLE 2 A = 1__ 2 bh A = 1__ 2. POSTULATE 28: VOLUME CONGRUENCE POSTULATE If two polyhedra are congruent, then POSTULATE 29: VOLUME ADDITION POSTULATE The volume of a solid. 5 Volumes of Prisms and Cylinders 627 Finding Volumes of Cylinders Find the volume of each cylinder. Write the formula 2. notebook March 14, 2016 12. Answers should be expressed in the appropriate units. If they are 1 cm each, we have a one cubic centimeter, and so on. A prism is named by the shape of its. Volume of a Prism. The volume of a pyramid is one-third the volume of a prism with the same height and base area as the pyramid. ) Be sure to use the same units for all measurements. • When measuring volume, the units will be “cubed. · volume of prism Substitute and calculate. So what's the point? To find the surface area and volume of these three-dimensional solids. Next, find the volume of the prism. Volume of Cylinders. An updated version of this instructional video is available. com Math Trainer Online Assessment and Intervention Personal my. When a solid is not a right solid, use Cavalieri's Priniciple to find the volume. On a professional note, it has helped me. I like the discount system and your anti-plagiarism policy. Volume of Cubes & Rectangular Prisms IXL Find the voume and test your knowledge. 
On this page you'll find worksheets on calculating the volume of rectangular prisms. This is a colour me in sheets (doodle notes) on Volume of NON-Prisms, including Cones, Pyramids, Frustums, Spheres and Hemispheres. Volume of rectangular prisms. • If you note a mistake or a missing citation, please let me know and I will correct it.
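The formulas in these notes can be checked with a short Python sketch; the numbers below are the worked examples from the notes themselves.

```python
def prism_volume(base_area, height):
    # V = B * h holds for any prism
    return base_area * height

def rect_prism_volume(length, width, height):
    # special case: the base is a rectangle, so B = l * w
    return length * width * height

def prism_surface_area(base_area, base_perimeter, height):
    # total surface area = two bases + lateral area (perimeter of base * height)
    return 2 * base_area + base_perimeter * height

def pyramid_volume(base_area, height):
    # a pyramid holds one third of the matching prism
    return base_area * height / 3

v_cm3 = rect_prism_volume(3, 5, 12)          # 3 cm x 5 cm x 12 cm -> 180 cm^3
s_cm2 = prism_surface_area(6, 3 + 4 + 5, 7)  # -> 96 cm^2, as computed above
v_m3 = prism_volume(48, 4)                   # base 48 m^2, height 4 m -> 192 m^3
```

Running the checks confirms the one-third relation as well: `3 * pyramid_volume(B, h)` always equals `prism_volume(B, h)`.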
|
|
# Interpretation of plots for outlier detection in healthcare
Christy et al. propose a cluster-based approach to outlier detection as part of the preprocessing step. However, I don't find the plots very interpretable. The authors use R's `mclustBIC` function (from the mclust package) to compute the BIC at different cluster sizes using Gaussian mixture models (GMMs). What is this supposed to show?
I would think the plots are meant to illustrate the effectiveness of their outlier detection method, but that isn't obvious to me.
Thoughts on this?
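For what it's worth, the usual point of a BIC-vs-cluster-count plot is model selection: you fit a GMM for each candidate number of clusters k, and the k that optimises BIC is the one the rest of the pipeline (including outlier flagging) is built on. Here is a minimal sketch of that idea in Python with NumPy rather than R's mclust; the synthetic data, the tiny EM fitter, and the variance floor are my own assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the paper's data: two dense groups of "normal" records.
clean = np.concatenate([rng.normal(0, 1, 100), rng.normal(10, 1, 100)])

def log_density(x, w, mu, var):
    # log of the mixture density, combined over components in log-space
    logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
            - 0.5 * (x[:, None] - mu) ** 2 / var)
    return np.logaddexp.reduce(logp, axis=1)

def fit_gmm_1d(x, k, iters=200):
    """Minimal EM for a 1-D Gaussian mixture (a toy stand-in for mclust)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        logp = (np.log(w) - 0.5 * np.log(2 * np.pi * var)
                - 0.5 * (x[:, None] - mu) ** 2 / var)
        r = np.exp(logp - np.logaddexp.reduce(logp, axis=1)[:, None])  # E-step
        nk = r.sum(axis=0) + 1e-9                                      # M-step
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    return w, mu, var

# The "BIC plot": one BIC value per candidate number of clusters k.
bics = []
for k in (1, 2, 3):
    w, mu, var = fit_gmm_1d(clean, k)
    loglik = log_density(clean, w, mu, var).sum()
    n_params = 3 * k - 1            # k means + k variances + (k - 1) weights
    bics.append(n_params * np.log(len(clean)) - 2 * loglik)

best_k = 1 + int(np.argmin(bics))   # the choice the plot is meant to justify

# Outlier step: score every point (plus one planted anomaly) under the model.
w, mu, var = fit_gmm_1d(clean, 2)
scored = np.concatenate([clean, [30.0]])   # index 200 is the planted outlier
outlier_idx = int(np.argmin(log_density(scored, w, mu, var)))
```

One caveat when reading their figures: mclust reports a BIC variant where larger is better, whereas the classical p·ln(n) − 2·logL used here is minimised, so the "best" k sits at opposite ends of the axis depending on the convention.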
|
|
# Invariant forms, associated bundles and Calabi-Yau metrics - Mathematics Differential Geometry
Abstract: We develop a method, initially due to Salamon, to compute the space of invariant forms on an associated bundle X = P \times_G V, with a suitable notion of invariance. We determine sufficient conditions for this space to be d-closed. We apply our method to the construction of Calabi-Yau metrics on TCP^1 and TCP^2.
Author: Diego Conti
Source: https://arxiv.org/
|
|
# The sum of 4 times a number and -6 is 14. What is the number?
Dec 8, 2017
$5$
#### Explanation:
Let the number be $z$. Therefore
$4z + (-6) = 14$
$4 z - 6 = 14$
$4 z = 14 + 6$
$4 z = 20$
$z = 5$
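As a quick check, the same steps in Python:

```python
# mirror the algebra: 4z - 6 = 14
rhs = 14 + 6    # add 6 to both sides: 4z = 20
z = rhs / 4     # divide both sides by 4: z = 5
print(z)        # -> 5.0
```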
|